
Friday 22 March 2013

The Psychology of Superman

Our deepest fear is not that we are inadequate - our deepest fear is that we are powerful beyond measure.

Not he - we.

Abrams or Snyder, I'm really excited to see what they've come up with.


Free Market Theory and Creative Destruction (James Kwak)

Or something like that.

How Supposed Free-Market Theorists Destroyed Free-Market Theory

This guest post was contributed by Dan Geldon, a fellow at the Roosevelt Institute. He is a former counsel at the Congressional Oversight Panel and a graduate of Harvard Law School.
Over the past year, there has been much discussion about how the financial crisis exposed weaknesses in free-market theory. What has attracted less discussion is the extent to which the high priests of free-market theory themselves destroyed meaningful contracts and other bedrocks of functioning markets and, in the process, created the conditions for the theory’s weaknesses to emerge.
The story begins before Wall Street’s capture of Washington in the 1980s and 1990s and the deregulatory push that began around the same time. In many ways, it started in 1944.
In that year, Friedrich von Hayek published The Road to Serfdom, putting forward many of the ideas behind the pro-market, anti-regulatory economic view that swept through America and the rest of the world in the decades that followed. Von Hayek’s basic argument was that freedom to contract and to conduct business without government meddling allowed for free choice, allocated resources efficiently, facilitated economic growth, and made us all a little richer. Milton Friedman built on Hayek, creating an ideology that resonated with conservatives and ultimately became the prevailing economic view in Washington.
While many have noted how information asymmetry, moral hazard, and agency costs reveal glaring holes in free-market theory and contributed to the current crisis, few have focused on the extent to which the supposed heirs to von Hayek and Friedman directly and purposefully created market distortions and, in the process, destroyed the assumptions of free-market theory.
In other words, the same interests that claim the mantle of von Hayek and Friedman pulled the threads from the free-market system and exposed the theory’s greatest weaknesses.
In the years leading up to the crisis, the proliferation of fine print, complex products, and hidden costs and dangers – and the push against government regulations over them – exemplified the larger pattern. While touting complexity as a form of innovation and railing against every attempt at government interference, supposedly pro-market forces used that complexity to clog the gears of free market machinery and to reduce competition and maximize profit.
When consumer credit contracts are buried in so much legalese that even experts can’t understand all the terms – I heard one former CEO of a top financial company admit privately that his lawyers couldn’t explain various mortgage terms and conditions – how can anyone believe the mortgage contract represents meaningful free choice? What consumer is able to weigh the benefits and costs of individual financial product features buried in the fine print and decide what to take and what to leave?
The corporate assault on comprehensible contracts is important because contract law has been the bedrock of capitalism for as long as there has been capitalism. By enabling free choice, meaningful contracts maximize economic efficiency. The assumption behind von Hayek and other theorists is that robust contract law facilitates a vibrant economic system and minimizes the need for government intervention in the economy. But that went out the window when von Hayek’s theory itself was used to manipulate contracts. Now that products and fine print have become so perverted and incomprehensible, how can anyone expect contracts to steer the market in economically efficient ways?
We now know that the problem of complex contracts did not just harm consumers. Municipalities across the country were lured into buying toxic derivatives and institutional investors were routinely abused at the hands of complex products. Stories about Wall Street’s math wizards purposefully cramming dangerous and confusing products down the throats of the unsuspecting are commonplace and legendary.
The world has changed in fundamental ways thanks to computers, and complexity can have value, but the world as we now know it has made traditional economic assumptions that presume real choice and real contracts irrelevant. All that’s left is the hollow façade of choice when your broker shows you where to sign or when you click “accept” after quickly scrolling through incoherent legalese. And we all are forced to accept this even though we know that the large majority of these products – and the actual deals around them – just aren’t that complicated. The only things that are complicated are the fine print and the economically valueless tricks and traps hidden in the legalese.
Some conservatives are quick to blame the fine print on litigation and trial lawyers. But that just doesn’t explain all the complexity that has come to define Wall Street. Talk to a CEO of a major credit card issuer privately, and they will admit that “stealth pricing” was purposefully innovated to maximize profit by making contracts difficult to understand and compare. The proliferation of opacity and the lack of competition in the industry are not an accident.
The industry has not only manipulated contract language to prevent real agreement (or what contract lawyers call a “meeting of the minds”), but it has also massively increased its negotiating leverage with counterparties by making it so onerous to walk away from boilerplate and incomprehensible terms and conditions. It’s not easy to negotiate with the other side of a 1-800 number, nor is it easy to go toe-to-toe with an industry that can and does get away with tricking and trapping even supposedly sophisticated investors.
And as if all that weren’t enough, many of the same conservative economists and lobbyists have fought tooth and nail behind the scenes to preserve implicit government guarantees created by the bailouts – guarantees that allow large banks to access capital more cheaply than the smaller banks left struggling to compete. While touting pro-market values and railing against “big government” attempts to break up the big banks, they are directly and purposefully allowing for market distortions. And those distortions help explain the massive consolidation we’re seeing in the industry, the dwindling of real competition, and the proliferation of faceless conglomerates with infinite leverage over the drafting of terms and conditions.
What’s really galling though isn’t that supposed free-market advocates are so hell-bent on distorting the market wherever necessary to inflate profit. What’s worse is the extent to which the same interests successfully advocated the rules that allowed this to happen under the well-worn guise of – you guessed it – freedom to contract and freedom to choose. That is, through their well-financed and well-oiled lobbyist teams, they facilitated the destruction of the freedom to contract and free choice while pretending to do the opposite. They killed the free market in the name of saving it.
The greatest lesson from the crisis that we haven’t yet learned is that “industry interests” and “free-market interests” are not the same. In fact, they are more like oil and water, as the industry profits most in the absence of true market competition. And so it should be no surprise that Wall Street has devoted itself to making contracts indecipherable, building boundless negotiating leverage and fighting for favorable breaks and regulation at every turn. What should be a surprise is that the same scoundrels that killed our markets (and also, mind you, wrecked the global economy and demanded taxpayer bailouts) have so ably sold themselves as natural heirs to von Hayek and Friedman – and that so many of us have let them.
By Dan Geldon

Rick Mercer cribs Stephen Harper

Andrew Coyne's Reading List

Andrew Coyne's a pretty smart fella - so when he talks with measured confidence about entrepreneurialism, labour health/productivity and innovation, I'm sure he's done his homework and therefore knows of what he types.
Some of the material he may have perused in his research:
A Mentally Healthy Workforce - It's Good For Business (Partnership for Workplace Mental Health)
Age, Education and Illness Behaviour (Singapore Medical Journal)
Does Job Satisfaction Improve the Health of Workers? (Institute for the Study of Labor)
Guarding Minds at Work: A Workplace Guide to Occupational Health and Safety (Centre for Applied Research in Mental Health and Addiction)
The Consequences of Micromanaging (Kenneth E. Fracaro)
The Unheralded Business Crisis in Canada (Global Business and Economic Roundtable on Addiction and Mental Health)
School of Hard Knocks (Annie Murphy Paul Reviewing Paul Tough 'How Children Succeed')
The Hypomanic Edge (John D. Gartner)
There are 25 articles here; many are lengthy and pretty much all of them will strain your brain. 
I could offer some recommendations on priority reading - I could even organize them into sub-groups and colour coordinate them.  But again, Coyne's a smart guy, an independent thinker and pretty entrepreneurial himself. 
Folk like that don't need guidance, do they?  They always rise to the top and end up calling the shots, which is why we're where we are today.

Applying What We Know: Student Learning Styles (Dennis W. Mills)

Research tells us that we now have 100% new information every five years. If that trend continues, students who are in grades one through three will graduate during a time when, in some technological fields, there will be new information every 38 days. That could mean that the information they learned this month may be outdated two months from now!
David Kearns, former CEO of the Xerox Corporation, defines "uneducated" as "not knowing how to keep on learning."
That is telling us that as teachers we need to help our students learn how to be life-long learners. If students haven’t learned how to learn, they may not be able to be effectively trained in a career that they choose.
You and I receive new information every day that we live. Understanding how we naturally take in and process that information will go a long way toward making us life-long learners. Helping our students understand how they naturally take in and process information will go a long way toward making them life-long learners.
We know that people are not all alike. We each see the world in a way that makes the most sense to each of us as individuals. This is called perception. Our perceptions shape what we think, how we make decisions, and how we define what’s important. Our individual perception also determines our natural learning strengths, or learning style.
Since we are not basically alike, when we approach a learning task or situation, we do not all benefit from the same approach. Each individual has his or her own unique learning strengths and weaknesses. It is vital for us as teachers to deliberately use a variety of methods to reach the students.
There are many approaches to individual learning styles. One of the most effective models for use in learning comes from the research of Anthony F. Gregorc and Kathleen A. Butler. The Gregorc model provides an organized way to consider how the mind works.
There are two perceptual qualities: concrete and abstract.
Concrete: This quality enables you to register information directly through your five senses: sight, smell, touch, taste, and hearing. When you are using your concrete ability, you are dealing with the obvious, the "here and now." You are not looking for hidden meanings, or making relationships between ideas or concepts. "It is what it is."
Abstract: This quality allows you to visualize, to conceive ideas, to understand or believe that which you cannot actually see. When you are using your abstract quality, you are using your intuition, your imagination, and you are looking beyond what is to the more subtle implications. "It is not always what it seems."
Although all people have both concrete and abstract perceptual abilities to some extent, each person is usually comfortable using one more than the other. The person whose natural strength is in the concrete, for example, may communicate in a direct, literal, no-nonsense manner. The person whose natural strength is the abstract may use more subtle ways to get a point across.
There are two ordering abilities in Gregorc’s model:
Sequential: Allows your mind to organize information in a linear, step-by-step manner. When using your sequential ability, you are following a logical train of thought, a traditional approach to dealing with information. You may also prefer to have a plan and to follow it, rather than relying on impulse.
Random: Lets your mind organize information by chunks, and in no particular order. When you are using your random ability, you may often be able to skip steps in a procedure and still produce the desired result. You may even start in the middle, or at the end, and work backwards. You may also prefer your life to be more impulsive, or spur of the moment, than planned.
Again, both ordering abilities are present in each person, but usually a pattern emerges for using one over the other more comfortably.
There are four combinations of the strongest perceptual and ordering ability in each individual: concrete sequential, abstract sequential, abstract random, and concrete random.
No one is a "pure" style. Each of us has a unique combination of natural strengths and abilities. By learning some of the common characteristics of each of the four combinations used by Gregorc, we can recognize and value what our students do best. We can help them to improve in areas that are least used and understood.
It is my hope that by understanding your students’ learning styles, you will be better able to adapt your teaching styles and strategies to meet their needs. It is not as important to figure out what a person is as it is to recognize how and why a person is doing something.

The links below will help you gain additional resources on learning styles.

What is your personal learning style? The Center for New Discoveries in Learning in Windsor, CA has an online personal learning style quiz you can take. The first secret to making learning easier and faster is understanding your personal learning style. The second secret is to know the most efficient learning style of the task you have chosen to learn. When these two styles match, you will have virtually effortless learning and recall. In order to find out the learning style or styles you prefer, just take their short inventory of 36 questions.

Learning Styles Algonquin College of Applied Arts and Technology has a brief but good overview of learning styles. Its information reflects the research of David Kolb.

How Your Learning Style Affects Your Use of Mnemonics The way in which people learn affects the sort of mnemonics they should consider using to store information. Mind Tools, Inc. sponsors this site.


Copyright © 2002 Dennis W. Mills, Ph.D.
This publication may be reprinted in any format, but notification and credit are appreciated.


Thursday 21 March 2013

The Better Angels of Our Nature by Steven Pinker – review (Tim Radford)



The decline of violence 'may be the most significant and least appreciated development in the history of our species'
Tim Radford
  • Steven Pinker argues that the 'better angels of our nature' are in the ascendancy at a talk hosted by Bristol University (video: Newton Channel).
    Nobody could accuse Steven Pinker of intellectual constipation. He tackles open-ended, pub argument themes such as where language came from, how the mind is formed, and how we got to be what we are: all questions obscured by the fog of prehistory, bedevilled by subjective attitudes and overwhelmed by evidence as confusing as it is profuse.
    Four things characterise a Pinker book. The attack is humane but headlong; the background reading is prodigious and pertinent; the evidence is marshalled with vigour and rigour; and the writing is laced with a casual, populist wit. The reader knows where Pinker is coming from, and Pinker never has any doubt about where he is going. The effect is exhilarating, even if it isn't always convincing.
    The Better Angels of Our Nature takes a thesis I would love to believe; indeed, have casually believed for most of my life. It is that humans have grown less horrible with time. The 20th century, the century of Hitler, Stalin and Pol Pot, of Mao in China and Mobutu in the Congo, was appalling, but the number of deaths by violence as a proportion of the total population remained modest compared with the ferocious cruelties of the wars of religion in the 17th century.
    The modern nation state – the Leviathan of the philosopher Hobbes – has had a civilising effect almost everywhere. Education has helped, as has the empowerment of women, and the idea, too, of human rights.
    Within the epic sweep of history from ice age hunter gatherers to modern suburban householders, Pinker examines both the big picture and the fine detail, with surprises on every page. The Wild West and Gold Rush California really were wild, with high homicide rates and names to match: Cut-Throat Gulch, Hangtown, Helltown, Whiskeytown and Gomorrah – "though, interestingly, no Sodom."
    Overall, however, he finds examples of falling murder rates everywhere (including among male English aristocrats 1330-1829). Murder rates as a percentage of population were far higher among the supposedly peace-loving and cooperative hunter-gatherer communities – the Inuit of the Arctic, for instance, the !Kung of the Kalahari and the Semai of Malaysia – than in the trigger-happy US in its most violent decade.
    Unexpectedly, deaths in warfare, once again as a percentage of total population, were far higher among the Gran Valley Dani of New Guinea, or in Fiji in the 1860s, than in Germany in the whole of the 20th century. The state, however brutally, civilised its citizens and persuaded them to surrender the satisfactions of vengeance to impartial law.
    Somehow, citizens also civilised the state. Torture and public execution by torture were once instruments of power and popular mass entertainment: now torture exists only in secret, and hides behind political euphemism.
    Capital and corporal punishment have been eliminated in much of the world, and slavery has been abolished: people have lost their thirst for cruelty. Pinker gives the credit for this progress – "and if it isn't progress, I don't know what is" – to explicit political arguments and changes in sensibilities that began during the 18th century, the Age of Reason and the Enlightenment.
    Instances for his argument come from everywhere: menacing declarations posted in Gold Rush claims, dialogue from The Godfather Part II, the Hebrew Bible, Homer and so on. Young men commit most killings – this is a constant through history – but are civilised by marriage (an observation that he concedes is, in the words of Oscar Hammerstein II, "as corny as Kansas in August.")
    He ducks nothing: the cold war, the so-called war against terror ("it is a little-known fact that most terrorist groups fail, and that all of them die"), rape, infanticide, aggression, lynch mobs, ethnic cleansing, vendetta, psychopathology, genocide, sadism, cruelty to animals and murderous ideologies. The decline of violence, he says, "may be the most significant and least appreciated development in the history of our species".
    The close grain of the argument, the liveliness of the writing, the sheer mass and density of the evidence he produces (and I'll be honest, I still haven't finished all of its 800 pages) never quite still the reader's unease. What is it about us that has really changed? How good are all these statistics? Are Moses and Homer reliable guides to bloodshed in the Bronze Age? If violence is a corollary of ignorance and fear, who really believes those things have gone away? But of course, Pinker sees that one coming and confronts it, too.
    What he has delivered is yet another absorbing slice from history's prodigious provender: he calls upon cognitive science, anthropology, behavioural science, criminology, sociology, statistics, game theory and any number of appropriate fields of scholarship to support his argument in the later chapters. But in its confidence and sweep, the vast timescale, its humane standpoint and its confident world-view, it is something more than a science book: it is an epic history by an optimist who can list his reasons to be cheerful and support them with persuasive instances.
    I don't know if he's right, but I do think this book is a winner.
    Tim Radford's The Address Book: Our Place in the Scheme of Things (Fourth Estate) was longlisted for the Royal Society Winton Prize for Science Books

    The Return of Charm to Canadian Politics

    Kathleen Wynne is as sincere, open and straightforward as they come. It's an approach that requires courage, patience and endurance.  It's also one that impresses - and succeeds.

    Alison Redford has clued in to the value of the conversation, too - she's actively wooing the press and stakeholders by being more human, more accessible, more forthright.

    Stephen Harper, however, remains the exact opposite - aloof, contemptuous of opponents and uninterested in chit-chat with the press, the provinces or anyone he can't control, for that matter.

    Given the rise of the pro-social politician, can Harper maintain the distant disdain? Or will Stephen Sweatervest be making a return?

    Also worth noting - there's a subtle shift happening in politics away from the obstructionist, target-a-few, pick-fights-with-the-rest mentality towards, again, something more conversational and collaborative. 

    Interesting, is all.

    Inside Stephen Harper and His Legacy

    I really enjoyed this Paul Wells column for two reasons:

    1) It takes a scratch at Stephen Harper's veneer and provides a rather astute look at what lies beneath.

    Political people like to spend a lot of time exploring tactics and strategy and trying to guess what moves their opponents will make next so that they can counter them in advance.  It's a bit of playing general, hashing out battle plans and moving troops and ordnance around the field.  The best players at this game, including Harper, get compared to Niccolò Machiavelli. 

    Of course, Machiavelli didn't live in a democratic state - nor was he all that successful in his career, either.

    What too many of these War Roomers fail to do is get into the headspace of their opponents; because lives aren't literally on the line and campaign periods tend to be short, punchy and frequent, the assumption is that level of depth isn't necessary.  If you get in fast, hit 'em hard and never leave a punch unreturned, you're golden.  It's not as effective an approach as they think.

    Stephen Harper is clever, yes, but he's hardly evil.  When you label him as inscrutable, you are willfully ignoring the ample evidence mapping out the clockwork that makes him tick.

    Harper is a bit like Theon Greyjoy - trying to be something he's not and suffering from the resulting cognitive dissonance.  He's an emotional guy who isn't gifted with natural outlets to express his insecurities.  Harper self-identifies as a threatened outsider - this stems from his family history, his relationship with his immediate relatives and of course his being an Easterner trying to fit in out West.  It's not at all hard to believe he hid in a washroom in Brazil - that would fit in lock-step with other displays of petulance ranging from embracing the same China he criticized when Obama rejected him over Keystone or a negative obsession with the Liberal Party of Canada and in particular, the Trudeau legacy.

    There's a big gap between how Harper wants to be perceived, how his Party wants to portray him and how the PM really feels about himself.

    Publicly, he's a proud Westerner (who needs guidance on how to wear cowboy clothes) and contemptuous of the latte-sipping elites considered stereotypical of his hometown, Toronto.  In the House, he wears the emperor's cloak of a tough scrapper, like a Jean Chretien or a Brian Mulroney.  Yet where both of those leaders loved engaging with the public and weren't afraid to give and take hits, Harper has an obsession with firewalls, trouble lapping at shores - all defensive postures that he has probably developed and relied on since at least his university days.  In fact, he responds a bit like a jilted lover when partners don't do things his way.  He's scripted, brief in his appearances and does his best not to engage directly (and certainly not publicly) with equals who may challenge him - like Premiers and press galleries, for instance.

    The Opposition is supposed to fear Harper's bold political strokes of genius - but where are they?  Harper has chipped away at some Liberal programming he doesn't like and muzzled people who threaten him (because public debate with people who know more about subject matter than he does is way too intimidating).  He's even managed to raise the ire (or at least amusement) of international players.  For the most part, though, Canada hasn't changed much at all - Harper is, at best, an incrementalist who plays it safe.  He may tell himself he's slowly building a legacy that will last, but the truth is he's not comfortable with being bold.  In fact, that little smile of his betrays a level of nervousness that he tries to mask with overt derision and condescension.  

    Stephen Harper is not comfortable with variety.  He wants to be the smartest man in the room, but is worried he isn't; that's why he sticks to his personal safety zone in terms of subject matter and looks to shoot down, rather than engage, those verbal opponents he can't avoid.

    Liberal Leadership hopefuls should carefully study the diplomatic approach of Kathleen Wynne.  Could you imagine the PM trying to square off against her in a public forum?  She would judo-flip every statement he'd make and throw him off with her supportive, almost parental approach.  He'd work himself into a tizzy every day, just thinking about it.  He's comfortable with attacks, because he can deflect them - offer him the political equivalent of a hug and see what happens.

    Of course, the goal of Opposition shouldn't be to destroy Harper, but that's something to address in

    2) Harper is no further ahead at realizing his dream of stamping out Canadian progressiveness than he was when he started.

    Politics isn't war - the goal isn't to end opponents' careers or take away their castles.  It's a far less romantic and far more diplomatic exercise of winning over the hearts and minds of voters.  Politics in a democratic state is more like religion than combat - conversion, not slaughter, is the desired pathway to the ideological state equivalent of the Kingdom of Heaven.

    Harper doesn't get this - for all his structural tweaks and opposition attacks, he's had zero luck in making Canada "more conservative."  In fact, there's a broad shift towards more liberal approaches to problems - collaboration, dialogue, shared service, playing nice.  Even Conservatives are starting to embrace this wynning approach.  Perhaps worse - Harper's pokes and prods have wakened a frequently somnambulant populace; the harder he pushes, the less idle they become, helping to maintain a positive balance between staying rooted and moving forward.

    If you take a look at the broad trends in Canadian history, the periodic pendulum swings between the political left and political right aren't as significant as they seem in the heat of the moment, or even the decade.  Social progress, after all, is the equivalent of natural evolution - you need to adapt to survive.

    Over time, our society is becoming more inclusive, more diverse, better coordinated and continuously progressing.  Canada has never been at the top of the power or popularity scale, but then we have never slid, either.  Twenty years from now, we'll have a Conservative PM fighting to remake the country in a Conservative vision that will be weak tea compared to that Harper presents now.  The Progressives of that day will reel in horror and express their fears for the future of their country, much as they're doing today.  The progressive trend will continue regardless. 

    Stephen Harper has worked hard, against the odds, to get to where he is - that's a level of dedication that deserves to be respected.  He's inevitably going to fall short of the legacy he wants, which is really kind of sad.  The best thing the Opposition can do is help him to progress past his internal conflict and maybe empower the Prime Minister to relieve some of that cognitive dissonance.

    After all, there's nice symmetry to a life that goes full-circle.

    Normal behaviour, or mental illness? (Anne Kingston, Macleans)


    A look at the new psychiatric guidelines that are pitting doctors against doctors

    by Anne Kingston on Tuesday, March 19, 2013 8:30am

    Is she a brat, or is she sick?
    Every parent of a preteen has been there: on the receiving end of sullen responses, bursts of frustration or anger, even public tantrums that summon the fear that Children’s Aid is on its way. Come late May, with the publication of the fifth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5), however, such sustained cranky behaviour could put your child at risk of a diagnosis of “disruptive mood dysregulation disorder.” This newly minted condition will afflict children between 6 and 12 who exhibit persistent irritability and “frequent” outbursts, defined as three or more times a week for more than a year. Its original name, “temper dysregulation disorder with dysphoria,” was nixed after it garnered criticism it pathologized “temper tantrums,” a normal childhood occurrence. Others argue that even with the name change the new definition and diagnosis could do just that.
    “Disruptive mood dysregulation disorder” isn’t the only new condition under scrutiny in the reference manual owned and produced by the American Psychiatric Association (APA)—and lauded as psychiatry’s bible. Even though the final version of DSM-5 remains under embargo, its message is being decried in some quarters as blasphemous. Its various public drafts, the third published last year, have stoked international outrage—and a flurry of op-ed columns, studies, blogs and petitions. In October 2011, for instance, the Society for Humanistic Psychology drafted an open letter to the DSM task force that morphed into an online petition signed by more than 14,000 mental health professionals and 50 organizations, including the American Counseling Association and the British Psychological Society.
    Of fundamental concern is a loosening and broadening of categories to the point that everyone potentially stands on the brink of some mental-disorder diagnosis, or sits on some spectrum—a phenomenon the American psychologist Frank Farley has called “the sickening of society.” One change summoning criticism is DSM-5’s reframing of grief, that inescapable fact of life, by removing the “bereavement exclusion” for people who’ve experienced loss. Previously, anyone despairing the death of a loved one wasn’t considered a candidate for “major depression” unless their despondency persisted for more than two months or was accompanied by severe functional impairment, thoughts of suicide or psychotic symptoms. No longer.

    Other updates to DSM-5, the first full revision in nearly two decades, have raised red flags. Forgetting where you put your keys or other memory lapses, a fact of aging formerly shrugged off as “a senior moment,” could portend “minor neurocognitive disorder,” a shift destined to also stoke anxiety. Anyone who overeats once a week for three months could have a “binge-eating disorder.” Women not turned on sexually by their partners or particularly interested in sex are candidates for “female sexual interest/arousal disorder.” Nail-biters join the ranks of the obsessive-compulsive, alongside those with other “pathological grooming habits” such as “hair-pulling” and “skin-picking.”
    The fuzzy boundary between “generalized anxiety disorder” (GAD) and everyday worries has also been blurred. As Allan V. Horwitz, a sociology professor at Rutgers University, points out, changes in this category are potentially the most important because they affect the largest number of people. Under the new “somatic symptom disorder” (SSD), for instance, people who express any anxiety about physical symptoms could also be saddled with a mental illness diagnosis, which could thwart their attempts to have their physical issues taken seriously. To meet the definition one only needs to report a single bodily symptom that’s distressing and/or disruptive to daily life and have just one of the following three reactions for at least six months: “ ‘disproportionate’ thoughts about the seriousness of their symptom(s); a high level of anxiety about their health; devoting excessive time and energy to symptoms or health concerns.”
    DSM-5 represents a step back in mental health care, says psychologist Peter Kinderman, head of the Institute of Psychology, Health and Society at the University of Liverpool. Kinderman, who is organizing an international letter of objection to DSM-5 to be posted on a website launching March 20, believes many new DSM classifications, among them “female orgasmic disorder,” defy common sense. “If you’re not enjoying sex, it’s a problem, but it’s crazy to say it’s a mental illness,” he says. He also questions the new criteria for alcohol and drug “substance-use disorders.” “According to it, 40 to 50 per cent of college students should be considered mentally ill.” Such diagnoses interfere with the human helping response, says Kinderman. “When women get raped, it’s traumatic; when soldiers go to war, they come back emotionally affected. We don’t need the new label, ‘post-traumatic stress disorder,’ ” he says. “We should identify risk, identify problems, identify the threats people have and then we need to help them.”
    DSM-5’s most vocal critic is psychiatrist Allen Frances, who chaired the DSM-IV task force. Frances, professor emeritus at Duke University, calls the approval of the DSM-5 in December 2012 “the saddest moment in my 45-year career of studying, practising and teaching psychiatry.” In an interview with Maclean’s, he slammed the DSM-5’s methodology as lacking rigour and being “scientifically unsound.” Frances cautions clinicians, media and the general public to “be skeptical” and not to “follow DSM-5 blindly down a road likely to lead to massive over-diagnosis and harmful overmedication.” His concern is two-pronged: healthy people will be over-treated; undue focus on them will mean people who need psychiatric help won’t get it. He expects that “somatic symptom disorder” will greatly increase the rates of diagnosis of mental disorders in the medically ill—whether they have established diseases like diabetes or cancer or unexplained symptoms. “Anyone with the slightest bit of common sense knows this is stupid,” he says, adding that people in the DSM world don’t get it. “They have remarkable blinders to common sense.”
    People in the DSM world disagree. “We sought to be conservative in our approach to revising DSM-5,” DSM-5 task force chair David Kupfer wrote in an email to Maclean’s. “Our work was aimed at more accurately defining mental disorders that have a real impact on people’s lives, not at expanding the scope of psychiatry or increasing the number of individuals diagnosed.” Kupfer says response from the psychiatric community is “largely supportive.” But he welcomes criticism: “It’s an inherent part of any robust scientific discussion,” he says. That’s good, because this discussion—one that delves into what it is to be human—is just beginning. Classification of mental illness in the U.S. dates to 1917, when a committee on statistics, a precursor organization to the American Psychiatric Association, teamed with the National Commission on Mental Hygiene to develop the Statistical Manual for the Use of Institutions for the Insane. It boasted 22 diagnoses. The first DSM was published in 1952 and has been updated to reflect new research in genetics, epidemiology, risk and imaging. The 886-page DSM-IV, published in 1994, lists 297 “disorders.” DSM-5 clocks in at 1,000 pages in its $199 hardcover version and includes approximately the same number of diagnoses as DSM-IV, says Kupfer: “This goes against the trend in other areas of medicine, which typically increase the number of diagnoses.”
    DSM is not the only accepted measure in classifying the signs and symptoms of mental disorders. The World Health Organization’s “International Classification of Diseases” (ICD), a diagnostic tool used in epidemiology, health management and clinical research, also provides metrics. But DSM is the benchmark driving mental illness treatment and research, and the reference for insurance companies. “The DSM is a big deal,” says Kinderman. “Even though it’s an American document, it influences research across the world.” Jose Silveira, chief of psychiatry at St. Joseph’s Health Centre in Toronto, says the DSM is integrated into the Canadian system: “We don’t sit with patients saying, ‘Does the DSM say this or not?’ But we use it because insurance companies request it; the government requests it; it’s used in disability claims; it’s used for tracking rates in the population.”
    Its utility is in organizing symptoms only, Silveira says: “It’s purely diagnostic; it doesn’t reflect risks associated with conditions.” He believes DSM is not particularly useful for front-line primary care providers—the GPs, psychologists, social workers and family therapists who provide an estimated 80 per cent of mental health services. “Diagnosis can take a trained clinician hours,” he says.
    Yet as witnessed with the explosion in use of Ritalin and antipsychotics after DSM-IV identified ADHD and bipolar disorder as bona fide conditions, a new disease diagnosis influences whether millions of patients are placed on drugs—often by primary care doctors with minimal training in psychiatric diagnosis. And this puts children particularly at risk, says psychologist Brent Robbins, president-elect of the Society for Humanistic Psychology and co-editor of Drugging Our Children. He cites one U.S. study that found 72 per cent of pediatricians prescribe psychotropics to children, though only eight per cent say they feel adequately trained to do so.
    One group the DSM unequivocally has helped is psychiatrists themselves. The DSM-III, published in 1980, resuscitated the specialty at a time it was facing irrelevancy, says Frances, who contributed to that edition. “Studies showed a lack of agreement between psychiatrists in the U.S.; it seemed as if they didn’t know what they were doing.” DSM-III was “a radical step forward in providing diagnostic criteria that had people working off the same page and, under ideal conditions, could result in agreement about diagnosis,” he says.
    Its success had a downside. “It became too important in external ways—particularly with drug-company muscle pushing diagnosis to push pills,” the psychiatrist says. “It led to a diagnostic inflation and expanded the boundaries of psychiatry beyond its competence. As a result, it diverted attention from the core effort: taking care of people with real psychiatric illness.”
    Frances’s experience on DSM-IV, which came under fire as well, taught him how new diagnoses can spread like wildfire. His task force believed they were being conservative, he says, in vetoing all but three of 94 suggestions for new disease diagnoses: Asperger’s, ADHD and bipolar II disorder. “We expected a three- to fourfold rise in Asperger’s diagnosis; we never dreamed it could go from less than one in 2,000 to one in 88 in the U.S. and one in 38 in Korea,” he says. Likewise, the diagnosis of childhood bipolar disorder increased fortyfold. “Anything that can be used in DSM will be misused,” he says.
    While drug companies are not directly involved in the DSM process, “they’re on the sidelines licking their chops for sure,” Frances says. He’s quick to add that the DSM-5 task force adopted a better system of vetting for financial conflicts of interest than his did. Still 70 per cent of DSM-5 authors have declared ties to pharmaceutical manufacturers; in some categories, it’s 100 per cent. But Frances, author of Saving Normal: An Insider’s Revolt Against Out-of-Control Psychiatric Diagnosis, DSM-5, Big Pharma, and the Medicalization of Ordinary Life, sees “intellectual conflict of interest” as an even greater problem. “I know people involved and they are making absolutely dreadful suggestions that will be of enormous use to the pharmaceutical industry and they’re doing it with the purest of hearts,” he says. “Any time you give experts pure freedom they will expand the system to reflect their own interests. No one says, ‘My area has too much emphasis on it; we should be restricting diagnosis to fewer people.’ They worry about the missed patient. And they always overvalue their own research, their friends’ research.”
    The DSM-5 process itself is a case study in human co-operation, conflict and dysfunction. More than a decade in the making, involving 13 work groups and more than 1,500 contributors from 39 countries, it has been riddled with revisions, delays and high drama. One goal was to better align the DSM to the World Health Organization’s ICD codes. Another was to remedy some of the unforeseen—and unfortunate—consequences of DSM-IV. “Disruptive mood dysregulation disorder,” for example, was intended to address concerns about potential over-diagnosis and over-treatment of bipolar disorder in children. Pediatric psychiatrist Terry Bennett, a professor at McMaster University, sees the change as constructive: “It’s a nice move away from labelling children with bipolar disorder; it doesn’t make claims to predicting that these kids will have bipolar when they grow up and could be helpful in minimizing undue medication.” She believes if the criteria are applied carefully, the diagnosis should capture only a very small group of kids who are severely impaired.
    The elimination of the “bereavement exclusion,” another contentious topic, reflects new research, says Ron Pies, a clinical professor of psychiatry at Boston’s Tufts University who has studied the subject. Most bereaved people do not meet the full criteria for major depressive disorder and don’t need professional treatment, Pies says: “They need ‘TLC’ and what doctors throughout the ages have called ‘tincture of time.’ The relatively small subgroup of recently bereaved persons who meet full DSM-5 criteria for major depression disorder will now be able to receive appropriate professional care.” Pies is emphatic the reclassification “does not mean instant antidepressant prescription, no matter how often critics insist this will be the case.” The risk of overlooking a potentially lethal illness, with a four per cent completed suicide rate, is much greater than “over-calling” an episode of ordinary grief, he says.
    At the outset, DSM-5 had an ambitious plan to reimagine diagnosis from the ground up, says Frances (a claim supported by others but refuted by Kupfer): “They wanted a paradigm shift in psychiatry.” But descriptive psychiatric diagnosis can’t support that, Frances says: “There can be no dramatic improvements in psychiatric diagnosis until we make a fundamental leap in our understanding of what causes mental disorders.”
    As a result, observers report, the task force focused on a preventive-style of diagnosis that targeted milder conditions with higher rates in the community. Nipping potential problems in the bud is a long-standing medical mandate. But without cause or treatment, it can be problematic, says drug policy researcher Alan Cassels of the University of Victoria. Cassels sees DSM-5 continuing the medical trend of “pre-disease” diagnosis, citing “minor neurocognitive disorder” as an example: “What better way to get perfectly healthy people to start shuffling down the cattle ramp toward a good jolt of the yet-to-be-launched pre-dementia medicines that the drug industry will soon be zapping us with?”
    Task force chair Kupfer, whose own career has been devoted to mood disorders, expresses confidence in DSM-5: “By utilizing the best experts and research, we have produced a manual that best represents the current science and will be most useful to clinicians and the patients they serve.”
    Not all who participated in the process agree. Last year, Roel Verheul, a professor at the University of Amsterdam, and John Livesley, professor emeritus at the University of British Columbia, resigned from the DSM-5 Personality and Personality Disorders Work Group in protest. In a public email they explained they “considered the current proposal to be fundamentally flawed” with a “truly stunning disregard for evidence.” They called the proposed classifications “unnecessarily complex, incoherent, and inconsistent” and stated “the obvious complexity and incoherence seriously interfere with clinical utility.” They concluded: “The DSM 5 personality section is not readable, much less usable. It will be ignored by clinicians and will do grave harm to research.”

    Lack of outside scrutiny has been a problem, says Robbins. The APA countered calls for an independent review claiming that “no outside organization has the capacity to replicate the range of expertise that DSM-5 has assembled over the past decade to review diagnostic criteria for mental disorders.” Robbins calls that arrogant and ludicrous: “There are hundreds of mental health organizations across the world that would gladly offer their services to review the DSM-5.”
    How and why a society defines mental illness is a mutable cultural barometer reflecting current thinking, biases—and assigning stigma. DSM-I, for instance, listed homosexuality as a “sociopathic personality disturbance”; DSM-II reclassified it as “sexual deviancy.” It was removed from DSM-III entirely amid political mobilization and protest. Psychiatric diagnoses have a history of reflecting cultural prejudices, says Silveira, who points to “drapetomania,” a purported mental illness described by American physician Samuel A. Cartwright in 1851. The condition, said to have afflicted black slaves, was characterized by a propensity to try to flee captivity. According to Cartwright, it could be almost entirely prevented by proper medical advice, strictly followed.
    Mental illness diagnoses also frame mental health. The DSM illustrates how fluid those definitions can be: in DSM-IV, Asperger’s was given stand-alone status; DSM-5 returned it to “autism spectrum disorder.” Now, the reality-TV staple “hoarding” will become an official disorder, while “anxious depression,” “hypersexual disorder” and “parental alienation syndrome” failed to make the cut. The reference book has also introduced a slippery slope with the addition of a new “behavioural addictions” category, which currently only includes “pathological gambling”—though “Internet gaming disorder” and “caffeine-use disorder” are listed in section three (disorders needing further research).
    Yet for all of the controversy, the DSM is not mandatory for mental health professionals, but rather one tool, says Frances. In fact, the American Psychological Association is gearing up to encourage training in the ICD, says Robbins. “There’s a sense among the higher-ups that the DSM system is a sinking ship. It’s losing its canonical status.” He sees one upside from the controversy: an opportunity to begin constructive conversations about the future of psychiatric diagnosis. He’s spearheading an international summit on diagnostic alternatives about to launch an open online public conversation. Silveira agrees: “Is there a lot of controversy around diagnosis? Absolutely. Should there be? Absolutely. That is its greatest virtue. It provides opportunity for input from lay people, social workers, and brings us all to the table. We’re dealing with conditions where there is a profound degree of complexity and a profound degree of uncertainty.” He points out medicine is far from the pinnacle of understanding the human body: “We’re still embryonic.” Our understanding of the human brain is even more primitive. “The brain doesn’t reveal its secrets easily,” says Frances. “It’s the most complicated thing in the universe.” And there is no more compelling evidence of that than the DSM-5’s new definitions of what it is to be a healthy human—and the very human backlash it has received.

    Wednesday 20 March 2013

    Coping styles in animals: current status in behavior and stress-physiology (J.M. Koolhaas)

    Coping styles in animals: current status in behavior and stress-physiology

    J.M. Koolhaas a,*, S.M. Korte b, S.F. De Boer a, B.J. Van Der Vegt a, C.G. Van Reenen b, H. Hopster b, I.C. De Jong a,b, M.A.W. Ruis b, H.J. Blokhuis b

    a Department of Animal Physiology, University of Groningen, P.O. Box 14, 9750 AA Haren, The Netherlands
    b DLO-Institute for Animal Science and Health (ID-DLO), Department of Behavior, Stress Physiology and Management, P.O. Box 65, 8200 AB Lelystad, The Netherlands

    Neuroscience and Biobehavioral Reviews 23 (1999) 925–935

    Received 1 May 1999


    This paper summarizes the current views on coping styles as a useful concept in understanding individual adaptive capacity and vulnerability to stress-related disease. Studies in feral populations indicate the existence of a proactive and a reactive coping style. These coping styles seem to play a role in the population ecology of the species. Despite domestication, genetic selection and inbreeding, the same coping styles can, to some extent, also be observed in laboratory and farm animals. Coping styles are characterized by consistent behavioral and neuroendocrine characteristics, some of which seem to be causally linked to each other. Evidence is accumulating that the two coping styles might explain a differential vulnerability to stress-mediated disease due to the differential adaptive value of the two coping styles and the accompanying neuroendocrine differentiation.

    © 1999 Elsevier Science Ltd. All rights reserved.

    Keywords: Coping; Aggression; Stress; Disease; Corticosterone

    1. Introduction

    Psychosocial factors have long been recognized as important in health and disease, both in man and in animals. It is not the physical characteristics of a certain aversive stimulus but rather the cognitive appraisal of that stimulus which determines its aversive character and whether a state commonly described as stress is induced. The impact of aversive stimuli or stressors is determined by the ability of the organism to cope with the situation [1,2]. Several definitions of coping can be given [3]. In the present paper, we prefer to use the term coping for the behavioral and physiological efforts to master the situation [3,4]. Successful coping depends highly on the controllability and predictability of the stressor [5,6]. A consistent finding across species is that whenever environmental stressors are too demanding and the individual cannot cope, its health is in danger. For this reason, it is important to understand the mechanisms and factors underlying the individual’s capacity to cope with environmental challenges. A wide variety of medical, psychological and animal studies demonstrate that individuals may differ in their coping capacities. Factors that have been shown to affect the individual’s coping capacity include genotype, development, early experience, social support, etc. Since many studies in humans indicate that coping mechanisms are important in health and disease [7], researchers have tried for a long time to determine individual vulnerability to stress-related diseases using estimates of the individual coping capacity.

    One approach concerns attempts to classify coping responses into distinct coping styles. A coping style can be defined as a coherent set of behavioral and physiological stress responses which is consistent over time and which is characteristic of a certain group of individuals. It seems that coping styles have been shaped by evolution and form general adaptive response patterns in reaction to everyday challenges in the natural habitat. The concept of coping styles has been used in a wide variety of animal species (see Table 1). Despite the widespread use of the concept, it is not without debate [8]. This is due to several flaws in the studies using the concept. First, not all studies fulfill the criterion of coping style as a coherent set of behavioral and physiological characteristics, because only one parameter has been studied. Second, the extent to which clearly distinct coping styles can be distinguished has been questioned [8,9]. Special attention will be given here to the frequency distribution of coping styles in a population, the consistency over time and the one-dimensional character of the concept of coping styles. Finally, one may wonder to what extent the concept of coping styles is really related to individual vulnerability to stress-mediated disease.

    This review will discuss these major issues, and it will be argued that the clustering of various behavioral characteristics may to some extent be causally related to differences in (re)activity of the neuroendocrine system.

    2. Behavioral characteristics of coping styles

    Much of our current thinking on coping styles is derived from the work of Jim Henry [10]. He suggested, on the basis of social stress research in animals and man, that two stress response patterns may be distinguished. The first type, the active response, was originally described by Cannon [11] as the fight-flight response. Behaviorally, the active response is characterized by territorial control and aggression. Engel and Schmale [12] originally described the second type of stress response as the conservation-withdrawal response. This response pattern is characterized behaviorally by immobility and low levels of aggression.

    These ideas led to the hypothesis that the individual level of aggressive behavior, i.e. the tendency to defend the home territory, is related to the way individual males react to environmental challenges in general. The hypothesis was tested by Benus [13] using male house mice that were genetically selected for either short attack latency (SAL) or long attack latency (LAL). Also when other indices of aggressive behavior are taken into account, the SAL males are considered extremely aggressive, whereas the LAL males have very low levels of intermale aggressive behavior [14]. The results of a series of experiments, not only in mice but also in rats, suggest the existence of at least two coping styles, which are summarized in Table 2. We prefer to use the terms proactive coping rather than active coping, and reactive rather than passive coping (see further). Several conclusions can be drawn from these results. First, the individual level of aggressive behavior is indeed related to the way in which the animals react to a wide variety of environmental challenges. Second, it seems that aggressive males have a more proactive type of behavioral response, whereas non-aggressive or reactive males seem to be more adaptive and flexible, responding only when absolutely necessary.

    An important fundamental question is whether the two types of behavior patterns can be considered to represent styles of coping in the sense that they are both aimed at successful environmental control [15]. Several experiments indicate that the different behavior patterns can indeed be considered as coping styles aimed at environmental control. This is, for example, shown in a recent experiment using wild-type rats. This strain of rats shows a large individual variation in aggressive behavior, similar to the variation in wild house mice. After being tested for their tendency to defend the home cage against an unfamiliar male conspecific, the males were tested in a shock prod defensive burying test. In this test, the animal is confronted with a small, electrified prod in its home cage. Because this prod is a novel object, the experimental animal will explore it by sniffing at the object. Consequently, the animal receives a mild but aversive shock. As soon as it has experienced the shock, the animal has two options to avoid further shocks. It may either hide in a corner of the cage to avoid further contact with the shock prod, or it may actively bury the shock prod with the bedding material of the cage. Under these free-choice conditions, aggressive males spend most of the test time (10 min) burying (Fig. 1), while non-aggressive males show immobility behavior. Notice, however, that the two types of responding are equally successful in avoiding further shocks. In this particular test, successful coping can be defined operationally as avoidance of further shocks.

    The terms active and passive coping are frequently used to indicate the differences between the two styles. However, these terms may lead to some confusion, because they do not properly describe the fundamental differences. A very fundamental difference seems to be the degree to which behavior is guided by environmental stimuli [16,17]. Aggressive males easily develop routines, i.e. a rather intrinsically driven, rigid type of behavior. Non-aggressive males, in contrast, are more flexible and react to environmental stimuli all the time. For that reason, we prefer to use the terms proactive coping and reactive coping. This differential degree of flexibility may explain why aggressive males are more successful under stable colony conditions, whereas non-aggressive males do better in a variable or unpredictable environment, for example during migration [18].

    It is important to emphasize that the differentiation in coping styles may not be expressed equally clearly in all challenging situations. In particular, tests that measure aspects of initiative or proactivity seem to be most discriminative. This holds, for example, for latency measures such as the attack latency test in males or the defensive burying test, which allow the animal a choice between proactive and reactive coping. Although female mice usually do not show territorial aggression, females of the short attack latency selection line show much more defensive burying than female mice of the long attack latency selection line. This supports our view that aggression is only one of a larger set of behavioral characteristics that make up the proactive coping style.

    Table 1
    Overview of the species in which a strong individual differentiation has been observed that may reflect coping styles. The plus signs give a rough indication of the number of parameters on which the individual differentiation is based, i.e. + indicates a single-parameter study, ++ indicates a multi-parameter study.

    Species                          Behavioral    Physiological    Reference
    Mouse (Mus musculus)             ++            ++
    Rat (Rattus norvegicus)          ++            ++               [77]
    Pig (Sus scrofa)                 ++            ++               [20,37]
    Tree shrew (Tupaja belangeri)    ++            ++               [78]
    Cattle (Bos taurus)              +             ++               [23]
    Great tit (Parus major)          +             +                [19]
    Chicken (Gallus domesticus)      +             ++               [79]
    Beech marten (Martes foina)      +                              [80]
    Stickleback                      +             +
    Rainbow trout                    +             +
    Rhesus monkey                    ++            ++
    Human (Homo sapiens)             ++            ++               [26]
    Octopus (Octopus rubescens)      +             +                [25]

    Table 2
    Summary of the behavioral differences between proactive and reactive male rats and mice.

    Behavioral characteristic    Proactive    Reactive    References
    Attack latency               Low          High        [14]
    Active avoidance             High         Low         [70,83]
    Defensive burying            High         Low         [84], this paper
    Nest-building                High         Low         [85]
    Routine formation            High         Low         [16]
    Cue dependency               Low          High        [17,84]
    Conditioned immobility       Low          High        [17]
    Flexibility                  Low          High        [77]

    3. Distinct coping styles: distribution and consistency over time

    The concept of coping style and the way it is generally

    J.M. Koolhaas et al. / Neuroscience and Biobehavioral Reviews 23 (1999) 925–935


    Fig. 1. Correlation

    .R . 0:72. between attack latency score in rats as

    measured in the resident intruder paradigm and the percentage of time

    spent burying in the defensive burying test of 10 min duration.

    Fig. 2. Frequency distribution of attack latency scores (seconds) obtained in the 5th to the 12th generation of laboratory bred wild-type male rats

    .N . 2500. in

    different age classes (postnatal days).

    presented in the literature suggests that there are distinct

    phenotypes, which are more or less stable over time in

    their response to stressors. The early studies by Oortmerssen

    and colleagues, on a feral population of house mice suggest

    a bimodal distribution of male phenotypes as measured by

    the individual latency to attack a standard intruder into the

    home cage [18]. The idea of bimodal distributions has been

    strengthened by the fact that the phenotypical differences

    appeared to have a rather strong genetic component. Genetic

    selection for either of the extremes of the variation in a

    certain behavioral or physiological characteristic generally

    results in distinct genotypes within a few generations. Many

    studies on coping styles and individual vulnerability to

    stress mediated disease are based on the use of such genetic

    selection lines. Selection lines have rather stable characteristics,

    which are relatively insensitive to environmental

    influences. However, there is confusion in the literature on

    this issue, because several investigators using unselected

    strains of animals were unable to find clearly distinct and

    stable coping styles. The main problem seems to be the large

    diversity in origin, age and gender of the experimental

    animals involved. In the few studies that consider feral

    populations, distinct phenotypes are found. Both in wild

house mice and in a small bird, the great tit (Parus

major), latency measures seem to have a bimodal distribution

    [18,19]. However, one has to realize that the distribution

    is not truly bimodal because latency measures are

    generally finite, i.e. above a certain time the latency is set

    to the maximum value. This leads to an accumulation of

    individual scores in the distribution curve at this maximum

    value. Nevertheless, the individual behavioral scores in wild

    populations are certainly not normally distributed. An

    analysis of a large database of aggressive behavior of a

    population of 2500 laboratory-bred adult male wild rats

    reveals an age-dependent change in the distribution of

    attack latencies. Above a certain age, three peaks emerge

    in the frequency distribution, with a clear intermediate

    group (see Fig. 2). This difference with the wild situation

    may be explained by the fact that there is little or no selection

    pressure in the laboratory. It is tempting to consider the

    possibility that intermediate animals are less successful in

    nature. Few studies address the survival value of distinct

    proactive and reactive coping styles. However, recent, yet

    unpublished studies in feral populations of birds indicate

    that the fitness of different coping styles depends on the

    stability of the environment in terms of social structure

    and food availability.
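The accumulation of scores at the test's maximum value, described above, is easy to reproduce in simulation. The following sketch (with an illustrative lognormal distribution and an assumed 600 s cut-off, not the published rat data) censors a unimodal latency distribution at a ceiling and shows the resulting pile-up, which can mimic an extra mode:

```python
import numpy as np

rng = np.random.default_rng(0)

# One unimodal population of attack latencies (s); the lognormal
# parameters are illustrative and not fitted to the rat data.
latencies = rng.lognormal(mean=4.5, sigma=1.5, size=2500)

# The test is stopped at a fixed maximum (here an assumed 600 s
# cut-off): every non-attacking animal is scored at the ceiling.
MAX_LATENCY = 600.0
censored = np.minimum(latencies, MAX_LATENCY)

# A histogram of `censored` shows the main body of scores plus a
# spike in the last bin -- an apparent extra mode created purely by
# the cut-off, not by a genuinely bimodal population.
at_ceiling = (censored == MAX_LATENCY).mean()
print(f"fraction of scores piled up at the maximum: {at_ceiling:.2f}")
```

Because the spike at the ceiling arises even from a single unimodal population, a peak at the maximum score alone is not evidence of distinct phenotypes.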

    Many studies use laboratory strains of animals or heavily

    domesticated farm animals like pigs. Usually, individual

    behavioral scores are normally distributed in these studies.

    For example, several studies in pigs show that the distribution

    of individual scores in the back-test is normally distributed

    [20,21]. Moreover, it is hard to tell how a certain inbred

    or domesticated strain relates to the original and presumably

    functional distribution of its wild ancestors. However, it is

    intriguing that the extremes of this normal distribution still

    fulfill the criteria for proactive and reactive coping styles,

    both behaviorally and physiologically [20]. Although the

    discussion on the shape of the distribution curve is important

    from an evolutionary point of view, it does not seem to

    matter much when individual vulnerability to stress-related

diseases is concerned. After all, it has been repeatedly shown

    that the extremes in a population, irrespective of the detailed

    distribution curve, may differ not only quantitatively, but

    also qualitatively in their behavioral and physiological

    response pattern to stress (see Table 1). Evidence has been

    found in different species that the behavioral and physiological

    response of individual animals to a specific stressor is

    consistent over time. In pigs, for example, individual gilts

    that displayed relatively long latency times to contact a

    novel object and spent relatively little time near the object

    during their first exposure showed a similar response when

    re-tested one week later [22]. Also in dairy cows, consistency

    was measured in behavior, in heart rate and in plasma

    cortisol concentrations when individual animals were

    repeatedly tested in a novel environment test over one

    week. Moreover, consistent stress responses to the same

    test were also found for cardiac and adrenocortical

    responses over one year [23].

    Another important issue concerns the one-dimensional

    character of the concept of coping styles. Several studies

    have used a factor analytical approach to reduce the sources

    of individual variation in a population to a limited number

    of components [24–26]. This statistical approach usually

    reveals two or three factors that explain a considerable

    part of the individual variation. Although some of these

    factors relate to trait characteristics or aspects of personality

    similar to coping styles, others may relate to state variables

    such as stress and fear. These studies, both in humans and in

    animals, emphasize the multidimensional character of individual

    (personality) traits. However, aspects of aggression

    such as hostility, impulsivity, anger or proactivity are often

    found as an important dimension. In an experimental study

    in Roman High and Roman Low avoidance lines of rats,

Steimer et al. [27] included the dimension of emotional

reactivity as a second trait characteristic. By correlating

behavior of individual animals in a number of coping

style and emotion/anxiety-related tests, they found evidence

    for two independent dimensions, i.e. coping style and

    emotional reactivity. Individual behavioral profiles calculated

    either on indices of emotional reactivity or on indices

    of exploratory activity as an indirect measure of coping style

    resulted in different clustering of individuals. These two

    dimensions together might explain individual vulnerability

    to anxiety.
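The factor-analytic reduction described above can be illustrated with a toy computation. In the sketch below, plain principal components of a correlation matrix stand in for the published factor analyses, and all loadings, test names and animal numbers are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200  # hypothetical animals

# Two independent latent traits, mirroring the two dimensions of
# Steimer et al.: coping style and emotional reactivity.
coping = rng.normal(size=n)
emotionality = rng.normal(size=n)

# Six observed test scores, each loading mainly on one latent trait
# plus measurement noise (loadings are invented).
X = np.column_stack([
    0.9 * coping + 0.2 * rng.normal(size=n),        # attack latency (reversed)
    0.8 * coping + 0.3 * rng.normal(size=n),        # defensive burying
    0.7 * coping + 0.4 * rng.normal(size=n),        # exploratory activity
    0.9 * emotionality + 0.2 * rng.normal(size=n),  # open-field defecation
    0.8 * emotionality + 0.3 * rng.normal(size=n),  # freezing in novel cage
    0.7 * emotionality + 0.4 * rng.normal(size=n),  # plasma corticosterone
])

# Eigendecomposition of the correlation matrix: the two leading
# components should absorb most of the variance, one per trait.
R = np.corrcoef(X, rowvar=False)
eigvals = np.sort(np.linalg.eigvalsh(R))[::-1]
explained = eigvals[:2].sum() / eigvals.sum()
print(f"variance explained by two components: {explained:.0%}")
```

With this construction the two leading components recover most of the individual variation, just as two or three factors do in the empirical studies cited above.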

    4. Neuroendocrine characteristics of coping styles

    Differences in coping style have been observed in male

    rodents during both social and non-social stressful conditions

    (see Table 2). Coping styles are not only characterized



    by differences in behavior but also by differences in physiology

    and neuroendocrinology. As mentioned earlier, tests

    that measure aspects of initiative or proactivity seem to be

    most discriminative. The defensive burying tests in rodents

    is such a test, which allows the animal a choice between

    proactive and reactive coping. In general, defensive burying

    is accompanied by high plasma noradrenaline and relatively

    low plasma adrenaline and corticosterone, while freezing

    behavior is associated with relatively low plasma noradrenaline

    and high plasma corticosterone levels [28,29]. In a

    strain of wild-type rats, the more aggressive males showed

    the highest levels of burying behavior and showed a larger

    catecholaminergic (both plasma noradrenaline and adrenaline)

    reactivity after electrified prod exposure and after social

    defeat than did the non-aggressive rats [30]. Previously, it

    was shown that during social defeat the more competitive

    proactive male rats reacted with higher responses of blood

    pressure and catecholamines than the more reactive rats. In

    addition, these competitive males had higher baseline levels

    of noradrenaline [31]. The same holds for strain differences.

The aggressive wild-type rats responded to social defeat

    with larger sympathetic (plasma noradrenaline levels) reactivity

    and concomitantly lower parasympathetic reactivity

    (as measured by increased heart rate response and decreased

    heart rate variability) than the less aggressive Wistar rats

[32]. Thus, in response to stressful stimulation, proactive

coping rodents show low HPA axis reactivity (low plasma

corticosterone response) but high sympathetic reactivity

    (high levels of catecholamines). In contrast, reactive coping

    rodents show higher HPA axis reactivity and higher parasympathetic

    reactivity (Table 3).

    Differences in endocrine activity have also been observed

    for HPA axis and gonadal axis activity under baseline conditions.

    In aggressive mice, reduced circadian peak plasma

    corticosterone levels have been observed as compared to

    non-aggressive mice [33]. In mice of the short attack latency

    selection line and in wild-type male rats, high baseline levels

    of testosterone have been observed [34,35].

    There is a growing body of evidence that similar coping

    styles can be found in farm animals as well. Hessing and

    colleagues showed that male castrated pigs could be characterized

    as high resistant or low resistant at an early age

    (1–2 weeks) by means of a back-test (manual restraint) [36].

    In this back-test, a piglet is put on its back and the number of

    bouts of resistance is used to characterize the animal. The

    high-resistant pigs made more escape attempts and mean

    heart rate frequency was higher than in low-resistant pigs

    [36]. At three and at eight weeks of age, the high-resistant

    ones were less inhibited in approaches to novel objects in an

    open field. But the high-resistant pigs spent less time in

    exploring the novel object than low-resistant pigs [36].

    Heart rate frequency of high-resistant pigs was also substantially

    increased in reaction to a falling novel object, while

    heart rate frequency of low-resistant animals was only

    slightly increased or even decreased (bradycardia), suggesting

    that parasympathetic reactivity was higher in low-resistant

    pigs [37]. Hessing and colleagues did not find clear

    differences in HPA axis reactivity between the high- and

    low-resistant animals, although basal plasma cortisol levels

    were higher in low-resistant than in high-resistant pigs and

    this was accompanied by adrenal hypertrophy. Recently,

    however, Ruis et al. [20,21] showed clear differences in

    HPA axis reactivity in high- and low-resistant female

    pigs. The low-resistant animals had higher HPA axis reactivity

    than the high-resistant ones. This was shown by higher

    salivary cortisol responses to a novel environment test, to

    routine weighing at 25 weeks of age, and to administration

    of a high dose of ACTH [20]. Interestingly, the low-resistant

    animals with high HPA axis reactivity at 24 weeks of age,

    showed less aggression in group-feeding competition tests,

    hesitated longer to leave their home pens and to contact a

    human than did high-resistant animals. Altogether, pigs that

    showed high resistance in the back-test and low HPA-axis

    reactivity and high sympathetic reactivity in response to

    stressful stimulation are thought to be representatives of

    the proactive coping style. In contrast, pigs that showed

    low resistance in the back-test and high HPA axis reactivity

    and high parasympathetic reactivity are thought to be representatives

    of the reactive coping style.
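As a toy illustration of how back-test scores translate into coping-style labels, the sketch below uses an assumed bout threshold; neither the cut-off nor the restraint duration is taken from the cited studies:

```python
# Minimal sketch of scoring the pig back-test described above.
# The threshold of 3 bouts is an assumption for illustration only;
# Hessing et al. and Ruis et al. define their own criteria.

def classify_backtest(resistance_bouts: int, threshold: int = 3) -> str:
    """Label a piglet from the number of escape/resistance bouts
    shown while it is held on its back during manual restraint."""
    if resistance_bouts >= threshold:
        return "proactive (high-resistant)"
    return "reactive (low-resistant)"

# Hypothetical litter of four piglets:
for bouts in [0, 1, 4, 6]:
    print(bouts, "->", classify_backtest(bouts))
```

The label then predicts the expected physiological profile (low HPA axis reactivity and high sympathetic reactivity for proactive animals, the reverse for reactive ones), which is how the behavioral test and the neuroendocrine measures are linked in the studies above.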

Recently, it was shown that laying hens, from two lines with

    high or low propensity to feather peck, also show individual

    differences in physiological and behavioral responses to stress

    that are similar to the described coping styles. During manual

    restraint (keeping the bird on its side by hand for 8 min), the

    high feather pecking line showed more resistance and higher

    mean heart rate frequency and lower parasympathetic reactivity

    than the low feather pecking line [38,39]. HPA axis reactivity

    (plasma corticosterone levels) was highest in the low

    feather pecking line, while the sympathetic reactivity (plasma

    noradrenaline levels) was the highest in the high feather pecking

    line. These data suggest that chickens of the high feather

    pecking line are representatives of the proactive coping style,

    whereas birds of the low feather pecking line are representatives

    of the reactive coping style.

    5. Causal relationship between neuroendocrine and

    behavioral characteristics of coping

    One may wonder to what extent the behavioral and



Table 3
Summary of the physiological and neuroendocrine differences between proactive and reactive animals

Characteristic                 Proactive   Reactive   References
HPA axis activity              Low         Normal     [33,38,56,70,86]
HPA axis reactivity            Low         High       [20,28,56,87]
Sympathetic reactivity         High        Low        [38,77,88]
Parasympathetic reactivity     Low         High       [37,39]
Testosterone activity          High        Low        [34,35]

    physiological characteristics are causally related. Of course,

    it is highly unlikely that all differences in coping style can

    be reduced to one single causal factor. However, evidence is

    accumulating that a differential HPA axis reactivity may

    explain some of the behavioral differences. In different

    species, freezing behavior as part of the reactive coping

    response can be observed in response to an inescapable

    stressor or predator. In rats, a large number of studies

    have shown that corticosteroids play a permissive role in

    this fear-induced freezing behavior. Adrenalectomy (ADX)

reduced the duration of fear-induced freezing compared to

    sham-ADX controls, suggesting the involvement of adrenal

    hormones. This behavioral deficit in ADX animals could be

    restored by the application of corticosterone [40]. In line

    with these experiments, treatment with metyrapone, a corticosteroid

    synthesis inhibitor, reduced fear-induced freezing

    behavior, suggesting that corticosterone is a key hormone in

    the expression of fear-induced immobility [41]. Since corticosterone

    can bind to both the mineralocorticoid and glucocorticoid

    receptor, further experiments were performed to

    find out which specific receptor type was involved. Intracerebroventricularly

    administered mineralocorticoid receptor

    antagonist RU28318 reduced the fear-induced freezing

    response, whereas the glucocorticoid receptor antagonist

    RU38486 was without effect [42]. The modulation of freezing

    via a mineralocorticoid receptor-dependent mechanism

    did not come as a surprise. Limbic mineralocorticoid receptors

    bind corticosterone with about 10 times higher affinity

    than glucocorticoid receptors, and low circulating levels of

    the corticosteroid hormone almost completely occupy

    mineralocorticoid receptors [43,44]. The biological background

    could be that a glucocorticoid receptor-dependent

    mechanism would have been too slow because glucocorticoid

    receptors are only occupied at much higher hormone

    levels that are reached several minutes after the stressor. In

nature, this permissive mineralocorticoid receptor action of

the steroid makes an immediate freezing response possible

    during a sudden appearance of a predator. There is a growing

    body of evidence that corticosteroids also play a role in

    fear-induced behavioral inhibition in farm animals. In

    laying hens, it has been shown that the birds with the shortest

    tonic immobility response have the lowest corticosterone

    levels [45]. Further, chronic administration of corticosterone

    moderately increased plasma levels of corticosterone

    and prolonged the tonic immobility reaction in hens

    suggesting a causal role for corticosteroids [46]. Also, in

    dairy calves, a positive correlation was observed between

    plasma cortisol levels and the latency to approach a novel

    object (van Reenen, unpublished observation).

    6. Coping styles and differences in disease vulnerability

    The concept of coping styles implies that animals have a

    differential way to adapt to various environmental conditions.

    Negative health consequences might arise if an animal

    cannot cope with the stressor or needs very demanding

    coping efforts. Sustained over-activation of various

    neuroendocrine systems may lead to specific types of

    pathology. Hence, in view of the differential neuroendocrine

    reactivity and neurobiological make-up, one may expect

    different types of stress-pathology to develop under conditions

    in which a particular coping style fails. Although there

    are only a limited number of studies performed concerning

    pathology in relation to the type of coping style adopted,

    there are some indications that the two coping styles differ

    in susceptibility to develop cardiovascular pathology, ulcer

    formation, stereotypies and infectious disease.

    6.1. Cardiovascular disease

    Various studies emphasize the differences between the

    two coping styles in autonomic balance. Because of the

    role of the two branches of the autonomic nervous system

    in cardiovascular control, one may expect in conditions of

    over-activation of these systems, a differential vulnerability

    for various types of cardiovascular pathology as well.

    Indeed, a number of experiments found evidence that the

    proactive coping animal is more vulnerable to develop

hypertension, atherosclerosis and tachyarrhythmia due to

    the high sympathetic reactivity [32,37,47–49]. However,

    hypertension has never been observed after conditions of

    uncontrollable stress. In social groups, hypertension generally

    develops in dominant or subdominant males that have

    difficulties to maintain their social position. Therefore, it

    seems that these types of cardiovascular pathology only

    develop under conditions of threat to control rather than

    loss of control [15]. The reactive coping style seems to be

    characterized by a shift in the autonomic balance towards a

    higher parasympathetic tone and reactivity as can be

    observed by a strong bradycardia response in reaction to a

    sudden unpredicted stressor. Although there have been no

    systematic studies of the cardiovascular consequences of

    this characteristic, one may suggest that these types of

    animals are more vulnerable to sudden cardiac death due

to bradyarrhythmia.

6.2. Gastric ulceration and stereotypies

    There are numerous studies to indicate that the controllability

    of stressors is an extremely important factor in ulcer

    formation. The development of ulcers is low when animals

    are able to actively control or predict the stressor or divert

their attention away from the stressor. For example, if rats

can terminate the inescapable shock, can chew wood

during inescapable shock [50], or can bite on a wooden

stick during cold restraint stress, fewer ulcers are observed.


    The classical studies of Weiss [50] showed that the development

    of ulcers was high when the number of active

    coping attempts was high in the absence of informational

    feedback or with negative informational feedback present.

    In the experimental animal that could actively control the



    aversive shock by either pressing a lever during the warning

    signal or during the shock itself, the total length of stomach

    wall erosions was much smaller than in the yoked partner,

    which received exactly the same number of shocks, but

    could not control them. Also, when a feedback tone was

    given after each correct avoidance–escape response, the

    amount of gastric ulceration was further reduced. However,

    when brief punishment shock was given to the avoidance–

    escape and yoked animals whenever an avoidance–escape

    was made, then the avoidance–escape group showed more

    severe ulcer formation than the yoked partners. Further, in

    the absence of informational feedback, a positive correlation

    was observed between the number of active coping

    attempts and the amount of gastric ulceration.

    In line with these results is an observation in Roman high

    avoidance (RHA) and Roman low avoidance (RLA) rats,

    which can be considered to represent the proactive and

    reactive coping style, respectively. It was shown that RHA

rats, after the stress of five days of food deprivation, had more

    stomach lesions than RLA rats [52]. A negative correlation

    between attack latency in the intruder test and gastric

    ulceration induced by restraint-in-water stress [53], also

    suggests that animals that prefer a proactive coping style

    are more vulnerable to the formation of ulcers during

    uncontrollable stress. In rat colonies, dominant animals

    that are usually representatives of the proactive coping

    style are reported to develop stomach wall erosions when

    they have lost their leading position (social outcast) after

    frequent attacks by other colony members [15].

    Another example of a possible relationship between

    behavioral coping characteristics and pathology has been

    found in veal calves. It was shown that veal calves fed

    only with milk developed tongue-playing as a stereotypy

    [54]. However, not all calves did this with the same intensity.

    Those calves that developed a lot of oral stereotypies

    showed less stomach wall ulcers when slaughtered at 20

    weeks of age. However, calves that did not develop

    tongue-playing, all had such ulcers at the same slaughter

    age [54]. Recently these results were confirmed in a larger

    study involving 300 veal calves (van Reenen et al., in

    preparation). Also in tethered breeding sows that were

    housed individually, two separate groups could be distinguished:

some sows spent up to 80% of their active time in

stereotyped behavior, while others hardly developed stereotypies.

    Surprisingly, the sows that showed less initial resistance

    in the back-test were the ones to develop high levels of

    stereotypy later on [55]. Recently, it was shown that high

    levels of stereotypies are associated with a reduced sympathetic

    activation caused by the chronic stress of tethering as

    was shown by a decrease in heart rate during bouts of stereotyped

    behavior. In this view, stereotypies help the animal to

    cope with the adverse situation of tethering [56].

    There is increasing evidence that individual animals that

    adopt the proactive or reactive coping style differ in

    sensitivity of the dopaminergic system and consequently

    they may differ in vulnerability to the development of

    stereotypies. For instance, in mice, the dopamine receptor

    agonist apomorphine produced a greater enhancement of

    stereotyped behavior in proactive coping animals than in

    reactive coping animals, suggesting that proactive coping

    animals may be associated with a more sensitive dopaminergic

    system [57]. Similar correlations were found in rat

    lines previously selected for high and low expression of

    stereotyped behavior (gnawing) in response to apomorphine.

    The apomorphine-susceptible rats showed more

    proactive coping behavior (fleeing), whereas the apomorphine-

    unsusceptible rats showed more reactive behavior

(freezing) in reaction to an open field [58]. A similar relation

    between coping style and stereotypy has been demonstrated

    in pigs. Individual proactive (high resistant) and

    reactive (low resistant) coping pigs can be distinguished

    in the back-test in which the reaction to manual restraint

    is measured [59]. Recently it was shown that the high-resistant

    pigs have a higher oral stereotypic response (snout

    contact-fixation with floor) to apomorphine as compared

    to low-resistant pigs [60]. Thus, also in pigs there is a relationship

    between coping style, sensitivity of the dopaminergic

    system and development of stereotypies. Moreover, it

    has been shown that the dopaminergic-sensitivity factor, i.e.

    the latency to initiate stereotypic gnawing induced by

    apomorphine, also predicted ulcerogenic vulnerability [61].

    The underlying mechanism of increased vulnerability of

    proactive coping animals to develop stereotypies is not well

    understood. Here it is hypothesized that altered HPA-axis

    regulation plays a crucial role in the development of stereotypies.

    In farm animals it has been suggested that stereotypies

    are performed to lower the state of arousal and anxiety

    and to lower corticosteroid levels; however, not all studies

    show this correlation [62]. A possible explanation for the

    conflicting data may be the differential effects of corticosteroid

    hormones at the stage when the stereotypy starts to

    develop and at the stage when a full-blown stereotypy

    continues to exist. It is hypothesized that stress levels of

    corticosteroids may enhance the acquisition and expression

    of stereotypies, whereas an already developed stereotypy

    may reduce corticosteroid levels. This is supported by the

    following two examples in rodents. First, amphetamine activates

    dopamine pathways and induces stereotyped behavior

    (e.g. gnawing) that can be potentiated by high levels of

    corticosterone [63]. This suggests that brain glucocorticoid

    receptors are involved. Moreover, corticosteroids sensitize

    the dopaminergic system, probably through binding to the

    glucocorticoid receptors [64]. Second, dopamine-depleting

    lesions of the caudate-putamen are associated with a reduction

    in stereotyped behavior but an enhanced corticosterone

    response [65]. Thus, glucocorticoids via glucocorticoid

    receptors may play an important role in the sensitization

of the dopaminergic system. Interestingly, apomorphine-susceptible

    rats do differ in glucocorticoid receptor and

    mineralocorticoid receptor expression in different brain

    nuclei and have higher (and more prolonged) plasma

    ACTH and total plasma corticosterone responses than



    apomorphine-unsusceptible rats [58]. Further studies are

    needed to investigate whether these differences in corticosteroid

    receptor expression are responsible for the differences

    in sensitivity in the dopaminergic system and

    whether this is the underlying mechanism which, under

    conditions of severe stress, increases the vulnerability of

    the proactive coping animal to develop stereotypies.

    6.3. Immunological defense during coping or non-coping

    Contemporary psychoneuroimmunology emphasizes

    the role of the HPA axis and the sympathetic branch of

    the autonomic nervous system in communication between

    the brain and the immune system [66]. In view of the differential

    reactivity of these two systems in the two coping

    styles, one may expect to see differences in the immune

    system as well. Indeed, several studies in rats and mice

    demonstrate that individual differentiation in coping is an

    important factor in stress and immunity. In the social stress

    models in particular, the individual level of social activity

    seems to be an important explanatory variable in some

    studies [67,68]. Although these studies do not specifically

    address the issue of coping styles, it is tempting to consider

    the possibility that these socially active animals represent

    the proactive coping style. Sandi et al. [69] specifically

    addressed the question of the significance of individual

    differentiation in emotional responsiveness to the differentiation

    in immunology. They used the Roman-high (RHA)

    and low-avoidance (RLA) rats that have been genetically

    selected on the basis of their active avoidance behavior [70].

    These selection lines have been shown to differ in a number

    of behavioral and neuroendocrine stress responses in a

    similar way as the proactive and reactive coping styles

    mentioned above. It was shown that the NK cell activity

    and the proliferation response of splenocytes to mitogenic

    stimulation was lowest in the RLA males, a difference that

    was even more pronounced after the stress of active shockavoidance

    learning. Other evidence that emotionality may

    interact with the immunological response has been found in

    dairy cows. During endotoxin mastitis in cows that were

    socially isolated, animals that were selected one year earlier

    for a strong adrenocortical response to isolation, showed a

    significantly larger reduction in peripheral blood lymphocyte

    numbers than cows that were previously classified as

    weak responders [71].

    In a study of pigs, Hessing [72] demonstrated that aggressive,

resistant pigs had a higher in vivo and in vitro cell-mediated

immune response to specific and non-specific antigens

    than non-aggressive, non-resistant pigs. After stress,

    the aggressive, resistant pigs showed the strongest immunosuppression.

    This difference in immunological reactivity in

    relation to coping style may explain the differential disease

    susceptibility associated with social rank in group-housed

    pigs after challenge with Aujeszky virus. These observations

    in pigs are consistent with similar data obtained in

    colony housed male rats [67]. Finally, a recent observation

    shows that proactive coping male rats are more vulnerable

    to the experimental induction of the autoimmune disease

    EAE (experimental allergic encephalomyelitis), which is

    considered to be an animal model for multiple sclerosis in

    humans. This high vulnerability seems to be due to the high

    sympathetic reactivity in the proactive coping males [73].

    7. Concluding comments

    The concept of coping style has been frequently used in

    many studies and in an increasing number of species.

    However, only a few studies have a sufficiently broad

    approach to the individual behavioral and physiological

    characteristics and their consistency over time to be conclusive

    on the generality of the typology across species. Nevertheless,

    the available literature makes it tempting to consider

    the possibility that the distinctions between proactive and

    reactive coping styles represent rather fundamental biological

    trait characteristics that can be observed in many

    species. Species or strains may differ in their degree of

    differentiation depending on the strength of the selection

    pressure in nature or in the laboratory (genetic selection,

    domestication), but the extremes within a certain population

    differ generally in the same behavioral and physiological

    parameters and in the same direction. This may be partially

    due to the possibility that some of the characteristics share a

    common causal physiological basis. The few studies in feral

    populations suggest that the individual differentiation in

    coping style may be highly functional in population

    dynamics. Phenotypes within one species seem to have a

    differential fitness depending on the environmental conditions

    such as population density, social stability, food availability,

    etc. This idea is strengthened by the ecological

    studies of Wilson and co-authors [74,75]. Although these

    authors use the term shyness and boldness to indicate individual

    differences within a population, they argue that this

    differentiation represents adaptive individual differences in

    resource use and response to risk.

    Different coping styles are based on a differential use of

    various physiological and neuroendocrine mechanisms.

    The general impression is that these mechanisms vary in

    the same direction consistently over species. However, the

    degree of variation and the organizational level at which the

    variation is expressed may differ. It is likely that genetic

selection will artificially exaggerate trait characteristics up

to a level that may not normally be found in a natural

    population. There are certainly more dimensions that may

    account for the individual differentiation in behavior and

    physiology. It will be a major challenge for behavioral

    physiologists to refine the scales for individual differences

    in order to improve their predictive power for health,

    welfare and disease.

J.M. Koolhaas et al. / Neuroscience and Biobehavioral Reviews 23 (1999) 925–935

Little is currently known about the origin of coping styles. The few genetic selection lines that have been sufficiently characterized both behaviorally and physiologically indicate a strong genetic basis. Some recent studies suggest that perinatal factors may play a role as well. However, the use of genetic selection lines may overestimate the role of the genotype. Indeed, the fact that cross-fostering and embryo transfer did not affect aggressive behavior in our selection lines of mice indicates that these lines are devoid of any perinatal plasticity [76]. Unfortunately, the large number of recent studies on the influence of perinatal factors on adult stress reactivity rarely considers a sufficiently wide spectrum of behavioral and physiological characteristics to be conclusive about effects on coping styles as a coherent set of characteristics. The same holds for the influence of adult (social) experiences. Clearly, stress at an adult age may produce enduring changes in behavior and physiology; whether it changes coping style as a trait characteristic is virtually unknown. So far, we prefer to consider coping styles as rather stable trait characteristics originating from genetic factors in combination with epigenetic factors early in life. Experiences in adult life may alter the state of the animal for a long period of time, as expressed in some behavioral and physiological parameters, but they do not seem to change the coping style as a whole. This does not mean, however, that coping style is so rigid a characteristic that the individual can respond according to only one specific coping style in all situations. The absence of sawdust in the defensive burying test, for example, also elicits freezing behavior in the proactive animal. In this case, the environment was restrictive and consequently the animal did not follow its preferred coping style.

The available evidence so far confirms the idea that coping style helps to determine individual vulnerability to stress-related disease. First, the concept implies that animals may be differentially adapted to different environmental conditions. Second, the differences in physiological reactivity make the two coping styles vulnerable to different types of disease. Hence, in our view, psychopathology can only be understood as a function of the individual coping style and the environmental demands. Understanding this complex relationship is of crucial importance for human and animal health and welfare.