The trick to persuading people you’re right, according to experimental psychology

Feb 24 2018

Hugo Mercier in Quartz:

It’s an old trope that humans don’t like change—especially when it comes to their opinions. Ancient Greek philosophers complained about the masses refusing to heed their advice. Scholars spearheading the scientific revolution in the 17th century bemoaned their predecessors’ stubbornness. And today, everybody complains about their brother-in-law who won’t admit his political opinions are deeply misinformed.

Some findings in experimental psychology bolster this view of humans being pigheaded. In countless studies, psychologists have recorded people’s opinions on subjects from offal-eating to vaccination, exposed them to a message that critiqued their opinion, and then observed changes in their opinion. Some messages proved to be persuasive and others barely had an effect—but most surprisingly, some strong arguments backfired, with people moving their opinion away from the view advocated, rather than toward it.

This is a scary prospect. If being exposed to divergent political views ends up reinforcing entrenched opinions rather than altering them, there will be no end to the current increase in political polarization.

More here.

from 3quarksdaily http://ift.tt/2plv931

Oliver Sacks on What a Pacific Island Can Teach Us About Treating Ill People as Whole People

Feb 24 2018

On the redemptive acceptance of the terminally ill as “a living part of the community.”


“There is something in personal love, caresses, and the magnetic flood of sympathy and friendship, that does, in its way, more good than all the medicine in the world,” Walt Whitman wrote after volunteering as a nurse during the Civil War, in his insightful meditation on healthcare and the human spirit, from which contemporary medicine has much to learn.

I thought of Whitman during a beautiful, bittersweet weekend of poetry with a dear, terminally ill friend who is living longer and more fully than her grim diagnosis had originally prophesied, largely thanks to her deliberate decision to immerse herself in such a “magnetic flood of sympathy and friendship,” to spend weekends reading poetry with people who love her.

This life-expanding power of sympathetic affection is what Oliver Sacks (July 9, 1933–August 30, 2015), a Whitman of medicine, explores in a passage from his thoroughly fantastic 1997 book The Island of the Colorblind (public library) — the extraordinary travelogue of mind and body, which gave us Dr. Sacks on evolving our notions of normalcy to include the differently abled.

Oliver Sacks by Bill Hayes from Insomniac City: New York, Oliver, and Me

In one particularly moving portion, Dr. Sacks recounts his encounter with a sixty-year-old woman named Tomasa, born the same year as he, who for the past quarter of her life had been living with lytico — a progressively paralytic disease endemic to the island of Guam, in some cases resembling ALS and in others Parkinson’s.

He describes Tomasa’s state at the time of their meeting:

She had already had lytico for fifteen years when he met her; it has advanced steadily since, paralyzing not only her limbs but the muscles of breathing, speech, and swallowing. She is now near the end, but has continued to bear it with fortitude, to tolerate a nasogastric tube, frequent choking and aspiration, total dependence, with a calm, unfrightened fatalism. Indeed a fatality hangs over her entire family—her father suffered from lytico, as did two of her sisters, while two of her brothers have parkinsonism and dementia. Out of eight children in her generation, five have been afflicted by the lytico-bodig.

And yet what made Tomasa’s condition extraordinary against the backdrop of Western medicine is how it was met by the community and her loved ones. Dr. Sacks writes:

Family, friends, neighbors, come in at all hours, read the papers to her, tell her the news, give her all the local gossip. At Christmas, the Christmas tree is put by her couch; if there are local fiestas or picnics, people gather in her room. She may scarcely be able to move or speak, but she is still, in their eyes, a total person, still part of the family and community. She will remain at home, in the bosom of her family and community, in total consciousness and dignity and personhood, up to the day of her death, a death which cannot, now, be too far off.

[…]

This acceptance of the sick person as a person, a living part of the community, extends to those with chronic and incurable illness, who may, like Tomasa, have years of invalidism. I thought of my own patients with advanced ALS in New York, all in hospitals or nursing homes, with nasogastric tubes, suction apparatus, sometimes respirators, every sort of technical support — but very much alone, deliberately or unconsciously avoided by their relatives, who cannot bear to see them in this state, and almost prefer to think of them (as the hospital does) not as human beings, but as terminal medical cases on full “life support,” getting the best of modern medical care. Such patients are often avoided by doctors too, written, even by them, out of the book of life.

Complement The Island of the Colorblind with Dr. Sacks on death and destiny, the power of music, choosing empathy over vengeance, his stirring recollection of his largehearted life, and this rare glimpse of his creative process.


from Brain Pickings http://ift.tt/2oG3MTs

How perspective shapes reality

Feb 24 2018

Models are always imperfect, and the ones we choose greatly shape our experience

Picture Jupiter’s moons orbiting the planet. Do you see small dots bouncing back and forth in straight lines as if bound to Jupiter by springs, as Galileo once did? Or an overhead view of small bodies circling the planet in elliptical orbits? Or maybe you see Jupiter and its moons in helical motion, each body careening through space and time on its own set path? None of these models is false – each one presents a truth about reality. But as this short animation from MinutePhysics demonstrates, the models that we embrace significantly shape our perspective, and can lead us to neglect other, equally valid representations of reality.
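The point is easy to check numerically. Here is a minimal sketch, under toy assumptions (circular, coplanar orbits; round-number radii and periods, all invented for illustration), showing that the pictures are one motion expressed in different coordinates: an edge-on offset that bounces like a dot on a spring, a circle around Jupiter, and a compound, helix-like path once Jupiter’s own motion around the Sun is added back in.

```python
import math

# Toy parameters, assumed for illustration only.
R_JUP, T_JUP = 5.2, 4333.0   # Jupiter: orbital radius (AU), period (days)
R_MOON, T_MOON = 0.003, 7.2  # a moon: radius around Jupiter (AU), period (days)

def jupiter(t):
    """Jupiter's position in a Sun-centred frame at day t."""
    a = 2 * math.pi * t / T_JUP
    return R_JUP * math.cos(a), R_JUP * math.sin(a)

def moon_local(t):
    """The moon's position in a Jupiter-centred frame at day t."""
    a = 2 * math.pi * t / T_MOON
    return R_MOON * math.cos(a), R_MOON * math.sin(a)

for t in range(0, 15, 3):
    jx, jy = jupiter(t)
    mx, my = moon_local(t)
    # Galileo's edge-on view keeps only one coordinate: a dot
    # oscillating back and forth as if bound by a spring.
    print(f"day {t:2d}  edge-on offset {mx:+.4f} AU"
          f"  Jupiter-centred ({mx:+.4f}, {my:+.4f})"
          f"  Sun-centred ({jx + mx:+.4f}, {jy + my:+.4f})")
```

Same moon, same days: which column you read is which model you hold.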

from Aeon http://ift.tt/2pKQlSM

What lurks beneath

Feb 24 2018

Towards the end of 1892, ‘Miss Lucy R’, a pale and delicate English governess living in Vienna, made her way to the surgery of a young neurologist at Berggasse 19 for the treatment of a ‘suppurative rhinitis’. Miss Lucy was tired, in low spirits and complained of ‘a muzzy head’. And though she had lost her sense of smell, she was endlessly tormented by the smell of burnt pudding.

Sigmund Freud was 36 years old when he began attending to Miss Lucy. Trained at the Salpêtrière Hospital in Paris by the great neurologist Jean-Martin Charcot, Freud had already published monographs on hypnosis, epilepsy, and cocaine, which he continued to self-administer for ‘vitality and capacity for work’. Now he was applying his able and imaginative mind to the mystery of hysteria – whose bewildering symptoms were still considered hereditary ‘stigmata’. Upon examining the 30-year-old governess, he found her physically healthy, save for her nose’s insensitivity to touch. What struck him most about this case was the recurrent smell of burnt pudding.

Freud rejected the possibility of an organic explanation, even though acrid or burning smells are commonly associated with migraines, epilepsy and sinus infections. Instead, he deduced that Miss Lucy’s hallucination was a ‘memory-symbol’, a psychic trace standing in for a forgotten or repressed trauma, possibly related to sexual seduction or abuse. ‘What I suspect,’ he told her bluntly, ‘is that you are in love with your master, the director, perhaps without being aware of it yourself, and that secretly you are nursing the hope that you really will take the place of the mother.’

Studies on Hysteria (1895), co-authored with his physician friend and mentor Josef Breuer, would prove to be Freud’s breakthrough work. The book, based on Miss Lucy and four other cases, led him to two important insights. First, the physical symptoms of hysteria were caused when intolerable ‘ideas’ were evicted from the conscious mind. Second, the most effective antidote to hysteria’s psychic befuddlement, the best way of returning the patient to ‘ordinary unhappiness’, was what Breuer’s patient Anna O dubbed ‘the talking cure’. Forget the hypnotic overtures that Freud had been dabbling with since his time with Charcot at La Salpêtrière – from now on, free association coupled with attentive listening would be the proprietary salve of psychoanalysis.

Freud’s early writings on hysteria garnered little fanfare from his clinical peers. On reading Studies, Richard von Krafft-Ebing, the chair of psychiatry at the University of Vienna, dismissed the so-called theory of hysteria, including Freud’s contention that symptoms often came from childhood molestation or abuse, as ‘a scientific fairy tale’. Similar misgivings were voiced by laboratory psychologists working to place their discipline on empirical foundations. To use a sobriquet coined by the combative psychologist Edward Scripture, founder of the Yale Experimental Psychology Laboratory, Freud was an ‘armchair psychologist’, and his serial ruminations on ‘the unconscious’ – on dreams, on infantile sexuality, on jokes and parapraxes – reflected an equally unscientific ambition: that psychoanalysis would evolve as ‘a profession of lay curers of souls who need not be doctors and should not be priests’.

Four decades after treating Miss Lucy, Freud had come to permeate Western thought. He’d built a therapeutic empire by identifying the id, ego and superego as the forces of a ‘power struggle’ between instinct and morality ‘going on deep within us’. Yet as Freud’s cultural stock rose, his writings remained a testament to an elective blindness, showing imperial disregard for most of his philosophical precursors and peers. In all his major publications on the unconscious, from Studies through to Civilization and its Discontents (1930), Freud barely acknowledged the pioneering Pierre Janet, the French psychiatrist well-known for his theory that traumas caused personality to dissociate into conscious and unconscious parts. There was no mention of Friedrich Nietzsche, who held that the unconscious mind yielded the deepest truths, or of Arthur Schopenhauer, who identified will itself as unconscious. Freud all but ignored the experimental work on unconscious inference that Hermann von Helmholtz had undertaken since the 1840s. And he showed curmudgeonly disdain for the rival theories of his one-time acolytes and ultimate critics Alfred Adler (who put stock in feelings of inferiority) and Carl Jung (a proponent of archetypes inhabiting the unconscious).

In fact, despite Freud’s renown, several approaches to the unconscious had already been established before the advent of psychoanalysis. According to the Canadian psychiatrist and historian Henri Ellenberger, Freud & Co were merely the latest representatives of the ‘mythopoetic’, who sought reality in dreams and fantasies. Earlier theorists had regarded the unconscious as a secret recorder of impressions and sensations that lay beyond the narrow beam of consciousness, an incubator for creative, innovative and inspirational insights, and a gateway to the secondary or submerged personalities linked to somnambulism, hypnotism, hysteria and fugue states.

Other investigators grounded their work in mechanistic physiology, using the language of neurons and cortical excitation. By the mid-19th century, these researchers had much to say about the ‘latent’ and ‘automatic’ nature of acquired habits and actions, sparking a long-running debate on the whys and wherefores of ‘unconscious cerebration’. In The Physiology of Common Life (1860), the English philosopher George H Lewes observed:

In learning to speak a new language, to play on a musical instrument, or to perform unaccustomed movements, great difficulty is felt, because the channels through which each sensation has to pass have not become established; but no sooner has frequent repetition cut a pathway, than this difficulty vanishes; the actions become so automatic that they can be performed while the mind is otherwise engaged.

While Freud’s Gothic version of the unconscious was steeped in dream symbols, repressed wishes and hidden traumas, the unconscious as described by Lewes and other commentators chimed with the pragmatism found in works of Victorian psychology and self-help literature: the importance of turning useful actions into unthinking habits, of actively deciding upon one’s ‘second nature’. In his landmark textbook The Principles of Psychology (1890), William James insisted that almost all of our personal habits, including gait, voice and gesture, are fixed by the age of 20: ‘The more of the details of our daily life we can hand over to the effortless custody of automatism, the more our higher powers of mind will be set free for their own proper work.’

The first experimental demonstrations of this automatism came from an unlikely source. At the height of Britain’s table-tilting craze, in 1853, Michael Faraday set out to investigate why so many eminent and educated séance-goers attributed the uncanny movement of the table to electricity, magnetism or some other imponderable force. After commissioning a Regent Street instrument-maker to construct a table with levers that would secretly register downwards and oblique pressure from the hands of the sitters, Faraday’s trials at the Royal Institution furnished proof positive that table-turning was produced by ‘unconscious muscular action’.

Of course, Faraday’s ingenious ruse barely scratched the surface of the unconscious. More than a century after Gottfried Leibniz’s New Essays on Human Understanding had proposed that subliminal ‘petites perceptions’, not conscious will, accounted for most of our actions, mental philosophers and physiologists were still reaching for insights into the secret agency of the unconscious in everyday life. How exactly did a newly learned task become an unthinking habit? Why did a name or phrase that could not be recalled invariably spring to mind once one’s attention was directed elsewhere? Why were our emotions apparently, as the English physiologist William Carpenter wondered, so often ‘determined by circumstances of which the individual has no idea’?

With the rise of laboratory-based psychology in Germany, the questions that Leibniz had raised were effectively sidelined and the unconscious was, after a period of neglect, approached through an industrial doorway. Efficiency, not health or happiness, was the burning issue for a new generation of scientific psychologists who admonished the likes of Freud and James for ‘vague observation, endless speculation, and flimsy guesswork’.

As the new discipline travelled from Germany to the United States in the 1880s, former students of Wilhelm Wundt, who founded the first psychological laboratory at Leipzig, continued to attack the ‘mystical’ theorists of the ‘the unconscious’. Yet the subliminal processes involved in the acceleration of memory, perception and learning could not be ignored. ‘Rapid thought and quick action sometimes make all the difference between success and failure,’ Wundt’s erstwhile student, Edward Scripture, reminded his readers. ‘A man who can think and act in one half of the time that another man can, will accumulate mental or material capital twice as fast.’

Reaction-time research and learning-curve studies, which dominated scientific psychology in its early years, gave birth to a vast catalogue of tabulated statistics on the best methods of teaching telegraphy, shorthand, foreign languages, baseball and the piloting of aircraft. As Henry Ford inaugurated the era of mass production, opening the first moving assembly line at Highland Park in Michigan, American psychology was about to radically re-engineer the science of human behaviour. 

Inspired by Ivan Pavlov’s research on the conditioning of animal reflexes at the Institute of Experimental Medicine in St Petersburg, John Broadus Watson, a psychologist at Johns Hopkins University in Baltimore, published his landmark article ‘Psychology as the Behaviourist Views It’ (1913), in which he argued that environmental triggers were of far greater importance in shaping human action than heredity or constitution. Cementing his behaviourist claims with his infamous experiments on ‘Little Albert’, a nine-month-old boy who was conditioned to fear various objects, Watson shoved the hereditarians and depth psychologists aside. There was ‘no such thing as an inheritance of capacity, talent, temperament, mental constitution and characteristics’. No hidden unconscious, no latent potentialities. The message was simple: anyone could, given the right training, be what they wanted in the new Fordist US.

The rise of behavioural psychology did not put an end to the unconscious. In the land of the typing pool, department store and nickelodeon, habituation and automaticity were central to industrial and educational psychology. The possibility of uncovering unconscious mental processes drew its greatest impetus from technologies that could flash, flicker and hum below the threshold of perception, and instruments that could identify the physiological markers of sleep and its borderlands. For example, the invention of electroencephalography by Hans Berger in 1929 revealed something that Freud and his fellow dreamworkers had missed. There were two different types of nightly mental activity. In REM sleep, there was muscle and eye movement that suggested the tracking of inner imagery. In non-REM sleep, there was little in the way of visual imagery or sequential narrative.

Back in 1900, soon after Freud published The Interpretation of Dreams (1899), a little-known New York neurologist named James Corning was wondering whether an electrified consulting room might help him to treat hysteria and nervous irritability. Corning’s strategy involved playing ‘spiritual’ music and projecting swirling images onto a screen at the foot of the chaise-longue while his patients slept. Corning reported some success with his multi-sensory treatment – particularly when he played the role of dream narrator, whispering suggestions to his sleeping patients – until the experiments were abandoned because of their demands on time and money.

But other researchers carried the baton. At the Burghölzli Clinic in Zurich, Carl Jung, the future ‘crown prince’ of psychoanalysis, had employed a galvanometer, a device that registered changes in the electrical resistance of the skin via electrodes, to identify the shadowy ‘complex’, a cluster of unconscious feelings, beliefs and thoughts. At Johns Hopkins University, the psychologist Joseph Jastrow constructed a so-called ‘automatograph’ to study unconscious motor activity. And, in an experiment that was warmly received by Freud, the Viennese neurologist Otto Poetzl used a series of flashing images to help support the idea that dreams were composed from the debris of the ‘unremembered day’.

Inevitably, some of these psychological instruments found their way into the therapist’s consulting room. In the mid-1920s, Harold Lasswell, a psychoanalytically trained political scientist who went on to become a major figure in the study of propaganda, turned his office at the University of Chicago into a ‘therapeutic laboratory’. His patients were invited to repose on a couch and talk about their problems and preoccupations, all while tethered to a galvanometer and a forbidding bank of instruments that tracked their eye movements, shifting posture, blood pressure and pulse rate. Lasswell’s objective was to meld reams of physiological data with therapy’s viva voce insights into the unconscious. Predictably enough, his experiments were not well received by the Freudian community. As soon as the New York and Chicago psychoanalytic institutes caught wind of Lasswell’s trials, they issued a ban on such technological intrusions in the consulting room.

Beyond the couch and the lab, inventors and hobbyists were also contriving ways of tapping the unconscious. In 1927, as Lasswell undertook his first trials at the University of Chicago, a Czech-born entrepreneur brought ‘the Psycho Phone’ to market. Essentially a wax-cylinder phonograph connected to an alarm clock, Alois Benjamin Saliger’s ‘automatic suggestion machine’ claimed to ‘direct the vast powers of the unconscious mind during sleep’. Retailing at a little over $200, it played a dozen or so specially produced ‘affirmation records’ with titles such as ‘Prosperity’, ‘Normality’ and ‘Mating’. According to testimonials, the Psycho Phone could deliver profound life changes, including success in business, improved health, even charisma.

Ultimately, interest in sleep-learning waned. But mid-century pop psychology continued to sow ideas about the benefits and risks of subliminal influence and learning. L Ron Hubbard’s Dianetics: The Modern Science of Mental Health (1950) recommended purging oneself of the hidden traumas, or engrams, that were stored deep within the ‘reactive mind’ to achieve a ‘state of Clear’. Norman Vincent Peale’s The Power of Positive Thinking (1952) coached its readers to develop self-confidence through the same kind of affirmations and autosuggestions that Saliger had used. And Vance Packard’s The Hidden Persuaders (1957) exposed the subliminal skullduggery of US advertisers and marketeers.

Among the rollcall of experts featured in Packard’s overheated exposé was James Vicary, described as ‘the most genial and ingratiating of all the major figures operating independent depth-probing firms’. Vicary worked for a roster of blue-chip clients, and his dubious insights into the consumer mindset included the notion that women bake cakes as a surrogate for childbirth, and that the surfeit of consumer choice was triggering a ‘hypnoidal state’ in the supermarket aisles.

Just as Packard’s book was having its day in the sun, Vicary held a New York press conference to proclaim that his firm had boosted beverage and snack sales in a New Jersey movie theatre by subliminally flashing messages (‘Drink Coca-Cola’ and ‘Eat Popcorn’) on the screen. More than 45,000 moviegoers had been exposed to a secret hailstorm of 1/3,000th-second adverts flashed every five seconds, Vicary claimed, and the point-blank messages had increased sales of Coca-Cola and popcorn by more than 18 and 57 per cent respectively.

Vicary’s advertising experiments whipped up a media frenzy. Within weeks, a rival company, Precon, headed by the psychologist Robert Corrigan, announced a patent-pending subliminal device that also boasted educational and psychotherapeutic applications. Before the year was out, a deluge of complaints led members of Congress to demand assurances from the Federal Communications Commission regarding the use of ‘deceptive advertising’.

By 1958, it was apparent that Vicary’s subliminal research was a hoax, but the marketplace for tales of mind control, especially those of a scientific or literary bent, was unaffected by the exposé. Aldous Huxley’s Brave New World Revisited (1958), published in the aftermath of the subliminal advertising controversy, rushed to endorse the mythology of subliminal manipulation that he’d laid out 30 years earlier in his dystopian novel. Giving rather too much credence to tired propaganda that insisted that brainwashing had been taken to new Pavlovian extremes by the Chinese Communist Party, Huxley, an avid anti-Freudian, claimed that experimental psychologists had various modes of subconscious persuasion at their disposal, including drugs, hypnopaedia and subliminally flashed images. If nothing else, Huxley’s odd little diatribe proved that ‘the unconscious’ had become a factitious repository, home to a phantasmagoria of imagined spells and scientific half-truths.

The most credible insights into the unconscious were rather more measured, stemming from a loose collective of cognitive scientists who were beginning to map the mind as an information-processing system. At the forefront of this new movement was the Cambridge-educated psychologist Donald Broadbent, a specialist in human-machine interaction who had studied the work of pilots, air-traffic controllers and Post Office letter-sorters. His book Perception and Communication (1958) outlined the sundry processes involved in ‘filtering information from the senses’, ‘passing it through a limited capacity channel’, and then storing it. Broadbent’s dust-dry writing was no match for the soaring prose of Freud or James, but his flowcharts of the cognitive filters and buffers involved in handling complex information helped shine a light on what has become known as the cognitive unconscious – the ability to perform multiple tasks, to make inferences without deliberation, to demonstrate memory without recollection, to selectively attend to one phenomenon from a whole theatre of sensory possibilities.
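Broadbent’s model is concrete enough to sketch in code. The toy below is a loose paraphrase of his flowcharts, not his own formalism: stimuli arrive on parallel sensory channels, an all-or-none filter passes only the attended channel, and a limited-capacity store holds what gets through. The class, channel names and capacity are illustrative assumptions.

```python
from collections import deque

# A loose, illustrative paraphrase of Broadbent's 1958 filter model.
class FilterModel:
    def __init__(self, attended_channel, capacity=4):
        self.attended = attended_channel
        self.store = deque(maxlen=capacity)  # limited-capacity channel/store

    def perceive(self, stimuli):
        """stimuli: dict mapping a sensory channel to its message."""
        for channel, message in stimuli.items():
            if channel == self.attended:    # all-or-none selective filter
                self.store.append(message)
            # unattended input never reaches the store

model = FilterModel(attended_channel="right_ear")
model.perceive({"right_ear": "seven", "left_ear": "dog"})
model.perceive({"right_ear": "three", "left_ear": "fire"})
print(list(model.store))  # ['seven', 'three'] -- only the attended ear
```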

The eminent Freudian scholar Peter Gay notes that the ‘fundamentals’ of Freudian theory garnered ‘impressive experimental support’ in the mid-20th century. Yet while a generation of psychologists, many of whom had undergone analysis, used the language of psychoanalysis as a crutch for their studies of perception, learning, imitation and frustration, there was no substantive evidence of deep, dynamic processes. For example, Jerome Bruner and Leo Postman’s highly influential study of ‘perceptual defence’ – the Freudian theory that recognition of anxiety-provoking stimuli in the laboratory was delayed due to repression – missed a far simpler explanation: taboo words were less common than their neutral counterparts, and it was expectation that accounted for their participants’ delayed recognition.

The subterranean channels explored by cognitive psychology have proved more revealing. Over the past three decades, experiments in implicit memory and priming have hinted at the existence of different memory systems. The Princeton psychologist Anne Treisman and others have revised Broadbent’s initial filter-theory of attention, proposing that unattended stimuli are not completely filtered but attenuated. And there is now a vast literature on ‘mere exposure’, which suggests that repeated exposure to a stimulus, above or below the threshold of conscious awareness, plays a large part in the formation of personal preferences.
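Treisman’s revision amounts to a one-line change to the sketch above: the gate becomes a volume knob, so a sufficiently salient unattended item (one’s own name, say) can still break through. The weights and threshold below are invented for the example, not drawn from her papers.

```python
# An illustrative tweak in the spirit of Treisman's attenuation theory:
# unattended channels are weakened rather than silenced. All numbers
# are made up for illustration.
ATTENUATION = 0.2   # unattended input keeps 20% of its signal strength
THRESHOLD = 0.5     # strength required to reach conscious awareness
SALIENCE = {"own_name": 3.0, "dog": 1.0, "seven": 1.0}

def reaches_awareness(item, attended):
    strength = SALIENCE[item] * (1.0 if attended else ATTENUATION)
    return strength >= THRESHOLD

print(reaches_awareness("dog", attended=False))       # False: attenuated away
print(reaches_awareness("own_name", attended=False))  # True: breaks through
```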

Freud’s ghost might still haunt a small corner of the modern-day psychological laboratory, but the lexicon of censorship and repression has not retained its explanatory currency. Studies of waking and sleeping unconscious processes suggest that deception is not, and has never been, the second self’s true forte. As the mathematician and philosopher Alfred North Whitehead sagely observed in the early days of psychoanalysis, the unconscious is essentially an enabler, quietly rolling up its sleeves to expand ‘the number of important operations that we can perform without thinking of them’.

from Aeon http://ift.tt/2pAF4oh

The Great Humanistic Philosopher and Psychologist Erich Fromm on What Self-Love Really Means and Why It Is the Basic Condition for a Sane Society

Feb 24 2018

“In the experience of love lies the only answer to being human, lies sanity.”


“We are well advised to keep on nodding terms with the people we used to be, whether we find them attractive company or not,” Joan Didion famously wrote in making her case for the value of keeping a notebook. But many of us frequently find it hard enough to be on nodding terms even with the people we currently are. “We have to imagine a world in which celebration is less suspect than criticism,” psychoanalyst Adam Phillips wrote in contemplating the perils of self-criticism and how to break free from the internal critics that enslave us. And yet can we even imagine self-celebration — do we even know what it looks like — if we are so blindly bedeviled by self-criticism? Can we, in other words, celebrate what we cannot accept and therefore cannot love?

How to break this Möbius strip of self-rejection is what the great humanistic philosopher and psychologist Erich Fromm (March 23, 1900–March 18, 1980) explores in a portion of his timeless 1956 treatise The Sane Society (public library) — the source of Fromm’s increasingly timely wisdom on our best shot at saving ourselves from ourselves.

Erich Fromm

Fromm frames love as what he calls “the productive orientation” of the psyche, an “active and creative relatedness of man to his fellow man, to himself and to nature.” He writes:

In the realm of feeling, the productive orientation is expressed in love, which is the experience of union with another person, with all men, and with nature, under the condition of retaining one’s sense of integrity and independence. In the experience of love the paradox happens that two people become one, and remain two at the same time. Love in this sense is never restricted to one person. If I can love only one person, and nobody else, if my love for one person makes me more alienated and distant from my fellow man, I may be attached to this person in any number of ways, yet I do not love.

Art by Olivier Tallec from This Is a Poem That Heals Fish by Jean-Pierre Simeón

Just as self-compassion is the seedbed of compassion, Fromm argues that such all-inclusive love must begin with self-love:

If I can say, “I love you,” I say, “I love in you all of humanity, all that is alive; I love in you also myself.” Self-love, in this sense, is the opposite of selfishness. The latter is actually a greedy concern with oneself which springs from and compensates for the lack of genuine love for oneself. Love, paradoxically, makes me more independent because it makes me stronger and happier — yet it makes me one with the loved person to the extent that individuality seems to be extinguished for the moment. In loving I experience “I am you,” you — the loved person, you — the stranger, you — everything alive. In the experience of love lies the only answer to being human, lies sanity.

Fromm is careful to point out that in this “productive orientation,” love is not a passive abstraction but an active responsibility. Shortly before Martin Luther King, Jr. made his abiding case for the respectful and responsible love of agape, Fromm writes:

Productive love always implies a syndrome of attitudes; that of care, responsibility, respect and knowledge. If I love, I care — that is, I am actively concerned with the other person’s growth and happiness; I am not a spectator. I am responsible, that is, I respond to his needs, to those he can express and more so to those he cannot or does not express. I respect him, that is (according to the original meaning of re-spicere) I look at him as he is, objectively and not distorted by my wishes and fears. I know him, I have penetrated through his surface to the core of his being and related myself to him from my core, from the center, as against the periphery, of my being.

The Sane Society is an enormously insightful read in its totality. Complement it with Fromm on the art of living, the art of loving, and how to transcend the common laziness of optimism and pessimism, then revisit this animated primer on the difficult art of self-compassion.


from Brain Pickings http://ift.tt/2ozidK9

Whalevolution

Feb 24 2018

Watch as the whale becomes itself: slowly, slowly, from land to sea, through deep time

Descending from creatures that were terrestrial and then amphibious before they were aquatic, cetaceans (whales, dolphins and porpoises) possess some of the animal kingdom’s most fascinating evolutionary histories. This video from the UK artist Jordan Collver traces the evolution of the sperm whale from the amphibious Pakicetus to its present form. After depicting six distinct points in evolutionary history, Collver morphed his still illustrations into one another, incrementally, over ten minutes. The resulting animation, Whalevolution, emphasises that a single strand of evolutionary history isn’t characterised by a series of distinct species but rather by what Charles Darwin called an ‘infinitude of connecting links’. You can find an abridged 25-second version of the animation here, and Collver’s six original illustrations here.

from Aeon http://ift.tt/2qX7LfK

Love and Will: The Great Existential Psychologist Rollo May on Apathy, Transcendence, and Our Human Task in Times of Radical Transition

Feb 24 2018

“In every act of love and will — and in the long run they are both present in each genuine act — we mold ourselves and our world simultaneously. This is what it means to embrace the future.”


“Real generosity toward the future lies in giving all to the present,” Albert Camus wrote in his 1951 meditation on what it really means to be a rebel. At the heart of this sentiment are the two complementary forces of love and will, for a loving regard for the future requires a willful commitment to rising to the problems of the present and transcending its tumults — a dependency as true in our personal lives as it is in our political lives, and one which demands a capacity for withstanding uncertainty.

That essential interrelation is what the great existential psychologist Rollo May (April 21, 1909–October 22, 1994) examined nearly two decades later in his influential 1969 book Love and Will (public library).

Rollo May

Drawing on his quarter-century experience as a psychoanalytic therapist working with people trying to wrest from their inner turmoil an existential serenity, May writes:

Love and will are interdependent and belong together. Both are conjunctive processes of being — a reaching out to influence others, molding, forming, creating the consciousness of the other. But this is only possible, in an inner sense, if one opens oneself at the same time to the influence of the other.

Writing half a century ago, May examines the consequence of warping the balance of love and will, speaking with astonishing precision to and of our own time:

The fruits of future values will be able to grow only after they are sown by the values of our history. In this transitional [time], when the full results of our bankruptcy of inner values is brought home to us, I believe it is especially important that we seek the source of love and will.

[…]

The striking thing about love and will in our day is that, whereas in the past they were always held up to us as the answer to life’s predicaments, they have now themselves become the problem. It is always true that love and will become more difficult in a transitional age; and ours is an era of radical transition. The old myths and symbols by which we oriented ourselves are gone, anxiety is rampant; we cling to each other and try to persuade ourselves that what we feel is love; we do not will because we are afraid that if we choose one thing or one person we’ll lose the other, and we are too insecure to take that chance. The bottom then drops out of the conjunctive emotions and processes — of which love and will are the two foremost examples. The individual is forced to turn inward; he becomes obsessed with the new form of the problem of identity, namely, Even-if-I-know-who-I-am, I-have-no-significance. I am unable to influence others. The next step is apathy. And the step following that is violence. For no human being can stand the perpetually numbing experience of his own powerlessness.

Art from Thin Slices of Anxiety: Observations and Advice to Ease a Worried Mind by Catherine Lepage

May argues that during times of radical transition, when the societal structures we’ve used as external guides begin to fall apart, we are apt to turn inward and rely on our own consciousness. Such times, therefore, become a critical testing ground for how well we are able to wield the complementary forces of love and will. This grand personal responsibility can swell into a source of anxiety which, upon reaching its most extreme and unbearable limit, festers into apathy — when we continually face dangers we feel powerless to overcome, we resort to this final self-defense mechanism of shutting down both love and will. And yet in these two capacities lies the sole mechanism of our salvation and sanity. May writes:

The interrelation of love and will inheres in the fact that both terms describe a person in the process of reaching out, moving toward the world, seeking to affect others or the inanimate world, and opening himself to be affected; molding, forming, relating to the world or requiring that it relate to him. This is why love and will are so difficult in an age of transition, when all the familiar mooring places are gone.

In a sentiment that parallels Hannah Arendt’s insight into how bureaucracy breeds violence, May adds:

There is a dialectical relationship between apathy and violence. To live in apathy provokes violence; and … violence promotes apathy. Violence is the ultimate destructive substitute which surges in to fill the vacuum where there is no relatedness… When inward life dries up, when feeling decreases and apathy increases, when one cannot affect or even genuinely touch another person, violence flares up as a daimonic necessity for contact, a mad drive forcing touch in the most direct way possible.

[…]

Apathy is the withdrawal of will and love … a suspension of commitment. It is necessary in times of stress and turmoil; and the present great quantity of stimuli is a form of stress. But apathy … leads to emptiness and makes one less able to defend oneself, less able to survive. However understandable the state we are describing by the term apathy is, it is also essential that we seek to find a new basis for the love and will which have been its chief casualties.

Art by Shaun Tan for a special edition of the Brothers Grimm fairy tales

May examines the antidote to apathy through the lens of the three central elements of love and will — eros, the ancient Greek manifestation of love that drives toward higher forms of being and relationship; the daimonic, which represents the intermediary between the divine and the mortal; and intentionality, the imagination’s drive to transmute individual impulses into interpersonal experience. He writes:

As the function of eros, both within us and in the universe itself, is to draw us toward the ideal forms, it elicits in us the capacity to reach out, to let ourselves be grasped, to preform and mold the future. It is the self-conscious capacity to be responsive to what might be. The daimonic, that shadowy side which, in modern society, inhabits the underground realms as well as the transcendent realms of eros, demands integration from us on the personal dimension of consciousness. Intentionality is an imaginative attention which underlies our intentions and informs our actions. It is the capacity to participate in knowing or performing the art proleptically — that is, trying it on for size, performing it in imagination. Each of these emphases points toward a deeper dimension in human beings. Each requires a participation from us, an openness, a capacity to give of ourselves and receive into ourselves. And each is an inseparable part of the basis of love and will.

With an eye to the future, which is now our present, May considers the pathway to finding such a fertile basis of love and will:

What is necessary … is a new consciousness in which the depth and meaning of personal relationship will occupy a central place. Such an embracing consciousness is always required in an age of radical transition. Lacking external guides, we shift our morality inward; there is a new demand upon the individual of personal responsibility. We are required to discover on a deeper level what it means to be human.

Echoing Alfred Kazin’s insistence on the necessity of embracing our contradictions, May adds:

The only way of resolving — in contrast to solving — the questions is to transform them by means of deeper and wider dimensions of consciousness. The problems must be embraced in their full meaning, the antinomies resolved even with their contradictions. They must be built upon; and out of this will arise a new level of consciousness.

In a sentiment of astonishing pertinence to our own tumultuous and transitional time, May frames our highest responsibility to ourselves and to the future:

The new age which knocks upon the door is as yet unknown, seen only through beclouded windows. We get only hints of the new continent into which we are galloping: foolhardy are those who attempt to blueprint it, silly those who attempt to forecast it, and absurd those who irresponsibly try to toss it off by saying that the “new man will like his new world just as we like ours.” … But whatever the new world will be, we do not choose to back into it. Our human responsibility is to find a plane of consciousness which will be adequate to it and will fill the vast impersonal emptiness of our technology with human meaning.

Echoing Bertrand Russell’s abiding assertion that “not all wisdom is new, nor is all folly out of date,” May adds:

We stand on the peak of the consciousness of previous ages, and their wisdom is available to us. History — that selective treasure house of the past which each age bequeaths to those that follow — has formed us in the present so that we may embrace the future. What does it matter if our insights, the new forms which play around the fringes of our minds, always lead us into virginal land where, like it or not, we stand on strange and bewildering ground. The only way out is ahead, and our choice is whether we shall cringe from it or affirm it.

For in every act of love and will — and in the long run they are both present in each genuine act — we mold ourselves and our world simultaneously. This is what it means to embrace the future.

Love and Will is an illuminating read in its totality. Complement it with the great humanistic philosopher and psychologist Erich Fromm, a contemporary of May’s, on the art of living, Nobel-winning writer Toni Morrison on the creative person’s task in volatile times, and philosopher Martha Nussbaum on how to live with our human fragility.


from Brain Pickings http://ift.tt/2quaKwf

Neuroscientists working on the “hard problem” of consciousness may be doomed to fail. But there is meaning — even pleasure — in the Sisyphean task

Feb 24 2018

Early in my neurology residency, a 50-year-old woman insisted on being hospitalized for protection from the FBI, which she believed was spying on her via the TV set in her bedroom. The woman’s physical examination, lab tests, EEGs, scans, and formal neuropsychological testing revealed nothing unusual. Other than being visibly terrified of the TV monitor in the ward solarium, she had no other psychiatric symptoms or past psychiatric history. Neither did anyone else in her family, though she had no recollection of her mother, who had died when the patient was only 2.

The psychiatry consultant favored the early childhood loss of her mother as a potential cause of a mid-life major depressive reaction. The attending neurologist was suspicious of an as yet undetectable degenerative brain disease, though he couldn’t be more specific. We residents were equally divided between the two possibilities.

Fortunately an intern, a super-sleuth more interested in data than speculation, was able to locate her parents’ death certificates. The patient’s mother had died in a state hospital of Huntington’s disease—a genetic degenerative brain disease. (At that time such illnesses were often kept secret from the rest of the family.) Case solved. The patient was a textbook example of psychotic behavior preceding the cognitive decline and movement disorders characteristic of Huntington’s disease.

WHERE’S THE MIND?: Wilder Penfield spent decades studying how brains produce the experience of consciousness, but concluded “There is no good evidence, in spite of new methods, that the brain alone can carry out the work that the mind does.” Image: Montreal Neurological Institute

As a fledgling neurologist, I’d already seen a wide variety of strange mental states arising out of physical diseases. But on this particular day, I couldn’t wrap my mind around a gene mutation generating an isolated feeling of being spied on by the FBI. How could a localized excess of amino acids in a segment of DNA be transformed into paranoia?

Though I didn’t know it at the time, I had run headlong into the “hard problem of consciousness,” the enigma of how physical brain mechanisms create purely subjective mental states. In the subsequent 50 years, what was once fodder for neurologists’ late night speculations has mushroomed into the pre-eminent question in the philosophy of mind. As an intellectual challenge, there is no equal to wondering how subatomic particles, mindless cells, synapses, and neurotransmitters create the experience of red, the beauty of a sunset, the euphoria of lust, the transcendence of music, or in this case, intractable paranoia.

Neuroscientists have long known which general areas of the brain and their connections are necessary for the state of consciousness. Having observed the effects of both localized and generalized brain insults such as anoxia and anesthesia, none of us seriously doubts that consciousness arises from discrete brain mechanisms. Because these mechanisms are consistent with general biological principles, it’s likely that, with further technical advances, we will uncover how the brain generates consciousness.

However, such knowledge doesn’t translate into an explanation for the what of consciousness—that state of awareness of one’s surroundings and self, the experience of one’s feelings and thoughts. Imagine a hypothetical where you could mix nine parts oxytocin, 17 parts serotonin, and 11 parts dopamine into a solution that would make 100 percent of people feel a sense of infatuation 100 percent of the time. Knowing the precise chemical trigger for the sensation of infatuation (the how) tells you little about the nature of the resulting feeling (the what).

Over my career, I’ve gathered a neurologist’s working knowledge of the physiology of sensations. I realize neuroscientists have identified neural correlates for emotional responses. Yet I remain ignorant of what sensations and responses are at the level of experience. I know the brain creates a sense of self, but that tells me little about the nature of the sensation of “I-ness.” If the self is a brain-generated construct, I’m still left wondering who or what is experiencing the illusion of being me. Similarly, if the feeling of agency is an illusion, as some philosophers of mind insist, that doesn’t help me understand the essence of my experience of willfully typing this sentence.

Slowly, and with much resistance, it’s dawned on me that the pursuit of the nature of consciousness, no matter how cleverly couched in scientific language, is more like metaphysics and theology. It is driven by the same urges that made us dream up gods and demons, souls and afterlife. The human urge to understand ourselves is eternal, and how we frame our musings always depends upon prevailing cultural mythology. In a scientific era, we should expect philosophical and theological ruminations to be couched in the language of physical processes. We argue by inference and analogy, dragging explanations from other areas of science such as quantum physics, complexity, information theory, and math into a subjective domain. Theories of consciousness are how we wish to see ourselves in the world, and how we wish the world might be.

My first hint of the interaction between religious feelings and theories of consciousness came from Montreal Neurological Institute neurosurgeon Wilder Penfield’s 1975 book, Mystery of the Mind: A Critical Study of Consciousness and the Human Brain. One of the great men of modern neuroscience, Penfield spent several decades stimulating the brains of conscious, non-anesthetized patients and noting their descriptions of the resulting mental states, including long-lost bits of memory, dreamy states, déjà vu, feelings of strangeness, and otherworldliness. What was most startling about Penfield’s work was his demonstration that sensations that normally qualify how we feel about our thoughts can occur in the absence of any conscious thought. For example, he could elicit feelings of familiarity and strangeness without the patient thinking of anything to which the feeling might apply. His ability to spontaneously evoke pure mental states was proof positive that these states arise from basic brain mechanisms.

And yet, here’s Penfield’s conclusion to his end-of-career magnum opus on the nature of the mind: “There is no good evidence, in spite of new methods, that the brain alone can carry out the work that the mind does.” How is this possible? How could a man who had single-handedly elicited so much of the fabric of subjective states of mind decide that there was something to the mind beyond what the brain did?

In the last paragraph of his book, Penfield explains, “In ordinary conversation, the ‘mind’ and ‘the spirit of man’ are taken to be the same. I was brought up in a Christian family and I have always believed, since I first considered the matter … that there is a grand design in which all conscious individuals play a role … Since a final conclusion … is not likely to come before the youngest reader of this book dies, it behooves each one of us to adopt for himself a personal assumption (belief, religion), and a way of life without waiting for a final word from science on the nature of man’s mind.”

Front and center is Penfield’s observation that, in ordinary conversation, the mind is synonymous with the spirit of man. Further, he admits that, in the absence of scientific evidence, all opinions about the mind are in the realm of belief and religion. If Penfield is even partially correct, we shouldn’t be surprised that any theory of the “what” of consciousness would be either intentionally or subliminally infused with one’s metaphysics and religious beliefs.

To see how this might work, take a page from Penfield’s brain stimulation studies where he demonstrates that the mental sensations of consciousness can occur independently from any thought that they seem to qualify. For instance, conceptualize thought as a mental calculation and a visceral sense of the calculation. If you add 3 + 3, you compute 6, and simultaneously have the feeling that 6 is the correct answer. Thoughts feel right, wrong, strange, beautiful, wondrous, reasonable, far-fetched, brilliant, or stupid. Collectively these widely disparate mental sensations constitute much of the contents of consciousness. But we have no control over the mental sensations that color our thoughts. No one can will a sense of understanding or the joy of an a-ha! moment. We don’t tell ourselves to make an idea feel appealing; it just is. Yet these sensations determine the direction of our thoughts. If a thought feels irrelevant, we ignore it. If it feels promising, we pursue it. Our lines of reasoning are predicated upon how thoughts feel.

Shortly after reading Penfield’s book, I had the good fortune to spend a weekend with theoretical physicist David Bohm. Bohm took a great deal of time arguing for a deeper and interconnected hidden reality (his theory of implicate order). Though I had difficulty following his quantum theory-based explanations, I vividly remember him advising me that the present-day scientific approach of studying parts rather than the whole could never lead to any final answers about the nature of consciousness. According to him, all is inseparable and no part can be examined in isolation.

In an interview in which he was asked to justify his unorthodox view of scientific method, Bohm responded, “My own interest in science is not entirely separate from what is behind an interest in religion or in philosophy—that is to understand the whole of the universe, the whole of matter, and how we originate.” If we were reading Bohm’s argument as a literary text, we would factor in his Jewish upbringing, his tragic mistreatment during the McCarthy era, the lack of general acceptance of his idiosyncratic take on quantum physics, his bouts of depression, and the close relationship between his scientific and religious interests.

Many of today’s myriad explanations for how consciousness arises are compelling. But once we enter the arena of the nature of consciousness, there are no outright winners.

Christof Koch, the chief scientific officer of the Allen Institute for Brain Science in Seattle, explains that a “system is conscious if there’s a certain type of complexity. And we live in a universe where certain systems have consciousness. It’s inherent in the design of the universe.”

According to Daniel Dennett, professor of philosophy at Tufts University and author of Consciousness Explained and many other books on science and philosophy, consciousness is nothing more than a “user-illusion” arising out of underlying brain mechanisms. He argues that believing consciousness plays a major role in our thoughts and actions is the biological equivalent of being duped into believing that the icons of a smartphone app are doing the work of the underlying computer programs represented by the icons. He feels no need to postulate any additional physical component to explain the intrinsic qualities of our subjective experience.

Meanwhile, Max Tegmark, a theoretical physicist at the Massachusetts Institute of Technology, tells us consciousness “is how information feels when it is being processed in certain very complex ways.” He writes that “external reality is completely described by mathematics. If everything is mathematical, then, in principle, everything is understandable.” Rudolph E. Tanzi, a professor of neurology at Harvard University, admits, “To me the primal basis of existence is awareness and everything including ourselves and our brains are products of awareness.” He adds, “As a responsible scientist, one hypothesis which should be tested is that memory is stored outside the brain in a sea of consciousness.”

Each argument, taken in isolation, seems logical and internally consistent, yet each is at odds with the others. For me, the thread that connects these disparate viewpoints isn’t logic and evidence, but their overall intent. Belief without evidence is Richard Dawkins’ definition of faith: “Faith is belief in spite of, even perhaps because of, the lack of evidence.” These arguments are best read as differing expressions of personal faith.

For his part, Dennett is an outspoken atheist and fervent critic of the excesses of religion. “I have absolutely no doubt that secular and scientific vision is right and deserves to be endorsed by everybody, and as we have seen over the last few thousand years, superstitious and religious doctrines will just have to give way.” As the basic premise of atheism is to deny that for which there is no objective evidence, he is forced to avoid directly considering the nature of purely subjective phenomena. Instead he settles on describing the contents of consciousness as illusions, resulting in the circularity of using one kind of mental state (an illusion) to characterize the general nature of mental states.

The problem compounds itself. Dennett is fond of pointing out (correctly) that there is no physical manifestation of “I,” no ghost in the machine or little homunculus that witnesses and experiences the goings-on in the brain. If so, we’re still faced with the question: who or what, if anything, is experiencing consciousness? All roads lead back to the hard problem of consciousness.

Though tacitly agreeing with those who contend that we don’t yet understand the nature of consciousness, Dennett argues that we are making progress. “We haven’t yet succeeded in fully conceiving how meaning could exist in a material world … or how consciousness works, but we’ve made progress: The questions we’re posing and addressing now are better than the questions of yesteryear. We’re hot on the trail of the answers.”

By contrast, Koch is upfront in correlating his religious upbringing with his life-long pursuit of the nature of consciousness. Raised as a Catholic, he describes being torn between two contradictory views of the world—the Sunday view reflected by his family and church, and the weekday view as reflected in his work as a scientist (the sacred and the profane).

In an interview with Nautilus, Koch said, “For reasons I don’t understand and don’t comprehend, I find myself in a universe that had to become conscious, reflecting upon itself.” He added, “The God I now believe in is closer to the God of Spinoza than it is to Michelangelo’s paintings or the God of the Old Testament, a god that resides in this mystical notion of all-nothingness.” Koch admitted, “I’m not a mystic. I’m a scientist, but this is a feeling I have.” In short, Koch exemplifies a truth seldom admitted—that mental states such as a mystical feeling shape how one thinks about and goes about studying the universe, including mental states such as consciousness.

Both Dennett and Koch have spent a lifetime considering the problem of consciousness; though contradictory, each point of view has its own appeal. I appreciate much of Dennett’s and Koch’s explorations in the same way that I can mull over Aquinas and Spinoza without necessarily agreeing with either. One can enjoy the pursuit without believing in or expecting answers. After all these years without any personal progress, I remain moved by the essential nature of the quest, even if it translates into Sisyphus endlessly pushing his rock up the hill.

The spectacular advances of modern science have generated a mindset that makes potential limits to scientific inquiry intuitively difficult to grasp. Again and again we are given examples of seemingly insurmountable problems that yield to previously unimaginable answers. Just as some physicists believe we will one day have a Theory of Everything, many cognitive scientists believe that consciousness, like any physical property, can be unraveled. Overlooked in this optimism is the ultimate barrier: The nature of consciousness is in the mind of the beholder, not in the eye of the observer.

It is likely that science will tell us how consciousness occurs. But that’s it. Although the what of consciousness is beyond direct inquiry, the urge to explain will persist. It is who we are and what we do.

Robert A. Burton, a former chief of neurology at the University of California, San Francisco, Medical Center at Mount Zion, is the author of On Being Certain: Believing You Are Right Even When You’re Not, and A Skeptic’s Guide to the Mind: What Neuroscience Can and Cannot Tell Us About Ourselves.

from Arts & Letters Daily http://ift.tt/2ru8FRN
http://ift.tt/2oQn4VC

Jonathan Haidt is famous for explaining how liberals and conservatives think. Now he’s wagering that social psychology can calm the campus culture war

Feb 24 2018

http://ift.tt/2sAi8EC

Jonathan Haidt, a social psychologist at NYU’s business school, is a founder of Heterodox Academy, which promotes wider “viewpoint diversity” in academe.

On a February morning in Washington, a hotel ballroom is packed with people eager to hear Jonathan Haidt explain what’s wrong with higher education. His talk is part of the International Students for Liberty Conference, which has attracted 1,700 attendees, mostly young libertarians, to a weekend of sessions with titles like “Stereotyped 101,” “Advancing Liberty Around the World,” and “Beer Is Freedom.” Before he’s introduced, Haidt, a social psychologist at New York University’s Stern School of Business, stands at the front of the room, tall and thin, dressed in a dark suit and white shirt. As people gather around, a brown-haired woman in a gray skirt chats him up before rushing off. “Oh, my God,” she says to a friend, “I just shook Jonathan Haidt’s hand!”

Haidt’s renown is driven by bold declarations like those in a 2015 cover story in The Atlantic titled “The Coddling of the American Mind.” Written with Greg Lukianoff, president and chief executive of the Foundation for Individual Rights in Education (FIRE), the article took the rise of microaggressions, trigger warnings, and safe spaces as evidence that colleges are nurturing a hypersensitive mind-set among students that “will damage their careers and friendships, along with their mental health.” The article, which has been viewed by nearly six million people, catapulted Haidt, already a prominent scholar and best-selling author, into a new role: gadfly of the campus culture wars.

“We have this image of college as a bucolic time discussing ideas guided by learned faculty,” Haidt tells the crowd, “but weird stuff is happening.” He points his clicker and calls up a slide. On one side is a photograph of students marching for free speech in the 1960s at the University of California at Berkeley. On the other, a flaming pile of rubble from a protest in February to prevent the alt-right troll Milo Yiannopoulos from speaking at Berkeley. “The extremes, the far left and the far right, are being” — Haidt pauses a beat — “well, I’d say bizarre and crazy, but first, that would be a microaggression” — a roar of laughter from the audience — “and second, it would not be true. What’s happening isn’t crazy. It’s straight moral psychology.”

For the next hour, Haidt roams the stage, TED-talk style (he’s delivered four), and explains what he calls “the new moral culture spreading on many college campuses.” It is a culture, he says, that values victims, prioritizes emotional safety, silences dissent, and distorts scholarship. It is a culture that undermines the university’s traditional mission to pursue truth — “veritas” is right there on the seals of Harvard and Yale — in favor of a new mission: the pursuit of social justice. It is a culture that Haidt believes is fueled by three factors: political polarization, the rise of social media, and a lack of ideological diversity in the professoriate.

Through the 1980s, Haidt says at the conference, liberals outnumbered conservatives on college faculties by about two to one. In his own field, psychology, a left/right disparity of four to one existed until the mid-1990s. “That’s not really a problem as long as there are some people on the right who can raise objections if someone says something that’s just overtly partisan and isn’t backed up by the facts,” he says. Today, however, precious few conservatives are in psychology departments. “If you say something pleasing to the left about race, gender, immigration, or any other issue, it’s likely to get waved through to publication,” says Haidt. “People won’t ask hard questions. They like it. They want to believe it.” This represents “a real research-legitimacy problem in the social sciences.”

Solving that problem has become a crusade for Haidt. In 2015 he co-founded Heterodox Academy to advocate for what its mission statement calls “viewpoint diversity.” The organization began as an online salon frequented by a few colleagues, but after high-profile student protests at the University of Missouri, Yale, and elsewhere, the ranks began to swell. The group now has more than 800 members, primarily tenured or tenure-track faculty. The active ones conduct research and distill their findings into blog posts, which has made the Heterodox Academy website a clearinghouse for data and views on academic bias, scientific integrity, and the latest campus free-speech flaps. Last year a quarter-million people visited the website.

Haidt has a team of three staffers with him at NYU and three part-timers who work on a more ad hoc basis. Initial support for Heterodox Academy came from two small donors, the Richard Lounsbery Foundation, best known for its support of the sciences, and the Achelis and Bodman Foundation, a tradition-minded backer of the arts and charter schools in New York City. This year the group received substantial support from Paul Singer, a hedge-fund billionaire active in Republican politics, which has allowed it to work with a Washington-area branding and public-relations firm. Haidt is cultivating a center-left donor and hopes to use those funds to rent office space and hire an executive director.

In the meantime, he fills that role. He’s an active presence on social media, with more than 50,000 Twitter followers, and he’s often quoted in major newspapers explaining the campus culture wars. The Wall Street Journal opinion section has published a flattering profile as well as several of his op-eds. When an appearance by Charles Murray led to protests and violence at Middlebury College, Haidt was booked on Charlie Rose to offer insight. He’s in such demand that he charges $30,000 per speech. At the Students for Liberty conference, Haidt explained that his activism is driven by a belief that the stakes could not be higher: “This could be the beginning of the end for liberal democracy.”

His critics, of whom there are many, see his efforts to shift the conversation about diversity away from race and gender and toward politics as at best obtuse and at worst hostile. They say his absolutist stance on free speech is at odds with the need for a diverse and inclusive university. They say he lends a social-scientific sheen to old conservative arguments. They say his penchant for skewering the left, coupled with his willingness to engage the right, is suspect and creates confusion about where his sympathies actually lie. They say he’s either a closet conservative or a useful idiot for the right.

Haidt acknowledges that, especially in the wake of Donald Trump’s election, he risks sounding like a guy in Berlin in 1933 insisting that wisdom is to be found on both sides of the political spectrum. “The election has ramped up emotions so strongly that any effort to say, ‘You really need to have more conservatives in the university, and you need to listen to them’ strikes some people as immoral.” On the other hand, he says, the election has forced a reckoning. More academics are saying, “Wow, we really are in a bubble. We must get out of this bubble.”

Haidt’s corner office at NYU’s business school has many bookshelves, a large, L-shaped desk, a dorm-size fridge, and a gray fainting sofa, on which he takes a daily 35-minute nap. Behind his desk is a window that looks out on the apartment building where he lives with his wife and two children. (Asked what brought him to NYU after 17 years at the University of Virginia, he says access to the media and subsidized rent.)

Campus politics can make for strange bedfellows, and for Haidt these are strange days. A few hours before we meet, he was a guest on Glenn Beck’s radio show. They exchanged friendly banter. On the show, Haidt called the former Fox News host one of his early teachers about conservatism; Beck credited Haidt’s 2012 book, The Righteous Mind (Pantheon), with changing his view of politics. Beck invited Haidt to come back the next day.

“For the record,” he tells me later, “I’ve never voted for a Republican, never given a penny to a Republican candidate, never worked for a Republican or conservative cause.” On the left in the early 2000s, he grew frustrated by what he saw as the failure of Al Gore and John Kerry to speak to voters’ moral concerns. Haidt shifted his research focus to political psychology and immersed himself in conservative media, subscribing to National Review and watching Fox News. “My reaction was constantly like, ‘Oh, I never thought of that. Oh, that’s a good critique,’ ” he says. “The scales were falling from my eyes.” He’s since carefully positioned himself as a centrist, a neutral broker who speaks with all sides.

When he taught at Virginia, the psychology department hosted a weekly lunch presentation. One day the topic was women and math. The talk focused on how cultural messages girls receive dissuade them from pursuing math. Haidt proposed an alternative explanation: “We know that prenatal hormones influence the brain, changing all kinds of interests. Is it possible that girls are just less interested in math?” There was dead silence. “Wait,” he pressed. “Do you think hormones influence behavior?” More silence. “Nobody agreed, nobody disagreed, nobody would touch it,” he recalls. “That’s when I realized our science is suffering. Social science is really hard; it’s always multiple causal threads. If several threads are banned, then you cannot solve any problem.”

In 2011, during a talk at the annual meeting of the Society for Personality and Social Psychology, Haidt asked the audience of about 1,000 people for a show of hands: How many considered themselves liberals? Eighty percent raised their hands. Centrists or moderates? About 20 hands. Libertarians? Twelve. Conservatives? Three. “When we find any job in the nation in which women or minorities are underrepresented by a factor of three or four, we make the strong presumption that this constitutes evidence of discrimination,” he said. “And if we can’t find evidence of overt discrimination, we presume that there must be a hostile climate that discourages underrepresented groups from entering.” He likened the situation of nonliberals in social psychology to closeted homosexuals in the 1980s.

His talk became a sensation. The New York Times covered it; the society’s email list lit up in debate. One post in particular caught Haidt’s attention. It was by Jose Duarte, a grad student at the University of Arizona, who argued that social psychology is so riddled with embedded ideological assumptions that a lot of peer-reviewed research might be invalid. Haidt heard from several other scholars interested in collaborating on a study of political diversity in the discipline. The resulting paper was published in 2015 in Behavioral and Brain Sciences with Duarte as lead author and five co-authors, including Haidt.

Around that time, Haidt received an email from Nicholas Quinn Rosenkranz, a professor of law at Georgetown University. Rosenkranz attached the text of a talk he’d given at an event hosted by the Federalist Society, an organization of conservative and libertarian legal scholars. (He is a member of the group’s Board of Directors.) Titled “Intellectual Diversity in the Legal Academy,” Rosenkranz’s lecture concluded that there isn’t any, claiming that he was one of three conservative or libertarian professors on the 120-member Georgetown law faculty. Liberal orthodoxy, he argued, undermines the quality of scholarship and classroom instruction. “It is a fundamental axiom of American law that the best way to get to truth is through the clash of zealous advocates on both sides,” he wrote. “And yet, at most of these schools, on most of the important issues of the day, one side of the debate is dramatically underrepresented, or not represented at all.”

Over lunch in New York, Haidt and Rosenkranz, who had not previously met, speculated about the situation in other fields. By the end of the meal, they’d agreed to form a faculty group to promote political diversity in academe. They invited Haidt’s co-authors from the journal article to join, along with Chris Martin, a graduate student at Emory University who had recently published an article warning about the effects of liberal bias in sociology. The first order of business was to select a name. Rosenkranz suggested “Heterodox Academy.” Duarte thought it sounded too academic, too much like a “stuffy old white man thing.” Haidt pushed back. “This is not to bring in millions of people. This is for professors.”

The Heterodox Academy website began in September 2015 with 25 members and an appeal for other tenured and tenure-track professors to join. (Membership was recently opened up to adjuncts, postdocs, and graduate students.) Libertarians and conservatives were first to heed the call. Jeremy Willinger, the group’s communications director, says that each membership application is screened, and that the group has rejected a few cranks, racists, and plagiarists. The staff recruits progressives. “We don’t want to be another conservative bastion,” says Willinger. Still, the center-right remains dominant within Heterodox Academy. According to figures provided by the group, 65 percent of members identify as conservative, centrist, or libertarian, while 18 percent are progressives. (The remaining members are listed as “unclassifiable,” “prefer not to say,” or “other.”)

Haidt believes that the vast majority of professors share Heterodox Academy’s concern over the spread of illiberal attitudes on campus, but that many are reluctant to speak up. The cause of that reluctance, he thinks, is twofold: Some liberal professors fear giving even inadvertent comfort to the right, especially with Trump in the White House and a Republican majority in Congress. Others, he argues, are intimidated by the bullying tactics of the far left.

That diagnosis rings true to David Bromwich, a professor of English at Yale. His 1992 book about the campus culture wars, Politics by Other Means (Yale University Press), is a withering assault on both traditionalists of the right and thought-policers of the left. (As John Silber wrote in a review, the book might have been called A Plague on Both Your Houses.) Asked how the current mood on elite campuses compares with that time, Bromwich says it’s at least as bad. “There is a horror of being associated with anything or anyone conservative,” he says, calling it “a mark of the timidity of the academic personality in our time. It leads to a great deal of conformity, small acts of cowardice, and the voluntary self-suppression of ideas.”

A week after Heterodox Academy began, Kate Manne, an assistant professor of philosophy at Cornell, wrote a defense of trigger warnings in The New York Times. She took specific aim at “the idea, suggested by Professor Haidt and others, that this considerate and reasonable practice feeds into a ‘culture of victimhood’ ” and described Haidt’s view as “alarmist, if not completely implausible.” Haidt responded on his blog, reiterating his objections to trigger warnings but adding that Manne’s efforts to shield her students from potentially upsetting material suggest that she’s a “caring teacher.” Manne shot back on Twitter, accusing Haidt of “uncharitably interpreting and patronizing a younger female colleague” and making “stereotypical assumptions about teachers/professors you’ve not met, nor discussed their pedagogy with.” Haidt was genuinely dumbfounded. He thought he was paying her a compliment: “But you discussed your pedagogy. I called you caring based on what you wrote.”

Haidt is accustomed to brickbats from the left, but he was caught off guard when, in December, Jarret Crawford, an associate professor of psychology at the College of New Jersey and a founding member of Heterodox Academy, posted a letter of resignation on Twitter. “In many ways, and however unintentionally, HXA has become a tool for the political right to decry and smear the left,” he wrote, using an acronym for the organization’s name. “I cannot associate myself with a group that the right, which has debased itself with its embrace of a president who would threaten liberal democracy and equal protection, has clearly begun to embrace as its own.”

Crawford’s decision was “a buildup of things,” he says, but he was especially unsettled by the emergence of another group, Professor Watchlist, whose aim, in its own words, is to “expose and document college professors who discriminate against conservative students, promote anti-American values, and advance leftist propaganda in the classroom.” (The group has since cut “promote anti-American values” from that statement.) Though Haidt has denounced Professor Watchlist, “it’s hard for me not to see it as a logical extension of Heterodox and its mission,” Crawford says. “If Heterodox is primarily a watchdog of the left on campus, then compiling names of leftist faculty members is an extension of that.”

Crawford’s suggestion that Haidt and Heterodox Academy are emboldening the right dovetails with the views of other critics who see Haidt as complicit in the rise of the divisive politics he claims to abhor. “Haidt has led the campaign against political correctness, which became the mantra of the Trump movement,” says Jason Stanley, a philosopher at Yale University who calls Heterodox Academy a “scaremongering rage machine” that targets “oppressed minorities who are vastly underrepresented in the academy.” To Stanley, the group also gives cover to efforts like a recent bill in Iowa that would require the state’s public universities to avoid any faculty hires that would cause either Democrats or Republicans to outnumber each other by more than 10 percent. (Haidt opposes the legislation on the grounds that it’s “too blunt” and would in effect “require political discrimination against qualified Democrats.”)

In his resignation letter, Crawford also singled out a tweet Haidt sent that linked to an article from the conservative news site Daily Wire about North Carolina State University’s decision to set up “conversation spaces” for students to talk with counselors about the presidential election. “How universities SHOULD respond to election: social psych to bring ppl together, not clin psych to affirm trauma,” he tweeted. To Crawford, Haidt was belittling the fears of people worried about what will happen to them because of their immigration status, religion, or country of origin.

Crawford’s criticisms hit Haidt hard. He began to question his own behavior. (He also asked Crawford to take down the letter, which he did.) “I had thought my Twitter stream was civil but provocative. Then I realized that it’s a new game. It’s one thing to be provocative when all the powers controlling universities are on the left,” he says. “But now that the presidency and the Department of Education are controlled by the right, the dangers are very different.”

He fretted over the implications of the election for Heterodox Academy, posting his thoughts in a letter to the group’s membership in January. “HxA must proceed with caution,” he wrote. “In a time of such powerful and understandable passions, it will be harder for HxA to make the case that wisdom is to be found on all sides, and from the conflict of viewpoints.” He added, “It will be easier for us to anger and alienate potential supporters.”

“When I went to Yale, in 1981, it said above the main gate ‘Lux et Veritas’: Light and Truth. We are here to find truth,” Haidt says as he paces the stage at the Students for Liberty conference in Washington. “This is our heritage all the way back to Aristotle, Plato, Socrates.” But the pursuit of truth is being supplanted by a new mission, he warns, the pursuit of social justice. He paraphrases Marx: “The point is not to understand the world; the point is to change it.”

It’s human nature to make things sacred — people, places, books, ideas, Haidt says. “So what’s sacred at a university?” he asks. “Victims are sacred,” he answers. And a victimhood culture offers only two ways to get prestige: Be a victim, or, if you can’t manage that, stand up for victims. How? “By punishing the hell out of anyone who in any way, shape, or form, even inadvertently, marginalizes a member of a victim class.”

He clicks to reveal a slide titled “The Six Sacred Groups.” “The Big 3” are Blacks, Women, LGBT. “The Other 3” are Latinos, Native Americans, Disability. The list of sacred victims, he says, is growing. Among the newly sacrosanct are Muslims, transgender people, and Black Lives Matter. “I’m in no way saying these are not victims,” Haidt says. “I’m not dismissing claims of systemic racism. I’m just pointing out that the quasi-religious conflicts we have on campus nowadays tend to revolve around these groups.”

According to Haidt, the culture of victimhood is exacerbated by the arrival of an infantilized student body, especially on elite campuses. He says that cable television inaugurated an era in which news about crime filled the airwaves and magnified parental fears about the safety of their children. As a result, many of today’s college students haven’t been allowed to explore, face dangers, surmount them, and come back stronger. “Kids need thousands of hours of unsupervised time to learn how to live without their parents,” he says, “so when they go off to college it’s not the first time they’re unsupervised. They’re not getting it anymore.”

In October, Heterodox Academy released a Guide to Colleges that rates campuses on whether they’re conducive to free speech and diversity of thought. The ratings are based on a combination of factors, including whether the college has endorsed the University of Chicago’s principles on free expression. It also takes into account rankings from FIRE and from the Intercollegiate Studies Institute, a conservative group founded in 1953 by, among others, William F. Buckley Jr.

At the top of the list is the University of Chicago, where the dean of students sent a letter to the Class of 2020 stating that the university does not condone safe spaces, trigger warnings, or disinviting controversial speakers. Near the bottom of the list is Brown University, where administrators have described social justice as a “bedrock commitment” and responded to student protests in 2015 with a pledge to invest $100 million to create a “just and inclusive campus.”

Haidt hopes the rankings will lead to a schism between universities committed to truth and those that regard social justice as the highest good, so that each can go its own way and high school students can learn more about the intellectual climate of the colleges they’re considering. To help force the issue, Heterodox Academy offers a model student-government resolution that activists can use to affirm their university’s commitment to free speech and intellectual diversity. In March, Northwestern University became the first — and thus far only — campus to pass such a resolution.

Haidt knows that, at least at the moment, Heterodox Academy provides more comfort to the right than to the left. The imbalance can make him uneasy. “The election scared the hell out of me,” he says in his office. “I’m very alarmed by the decline of our democracy.” He grabs a stack of four books from beside his keyboard. The spines read like a map of his anxious mind: The Authoritarian Dynamic, The Federalist Papers, Rude Democracy, Why Nations Fail. He is especially worried about how social media deepen our political divisions. “We are all immersed in a river of outrage, drowning in videos of the other side at its worst,” he says, predicting that our political dysfunction will soon lead to violence. “I expect hundreds to die. Things are going to get a lot worse.”

Haidt is by nature an optimist, which makes his pessimism all the more startling. His first book, The Happiness Hypothesis (Basic Books, 2006), grew out of his work in positive psychology. It offers a blueprint for a life well lived. His next book, The Righteous Mind: Why Good People Are Divided by Politics and Religion, hinges on his conviction that if we understand the biases that affect our own and other people’s moral thinking, our politics will become more civil. That belief, quaint in 2012, can seem delusional in 2017.

Haidt is fearful not only for the country but also for himself. His default intellectual style is provocation. He used to relish posing questions like, “List all the good things Hitler did,” and he even invented a game, “Racist Jeopardy,” in which he names a stereotype and asks students to identify the ethnic group it describes. “It was very uncomfortable,” he says, adding that he no longer plays the game because he’s worried about running afoul of NYU’s bias-response team. He’s already been the subject of at least two student complaints.

“I’m used to skating on thin ice, but I knew how thick the ice was,” he says. “Now I have no idea.”

Evan R. Goldstein is editor of The Chronicle Review.

from Arts & Letters Daily http://ift.tt/2tqR62y
http://ift.tt/2oQn4VC

The future is emotional

Feb 24 2018

http://ift.tt/2sYTekN

Early last year, the World Economic Forum issued a paper warning that technological change is on the verge of upending the global economy. To fill the sophisticated jobs of tomorrow, the authors argued, the ‘reskilling and upskilling of today’s workers will be critical’. Around the same time, the then president Barack Obama announced a ‘computer science for all’ programme for elementary and high schools in the United States. ‘[W]e have to make sure all our kids are equipped for the jobs of the future, which means not just being able to work with computers but developing the analytical and coding skills to power our innovation economy,’ he said.

But the truth is, only a tiny percentage of people in the post-industrial world will ever end up working in software engineering, biotechnology or advanced manufacturing. Just as the behemoth machines of the industrial revolution made physical strength less necessary for humans, the information revolution frees us to complement, rather than compete with, the technical competence of computers. Many of the most important jobs of the future will require soft skills, not advanced algebra.

Back in 1983, the sociologist Arlie Russell Hochschild coined the term ‘emotional labour’ to describe the processes involved in managing the emotional demands of work. She explored the techniques that flight attendants used to maintain the friendly demeanours their airline demanded in the face of abusive customers: taking deep breaths, silently reminding themselves to stay cool, or building empathy for the nasty passenger. ‘I try to remember that if he’s drinking too much, he’s probably really scared of flying,’ one attendant explained. ‘I think to myself: “He’s like a little child.”’

Today, the rapid shrinking of the industrial sector means that most of us have jobs requiring emotional skills, whether working directly with customers or collaborating with our corporate ‘team’ on a project. In 2015, the education economist David Deming at Harvard University found that almost all jobs growth in the United States between 1980 and 2012 was in work requiring relatively high degrees of social skills, while Rosemary Haefner, chief human resources officer at the jobs site CareerBuilder, told Bloomberg BNA in January that corporate hiring this year would prize these skills to a greater degree than in previous economic recoveries. ‘Soft skills,’ she said, ‘can make the difference between a standout employee and one who just gets by.’

Across the economy, technology is edging human workers into more emotional territory. In retail, Amazon and its imitators are rapidly devouring the market for routine purchases, but to the extent that bricks-and-mortar shops survive, it is because some people prefer chatting with a clerk to clicking buttons. Already, arguments for preserving rural post offices focus less on their services – handled mostly online – than on their value as centres for community social life.

Historically, we’ve ignored the central role of emotional labour to the detriment of workers and the people they serve. Police officers, for example, spend 80 per cent of their time on ‘service-related functions’, according to George T Patterson, a social work scholar in New York who consults with police departments. Every day, officers arrive at families’ doorsteps to mediate disputes and respond to mental-health crises. Yet training at US police departments focuses almost exclusively on weapons use, defence tactics and criminal law. Predictably, there are regular reports of people calling the police for help with a confused family member who’s wandering in traffic, only to see their loved one shot down in front of them.

In the sphere of medicine, one of the toughest moments of a physician’s job is sitting with a patient, surveying how a diagnosis will alter the landscape of that patient’s life. That is work no technology can match – unlike surgery, where autonomous robots are learning to perform with superhuman precision. With AI now being developed as a diagnostic tool, doctors have begun thinking about how to complement these automated skills. As a strategic report for Britain’s National Health Service (NHS) put it in 2013: ‘The NHS could employ hundreds of thousands of staff with the right technological skills, but without the compassion to care, then we will have failed to meet the needs of patients.’

A growing real-world demand for workers with empathy and a talent for making other people feel at ease requires a serious shift in perspective. It means moving away from our singular focus on academic performance as the road to success. It means giving more respect, and better pay, to workers too often generically dismissed as ‘unskilled labour’. And, it means valuing skills more often found among working-class women than highly educated men.

The easiest place to see this shift is in medicine, where the overall healthcare landscape is changing to include more workers whose skills are primarily emotional. The US Bureau of Labor Statistics predicts that while jobs for doctors and surgeons will rise by 14 per cent between 2014 and 2024, the top three direct-care jobs – personal-care aide, home-health aide, and nursing assistant – are expected to grow by 26 per cent. None of these jobs requires a college degree, and together they already employ more than 5 million people, compared with the country’s 708,000 doctors.

Direct-care work is the quintessential job of the emotional labour economy. Sure, this work often demands physical strength – the ability to help a client with limited mobility bathe and get out of bed, for example. It might also call for some medical knowledge. But, as the education scholar Inge Bates at the University of Sheffield found in 2007 in ethnographic studies of direct-care trainees, the most significant skills required involve coping with filth, violence and death.

Bates studied a group of girls, aged 16, who entered a vocational training programme in preparation for work in homes for the elderly. These ‘care girls’, who had previously hoped to work with children, or in retail or office environments, were often horrified by the work. They described being hit by senile, confused residents, witnessing deaths, helping to lay out bodies, and coming into close contact with human waste. One trainee recalled finding a resident playing with her own faeces: ‘I had to scrub her hands and nails and get her nightie off and everything, and I sat her down and said, stay there, I’m just fetching your clothes, and when I came back she’d done it again and were [sic] playing with it again. You get you-know-what thrown at you … you have to learn to dodge it.’

And yet, over the course of the training programme, many of the workers came to take enormous pride in doing work that needed to be done, and that they knew many other people wouldn’t be able to handle. ‘By the second year of training, most desperately wanted to be care assistants and, when anyone got a job, it was a highly celebrated affair with a trip to the pub, even a party,’ Bates wrote.

It is becoming clear to researchers that working-class people tend to have sharper emotional skills than their wealthier, more educated counterparts. In 2016, the psychologists Pia Dietze and Eric Knowles from New York University found that people from higher social classes spent less time looking at people they passed on the street than did less privileged test subjects. In an online experiment, higher-class subjects were also worse at noticing small changes in images of human faces.

In her 2007 study, Bates also found that class background seemed relevant to the care girls’ ability to do their jobs. Those who succeeded possessed skills they’d acquired growing up in working-class families, where as girls they took part in housework, caring for children and elderly relatives, and learned to be stoic in the face of heavy demands. ‘Clearly the experience of domestic work, serving others, denying their own needs (eg for regular sleep at night, for time off on Sunday) were demands to which these working-class girls were well-accustomed by the age of 16,’ Bates wrote.

Care work is both difficult and low-paid, yet the ‘psychic income’ of doing something worthwhile offers workers alternative compensations, according to Nancy Folbre, an economist at the University of Massachusetts, Amherst. Such work, after all, is the kind we’ve traditionally expected women to do for free – out of joyful beneficence. And, as much as we should recognise the deep harm that expectation has caused, it doesn’t mean the joy isn’t real. For men and women, paid and unpaid, waking at 3am to care for a crying baby or bathing a distressed Alzheimer’s patient can be gruelling and transcendentally life-affirming all at once.

It can be hard to wrap our minds around the notion that emotional work really is work. With the very toughest, very worst-paid jobs, like working with the dying and incontinent, that might be because those of us who don’t have to do the work would rather not think about how crucial and difficult it really is. In other settings, often we simply don’t have the professional language to talk about the emotional work we’re doing. Smiling and nodding at a client’s long, rambling story might be the key to signing that big contract, but resumes don’t include a bullet point for ‘tolerates inconsiderate bores’. A lot of the time, emotional labour doesn’t feel like labour. It’s also not hard to see that highly educated, mostly male, people who develop and analyse economic policy have blind spots when it comes to skills concentrated among working-class women.

Another problem is that the question of how to help low-wage care workers make more money is invariably answered by: ‘give them a better education’. Policy designers talk a lot about ‘professionalising’ direct-care work, advancing proposals for things such as ‘advanced training’ on diabetes or dementia care. Recently, Washington, DC decided to require childcare workers to have a bachelor’s degree – a move one school-district official said would ‘build the profession and set our young children on a positive trajectory for learning and development’. Granted, anyone working with older people with disabilities, or with small children, might benefit from studying research on the particular needs of these groups; and widely accessible college education is a good idea for reasons that go far beyond vocational training. But assuming that more time in the classroom is key to making ‘better’ workers fundamentally disrespects the profound, completely non-academic skills needed to calm a terrified child or maintain composure around a woman playing with her own faeces.

The US economists W Norton Grubb and Marvin Lazerson call the belief in more schooling as the solution to every labour problem the ‘education gospel’. As Grubb argued in a 2005 talk, having more education tends to help individuals find better work, but that doesn’t make schooling a good overall economic strategy. In fact, he said, 30 to 40 per cent of workers in developed countries already have more education than their jobs demand.

So far, the most-studied effort to train people in emotional skills is the drive to impart empathy to doctors. Over the past decade, medical schools and hospitals have taken note of a broad body of literature showing that when doctors can put themselves in their patients’ shoes, it leads to better clinical outcomes, more satisfied patients, and fewer burnt-out physicians. And there’s evidence this skill can be taught. A 2014 review found that communication training and role-playing boosted medical students’ and doctors’ empathy levels in eight of 10 high-quality studies.

Providing emotional-skills training to prestigious, highly paid and highly specialised workers might seem obvious. Doing the same for the rest of us is a tougher proposition. But one sign of progress is the growing focus on ‘social and emotional learning’ (SEL) for schoolchildren.

SEL programmes in the US explicitly teach students strategies for developing empathy, managing their own emotions and working with others. Kids practise using affirming language with each other, they collaboratively design rules to govern the classroom, or use mindfulness to improve their understanding of their own mental processes. Researchers are finding that such programmes help students to adopt more positive attitudes and behave in more socially appropriate ways. Many school districts have already adopted SEL programmes, and last year, eight US states announced a collaboration to develop statewide SEL standards.

But the conversation around SEL puts a glaring spotlight on the limited value we place on emotional skills. Often, the programmes are marketed only as ways to reduce violence, not methods for developing crucial human abilities. And in academic environments where testing pressures and back-to-basics rhetoric often crowd out ‘softer’ subjects, they might appeal only insofar as they encourage kids to ‘get themselves under control’ and sit still for a long-division lesson.

And here’s another thing. As valuable as formal training in emotional skills might be, it’s not at the heart of what makes people successful in emotional labour. Hochschild noted that ‘surface acting’ – creating the appearance of an appropriate emotion – is harder on workers and less effective than ‘deep acting’ – really summoning up those feelings. Spontaneously expressing genuine, appropriate emotion is, presumably, even better. In 2013, the British sandwich chain Pret A Manger came under fire for using mystery shoppers to ensure that its staff appeared constantly cheery. Service workers, of course, are expected to be friendly toward customers. But Pret A Manger’s secret monitoring of its own staff, to ensure unflagging cheeriness while also depriving them of the wages and working conditions that might encourage actual cheerfulness, came across as cynical and disingenuous. Besides, having to essentially fake an emotional connection can feel exploitative in ways that even the most painful physical labour is not.

At the other end of the pay scale, David Scales, a doctor at the Cambridge Health Alliance, points out that the current focus on training physicians for empathy misses ‘the glaring deficits in the work environment, which squelch the human empathy that doctors possess’. Facing an endless stream of patients, huge financial pressure to keep visits short, and 80-hour working weeks, doctors can find it impossible to be truly present with the particular person in pain sitting before them. Like Bates in her study of British care girls, Scales points to the tension between addressing people’s most pressing needs as quickly as possible within an overburdened system and really taking the time to care for them. Having some autonomy, being treated decently and not being overstressed all the time might be the biggest keys to being an effective emotional worker.

There’s an enormous opportunity before us, as robots and algorithms push humans out of cognitive work. As a society, we could choose to put more resources into providing better staffing, higher pay and more time off for care workers who perform the most emotionally demanding work for the smallest wages. At the same time, we could transform other parts of the economy, helping police officers, post-office workers and the rest of us learn to really engage with the people in front of us.

This isn’t something our economic system, which judges the quality of jobs by their contribution to GDP, is set up to do. In fact, some economists worry that we haven’t done enough to improve the ‘productivity’ of service jobs such as caring for the elderly the way that we have in sectors such as car manufacturing. Emotional work will probably never be a good way to make money more efficiently. The real question is whether our society is willing to direct more resources toward it regardless.

Technology-driven efficiency has achieved wonderful things. It has brought people in developed countries an astonishingly rich standard of living, and freed most of us from the work of growing the food we eat or making the products we use. But applying the metric of efficiency to the expanding field of emotional labour misses a key promise offered by technological progress – that, with routine physical and cognitive work out of the way, the jobs of the future could be opportunities for people to genuinely care for each other.

from Aeon http://ift.tt/2rYQQG2
http://ift.tt/2oyO8tT