On Feeling Depressed

Feb 24 2018

http://ift.tt/2pwgVyW

There’s a difference between sadness and clinical depression. We all feel sad sometimes, possibly often, but that doesn’t always mean we are suffering from a chemical imbalance. As The School of Life points out in this video, sadness is often a rational (and normal) response to the world around us.

(YouTube link)

They offer some techniques for battling the sorrow and loneliness we feel in response to life’s circumstances. However, if you cannot identify the source of the sadness, or if it affects your everyday functioning, you should seek help for possible clinical depression. They followed up that video with another that points to anger as a possible cause of sadness.

(YouTube link)

In short, the world is horrible and depressing, but we can make it better. Our emotional response to the world can be difficult, but with understanding, we can make that better, too. -via Laughing Squid

from Neatorama http://ift.tt/2oJ6OnN
http://ift.tt/2o6fxmQ

How Do Tics Develop in Tourette Syndrome?

Feb 24 2018

http://ift.tt/2pD7FWm

Tourette syndrome is a brain dysfunction that leads to involuntary motor tics, such as sniffing, blinking, or clapping. In about 10 percent of cases, it also leads to the spontaneous utterance of taboo words or phrases, known as coprolalia. Until recently, these tics were believed to be the result of a dysfunction primarily in a brain structure known as the basal ganglia—a brain region associated with voluntary motor control, which primarily uses the neurotransmitter gamma-aminobutyric acid (GABA) to function. Recent studies of rat, monkey, and even human brains, however, have suggested that the tics stem from a more complex, system-level dysfunction that involves the cerebellum, the thalamus, and the cortex, which are all connected.

To better explore these brain regions and their influence on Tourette syndrome, Daniele Caligiore, a researcher at the Institute of Cognitive Sciences and Technologies of the Italian National Research Council, and his colleagues created a computer-simulated model of the neural activity of a brain with Tourette syndrome. The results are published in PLOS Computational Biology.

“The model presented here is a first step of a research agenda aiming at building virtual patients, allowing us to test potential therapies by using computer simulations,” Caligiore tells mental_floss. This method can be performed at low cost, without ethical implications, and, he hopes, can help develop “more effective therapeutic protocols, and suggest promising therapeutic interventions.”

Using a computer programming language called Python, Caligiore’s team built an artificial neural network model. In it, each neuron has a behavior that is regulated by mathematical equations. He explains, “Once built, the model works like a computer program—you can run it and observe its behavior.”
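
As a rough sketch of what such a model looks like in code (this is an illustrative toy, not the authors’ actual model; the equation, constants, and input are assumptions), a single rate-coded unit whose activity follows a leaky-integrator equation can be written and “run” like any other program:

```python
# A minimal, hypothetical sketch of a rate-coded model neuron (not Caligiore's code):
# the unit's activity r follows a leaky-integrator equation, tau * dr/dt = -r + input,
# stepped forward in time with the Euler method.
tau, dt = 10.0, 1.0            # time constant and time step (illustrative values)
r, rates = 0.0, []
for t in range(300):
    drive = 1.0 if 100 <= t < 200 else 0.0   # a square pulse of external input
    r += (dt / tau) * (-r + drive)           # Euler update of the rate equation
    rates.append(r)

print(f"peak activity: {max(rates):.3f}")    # rises toward the drive level, then decays
```

Running the loop and watching `rates` evolve is, in miniature, what it means to run the model and observe its behavior.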

Caligiore reproduced the brain activity from monkey studies, published in the Journal of Neuroscience, in which an agent called bicuculline was microinjected into a region of the brain called the sensorimotor striatum that is involved in motor function. The researchers found that this microinjection of bicuculline inhibits GABA, which causes an abnormal release of the neurotransmitter dopamine.

“This excess [dopamine] might cause an abnormal functioning of the basal ganglia-thalamo-cortical circuit, leading to the production of tics,” Caligiore says. The abnormal dopamine release is one necessary condition for a tic, but it’s not the only one, he says. “To have a motor tic you need both abnormal dopamine and a background activity in the motor cortex (due to the neural noise) above a threshold.”

In other words, “it is not just a matter of dopamine or just a matter of abnormal cortical activity,” he explains. “It is a necessary combination of both.”
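
A toy way to express that two-condition rule in code (the numbers, threshold, and noise model here are invented for illustration, not taken from the study):

```python
import numpy as np

rng = np.random.default_rng(0)

NORMAL_DOPAMINE, EXCESS_DOPAMINE = 1.0, 2.5   # arbitrary units
CORTICAL_THRESHOLD = 1.8                      # arbitrary threshold on background activity

def tic_occurs(dopamine, steps=1000):
    # Noisy background activity in the motor cortex, sampled at each time step.
    background = rng.normal(loc=1.0, scale=0.5, size=steps)
    dopamine_is_abnormal = dopamine > NORMAL_DOPAMINE
    # A tic fires only if excess dopamine and above-threshold cortical noise coincide.
    return bool(np.any(dopamine_is_abnormal & (background > CORTICAL_THRESHOLD)))

print("normal dopamine:", tic_occurs(NORMAL_DOPAMINE))   # False: cortical noise alone is not enough
print("excess dopamine:", tic_occurs(EXCESS_DOPAMINE))   # True: both conditions can now coincide
```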

Caligiore’s team also found that the cerebellum appeared to influence tic production as well. Their model shows that during a tic, there is abnormal activity in a region of the basal ganglia called the subthalamic nucleus (STN). The STN connects with the cerebellum. “This is a possible reason [for a tic] because there is an abnormal tic-related activity in the cerebellum as well.”

What the computer model shows is that motor tics in Tourette syndrome “are generated by a brain system-level dysfunction, rather than by a single area malfunctioning as traditionally thought.” Studying this interaction between regions “could substantially change our perspective about how these areas interact with each other and with the cortex,” he adds.

Moreover, Caligiore’s team’s computer model is a noninvasive, ethical, and low-cost way to study these brain systems—and it could well be an important first step toward identifying new target areas for future therapies.

from mental_floss http://ift.tt/2p5t6TG
http://ift.tt/2pEV6ul

The Fascinating Way Trees Communicate With Each Other

Feb 24 2018

http://ift.tt/2op1Y0K


How do trees communicate with each other? Using the wood wide web, of course. Though it sounds like a Fozzie Bear joke, the wood wide web is a real thing—and trees can use it to warn each other of impending danger.

from mental_floss http://ift.tt/2nT6YXm
http://ift.tt/2pEV6ul

We suffer from “nature deficit disorder” and the accompanying pretenses of citified life. Take a cue from Hobbes, Rousseau, Einstein, Dickens, and Hazlitt: Take a hike

Feb 24 2018

http://ift.tt/2p1cmMv

Photo by goosie~gander

On April 14, 1934, Richard Byrd went out for his daily walk. The air was the usual temperature: minus 57 degrees Fahrenheit. He stepped steadily through the drifts of snow, making his rounds. And then he paused to listen. Nothing.

He attended, a little startled, to the cloud-high and overpowering silence he had stepped into. For miles around the only other life belonged to a few stubborn microbes that clung to sheltering shelves of ice. It was only 4 p.m., but the land quavered in a perpetual twilight. There was—was there?—some play on the chilled horizon, some crack in the bruised Antarctic sky. And then, unaccountably, Richard Byrd’s universe began to expand.

Later, back in his hut, huddled by a makeshift furnace, Byrd wrote in his diary:

Here were imponderable processes and forces of the cosmos, harmonious and soundless. Harmony, that was it! That was what came out of the silence—a gentle rhythm, the strain of a perfect chord, the music of the spheres, perhaps.

It was enough to catch that rhythm, momentarily to be myself a part of it. In that instant I could feel no doubt of man’s oneness with the universe.

Admiral Byrd had volunteered to staff a weather base near the South Pole for five winter months. But the reason he was there alone was far less concrete. Struggling to explain his reasons, Byrd admitted that he wanted “to know that kind of experience to the full . . . to taste peace and quiet and solitude long enough to find out how good they really are.” He was also after a kind of personal liberty, for he believed that “no man can hope to be completely free who lingers within reach of familiar habits.”

Byrd received the Medal of Honor for his work, but for most of us, the choice to be alone in the wild is not rewarded at all; in fact it is highly suspect. A trek into nature is assumed to be proof of some anti-social tendency. A feral disposition. Our friends and families don’t want us to wander off in search of the expansive, euphoric revelations that Byrd experienced in his Antarctic abyss. So we keep warm, instead, within our comfortable culture of monitoring and messaging. We abhor the disconnection that the woods, the desert, the glacier threaten us with in their heartless way. Our culture leans so sharply toward the social that those who wander into the wild are lucky if they’re only considered weird. At worst, they’re Unabombers. The bias is so strong that we stop thinking about that wilderness trek altogether; besides, we tell ourselves, surely we aren’t capable of such adventures. We’d wind up rotting in a ditch. And even if we could access the wild, we probably don’t have the fine kind of soul that would get something out of it.

There is something dangerous about isolating oneself the way Admiral Byrd did. Mystic euphoria aside, he nearly died there at the frozen anchor of the world. His furnace began leaking carbon monoxide into his hut. Indeed, a company of men down at his base camp had to hike in and save him when his health deteriorated. Other solitaries without radio-handy companions have been markedly less lucky. Think of young Chris McCandless (memorialized in Jon Krakauer’s book Into the Wild), who left no trail for his acquaintances when he hiked into the Alaskan wilderness with nothing but a rifle and a 10-pound bag of rice. After 119 days he died in the wilderness he had sought—poisoned by mouldy seeds is one guess—stranded, anyway, by the vagaries of Mother Nature.

In the final days of Admiral Byrd’s solo Antarctic adventure—before men from his base camp came to rescue him—he was very close to death himself. Frostbite began to eat his body, and he mumbled like a monk in his sleeping bag, at times growing so weak he was unable to move. He cradled heat pads against himself and scraped lima beans from cans. He tried to play card games and was baffled by the weakness in his arms. He tried to read a biography of Napoleon but the words blurred and swam uselessly on the pages. “You asked for it,” a small voice within him said. “And here it is.”

But despite all this trauma, Admiral Byrd was returned to society with a gift that society itself could never give him; he carried “something I had not fully possessed before,” he wrote in his memoir. It was an “appreciation of the sheer beauty and miracle of being alive . . . Civilization has not altered my ideas. I live more simply now, and with more peace.”

When Byrd and McCandless trekked into the wild, so doggedly insisting on solitude in nature, they both tapped into a human impulse that our progress has all but quashed.

When did we first step out of the wild and into the forever-crowded city? There was a time when all we had was access to nature—we were so inextricably in it and of it. Our ancestors spent their first 2.5 million years operating as nomadic groups that gathered plants where they grew and hunted animals where they grazed. Relatively recently, around ten thousand years ago, something phenomenal shifted: beginning in modern-day Turkey, Iran, and elsewhere in the Middle East, our ancestors embarked on what’s called the Agricultural Revolution. They began to manipulate and care for plants (and animals), devoting their days to sowing seeds and battling weeds, leading herds to pastures and fighting off their predators. This was no overnight transformation; rather, bit by bit, these nomads reimagined nature as a force to be contained and managed.

Or was it nature, rather, that was doing the taming? Even as we domesticated the wheat, rice, and corn that we still rely on to feed ourselves, human lives were bent in servitude to the care of crops. The historian Yuval Noah Harari calls this exchange “history’s biggest fraud” and argues that “the Agricultural Revolution left farmers with lives generally more difficult and less satisfying than those of foragers.” The historian of food Margaret Visser agrees, calling rice, for example, a “tyrant” that

governs power structures, technological prowess, population figures, interpersonal relationships, religious custom . . . Once human beings agree to grow rice as a staple crop, they are caught in a web of consequences from which they cannot escape—if only because from that moment on rice dictates to them not only what they must do, but also what they prefer.

Relying on single staples for the majority of one’s caloric intake can be a gamble, too: even while it allows for exponential population growth, the diets of individuals become less varied and the staple crops more vulnerable to attack by pests and blight. Others have pointed out that, just as domesticated animals have smaller brains than their wild ancestors, the brain of the “domesticated human” is significantly smaller than that of our pre-agriculture, pre-city selves.

Meanwhile, the care of crops and animals required so much of humans that they were forced to cease their wandering ways and remain permanently beside their fields—and so we have wheat and its cousins to thank for the first human settlements.

Professor Harari notes that the plot of land around Jericho, in Palestine, would have originally supported “at most one roaming band of about a hundred relatively healthy and well-nourished people,” whereas, post–Agricultural Revolution (around 8500 BCE), “the oasis supported a large but cramped village of 1,000 people, who suffered far more from disease and malnourishment.” The Middle East was, by then, covered with similar, permanent settlements.

By 7500 BCE, our disenfranchisement from nature was expressed more dramatically when the citizens of Jericho constructed an enormous wall around their city—the first of its kind. The purpose of this wall was probably twofold: it protected against floods as well as marauding enemies. What’s extraordinary about this first significantly walled city is the almost fanatical determination to withdraw from that earlier, wild world. The wall, made of stone, was five feet thick and twelve feet tall. In addition, a ditch was constructed adjacent to the wall that was nine feet deep and almost thirty feet wide. Jericho’s workers dug this enormous bulwark against the outside from solid bedrock—a feat of determined withdrawal that would have been unthinkable to our pre-agricultural ancestors. This was a true denial of the sprawling bushland that had been our home for millennia. The “wild” had been exiled. And we never invited it back. By the fourth millennium BCE the Agricultural Revolution had evolved into an “urban revolution”—one we are living out still.

In 2007, it was announced that more people live in cities than not. According to the World Health Organization, six out of every ten people will live in cities by 2030. No reversal of the trend is in sight. And as the city continues to draw us to itself, like some enormous, concrete siren, we begin to convince ourselves that this crowded existence is the only “natural” life, that there is nothing for us beyond the walls of Jericho. Perhaps, goes the myth, there never was.

As the urban revolution reaches a head and humans become more citified than not, “nature deficit disorder” blooms in every apartment block, and the crowds of urbanity push out key components of human life that we never knew we needed to safeguard. Nature activists like Richard Louv use less poesy and more research to prove that cities impoverish our sensory experience and can lead to an impoverished identity, too—one deprived of “the sense of humility required for true human intelligence,” as Louv puts it.

But what really happens when we turn too often toward society and away from the salt-smacking air of the seaside or our prickling intuition of unseen movements in a darkening forest? Do we really dismantle parts of our better selves?

A growing body of research suggests exactly that. A study from the University of London, for example, found that members of the remote cattle-herding Himba tribe in Namibia, who spend their lives in the open bush, had greater attention spans and a greater sense of contentment than urbanized Britons and, when those same tribe members moved into urban centres, their attention spans and levels of contentment dropped to match their British counterparts. Dr. Karina Linnell, who led the study, was “staggered” by how superior the rural Himba were. She told the BBC that these profound differences were “a function of how we live our lives,” suggesting that overcrowded urban settings demand altered states of mind. Linnell even proposes that employers, were they looking to design the best workforces, consider stationing employees who need to concentrate outside the city.

Meanwhile, at Stanford University, study participants had their brains scanned before and after walking in grassy meadows and then beside heavy car traffic. Participants walking in urban environments had markedly higher instances of “rumination”—a brooding self-criticism that the researchers correlated with the onset of depression. And, just as parts of the brain associated with rumination lit up on urban walks, they calmed down during nature walks.

Photos of nature will increase your sense of affection and playfulness. A quick trip into the woods, known as “forest bathing” in Japan, reduces cortisol levels and boosts the immune system. Whether rich or poor, students perform better with access to green space. And a simple view of greenery can insulate us from stress and increase our resilience to adversity. Time in nature even boosts, in a very concrete way, our ability to smell, see, and hear. The data piles up.

The cumulative effect of all these benefits appears to be a kind of balm for the harried urban soul. In the nineteenth century, as urbanization began its enormous uptick, as overcrowded and polluted city streets became, in the words of Pip in Great Expectations, “all asmear with filth and fat and blood and foam,” doctors regularly prescribed “nature” for the anxiety and depression that ailed their patients. The smoke and noise of cities were seen as truly foreign influences that required remedy in the form of nature retreats. Sanitariums were nestled in lush, Arcadian surrounds to counteract the disruptive influence of cities. Eva Selhub and Alan Logan, the authors of Your Brain on Nature, have described how these efforts gave way, in the twentieth century, to the miracle of pills, which allowed ill people to remain in the city indefinitely, so long as they took their medicine: “The half-page advertisement for the Glen Springs Sanitarium gave way to the full-page advertisement for the anti-anxiety drug meprobamate.” In this light, today’s urban populace, which manages itself with sleeping pills and antidepressants (more than 10 per cent of Americans take antidepressants), may remind us of the soma-popping characters in Aldous Huxley’s dystopian Brave New World. That vision may be changing at last, though. Today, as the curative effects of nature come back to light, some doctors have again begun prescribing “time outdoors” for conditions as various as asthma, ADHD, obesity, diabetes, and anxiety.

To walk out of our houses and beyond our city limits is to shuck off the pretense and assumptions that we otherwise live by. This is how we open ourselves to brave new notions or independent attitudes. This is how we come to know our own minds.

For some people, a brief walk away from home has been the only respite from a suffocating domestic life. Think of an English woman in the early nineteenth century with very few activities open to her—certainly few chances to escape the confines of the drawing room. In Pride and Prejudice, Elizabeth Bennet’s determination to walk in the countryside signals her lack of convention. When her sister Jane takes ill at the wealthy Mr. Bingley’s house, Elizabeth traipses alone through fields of mud to be with her, prompting Bingley’s sister to call her “wild” in appearance with hair that has become unpardonably “blowsy”: “That she should have walked three miles so early in the day, in such dirty weather, and by herself, was almost incredible . . . they held her in contempt for it.”

The philosopher Thomas Hobbes had a walking stick with an inkhorn built into its top so he could jot things down as they popped into his head during long walks. Rousseau would have approved of the strategy; he writes, “I can only meditate when I am walking. When I stop, I cease to think; my mind only works with my legs.” Albert Einstein, for his part, was diligent about taking a walk through the woods on the Princeton campus every day. Other famous walkers include Charles Dickens and Mother Teresa, John Bunyan and Martin Luther King Jr., Francis of Assisi, and Toyohiko Kagawa. Why do so many bright minds seem set on their walks away from the desk? It can’t be just that they need a break from thinking—some of their best thinking is done during this supposed “downtime” out of doors.

In educational circles, there is a theory that helps explain the compulsion; it’s called the theory of loose parts. Originally developed by architect Simon Nicholson in 1972, when he was puzzling over how to make playgrounds more engaging, the loose parts theory suggests that one needs random elements, changing environments, in order to think independently and cobble together one’s own vision of things. Nature is an infinite source of loose parts, whereas the office or the living room, being made by people, is limited. Virginia Woolf noted that even the stuff and furniture of our homes may “enforce the memories of our own experience” and cause a narrowing, a suffocating effect. Outside of our ordered homes, though, we escape heavy memories about the way things have always been and become open to new attitudes.

But there does seem to be an art to walks; we must work at making use of those interstitial moments. Going on a hike, or even just taking the scenic route to the grocery store, is a chance to dip into our solitude—but we must seize it. If we’re compelled by our more curious selves to walk out into the world—sans phone, sans tablet, sans Internet of Everything—then we still must decide to taste the richness of things.

Outside the maelstrom of mainstream chatter, we at last meet not just the bigger world but also ourselves. Confirmed flâneur William Hazlitt paints the picture well. When he wanders out of doors he is searching for

liberty, perfect liberty, to think, feel, do, just as one pleases . . . I want to see my vague notions float like the down on the thistle before the breeze, and not to have them entangled in the briars and thorns of controversy. For once, I like to have it all my own way; and this is impossible unless you are alone.

This is the gift of even a short, solitary walk in a city park. To find, in glimpsing a sign of the elements, that one does belong to something more elemental than an urban crowd. That there is a universe of experience beyond human networks and social grooming—and that this universe is our true home. Workers in the cramped centre of Osaka may cut through Namba Park on their way to work; Torontonians may cut through Trinity Bellwoods Park on their way to the city’s best bookshop; New Yorkers may cut through Central Park on their way to the Metropolitan Museum; and Londoners may cut through Hyde Park on their way to Royal Albert Hall. Stepping off the narrow sidewalk for even a few minutes, we may come across a new (and very old) definition of ourselves, one with less reference to others.

Excerpted from Solitude by Michael Harris. Copyright © 2017 Michael Harris. Published by Doubleday Canada, a division of Penguin Random House Canada Limited. Reproduced by arrangement with the Publisher. All rights reserved.

from Arts & Letters Daily http://ift.tt/2pEB5Er
http://ift.tt/2oQn4VC

Long skeptical of the value of philosophy, Silicon Valley may be coming around. “When bullshit can no longer be tolerated,” they turn to a sort-of Chief Philosophy Officer

Feb 24 2018

http://ift.tt/2okZ7Dg

Silicon Valley is obsessed with happiness. The pursuit of a mythical good life, achievement blending perfectly with fulfillment, has given rise to the quantified self movement, polyphasic sleeping, and stashes of off-label pharmaceuticals in developers’ desks.

Yet Andrew Taggart thinks most of this is nonsense. Taggart, who holds a PhD in philosophy, practices the art of gadfly-for-hire. He disabuses founders, executives, and others in Silicon Valley of the notion that life is a problem to be solved and that happiness awaits those who solve it. Indeed, Taggart argues that optimizing one’s life and business is actually a formula for misery.

“I call it ‘the problematization of the world,’” he said. “Once you start looking for this relatively new way of thinking—problem, challenge, solution, repeat—you see it nearly everywhere.” Instead of asking how one can be more successful, he says, “It’s far more important to ask, ‘Why be successful?’”


Taggart is among a small band of “practical philosophers” entering the world of business. Serving as a kind of Chief Philosophy Officer, these practitioners summon ancient thinkers to probe eternal questions like “How does one live a good life?” but also more practical ones like “What should my startup build?” This strand of philosophical inquiry, aided by books, blogs, and advisors, is gaining a small but intensely loyal following.

“The business community in Silicon Valley can really use philosophy,” said Joseph Walla, founder of HelloSign, who hosts other founders in his own reading group on the ancient Greek philosophy of Stoicism. “The most important thing for founders to manage is their own psychology,” he said, citing a popular saying in the Valley. “The moments where [philosophy] is most helpful are during the biggest swings” in managing a startup.

A “practical” philosophy for Silicon Valley

Many in Silicon Valley have never hidden their derision for philosophy. Paul Graham, a computer scientist and co-founder of the startup fund Y Combinator, studied philosophy as an undergraduate at Cornell University because “it was an impressively impractical thing to do…like slashing holes in your clothes or putting a safety pin through your ear,” he wrote in 2007. “Most philosophers up to the present have been wasting their time,” he argued, calling for a more “useful” philosophy to help people actually trying to improve the world.

This plea for practicality would probably be Silicon Valley’s philosophical creed if it had one. And its institutional home would probably be Stanford University’s Symbolic Systems program. Conceived in 1986 by faculty seeking to educate the next generation of technology leaders, it examines how computers and humans communicate. “SymSys” fuses neuroscience, logic, psychology, artificial intelligence, computer science, and contemporary philosophy into studies that Stanford philosophy professor Kenneth Taylor calls “a 21st-century version of a liberal arts education.” (Noticeably absent are Plato and Aristotle.) Silicon Valley’s biggest names have graduated from the program—Peter Thiel, Yahoo’s Marissa Mayer, LinkedIn founder Reid Hoffman, Instagram founder Mike Krieger, among others.


But Stanford’s expansive vision for philosophy has yet to reach most of the technology community, let alone the business world. A preference for lessons easily grasped during a flight from San Francisco to New York produces titles like “If Aristotle Ran General Motors: The New Soul of Business” and “life-hack” essays on Stoicism.

Still, practical philosophers like Taggart insist philosophical inquiry is the essence of an executive’s job. Philosophy, unlike other fields, offers no assumptions, just relentless inquiry. By subjecting every belief to critical reflection, Taggart’s clients start down a path of inquiry that can lead to genuine understanding, better business decisions, and, eventually, happiness. But that only happens after a painful period of reflection, which will often involve abandoning the deceptive stories we tell ourselves.

“Philosophers arrive on the scene at the moment when bullshit can no longer be tolerated,” says Taggart. “We articulate that bullshit and stop it from happening. And there’s just a whole lot of bullshit in business today.” He cites the rise of growth hackers, programming “ninjas,” and thought leaders whose job identities are invented or incoherent.

Taggart started his practice in 2010, and now works with more than 40 clients over Skype in the US, Europe, Central America, and Canada. Practitioners can charge $100 per hour, rates comparable to those of clinical psychologists, but Taggart lets his clients pay however much they can for sessions that can last for hours.

He asks clients to deeply examine their own beliefs, often for the first time. While psychologists aim for a therapeutic approach, philosophical counselors (who do not treat those with mental illness) focus on identifying and dispelling illusions about one’s life through logic and reason (although the line is admittedly blurry). Taggart says his clients have changed jobs, careers, and even sexual orientation. Jerrold McGrath, who founded his own consulting firm after working with Taggart, said his questions were “unrelentingly annoying” but forced him to confront the lies he was telling himself. He ultimately quit his job, prioritized his role as a father, and started a new consulting career. The process “allowed us to cut through the bullshit and see what was really going on,” said McGrath.


Philosophy remains rather unpopular among the general public. It ranks 89th on a list of the most popular college majors, and rarely appears on bestseller lists or in mass media. Philosophical counseling is no different. Though the practice has been around since the early 1990s, the field’s two credentialing bodies report only a few hundred members. No government agency certifies the practice, and it is not yet reimbursed by insurance, so only a handful of people like Taggart are making a full-time living at it, admits the American Philosophical Practitioners Association.

Even in Silicon Valley, philosophy remains a largely behind-the-scenes pursuit. Stoicism, perhaps the most popular school of thought for startup founders, has only a handful of public adherents. Ryan Holiday, a former marketer for American Apparel, authored three books on Stoicism (which some scholars have slammed as “bad pop psychology … for arrogant successniks”). Another book, A Guide to the Good Life: The Ancient Art of Stoic Joy by Wright State University philosophy professor Bill Irvine, is passed among entrepreneurs in San Francisco who call it a guide for dealing with the challenges of startup life.

Irvine understands the appeal. “Stoicism was invented to be useful to ordinary people,” he says. “It’s a philosophy about defining what counts as a good life.” Silicon Valley likes to see itself on a similar mission. Google has been public about leading the way here. It offers all its employees free classes through the “Search Inside Yourself” initiative, now an independent foundation devoted to encouraging focus, self-awareness, and resilience to “create a better world for themselves and others.”

But Scott Berkun, a former Microsoft manager and philosophy major who has written multiple business books on the subject, says philosophy’s lessons are lost on most in Silicon Valley. Many focus on aggrandizing the self, rather than pursuing a well-examined purpose. “If you put Socrates in a room during a pitch session, I think he’d be dismayed at so many young people investing their time in ways that do not make the world or themselves any better,” he said.

Silicon Valley’s strivers might find happiness by rethinking their definition of “success.” Stoics had something to say about this. Far from being emotionless scolds as the name suggests today, says Irvine, Stoics were early psychologists who sought to rid us of illusions that bring misery. By refocusing on what truly matters, people can find joy and purpose in their daily lives. As Irvine puts it, why “spend your life in an affluent form of misery when it’s possible to have a much simpler life that would be much more rewarding?”

from Arts & Letters Daily http://ift.tt/2ojT4yC
http://ift.tt/2oQn4VC

Is Matter Conscious? Why the central problem in neuroscience is mirrored in physics

Feb 24 2018

http://ift.tt/2obghXG

Hedda Hassel Morch in Nautilus:

The nature of consciousness seems to be unique among scientific puzzles. Not only do neuroscientists have no fundamental explanation for how it arises from physical states of the brain, we are not even sure whether we ever will. Astronomers wonder what dark matter is, geologists seek the origins of life, and biologists try to understand cancer—all difficult problems, of course, yet at least we have some idea of how to go about investigating them and rough conceptions of what their solutions could look like. Our first-person experience, on the other hand, lies beyond the traditional methods of science. Following the philosopher David Chalmers, we call it the hard problem of consciousness. But perhaps consciousness is not uniquely troublesome. Going back to Gottfried Leibniz and Immanuel Kant, philosophers of science have struggled with a lesser known, but equally hard, problem of matter. What is physical matter in and of itself, behind the mathematical structure described by physics? This problem, too, seems to lie beyond the traditional methods of science, because all we can observe is what matter does, not what it is in itself—the “software” of the universe but not its ultimate “hardware.” On the surface, these problems seem entirely separate. But a closer look reveals that they might be deeply connected.

Consciousness is a multifaceted phenomenon, but subjective experience is its most puzzling aspect. Our brains do not merely seem to gather and process information. They do not merely undergo biochemical processes. Rather, they create a vivid series of feelings and experiences, such as seeing red, feeling hungry, or being baffled about philosophy. There is something that it’s like to be you, and no one else can ever know that as directly as you do. Our own consciousness involves a complex array of sensations, emotions, desires, and thoughts. But, in principle, conscious experiences may be very simple. An animal that feels an immediate pain or an instinctive urge or desire, even without reflecting on it, would also be conscious. Our own consciousness is also usually consciousness of something—it involves awareness or contemplation of things in the world, abstract ideas, or the self. But someone who is dreaming an incoherent dream or hallucinating wildly would still be conscious in the sense of having some kind of subjective experience, even though they are not conscious of anything in particular.

More here.

from 3quarksdaily http://ift.tt/2ooUcQS
http://ift.tt/2pKprYD

The Hermeneutics of Babies

Feb 24 2018

http://ift.tt/2p8RByk

Cecile Alduy at berfrois:

Babies are hermeneutic subjects par excellence. When they come out of the womb, none of our dichotomies apply, not even outside and inside one’s body, day and night, me and you. And every waking hour they start interpreting the world: noticing patterns (nap then lunch, bath-book-song then sleep), contrasts (wet/dry, mom’s arms/dad’s arms, banging on a small yogurt pot/on a large one), cruxes of signifiers (mom’s endlessly changing facial expression, sounds, movement, versus the mobile above the crib), and reference points that anchor their lives into a recognizable, hospitable, shall we say human world (the doudou, mom’s smell, the blankie, soon something like “home”). As much as we try to read them, they are readers of the world: they approach the most ordinary object as a universe to explore, a mystery to decipher. Not a single object is common, because at first nothing has anything in common with anything else. Before categories exist to sap our enjoyment of the here and now by concealing everything under a name, thus creating the illusion we know them, each thing is a unique instance of just itself. So here they are, navigating a sea of ever changing information, where very little is ever the same for lack of being remembered or even consciously differentiated or apprehended as separate from a magma of other singular experiences, in a learning experiment that spans everything from what air feels like in one’s lung to the difference between liquids and solids, experiencing the world without ever naming it. Quite an immersion program, where no possible translation into any reference language or culture exists, where the very shape of space and time, the boundaries of one’s skin are still fluid. The amount of “newness” in a single minute of a baby’s day is daunting. And the rate of their epistemological adjustment is staggering.

more here.

from 3quarksdaily http://ift.tt/2oxm2KZ
http://ift.tt/2pKprYD

Ways of Knowing

Feb 24 2018

http://ift.tt/2oB2ipN

by Yohan J. John

Once, some years ago, I was attending a talk by the philosopher Slavoj Žižek at the Brattle Theatre in Cambridge, Massachusetts. He was engaged in his usual counterintuitive mix of lefty politics and pop culture references, and I found myself nodding vigorously. But at one point I asked myself: do I really understand what he is saying? Or do I simply have the feeling of understanding? As a neuroscientist, I am acutely aware of the mysterious and myriad ways in which brain areas are connected with each other and with the rest of the body. There are many pathways from point A to point B in the brain: perhaps Žižek’s words (and accent and crazed physical tics) had found a shortcut to the ‘understanding centers’ (whatever they might prove to be) in my brain? Perhaps my feeling of comprehension was a false alarm? Had I been intellectually hypnotized?

One way to check would be to try and explain Žižek’s ideas for myself. A handy sanity check might involve directing my explanations at other people, since I knew from first-hand experience that a set of ideas can seem perfectly coherent when they float free-form in one’s head, but when it comes time for the clouds of thought to condense into something communicable, very often no rain ensues. (This often happens when it’s time for me to translate my ruminations into a 3QD essay!)

Many of us like to see ourselves as members of a scientific society, where rational people subject ideas to rigorous scrutiny before filing them in the ‘justified true belief’ cabinet. But there are many sorts of ideas that can’t really be put to any kind of stringent test: my ‘social’ test of Žižek’s ideas doesn’t necessarily prove anything, since most of my friends are as left-wing (and susceptible to pop cultural analogies) as I am. This is the state of many of the ideas that seem most pressing for individuals and societies: there aren’t really any scientific or social tests that definitively establish ‘truth’ in politics, history or aesthetics.

To attempt an understanding of understanding, I think it might make sense to situate our verbal forms of knowledge-generation in the wider world of knowing: a world that includes the forms that we share with animals and even plants. To this end, I’ve come up with a taxonomy of understanding, which, for reasons that should become apparent eventually, I will organize in a ring. At the very outset I must stress that in humans these ways of knowing are very rarely employed in isolation. Moreover, they are not fixed faculties: they influence each other and gradually modify each other. Finally, I must stress that this ‘systematization’ is a work in progress. With these caveats in mind, I’d like to treat each of the ways of knowing in order, starting at the bottom and working my way around in a clockwise direction.

1. Instinct & Intuition 

Instinct is the primordial form of knowing: we see versions of it even in single-celled organisms. It consists of the most basic tendencies, habits and drives that an organism possesses when it is born. We typically think of it as being encoded in our genome, but much of it emerges only through interaction with the environment. An amoeba, for example, ‘knows’ how to follow a chemical gradient in search of food. The seed of a plant ‘knows’ when spring has sprung, and sends out a root and a shoot in the appropriate directions. A hatchling doesn’t know who its mother is, but it ‘knows’ how to imprint, and therefore pick the most likely candidate. Newborn mammals instinctively know how to suckle. Primordial knowledge takes the form of know-how.

We seldom pause to marvel at all that a human baby knows without anything resembling teaching. Perhaps the most crucial and little-known form of baby ‘know-how’ is joint attention. There are three levels of joint attention. The first level just involves a baby and an adult looking at the same object. The adult may know what the baby is looking at, but we can’t really tell if the baby knows that the adult shares something in common with it. We know that many animals have this form of joint attention. And humans can share attention with some animals: you can point to an object and get your dog to look at it. The next level is ‘dyadic’ attention. It takes the form of a conversation: the baby and the adult look at each other, and ‘exchange’ facial expressions. They direct their attention at each other. Something like this exists in many animals too: the way birds interact in song is an example that I am happily subject to at this very moment. The highest form of joint attention is ‘triadic’ attention. It happens when a baby and an adult look at each other, then look at an object, and then look back at each other. The baby and the adult often smile at the end of such an episode. This seems to be a form of acknowledgement: “I see what you’re seeing there! And I like it!”

2. Naming and description

A baby’s ability to attend to something that an adult draws her attention to is the most important stepping stone on the road to language, and therefore to wider understanding. We cannot learn the names of people, objects, and processes unless we share attention with the namer of names. This remains a mysterious process, since it isn’t always clear where the boundary of one thing ends and that of another begins. Nevertheless, we know that without the ability to comprehend naming, little or no symbolic communication would be possible.

The process of associating a name with a thing relies on the ability to divide our sensory world into discrete units. As a child grows up, it seems as if she shifts from an almost mystical holism to a state in which individual things become apparent. Once she can attend to people, animals and things she can learn their names. We typically see this as a one-way process in which an adult indicates something to the child and repeatedly says (or signs) its name. But we know that the ability to create language can arise even in children isolated from adults. Twins can devise their own idiosyncratic languages even when they have been deprived of social contact with adults.

Soon after facility with naming develops, abstraction can emerge. If a word like ‘dog’ applies to multiple distinct sensory experiences — ranging, perhaps, from a Chihuahua to a Great Dane — then the word can’t be the name of a unique thing. It is a category. We take the ability to recognize abstract categories — like ‘dog’ or ‘green’ or ‘three’ — for granted, so much so that we often forget that these words are abstract categories. ‘Dogness’, ‘greenness’ or ‘threeness’ are not in any sense given to us by our sensory experience. The people who are most aware of this are those familiar with the history of artificial intelligence and machine learning. Until the recent boom in pattern recognition via artificial neural networks, machine categorization algorithms could not do as well as small children. Until fairly recently, no computer could reliably tell the difference between dogs and cats.

The ability to name objects and categorize according to abstract properties leads directly to description, which may be the most basic form of verbal understanding in adults. If two people can describe a situation or a phenomenon using roughly the same words, then we know they’re at least on the same page, even if they disagree on other matters. The concept of ‘sameness’ is, however, a subtle thing, and one we don’t necessarily understand completely. 

3. Narrative

Once upon a time humans discovered names and descriptions. Then we started to arrange our descriptions in time. Thus, we invented stories. The hearing and telling of stories is central to growing up, and seems to bootstrap our integration into society. Since our earliest recorded history, stories have been a crucial way for us to understand and express nature and the human condition. The spirits and gods of mythology were, among other things, animating principles that helped account for both the chaos and the order that humans discover in the world. The interactions between these anthropomorphic forces typically took the form of stories. In the great myths that still enchant billions worldwide, these stories gradually grew into baroque narrative complexes featuring bizarre family trees, wars, curses, magical boons, reincarnations, and, eventually, moral and ethical lessons. But simply listening to a narrative may not always be enough to understand what it signifies: we may need to engage in discourse — a far more elaborate form of language use.

Before we get to discourse, we ought to pause to recognize that the murky concept of sameness or similarity crops up in narrative too. To be anthropomorphic means to be human-like. This suggests that myth-makers were capable of looking at natural phenomena — in all their inhumanity and particularity — and abstract out certain general features that they saw as overlapping with human behavior. Over time, several ancient civilizations seem to have decided that the human being was an inadequate yardstick for measuring phenomena: they gradually created more abstract conceptual entities, tying properties together in ways that wouldn’t make sense for anything human-like. Very few humans would identify themselves with descriptors like ‘omnipresent’ or ‘omnipotent’.

Modern society may disdain the mythological modes of understanding, but narrative remains central to how the general public understands science. The most widely read pop science books tend to have a narrative form: pioneering scientists might be described as heroes defeating the monsters of ignorance, or taming wild natural forces. Or the scientific ideas themselves come to be expressed in story form. Selfish genes and blind watchmakers are narrative devices: they allow us to see the universe in terms familiar even to children. We can relate to these concepts. Once again we see that the act of comparison serves as a basis for understanding.

This is not to say that narratives are always wrong or misleading (though in the case of selfish genes they are): scientists themselves often require a central narrative in order to make sense of their own results and communicate them to their peers. Regardless of how complex the tools of science become, we seem inexorably drawn towards accounts of nature that take the form of stories. Perhaps the irritation that many feel when confronted by quantum physics stems ultimately from how un-story-like it can seem. Big Bang cosmology, by contrast, has something in common with the ‘let there be light’ narrative from Genesis. And perhaps the most strikingly narrative form of understanding is the idea that the whole universe is some kind of computer program. A program is, after all, a kind of story. An algorithm is a set of step by step instructions, executed one by one, like a plot unfolding in sequence. Digital physics seems to resonate with the opening of the Gospel of John: “In the beginning was the word, and the word was with God, and the word was God”. (Presumably the computationalists would replace ‘God’ with some kind of variable declaration or header file.)

4. Discourse

It’s hard to imagine that there was ever a human society that relied solely on narrative forms of communication. Our earliest language-equipped ancestors most likely used a wide spectrum of tools: commands, questions, pleas, exclamations, exhortations, prayers, songs and so on. We can imagine that simple forms of explanation existed from the earliest times: they must have involved versions of show-and-tell. “Here is how you chip a stone to make a sharp tool,” or “Here’s how you start a fire.” Ordinary language is the bridge that links know-how with know-what. Overarching philosophical and scientific theories were perhaps unnecessary for simply getting by. This basic form of understanding is central to acquiring physical skills: riding a bike, playing an instrument, or even using a scientific instrument. Communicating this kind of knowledge typically requires hands-on practice and two-way interaction with a teacher. It is therefore very closely related to instinct and intuition: a gifted musician may not always be able to explain in words, even to herself, how she achieves a particular sound.

In our repertoire of questions, we have the practical-minded ‘what’ and ‘how’, but also the type of question that may be the most mischievous of all: ‘why’. It is easy to imagine some stone-age firestarter wondering why exactly hitting flint stones together produced a spark. In modern times, some scholars have attempted to explain mythology in terms of responses to such questions. Perhaps exceptionally long stories were ways for adults to deal with the endless series of whys that children are especially prone to ask? With a long enough story the questioner would eventually just fall asleep or get bored! In this light, mythology (and religion more generally) becomes a kind of failed attempt at scientific explanation, mixed with some vague desire to silence a question without actually answering it. This strikes me as a case of reading modern intentions into the minds of ancient people. When examined in detail, mythology routinely confounds modern analysis. Joseph Campbell might see Jungian psychological archetypes in myth. Others might see garbled histories, or even encrypted science and mathematics. As the religion scholar James P. Carse writes in Finite and Infinite Games, “Mythology provokes explanation but accepts none of it.”

Theories of mythology constitute a genre of explanation that exemplifies both the strengths and the weaknesses of discursive thought. Such explanations often ring true: they seem to tie together disparate notions we already believed about human psychology and history. They can often be quite inspirational, stimulating art and literature. Star Wars would not exist without The Hero With a Thousand Faces.

But of course even a wildly inaccurate idea can be stimulating. Perhaps these explanations arrive at intuitive ‘truthiness’ by exploiting the shortcuts to the ‘centers of understanding’ (that Žižek may or may not have discovered in me). Theories of mythology don’t come with any clear test of truth or internal consistency. Nevertheless, we employ some comparison process when we assess these theories: that’s what allows us, for example, to recognize additional supporting evidence. “Yes of course, this other myth I read also fits Joseph Campbell’s monomyth pattern.” As is well known, we seem to be much worse at seeking out falsifying evidence.

Discourse and narrative are the primary modes of public understanding in modern society. In politics, people do not derive their beliefs from a set of logical principles, or test them in a laboratory. Even when presented with evidence that their notions might be problematic, people often dig in further, coming up with reasons why the evidence might be untrustworthy or interpreted differently. Here we encounter a fundamental aspect of understanding: it does not simply rely on agreement between an explanation and the outside world, but on subjective coherence: the agreement between an explanation and other bits of knowledge and know-how. This is true of all the ways of knowing that I list in this essay, but it becomes most pronounced here, in the domain of informal discourse.

In fact informal knowledge supplies humanity with one of its greatest barriers to understanding: common sense. Common sense allows us to readily deploy the received wisdom of society, navigating the typical challenges of life without too much difficulty (or rather, the precise amount of difficulty that common sense tells us to expect). As I mentioned earlier, different forms of knowing influence each other: this is particularly pronounced for the link between intuition and discursive common sense. Common sense understandings of the world seem to influence how we perceive the world intuitively. In this way, derived knowledge comes to seem natural, and therefore intrinsic to the world rather than to the mind and to society.

This was vividly illustrated to me recently in the context of human color vision. It is widely acknowledged by scientists that color is not ‘out there’ in the world in the way that matter is: it is a complex product of the interaction between light and the visual system. Only this way of framing things can explain, for example, the fact that color mixing works. Beams of red- and green-wavelength light act together on the visual system to produce the sensation of yellow, but the beams of light do not affect each other ‘out there’. The red- and green-wavelength beams of light do not combine to produce yellow-wavelength light: the seeing of yellow in this case happens without any yellow light in the external world. Similarly, only this perspective explains the strange phenomenon of magenta. There is no magenta-wavelength light: the perception of magenta can only occur when blue and red light (which are on the opposite ends of the wavelength spectrum) arrive at the eyes. Magenta might best be described as ‘the absence of green’.
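
The additive arithmetic is easy to check on any screen. A small sketch (the RGB triples are the standard display convention, assumed here for illustration rather than taken from the essay):

```python
# Additive mixing of light, expressed as RGB triples. "Yellow" is what the visual
# system reports when red- and green-wavelength light arrive together; no
# yellow-wavelength light is present in the mixture.
red, green, blue = (255, 0, 0), (0, 255, 0), (0, 0, 255)

def mix(a, b):
    # Each channel adds, saturating at the display maximum of 255.
    return tuple(min(x + y, 255) for x, y in zip(a, b))

print(mix(red, green))  # (255, 255, 0) -> seen as yellow
print(mix(red, blue))   # (255, 0, 255) -> seen as magenta, which matches no single wavelength
```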

All of this is admittedly counter-intuitive. So much so that when I explained this on the question-and-answer site Quora, I received a barrage of angry comments. My interlocutors insisted that color was really out there in the world; a few even accused me of intentionally spreading misinformation! All this stemmed from their common sense picture of the world, which arises not just from primordial know-how, but from our social naming conventions and our discursive traditions. Most societies tend to assign properties to objects themselves rather than to the interaction between self and object. When people compare a counter-intuitive assertion (“colors are ‘in your head’”) with their background understanding (“the properties of things are in the things”), the results sometimes come out in favor of the incorrect background understanding.

The only way out of this hole is systematic thought. Unfortunately, our educational systems routinely fail to impart this way of knowing to students, so they are unable to overcome their common sense — even in situations such as color vision, where all the necessary evidence is readily available (particularly if you have a computer or smartphone).

5. Philosophy & logic

Systematic thought is easier said than done, however. Many of the earliest forms of philosophy involve deduction: deriving specific truths from general principles that are supposedly self-evident. On the diagram I drew, philosophy and logic are diametrically opposed to intuition. As far as I can tell that was a fluke, but this much is true: philosophers have always been able to use systems of thought to construct highly counter-intuitive statements, which they then place credence in (or at least claim to). Perhaps the most notorious example is Zeno’s paradox. Here is one of the forms of the paradox:

“In a race, the quickest runner can never overtake the slowest, since the pursuer must first reach the point whence the pursued started, so that the slower must always hold a lead.” – as recounted by Aristotle, Physics VI:9, 239b15

Zeno used this line of thinking to come to the conclusion that all forms of motion are illusory. He used a method of proof that is still employed in philosophy — the reductio ad absurdum — to arrive at a wholesale rejection of one of the most intuitively (and experimentally) obvious things one can imagine: change itself. (One might conclude that logic should be abandoned, but this in itself may be a reductio ad absurdum.)

No doubt ancient philosophy did not always lead to nonsensical conclusions, but one suspects that this was because some philosophers relied on more than just logic: they were able to employ what we would retrospectively label experimental science. In this way they could find a happy medium between faulty intuition and faulty reasoning. Even mathematics, which to this day enjoys a nebulous status (is it pure human rationality? is it in some sense a science?) allows for some forms of experimental checking. Euclid’s Elements, which may represent the high watermark of ancient reasoning based on first principles, involved proofs that could be checked using the tools of geometry. As the number of successes mounted with every check of this sort, intuition itself must have been modified in Euclid and his peers and followers, allowing them ultimately to see the postulates as self-evident.

But even philosophers who were capable of this kind of balance were often led down the garden path by ‘reasoning’: Aristotle, for example, apparently believed that women had fewer teeth than men. I suspect that both the power and the danger of systematic thought come from its simplicity. First principles are an intellectual Swiss Army knife: a set of tools that is easy to carry with you and with which you can do quite a bit. If used carefully, formal reasoning can guide thinking quite well in a limited set of contexts. But if you try to apply it in new contexts, particularly those in which testing is difficult or impossible, there is no guarantee of accurate results. One might think that this was simply a bug in pre-scientific philosophy, but that would be a mistake. Even our most advanced and mathematically accurate sciences can lead in incorrect directions. As physicists Robert Laughlin and David Pines wrote in their paper ‘The Theory of Everything’ from 1999:

“But the schemes for approximating are not first-principles deductions but are rather art keyed to experiment, and thus tend to be the least reliable precisely when reliability is most needed, i.e., when experimental information is scarce, the physical behavior has no precedent, and the key questions have not yet been identified. There are many notorious failures of alleged ab initio computation methods…”

When we are in the domain of the untested and unprecedented, our thought-systems often fail quite miserably. Why then, given this kind of history, do people persist in searching for overarching theories of everything?

The first and most obvious answer is that we can’t be sure what will be testable in the future. For now, string theory might be metaphysical speculation, but some clever experimentalist might some day find a way to test it. But possible future usefulness does not strike me as the primary reason we seek out all-encompassing theories. After all, philosophical and religious theories over the centuries have rarely been motivated by modern expectations of scientific testability. I think the answer lies in the following comment by Ludwig Wittgenstein:

“Remember that we sometimes demand explanations for the sake not of their content, but of their form. Our requirement is an architectural one; the explanation a kind of sham corbel that supports nothing.”

Explanations are not merely useful in a practical sense: they are also beautiful. A grand theory is a structure, and can therefore be judged according to aesthetic principles. And because one person’s meat is another’s poison, we should expect theories to proliferate in places where testing either cannot be performed or hasn’t yet been performed. Thus, near the border-territory of experimental physics we have competing cosmological theories (multiverses, modified gravity and so on), and just outside the borders we find string theory, as well as the various interpretations of quantum mechanics. Still further out we find the non-mathematical ontologies that many people feel an intuitive need for. As long as the reach of these theories exceeds their grasp, we have to assume that we prefer one over its competitors for aesthetic reasons.

Beyond aesthetics, I think a grand theoretical scheme provides a map for human knowledge and behavior. Maps are perhaps the paradigmatic example of a model. Armed with a conceptual map, it seems as if uncertainty has been banished: the map’s roads, highways and footpaths — many of them purely imaginary — link disparate domains of human experience. If you intended to travel from point A to point B, a map would tell you what to expect along the way. Avoiding surprise in this way appears to be one of the central goals of learning. One popular theory of ‘computational aesthetics’ argues that ‘interestingness is the first derivative of beauty’: we seek out new things not simply for how they make us feel, but because they promise to improve our existing categories of experience, thereby reducing surprise in the future. So, perhaps paradoxically, seeking novelty is how organisms attempt to conquer it.

Grand but untested theories are strange in this regard, because we can’t really use them to navigate the external world. However, we can use them to navigate the internal world: our previously-existing knowledge. A systematic theory of the universe is a mnemonic: a conceptual filing system for locating information in one’s own head, if nowhere else. Perhaps this is the sense in which philosophy is seen as a form of therapy. Philosophy tends to leave the world as it is… but it does not leave the philosopher as she is. Those seeking grand theories must be the neat freaks of the intellectual world: they seek a place for everything and everything in its place.

6. Qualitative science

I’ve already touched on science in the previous segment, but it really only comes into its own when it goes beyond philosophical and even logical theorizing. Frustratingly, the differences between science and other forms of understanding are rarely understood even by educated non-scientists. People who come up with crank science seem to view knowledge solely in terms of aesthetic principles and intuitive intelligibility, rather than in terms of alignment between explanation and experiment.

All science (and all experience) starts out with subjective qualities. When groups of people agree on these qualities, they can commence the quest for invariants: the events that recur in multiple contexts. Knowing the contexts that predict the recurrence of events is the basis for all science and technology. The earliest of these events to be recognized may have involved the interlocking cycles of days and months and seasons.

It can seem as if quantification is essential to science, but this is not always the case. The best example of this is Charles Darwin’s theory of evolution by natural selection. It arose well after the scientific revolution ushered in by Isaac Newton, but it was far less quantitative than the physics of the time. No doubt Darwin investigated how traits change in distribution in a species from one generation to the next, but for the purposes of his theory, only very rough numbers were needed to make the case. Darwin’s theory requires us to believe four eminently reasonable ideas:

  1. Organisms inherit some of their traits from their parents.
  2. Variation of heritable traits arises.
  3. New traits affect the fitness of the organism with respect to its environment.
  4. The fitness of an organism affects the relative number of offspring it produces compared to competitors.

If one accepts these postulates, then one really ought to assent to the idea that natural selection will lead to the proliferation of particular traits in a population. Prior awareness of human heredity and selective breeding in plants and animals helped many people get over their prior common sense notions of unchanging design or the inheritance of acquired traits. Once fellow scientists were able to wrap their heads around the theory, they could join Darwin in searching for further evidence: transitional fossils and common features in related species. Assessing the evidence relies on intuitive notions of similarity — notions that we still haven’t made completely explicit even in the 21st century. Regardless, we can see that qualitative theories clearly allow for prediction, and therefore the feedback and self-correction that mark the best examples of modern science.
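
To see how the four postulates, and nothing more, already yield Darwin’s conclusion, here is a deliberately crude toy simulation (a hypothetical sketch of mine with arbitrary numbers, not a model of any real population): a single heritable numerical trait, small random variation at birth, fitness that rises with the trait, and offspring drawn in proportion to fitness. The population’s average trait climbs generation after generation.

    import random

    random.seed(1)
    # Arbitrary illustration values: 200 individuals with a trait between 0 and 1.
    population = [random.uniform(0.0, 1.0) for _ in range(200)]

    def fitness(trait):
        return trait                                   # postulate 3: trait affects fitness

    for generation in range(21):
        weights = [fitness(t) for t in population]
        # postulate 4: number of offspring is proportional to fitness
        parents = random.choices(population, weights=weights, k=len(population))
        # postulates 1 and 2: offspring inherit the trait, with small random variation
        population = [max(0.0, p + random.gauss(0, 0.05)) for p in parents]
        if generation % 5 == 0:
            print(generation, round(sum(population) / len(population), 3))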

7. Quantitative science

This aspect of science is so well known that it barely requires further elaboration. When the qualities recognized through perception are aligned with external measuring devices, we turn the process of comparison into an objective act that can be performed by almost anyone. The acts of measuring and counting bring mathematics into contact with qualities, creating the ‘hard’ sciences. Mathematical prediction involves a translation process. Qualities perceived in an experiment become associated with measurable quantities, and then represented as abstract symbols. These symbols are arranged in mathematical statements, which are manipulated using the laws of science and mathematics (and, as Laughlin and Pines point out, a touch of art), resulting in new mathematical expressions. The abstract symbols are then replaced by known quantities, allowing us to predict the values of unknown quantities. 
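
As a concrete miniature of that translation loop (a hypothetical example of mine, not one drawn from the text): the perceived quality ‘how high the ledge is’ becomes a measured quantity, the quantity becomes a symbol, the symbol is pushed through a known law, and a number comes out that an experiment with a stopwatch can then confirm or contradict.

    import math

    g = 9.81   # measured constant: gravitational acceleration, m/s^2
    h = 1.2    # measured quantity: an illustrative drop height in metres

    # Known law h = (1/2) * g * t^2, rearranged symbolically to t = sqrt(2h / g),
    # then the symbols are replaced by measured values to predict the unknown time.
    t_predicted = math.sqrt(2 * h / g)
    print(round(t_predicted, 3), "seconds to fall", h, "metres")   # ~0.495 s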

It is important to recall that mathematical prediction predates the scientific revolution. Ancient peoples armed with rudimentary mathematical tools were capable of predicting the movements of the sun, the moon, and the visible planets. But their methods were somewhat ad hoc: Ptolemaic epicycles were a form of curve-fitting. They could be used to approximate any periodic movement, but they didn’t suggest a unifying physical principle. Newton’s physics provided a starting point for a new way of thinking: objects in space and objects on earth might appear to behave very differently, but they were similar in that they obeyed the same mathematically-defined physical laws.

A physical law captures a regularity in the universe: an analogical relationship between measurables, often linked with each other through hypothetical or not-yet-measured entities. One might ask why these mathematical tools work at all. In the 20th century, the famous physicist Eugene Wigner pondered ‘The Unreasonable Effectiveness of Mathematics in the Natural Sciences’. What gives mathematics the power to align with the physical world? Why is it possible to predict aspects of the world using symbols that seem so different from the things they represent?

8. Models and simulations

The question of the effectiveness of mathematics seems to defy any easy intuitions, and remains a topic of speculation. But another tool of science rarely evokes the same sort of chin-scratching: the model or simulacrum. As I mentioned earlier, a map may be the paradigmatic example of a model. It has a structural relationship with the thing it represents. This relationship is so intuitive to most people, even children, that we might find it funny if a cartographer were to write a paper called ‘The Unreasonable Effectiveness of Maps in the Geographical Sciences’. Maps and models appeal directly to our intuitive sense of similarity, and for this reason rarely call out for explanation in terms of our other cognitive tools. The simplest models also obviate the need for an elaborate theory of causality. When you see how clockwork functions, any further explication seems superfluous. I remember putting together a Lego Technic car as a child: it had a working steering wheel and an engine with moving pistons. I don’t think a verbal explanation of what was happening would have added much to my intuitive feeling of comprehension.

When I was ruminating over the content of this essay, I thought of this list of ways of knowing as a chronological series, with computational models of the sort that I do being the newest tool in the toolbox. Compared to the tools available to Newton, a computational model can seem like a qualitative leap. But I quickly realized that models as such have been around for quite a while. And not just maps. Ancient and medieval architects, for example, did not build large structures through full-scale trial and error: they experimented with smaller models or maquettes. Even more elaborate physical models existed in antiquity: the Antikythera Mechanism, a kind of clockwork analog computer, depicted the movements of the planets in a way that could be directly compared with observation.

Modern computational modeling — of weather systems, economies, biological neural networks and other complex systems — represents a coming together of various other forms of understanding. Such models combine qualitative and quantitative scientific observations, but do not simply spit out measurable predictions. They make the task of comparing quantities with reality easier by translating the quantities back into qualities. A weather model might produce something that looks like real satellite imagery. A neural model might produce something that looks like the electrophysiological recordings from a real brain. The similarity can be quantified to assess how well the model fits reality, but in the case of a truly successful model, this can seem unnecessary: the output of the model and the experimental measurement can be superimposed on each other. In other words, the output of the model is virtually indistinguishable from the outcome of a corresponding experiment.
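
What ‘quantifying the similarity’ can amount to is nothing fancier than a correlation between the simulated trace and the measured one. The sketch below uses made-up numbers and a plain Pearson coefficient, one common choice among many rather than any particular modeler’s metric.

    def pearson(xs, ys):
        """Pearson correlation between two equal-length sequences of numbers."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sum((x - mx) ** 2 for x in xs) ** 0.5
        sy = sum((y - my) ** 2 for y in ys) ** 0.5
        return cov / (sx * sy)

    model_trace    = [0.0, 0.4, 0.9, 1.3, 1.0, 0.5, 0.1]   # simulated output (illustrative)
    measured_trace = [0.1, 0.5, 0.8, 1.2, 1.1, 0.4, 0.0]   # hypothetical recording

    print(round(pearson(model_trace, measured_trace), 3))  # close to 1.0 -> good fit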

Modern computational models often attempt to go beyond mere surface similarity (which might be dangerously close to ‘cargo cult science’). In the case of my field, neuronal modeling, this might amount to creating a model of a brain region (or the whole brain) that is composed of model neurons connected just as real neurons are in the brain. The goal of such modeling is not simply to fit the ‘global’ phenomenon uncovered by an experiment (which turns out to be relatively easy), but to predict (without fixing parameters each time) what will happen in new situations, or when individual parts go wrong. Thus the ideal neuron-based model of the brain would show how a particular injury or genetic disorder might affect psychology or behavior, and then also show how doctors might treat the symptoms or even restore the brain to its earlier state. A detailed model resembles the actual phenomenon at multiple levels of investigation.
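
For a flavor of the kind of component such network models are assembled from (and only a flavor: this is a generic leaky integrate-and-fire unit of my own sketching with illustrative parameter values, not the model described in the text), consider a single simulated neuron whose voltage leaks toward rest, integrates its input, and spikes when it crosses a threshold.

    def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=-65.0,
                     v_threshold=-50.0, v_reset=-65.0, resistance=10.0):
        """Leaky integrate-and-fire neuron; returns the time steps at which it spikes."""
        v = v_rest
        spikes = []
        for step, i_in in enumerate(input_current):
            dv = (-(v - v_rest) + resistance * i_in) / tau   # leak toward rest + input drive
            v += dv * dt
            if v >= v_threshold:
                spikes.append(step)                          # record the spike time
                v = v_reset                                  # reset after firing
        return spikes

    # A constant drive for 200 time steps produces regular, repeated spiking.
    print(simulate_lif([2.0] * 200))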

No model of any complex system has reached this level of alignment with its target. It may simply be a matter of time, but it may also be a strange consequence of the nature of modeling. A model, whether physical or computational, is itself a physical phenomenon: it does not exist in some Platonic realm. As we make a simple intuitive model more complicated, it may no longer be intelligible. When small and seemingly simple things are put together, complex emergent behavior often arises. Thus the paradox of modeling — for many people the most intuitively satisfying form of understanding, and therefore ‘adjacent’ to intuition in my diagram — is that our very attempts to make a model accurate take it out of the realm of understanding, and push it into the realm of the phenomena in need of explanation. As Paul Valéry said, “Everything simple is false. Everything complex is unusable.”

The ‘ways of knowing’ diagram.

The ‘ways of knowing’ diagram makes intuition seem like just one of several modes of knowledge. But intuition is more than that. It serves as the connective tissue linking all the other forms: it seems always present in the background, even when we engage in highly symbolic forms of reasoning. Consider linguistic knowledge. Let’s say you’re asked to speak extemporaneously about something you know a lot about. Where do your words come from? For most people I suspect they just emerge out of nowhere. There is no consciously perceptible ‘staging zone’ where words are prepped before being ejected from the mouth.

Instinct and intuition suffuse all other forms of knowing. Our basic ability to compare our knowledge, and the actions stemming from our knowledge, with the world and with the body seems to occur in this domain, just on the fringes of conscious experience. No diagram of ways of knowing can capture this, because we still don’t know how it works. I now think that what I have represented is merely the conscious shadow of knowing: the aspects of understanding that we can represent consciously to ourselves and therefore communicate with each other. Intuition serves more as a placeholder than a truly fleshed-out concept. We know it’s there, but we don’t really know how it develops, when it’s right, and when it’s wrong.

The suspicion I felt when I experienced ‘truthiness’ during the Žižek talk might simply be a necessary way of dealing with the fact that there are two modes of knowing: explicit and implicit. It seems as if intuition’s role is to point the conscious mind in the direction of potential discovery. The role of the conscious mind in turn is to ensure that intuition is well-trained.

Unlike the other points on the octagon, intuition cannot be taught or imparted directly. At best, we can show people how to do things as we do, and hope that eventually their intuitions align with ours. This is what happens eventually in the case of science and mathematics education, but it is also a guiding spirit in art, music, cooking… every technique that humans are capable of learning. Our explicit ways of knowing grow out of an unconscious reservoir of implicit knowledge, all the while modifying it and being modified by it. Our conscious ways of knowing therefore seem to represent the visible tip of an iceberg of unknown depth. Perhaps to be truly educated, regardless of the field, is to regularly traverse the loop from instinct to explicit knowledge and back again. Only very rarely can the conscious will translate implicit knowing into explicit knowledge. But then again perhaps it doesn’t even need to: perhaps wisdom consists in accepting that there are forms of knowing that can only become manifest when the will recognizes its limits.

 

——

The diagram was drawn in Inkscape. Other images were taken from Wikipedia.

 

 

from 3quarksdaily http://ift.tt/2ppjFOA
http://ift.tt/2pKprYD

Be your selves: We behave differently on different social media

Feb 24 2018

http://ift.tt/2p900Sy

Derek Thompson in The Economist:

A friend who stumbled upon my Twitter account told me that my tweets made me sound like an unrecognisable jerk. “You’re much nicer than this in real life,” she said. This is a common refrain about social media: that they make people behave worse than they do in “real life”. On Twitter, I snark. On Facebook, I preen. On Instagram, I pose. On Snapchat, I goof. It is tempting to say, as my friend suggested, that these online identities are caricatures of the real me. It is certainly true that social media can unleash the cruellest side of human nature. For many women and minorities, the virtual world is a hellscape of bullying and taunting. But as face-to-face conversation becomes rarer it’s time to stop thinking that it is authentic and social media are artificial. Preener, snarker, poser, goof: they’re all real, and they’re all me.

The internet and social media don’t create new personalities; they allow people to express sides of themselves that social norms discourage in the “real world”. Some people want to lark around in the office but fear their boss will look dimly on their behaviour. Snapchat, however, provides them with an outlet for the natural impulse to caper without disturbing their colleagues. Facebook and Instagram encourage pride in one’s achievements that might appear unseemly in other circumstances. We may come to see face-to-face conversation as the social medium that most distorts our personalities. It requires us to speak even when we don’t know what to say and forces us to be pleasant or acquiescent when we would rather not.

But how does the internet manage to elicit such different sides of our personality? And why should social media reveal some aspect of our humanity that many centuries of chit-chat failed to unearth?

More here.

from 3quarksdaily http://ift.tt/2pmeH59
http://ift.tt/2pKprYD

The art of dream interpretation was long dominated by religious approaches. Then came the rationalists, philosophers, poets, and psychologists

Feb 24 2018

http://ift.tt/2oIao43

Alicia Puglionesi explores a curious case of supposed dream telepathy at the end of the US Civil War, in which old ideas about the prophetic nature of dreaming collided with loss, longing, and new possibilities of communication at a distance.

Illustration from Edwin D. Babbitt’s The Principles of Light and Color (1878) — Source.

In the predawn of July 21, 1865, a young man in Cambridge, Massachusetts, woke from a deep sleep with a strange phrase on his lips. “What they dare to dream of dare to die for”, he recalled saying to himself, before slipping back into unconsciousness, wondering dimly “whether [the words] really expressed a lofty thought, or were lofty only in sound.” Later that day, the man, who gave his name as Mr. W., was surprised to hear the line delivered from a podium by the famous poet James Russell Lowell, at a Harvard commemoration ceremony for students killed in the Civil War. Lowell’s version replaced “die for” with “do” in order to fit the rhyme scheme, a discrepancy which left Mr. W. pondering “whether I liked his sentiment or mine the most”.1 Decades later Mr. W. still recalled the coincidence vividly enough to submit his account to Harvard psychologist William James, “hoping that these reminisces may be amusing to your society” – that is, the American Society for Psychical Research (ASPR), a group devoted to collecting and studying supernormal mental phenomena. Self-deprecation aside, Mr. W. clearly saw more than mere amusement in this incident that he continued to ponder for so many years. Had a force more powerful than random chance actually transmitted Lowell’s as-of-yet unknown verse to Mr. W. while he slept?

Dreams so often feel trivial and momentous at the same time. They hold the promise of revelation if only one could pin down a face, a location – like the common dream of words on a page that dissolve just before their meaning registers. The history of dream interpretation is almost as slippery as dreams themselves; the practice spans cultures, but manifests in highly-specific forms, shaped by particular ways of understanding the relationship between mind and world. In religious texts, dreams usually predict the future, like Jacob’s ladder or Pharaoh’s dream of famine in the book of Genesis. In their revelatory capacity, they can unite believers with the divine and connect members of faith communities. In other settings dreams can shed light upon past events, and even have legal standing, as in the famous “Greenbrier ghost” case of 1897, where a murdered woman appeared in her mother’s dreams to out her former husband as the killer.2

Jacob’s Dream by William Blake, 1805 — Source.

A major transition occurred in the eighteenth and nineteenth centuries, when religious, spiritual, and symbolic modes of dream interpretation were challenged by rationalist accounts that explained dreaming as a product of mental mechanisms. British philosopher David Hartley, in his 1749 Observations on Man, listed three mechanistic causes of dreaming that would become standard: “First, the impressions and ideas lately received, and particularly those of the preceding day. Secondly, the state of the body, particularly of the stomach and brain.” The third source of dream content was “association”, a law by which sensations and responses became emblazoned in the brain, which many Enlightenment thinkers proposed as the basis of learning and memory. During sleep, Hartley claimed, the mental law of association continued to operate, but without any sensory input and without the rudder of reason steering it. Thus unmoored, the brain “carried on from one thing to another” at the mercy of bodily fluctuations.3

The work of natural philosophers like Hartley signaled a concerted Enlightenment effort to debunk centuries of mysticism around dreams: American iconoclast Thomas Paine proclaimed in an 1807 essay that Biblical prophecies were merely “riotous assemblage[s] of mis-shapen images and ranting ideas”, exploited by power-hungry priests to delude the people.4 His case never quite won out over the old mystical mode – instead, the two commingled, with natural and supernatural, medical and moral interpretations of dreams feeding into each other throughout the nineteenth century. In Charles Dickens’ A Christmas Carol (1843), when Scrooge announced to Marley’s ghost, “You may be an undigested bit of beef, a blot of mustard, a crumb of cheese,” he echoed Hartley and the medical authorities of his day.5 However, Dickens played it both ways, since Scrooge’s astral travels were also very real. Dickens demanded that Scrooge, and the reader, open their hearts to spiritual riches beyond the material realm of mere mustard and money.

John Leech illustration depicting Marley’s ghost appearing to a postprandial Scrooge, from A Christmas Carol in Prose by Charles Dickens, 1843 — Source.

Dickens believed that dreams could tell us something meaningful, even if the message was mediated by physiology, and the broader public seemed to agree. Fortune-telling guides could assert a divine origin while also noting that “dreams which persons have in the beginning of the night, especially if they eat heavy or solid suppers, are not so much to be depended on.”6 Knowledge of physiology’s impact on the mind made it all the more fascinating to parse out where one ended and the other began. Anecdotes like Mr. W’s flooded the American Society for Psychical Research throughout the 1880s and 90s; they circulated in magazines, newspapers, and journals, where letter-writers were just as keen as ever to discuss nocturnal visions. More than in times past, they were likely to take note of bodily states, and to speak of the mind as in some ways mechanical: an “intellectual apparatus”, a clock, engine, or camera. The new scientific psychology did not conquer older understandings of dreaming, but it helped to re-shape the language of dreamers.

The amusement Mr. W felt about scooping the great poet James Russell Lowell was mixed with a genuine conviction that he received the line telepathically during a liminal state of consciousness. Psychologists speculated that thoughts had a basis in invisible vibrations or waves that acted upon the brain; if these waves did not respect the boundaries of the individual skull, they might communicate from one mind to another. In nineteenth-century theories of mind, sleep was a prime time for mental permeability, when the barriers of reason and attention dissolved.

Lowell himself inadvertently supported such a model of mental permeability with his account of the poem’s origins. In the days leading up to the commemoration ceremony, he said he was “hopelessly dumb”, stricken with writer’s block. Composing an ode to the Civil War dead in the summer of 1865 was an intimidating task even for a poet of Lowell’s stature. The Confederate army surrendered at Appomattox Courthouse only three months before, and Lincoln’s assassination deepened the nation’s misery just as relief was in sight. An estimated 750,000 soldiers, more than two percent of the American population, perished on the battlefield or in the ruins. Lowell’s mind was strained to its limits by the difficult task at hand, but also by the general despair of a country awash in its own blood. It would not surprise psychical researchers who studied such states that the Ode manifested “in a flash of sudden inspiration” that felt to Lowell like an outside force working through his pen.

James Russell Lowell in his study, detail from an image featured in Horace Elisha Scudder’s James Russell Lowell: A Biography (1901) — Source.

“It all came with a rush,” Lowell recalled, “literally making me lean and so nervous that I was weeks in getting over it.”7 The visceral impact of his effort – he claimed that he lost ten pounds – was intertwined with the psychic element, nerves frayed by the intensity of the message they conducted. Lowell reported writing most of the piece between 10 p.m. and 4 a.m. on the night before the ceremony,8 supporting neatly the idea that Mr. W.’s strange experience occurred at the peak of Lowell’s emotional labor. Perhaps the spirit of the Ode was literally in the air, like electricity, magnetism, and the other invisible forces captured by nineteenth century physics.

If so, the invisible force wringing Lowell’s nerves bore a distinctively national imprint. As a poet, he was selected to channel the passions of his less-articulate fellow citizens in the wake of a devastating war. Even if the night of July 21st was not quite as revelatory as his account makes it out to be, he told the story of the Ode’s creation the way he did to make his channeling function explicit. It would make sense to contemporary readers that the psychic energies of a nation might overwhelm a medium’s capacity and ricochet to other, lesser receptacles; for instance, the innocent Mr. W. sleeping nearby in Cambridge.

Diagram on telepathic communication, featured in Psycho-therapy in the Practice of Medicine and Surgery (1903) by Sheldon Leavitt.

Based on personal details provided in his letter, we can identify Mr. W. as Charles Pickard Ware, an English teacher known for publishing a collection of American slave songs in the 1870s.9 Harvard philosopher Josiah Royce, who presented Ware’s story in the Proceedings of the American Society for Psychical Research, diagnosed a case of “most beautiful pseudo-presentiment”. He thought that Ware’s unconscious mind, whipped into a heightened emotional state during Lowell’s delivery of the “Commemoration Ode”, simply conjured a false memory of the previous night’s dream.10 Royce set out to debunk Ware’s suggestion of telepathy with a psychological argument about how the mind works. “Pseudo-presentiments”, he explained, could be “created or reinforced by dreams” that occurred after the supposedly-foreseen event. “By an instantaneous and irresistible hallucination of memory”, the agitated mind projects the dream into the past “so as to make it seem prophetic, or at least telepathic.”11 Royce placed this case on a continuum of susceptible mental states ranging from the temporary unreason of sleep to the elation of poetry to paranoiac delusions, all of which could produce astonishing experiences like Ware’s.

It’s unlikely that Ware was asking to be compared to a paranoid asylum patient when he sent his recollections to the ASPR. Though he didn’t name what happened to him, he strongly insinuated that a form of telepathy or clairvoyance was at work – a real, rather than a deluded, communion. This was certainly at odds with Royce’s pathological framing, yet the two sides of the debate both claimed a methodology rhetorically aligned with the spirit of modern science. Both tried to elaborate a set of law-like, rational processes that eliminated the need for any supernatural agency. Advocates of telepathy saw thought as a material or energetic substance that could travel unconsciously between minds, similar to the behavior of electricity or magnetism. Royce took the position increasingly shared by psychologists in the later nineteenth century; they agreed that thoughts emerged from some wave- or electricity-like process acting on brain cells, but they restricted this process to the individual’s brain. Supernormal experiences arose from errors in our skull-encapsulated machinery, not from outside signals.

Despite skepticism from some scientists, people took the idea of spontaneous, unconscious mental transmission quite seriously, as a possibility and as a danger, in an age when powerful ideas crisscrossed the nation through new and mysterious channels. From mass print to the telegraph to the railroad, burgeoning communication systems collapsed time and space through increasingly rapid connections. They brought unprecedented economic growth, creating new forms of investment and trading that depended as much on information flow as they did on the movement of commodities. Such precipitous connectedness also posed a threat to the socio-economic order: it allowed laborers to organize, abolitionists and suffragists to rally. Dangerous ideas could spread uncontrollably, and many worried that hardware might not limit their range.12 The line between technology and telepathy blurred, with medical men like William Carpenter explaining the nervous system as a telegraph and extending its reach beyond the individual body; he believed that “nerve-force,” as a form of electricity, could “exert itself from a distance, so as to bring the Brain of one person into direct dynamical communication with that of another.”13 This popular analogy turned the country into a literal body politic that could succumb to hysteria or mass frenzies originating with a single disturbed citizen.

A man fixes telegraph wires during the US Civil War, ca. 1863 — Source.

Much of the American medical and scientific work on this topic appeared in the 1870s, which meant that its authors shared a common point of reference for mass psychic upheaval: the Civil War. For instance, Philadelphia physician and novelist Silas Weir Mitchell wrote about mental permeability and contagion in a decidedly negative light: if people could see each others’ true thoughts, “the whole fabric of civilization would crumble”, one of his characters warned.14 During the war, Mitchell served in the Turner’s Lane army hospital, where he studied the lingering effects of nerve injuries through intimate conversations with men who had actually seen civil society crumble around them and who had fought in its ruins. In his subsequent decades of practice, Mitchell urgently counseled patients and the public on shoring up their nervous power to defend against the intrusion of alien psychic forces. Though he denounced psychic mediums as frauds, Mitchell wrote so incessantly about mental barriers and scenarios of penetration that his concern about volatile powers of mind is unmistakeable. Perhaps, like Walt Whitman, another Civil War medic, Mitchell was plagued by “dreams’ projections”, sentenced each night to “thread my way through the hospitals”.15 What war reporter Ambrose Bierce called a “spiritual darkness”,16 unleashed over four years of brutal killing and halted only with fragile documents, left a generation wary of the violence lurking below the surface of American life.

Others, however, saw great potential in harnessing the mental forces that stirred men from remote villages to take up arms. During the war, dreams had joined people to a common cause in very tangible ways. They appeared, of course, in political speeches and poems – Whitman again, with his imperiled “dream of humanity, the vaunted Union”.17 Dreams and premonitions forged a meaningful connection between civilians, soldiers, and the distant leaders whose choices determined their fate – especially Abraham Lincoln, who traversed the dreamscape of Civil War America as restlessly as he stalked the White House’s corridors. But dreams also pervade private letters, journals, and newspaper reports of more intimate encounters. People felt linked with imperilled loved ones through moments of clairvoyance, premonition, or dream communion.

The Soldier’s Dream of Home, a Currier and Ives print, ca. 1861–1865 — Source.

For instance, on a July night in 1863, Sarah Oates saw a vision of her son John, wounded and dying.18 Her dream extended even to his burial, giving her a chance to be virtually present as the young man was lowered into an unmarked grave. The grave was in Gettysburg, Pennsylvania, hundreds of miles away. Two days later, the letter arrived telling of John’s fatal injury, but Sarah had already gone into mourning. Countless families treasured and circulated such stories; they acted as a virtual salve, helping bend violent death on the battlefield towards the nineteenth-century ideal of a “good death” which occurred at home surrounded by loved ones. Psychic manifestations of familial love in times of crisis suggested that these invisible bonds might transcend even a war of brother against brother.

By stretching love and intimacy across vast distances, and by troubling the presumed boundaries of the self, the act of dreaming remained more than a physiological function of the brain during sleep. From its earliest links with prognostication, dreaming connoted an expectant gaze towards the future, a hope (or promise, or fear) for the waking world. The Enlightenment’s dismissal of divine prophecy changed the register in which dreams spoke to the collective destiny of families, communities, and nations. Dreams became a metaphor, but one still linked to interiority and revelation. In the satirical device of illustrating a politician’s nightmare, we see a Thomas Paine-like jab at the persistence of superstition.

Abraham’s dream! — “Coming events cast their shadows before”, a Currier and Ives print portraying the President tormented by nightmares of defeat in the election of 1864 — Source.

Yet the cartoonist who lampooned Abraham Lincoln as a trembling dreamer worked within the tradition of casting the president as a sort of secular prophet. Only a few years before the start of the Civil War, artist Louis Maurer produced a decidedly un-ironic image of George Washington dreaming of liberty and justice in the midst of his generation’s war for national independence – not an idle wish, but the hoped-for outcome of waking struggles shared by mighty generals and humble soldiers. Representations of enslaved African Americans showed them suffering the torments of this dream’s constant denial. Equals in mental mechanism must be equals in their capacity to imagine a better life, and thus, in theory, equals in democratic politics.

Washington’s Dream, a Currier and Ives print, ca. 1857 — Source.

Pitched warfare over this very claim had barely ended when Charles Ware woke abruptly, before sunrise, on a friend’s couch in Cambridge, where he’d traveled to attend a memorial service for his former classmates. “What they dare to dream of dare to die for” arrived, like a telegram, instantaneously and pre-formed, piercing his slumber. At the time, Ware was still young and harbored literary aspirations. Though he never succeeded as poet, perhaps his moment of communion with the renowned James Russell Lowell affirmed a private sense that Ware, too, had access to a higher plane of inspiration. The less we think of Ware as gifted or inspired, however, the more his experience appeals to a broader notion of dreaming as a perhaps too-democratic pathway for shared sentiment and action. The violent upheaval of war seemed to demonstrate that the stirrings of individual hearts were highly contagious. The line in question, “And what they dare to dream of, dare to do”, refers to the collective political dream that led Harvard’s undergraduates into battle. Whether people decided to classify such dreams as symbols, portents, or mechanical blips, they took on an indisputable, concrete reality for many nineteenth-century Americans, and their power to manifest in catastrophic ways gave skeptics pause.

Anecdotes of dream communion represent a kind of “dream work” distinct from the later Freudian sense of the term: dreams “worked” in the narrative context of people’s lives to account for sometimes-mysterious shared experiences. At the same time, much of the reading public followed current psychology and viewed dreaming as a physiological activity of the brain – the result of electrical or etheric impressions moving along telegraph-like circuits.19 This left ambiguity as to where the boundaries of the unconscious fell – was it restricted to the individual mind, or part of a larger psychic economy? Lowell’s “Commemoration Ode” appeared to leap the divide between two strangers, confounding distinctions of authorship. Despite scientific efforts to rationalize and individualize dreaming, it retained its old portentousness in an hour when the need to unify diverse citizens was an overbearing anxiety.

Whether through science or art, dreams proved difficult to regulate for practical aims. Lowell could testify to this challenge. While one line of his poem made a profound impression on Charles Pickard Ware, the full version was a flop with his audience at Harvard, and nobody bothered to reprint it in the newspapers. Despite a profusion of theories, neither poets nor psychologists seemed any closer to mastering the laws of dreaming, inspiration, and sympathy.


Alicia Puglionesi holds a PhD in the History of Science, Medicine, and Technology from Johns Hopkins University. She is currently an NEH Postdoctoral Fellow at the Consortium for the History of Science, Technology, and Medicine in Philadelphia. Her essays from various corners of the web can be found here.


1. Letter from “Mr. W” quoted in Josiah Royce, “Report of Committee on Phantasms and Presentiments”, PASPR 1:4 (1889) 373.

2. “The Greenbrier Ghost”. Wvculture.org. N.p., 2017. Web. 5 Apr. 2017.

3. Hartley, David. Observations On Man. 1st ed. London: Printed for Leake & Frederick, 1749.

4. Paine, Thomas. “An Essay on Dreams“, in The Great Works Of Thomas Paine. 1st ed. New York: D. M. Bennett, 1878.

5. Dickens, Charles, John Leech, and Daniel Maclise. A Christmas Carol In Prose. 1st ed. New York: Walter J. Black, Inc.

6. The New Dream Book, Or, Interpretation Of Remarkable Dreams. 1818. 1st ed. Boston: Printed for Nathaniel Coverly.

7. Quotation from Lowell to James B. Thayer, Jan. 18, 1886, in Letters of James Russell Lowell, Volume II, ed. Charles Eliot Norton (New York: Harper & Brothers, 1893), 10.

8. Horace Elisha Scudder, James Russell Lowell: A Biography (Cambridge: Riverside Press, 1901), 63-65. On the metaphor of male creative production as childbirth, see Sherry Marie Velasco, Male Delivery: Reproduction, Effeminacy, and Pregnant Men in Early Modern Spain (Nashville: Vanderbilt University Press, 2006) and Michael Davidson, “Pregnant Men: Modernism, Disability, and Biofuturity in Djuna Barnes.” Novel 43.2 (2010): 207–226.

9. 1862-1912 Class Report: Class of ’Sixty-Two Harvard University, Fiftieth Anniversary, ed. by Charles Pickard Ware (Norwood, Mass: The Plimpton Press, 1912). Hamilton Bail also identifies Mr. W. as Ware (Bail, “James Russell Lowell’s Ode,” 177).

10. Josiah Royce, “Report of Committee on Phantasms and Presentiments”, PASPR 1:4 (1889): 373. Royce personally attested to Mr. W.’s character, noting that he was a “well-known gentleman of a suburban community” near Boston.

11. Ibid., 375-377.

12. The internet made these concerns a reality in twenty-first-century resistance movements like the Arab spring and Occupy; while related fears of unconscious mental influence are also at play in what some analysts see as the weaponization of social media through psychologically-exploitative algorithms.

13. William Benjamin Carpenter, Principles of Mental Physiology: With Their Applications to the Training and Discipline of the Mind, and the Study of Its Morbid Conditions (London : H.S. King & Co., 1876), 633.

14. Silas Weir Mitchell, Dr. North and His Friends (New York: The Century Co., 1900), 95.

15. “818. The Wound-Dresser. Walt Whitman. 1909-14. English Poetry III: From Tennyson To Whitman. The Harvard Classics”. 2017. Bartleby.Com. http://ift.tt/2nOGp6w.

16. Bierce, Ambrose. 1909. Iconoclastic Memories Of The Civil War. 1st ed. Girard, Ka.: Haldeman-Julius.

17. “20. Battle Of Bull Run, July, 1861. Specimen Days. Whitman, Walt. 1892. Prose Works”. 2017. Bartleby.Com. http://ift.tt/2oI8Bvw.

18. Glenn W. LaFantasie, Gettysburg Requiem: The Life and Lost Causes of Confederate Colonel William C. Oates (Oxford: Oxford University Press, 2007).

19. An example of this metaphor is Oliver Wendell Holmes, Mechanism in Thought and Morals. An Address Delivered before the Phi Beta Kappa Society of Harvard University, June 29, 1870 (Boston, J. R. Osgood & co., 1871); for the long history of the metaphor see Margaret Boden, Mind As Machine: A History of Cognitive Science (Oxford: Oxford University Press, 2008).


Public Domain Works

  • Mention of the Ode in James Russell Lowell: A Biography (1901) by Horace Elisha Scudder.
  • “Report of Committee on Phantasms and Presentiments”, by Josiah Royce in Proceedings of the American Society for Psychical Research 1:4 (1889) 373.
  • “Ode Recited at the Harvard Commemoration” (1865), by James Russell Lowell.
  • Principles of Mental Physiology (1876) by William Benjamin Carpenter.

Further Reading

Midnight in America: Darkness, Sleep, and Dreams during the Civil War (The University of North Carolina Press, 2017)

by Jonathan W. White

In this innovative new study, Jonathan W. White explores what dreams meant to Civil War–era Americans and what their dreams reveal about their experiences during the war.


from Arts & Letters Daily http://ift.tt/2oCrLDL
http://ift.tt/2oQn4VC