Tracking Your Eyes

Posted in Jayne's blog

DOWNLOAD THE ENTIRE FEBRUARY 2015 NEWSLETTER including this month’s freebie.

“Eye tracking for marketing purposes?!”

I was flabbergasted, and not in a good way.

This was my reaction a couple of years ago when I attended a training session for a machine I was going to be using as part of the neuroscience research I was carrying out. I wanted to measure the skin conductance and heart rate of volunteers whilst they were in the scanner. The company that had developed the machine proudly told us how its eye-tracking software was being used for marketing purposes.

I’d become familiar with eye tracking during different research studies I’d been connected with. We’d hooked volunteers up to eye trackers during fMRI (functional magnetic resonance imaging) brain scans when investigating autism, empathy and pain circuits. I also knew from my natural medicine background that eye movements are an integral part of some therapies: NLP (neuro-linguistic programming) practitioners observe the eyes to assess how a client stores and retrieves information (and even whether they are lying), and EMDR (eye movement desensitisation and reprocessing) uses eye movements to rework trauma.

The idea of monitoring my eye movements as I entered a virtual supermarket, to see whether I went for branded products or whether my eyes drifted to the health food section, all in order to gather information to push more products on us, made my flesh crawl. It all felt very science fiction to me, and far too invasive. That was just two years ago. Now Google has launched its Google Glass, so it seemed like a good time to dive into this whole subject for ‘On the Border’. In doing so I’ve had my own eyes opened and my resistance challenged: the technology has genuine advantages as well as manipulative disadvantages.

Loosely defined, eye tracking refers to any technology that can monitor the direction of our gaze and the behaviour of our eyes, in the process generating data that give clues to our intentions. Interactions with devices equipped with eye-tracking sensors and software can seem intuitive and effortless, as if our gadgets are reading our minds.

Not so incidentally, as the technology advances, researchers are learning ever more about the workings of our eyes and unobservable aspects of the mind: our thoughts, our mental focus and the pathways into our consciousness. Eye tracking can reveal whether we are processing the things in front of us or are mentally adrift, whether we recognise a face or have never encountered it before, or whether we did encounter it but then forgot. Our new understanding of eye movements is also spurring development in a host of industries, especially gaming, computers and health care. Marketers are eager to tap into our gaze patterns, too, with implications for privacy.

Researchers developed eye tracking primarily to learn about basic visual processing (say, how we meld independent streams from each eye into a single mental image). Clinicians were also interested in how eye movements relate to disorders involving vision problems, such as vertigo. Initially eye tracking was a matter of simple observation. The experimenter would sit across from a person and take notes about the behaviour of the subject’s eyes.

Early findings were surprising. Despite our subjective experience of vision as a smooth sweep across a stable landscape, the movement of our eyes is anything but steady. In most instances, our eyes stay relatively still for extremely short periods (usually around a third of a second), followed by rapid jerks until they alight on their next target. The short, still periods are known as fixations, and the quick jumps are called saccades.
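To make the distinction concrete, here is a minimal sketch, in Python, of the dispersion-based rule many labs use to separate the two in raw gaze data: samples that stay within a small spatial window for roughly a tenth of a second or more are grouped into a fixation, and the jumps between fixations are treated as saccades. The thresholds are illustrative placeholders, not values from any particular study.

```python
# Illustrative dispersion-based fixation detection (a simplified I-DT-style scheme).
# Samples that stay within a small window for at least min_duration_ms form a
# fixation; the jumps between fixations are the saccades. Thresholds are
# hypothetical examples, not values from the research described in the text.
def find_fixations(samples, max_dispersion_deg=1.0, min_duration_ms=100):
    """samples: list of (timestamp_ms, x_deg, y_deg); returns (start_ms, end_ms) pairs."""
    fixations, window = [], []
    for sample in samples:
        window.append(sample)
        xs = [p[1] for p in window]
        ys = [p[2] for p in window]
        dispersion = (max(xs) - min(xs)) + (max(ys) - min(ys))
        if dispersion > max_dispersion_deg:
            # The eye has jumped: close the current window if it lasted long enough.
            if len(window) > 1 and window[-2][0] - window[0][0] >= min_duration_ms:
                fixations.append((window[0][0], window[-2][0]))
            window = [sample]  # start a fresh window at the new location
    if len(window) > 1 and window[-1][0] - window[0][0] >= min_duration_ms:
        fixations.append((window[0][0], window[-1][0]))
    return fixations
```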

The main reason for the jerky behaviour is that our visual sweet spot is very small. Indeed, the part of the visual field that delivers a sharp image is about the size of a small coin held at arm’s length, with quality falling off sharply toward the periphery. So we move our eyes constantly to bring new pieces of information into central focus. That herky-jerky motion posed a puzzle for investigators: How, despite the constant movement, do we experience vision as stable?

Hence the quest for hardware that can track every movement of the eyes, no matter how fleeting. Early in the 20th century psychologist Edmund Huey of the Western University of Pennsylvania created a device that could correlate eye movements with the words on a page as someone reads. It was a rather invasive apparatus, involving a plaster cup, worn on the eyeball, with a tiny hole through which the subject could see. A lever was attached to the eyecup and to the lever a pen, which made contact with a rotating drum as the participant read. To minimize irritation, the eyeball was anesthetized with cocaine, and the head was held in place using clamps and a bite bar. Other early contraptions used combinations of contact lenses, suction cups, embedded mirrors and magnetic field sensors to triangulate the focus of the viewer’s attention.

Nowadays researchers rely on the way the cornea reflects light to chart the rotation of the eyeballs. In a typical experiment—say, to study reading or attention—a participant sits in front of a computer with her head on a chin rest. A small camera at the base of the computer zooms in on one (or both) of the eyes as diodes emit near-infrared light (people cannot perceive light at that wavelength, so they experience no discomfort). The light bounces back to the camera, and computer algorithms convert the reflection data into a real-time gaze path of the eye. By combining information about the pattern of corneal reflectance, the displacement of the pupil and the location of the computer in relation to the participant, tracking systems can tell precisely where the participant’s gaze falls on the computer screen.

A greater challenge came when research moved beyond the laboratory. Spatial accuracy is critical in eye tracking because a minute error in measuring the orientation of someone’s gaze will throw off any interpretation of what the person is seeing. Achieving that precision is harder in the outside world because head and body movements can interfere with measurements of gaze location. So researchers developed devices that superimposed information about the gaze onto the environment, such as a helmet topped by a camera that melded a recording of eye movements with a real-time video of the subject’s field of vision. Today’s wearable tracking devices take the form of lightweight goggles, but the principle remains the same: a small sensor tracks the dark spot of the pupil to pinpoint the direction of the gaze, and a tiny camera mounted directly between the eyes records the scene.
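To give a feel for the underlying computation, here is a minimal sketch (not any vendor’s actual software) of the regression step at the heart of the calibration described above: the user fixates a handful of known on-screen targets, and the system learns a mapping from the measured pupil-to-glint offset to screen coordinates. All numbers are made up for illustration.

```python
# Hypothetical sketch of calibration-based gaze mapping: fit a simple quadratic
# regression from the pupil-to-glint offset (as measured by the camera) to
# screen coordinates, then use it to estimate where new samples land.
import numpy as np

def _features(v):
    x, y = v
    return np.array([x, y, x * y, x**2, y**2, 1.0])

def fit_gaze_mapping(pupil_glint_offsets, screen_targets):
    """Least-squares fit from nine (or more) calibration points to screen (X, Y)."""
    design = np.array([_features(v) for v in pupil_glint_offsets])
    coeffs, *_ = np.linalg.lstsq(design, np.asarray(screen_targets, dtype=float), rcond=None)
    return coeffs

def estimate_gaze(pupil_glint_offset, coeffs):
    return _features(pupil_glint_offset) @ coeffs  # predicted (X, Y) in pixels

# Usage with made-up numbers: a 3 x 3 grid of calibration targets on a 1920 x 1080 screen.
offsets = [(i, j) for i in (-1, 0, 1) for j in (-1, 0, 1)]      # measured eye offsets
targets = [(960 + 800 * i, 540 + 400 * j) for i, j in offsets]  # known target positions
mapping = fit_gaze_mapping(offsets, targets)
print(estimate_gaze((0.5, -0.2), mapping))                      # estimated gaze point
```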

 

Knowing What We See

Besides exposing the mechanics of vision, eye tracking can also help us understand the invisible elements of cognition: what we remember, how we feel, and what we are paying attention to, whether we are aware of it or not. For instance, eye movements can reveal when we are looking at something we have seen before, even if we have no memory of encountering it. In a 2012 study led by psychologist Deborah Hannula, subjects were asked to memorise an image of a face. When shown a panel of faces that included the original, the subjects spent more time examining the image they had seen before than those they had not. If the images instead included a slightly manipulated version of the original face, the subjects still tended to identify the altered image as the real thing. Yet their eyes were not fooled. The subjects spent less time looking at the manipulated images than the original, suggesting that the eyes recognised them as fakes. These findings have important implications for the interpretation of eyewitness testimony, say, in gauging whether someone looking through a book of mug shots has seen one of the faces before.

Patterns in eye movements can also give us insights into thinking and emotion. In a 2009 study, researchers used eye tracking to examine how people regard threats. They discovered that the subjects’ eyes moved faster toward threatening faces and body postures than benign ones, suggesting that our oculomotor system is primed to detect imminent danger. Individuals who are scared or anxious also show a bias toward threatening objects and faces and have a harder time moving their eyes away from the threats than other people do. Reinforcing this finding, a 2014 study found that people are quicker to focus on an angry face in a crowd of happy faces than they are on a happy face in a crowd of angry faces, suggesting that danger more than singularity is what draws the eye.

Our eyes are also markers of mental effort. Eckhard Hess, a pioneer of pupillometry (the measurement of pupil size) in the 1960s, found that the pupils of his participants dilated when they performed challenging multiplication problems, much as our pupils widen when we enter a dimly lit room. The pupils are an ideal structure for objective research: unlike our eyeballs, which we can consciously direct—say, by looking one way or another—we have no voluntary control whatsoever over our pupils. Researchers hope that analysis of pupil measurements will help reveal when workers are overtaxed, especially those in safety-critical jobs such as air traffic control, baggage screening, truck driving and surgery.

Similar research can help the desk-bound pay attention to what they are doing, too. Psychologist Erik Reichle is working on a system that can let people know when they are zombie reading—that phenomenon by which we move our eyes over text for a while without taking in a word we are seeing. In a 2010 study, Reichle discovered that our eyes behave differently when we lose mental focus. If we are concentrating, our fixations tend to be shorter when we look at familiar words and longer when we look at less common ones. That variation is absent when we read mindlessly, even though our eyes are still hitting the mark. Now Reichle is trying to develop algorithms that can sift eye-tracking data and alert readers as soon as their attention wanders.
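To illustrate the flavour of such an algorithm, here is a hypothetical sketch in Python. It leans on the finding just described: attentive readers show a clear negative correlation between a word’s frequency and how long they fixate on it, so a correlation that flattens out over a window of recent fixations could be taken as a sign of zombie reading. The window size, threshold and numbers are illustrative, not Reichle’s.

```python
# Hypothetical mind-wandering check: during attentive reading, common words get
# shorter fixations, so fixation duration and word frequency correlate negatively.
# A correlation near zero over recent fixations suggests "zombie reading".
import statistics

def looks_like_zombie_reading(fixation_ms, word_log_freq, threshold=-0.2):
    """Return True when durations no longer track word frequency (Pearson r above threshold)."""
    r = statistics.correlation(fixation_ms, word_log_freq)  # requires Python 3.10+
    return r > threshold

attentive = looks_like_zombie_reading([180, 205, 230, 195, 250], [5.2, 4.9, 4.1, 5.0, 3.8])
print(attentive)  # False here: long looks line up with rare (low-frequency) words
```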

 

Practical Tracking

As we learn more about the relation between eye movements and the mind, eye-tracking technology is finding its way into real-world applications, especially in the control of digital devices, gaming and health care. Eye trackers can already replace the mouse for such tasks as clicking, zooming and scrolling. Users might click by staring at an icon for a period of time, zoom in or out by fixing on a location and pressing a controller key, and scroll by moving their eyes up or down.
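As a concrete illustration of the first of those interactions, here is a minimal, hypothetical dwell-click sketch: if the estimated gaze point stays within a small radius of an icon for long enough, the system treats it as a click. The 800-millisecond dwell time and 40-pixel radius are arbitrary illustrative values, not figures from any shipping product.

```python
# Hypothetical dwell-based "click": staring at an icon for DWELL_MS milliseconds
# counts as a click; glancing away resets the timer. Values are illustrative.
import math

DWELL_MS = 800
RADIUS_PX = 40

def dwell_click(gaze_samples, icon_center):
    """gaze_samples: iterable of (timestamp_ms, x_px, y_px); True if a dwell click occurred."""
    dwell_start = None
    for t, x, y in gaze_samples:
        if math.dist((x, y), icon_center) <= RADIUS_PX:
            if dwell_start is None:
                dwell_start = t              # gaze just arrived on the icon
            if t - dwell_start >= DWELL_MS:
                return True                  # held long enough: register a click
        else:
            dwell_start = None               # gaze left the icon: reset
    return False

# Usage with made-up samples at roughly 60 Hz, all landing near an icon at (500, 300).
samples = [(i * 16, 505, 298) for i in range(60)]
print(dwell_click(samples, (500, 300)))      # True
```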

Adding an eye-tracking system to a computer or tablet is apparently fairly simple. The devices, which incorporate a light source and sensor, are small and sleek and adhere to the bottom of a monitor or the frame of a laptop or tablet, connecting through a USB port. They are relatively affordable, with models from companies such as the Eye Tribe and Tobii costing approximately €100. Users install the relevant software and complete a quick calibration procedure (usually a training program that teaches the software the characteristics of the user’s eyes). Current devices require the user to do some programming (such as creating drivers), so they are not (yet) plug and play. Computers and tablets with built-in eye-tracking technology are expected to reach the market soon, including, according to rumours, an Apple iPad Pro. Indeed, Apple submitted a patent in 2013 for eye-tracking technology that would address the tendency of an image to fade from our perception if we stare at it too long.

Mobile devices that monitor eye movements are also making their way into the market. If you want a fumble-free shutter button on your iPhone, one app already lets you take photographs by winking. Google Glass will do the same. The Samsung Galaxy S4 and S5 phones already let users pause videos by looking away from the screen or turn pages on an e-book with a tilt of the head.

The industry most eager to adopt eye-tracking technology may be gaming. An eye-tracking version of a shooter game, for instance, would let players move their avatar around a virtual world by looking at the spot where they want to advance. Pressing a key might open a menu of weaponry; players would select items by blinking and attack by looking at the target and pressing the trigger key. According to a January 2014 online story in Gizmag by Jonathan Fincher, gamers who tested early versions of the technology said that using it was a little uncomfortable at first (the urge to reach for the mouse was especially hard to resist), but in the end they found that aiming and shooting with their eyes was faster and more accurate. Eventually players equipped with eye-tracking gear will likely have a speed advantage over those using standard mouse controls, and nothing drives a technology like an arms race.

 

Good Medicine

Eye tracking is also on its way to becoming an important tool in health care. Already the technology has streamlined the screening and diagnosis of a variety of disorders with visual components, and it will soon help people with disabilities navigate the world.

On the diagnostic frontier, eye tracking is particularly useful in detecting Parkinson’s disease, schizophrenia, and a host of childhood maladies, including autism, attention-deficit hyperactivity disorder (ADHD) and dyslexia. People with these disorders have unique patterns of eye movements that simple computer tests can spot. In pioneering work at the University of Southern California, for example, neuroscientist Laurent Itti’s lab devised algorithms that have helped identify people with Parkinson’s with 90 percent accuracy and people with ADHD with nearly 80 percent accuracy. A 2014 study by psychologist Eva Nouzova of the University of Aberdeen and her colleagues reported progress in using eye tracking to diagnose major depression. And in work published in 2012 psychologist Philip Benson, also at Aberdeen, and his colleagues developed tests that can distinguish patients with schizophrenia from healthy controls with nearly perfect accuracy.

The schizophrenia test takes advantage of an anomaly in patients’ eye movements. When most of us track a moving object, such as a ball flying through the air, we follow it smoothly, without saccades; the implication is that smooth pursuit uses different neural circuitry from activities such as reading. When people with schizophrenia try to follow moving objects, however, their eye movements are jerky. So to screen for schizophrenia, technicians ask subjects to follow a dot as it moves around a computer screen and flag anyone whose eyes show telltale saccades. (Benson’s team won an award for its research and will use the prize money to bring the test to market.)
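The screening logic lends itself to a very simple sketch. The one below, in Python, is purely illustrative: it counts velocity spikes while the eye is supposed to be smoothly pursuing the dot, using a conventional saccade velocity threshold rather than anything from Benson’s published test.

```python
# Illustrative smooth-pursuit screen: while the subject follows a moving dot,
# eye velocity should stay low; samples exceeding a velocity threshold are
# counted as saccadic intrusions. The 30 deg/s threshold is a common convention,
# not the criterion used in the study described above.
def count_saccadic_intrusions(eye_positions_deg, sample_rate_hz, velocity_threshold=30.0):
    """eye_positions_deg: horizontal eye position per sample; returns number of saccade-like jumps."""
    dt = 1.0 / sample_rate_hz
    intrusions, in_saccade = 0, False
    for prev, cur in zip(eye_positions_deg, eye_positions_deg[1:]):
        velocity = abs(cur - prev) / dt          # degrees per second
        if velocity > velocity_threshold:
            if not in_saccade:                   # count each jump once
                intrusions += 1
                in_saccade = True
        else:
            in_saccade = False
    return intrusions

# Smooth pursuit of a dot moving at ~10 deg/s, sampled at 250 Hz, with one jump inserted.
trace = [0.04 * i for i in range(100)] + [6.0] + [6.0 + 0.04 * i for i in range(100)]
print(count_saccadic_intrusions(trace, 250))     # 1
```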

Beyond diagnostics, researchers are using eye tracking to help people with physical disabilities live independently. Individuals with neurological disorders and brain and spinal cord injuries often have limited ability to communicate. Computers equipped with gaze-interaction technology would let people use their eyes to open a browser, find their e-mail inbox and “type” by selecting words on the screen. For those who cannot talk, voice-output systems would play the text through speakers. For most, the systems would likely supplant so-called BCI (short for brain-computer interface) spellers, in which a person observes a grid of letters while wearing a cap studded with electrodes that can identify brain activity. To select a letter in the grid, the user must focus on it for several seconds. Eye tracking, in contrast, can detect the location of a viewer’s gaze instantaneously.

 

Do You See What I See?

Like many new technologies, eye tracking raises a host of ethical and privacy concerns. In this increasingly data-driven age, we have cause to wonder who will have access to the kind of information our technology collects. Any number of people could be looking over our shoulders as we browse the Internet with an eye-tracking PC or drive a car with a tracker installed (such as Hyundai’s HDC-14 concept car). Could advertisers get hold of this information? What about insurance companies or the police?

Currently advertisers use cookies to track the Web sites you visit so they can serve you ads for products that might interest you. When computers come with eye-tracking systems, these advertisers could use information about where you are looking on a page to tailor the ads even more. Some users might find the fine-tuning helpful, but imagine if pop-up ads moved around a page with your gaze or the video ads on YouTube “knew” when you were not watching them and paused until you looked at them again. These tricks are well within the scope of the technology, and clashes with consumers are bound to occur.

In 2012, for instance, Microsoft patented eye-tracking technology for its Kinect gaming devices that would let the company collect information about where users were looking on screen while playing, raising worries that Microsoft would track which ads gamers were looking at and for how long. The company got into hot water over privacy the following year, amid rumours that it would sell Kinect data to marketers and use Kinect for targeted advertising. Microsoft was also planning to tailor ads to the mood of the user by running the images captured by the eye-tracking system through facial-expression analysis. Some Kinect users voiced concern that the device would be always on and always listening, like Big Brother. Microsoft responded with a series of statements in October 2013 assuring users that they would be able to turn off the device and the ad-tracking features and that the company would not collect the data unless the user wanted it to.

The use of eye tracking as a means of identification is provoking more uneasiness. Researchers in the computer science department at Texas State University are testing biometric systems that can identify people from their unique eye-movement patterns as they read text or view a picture. In recent studies, the accuracy of eye tracking in identifying subjects was a little more than 70 percent. That rate is far below the accuracy of iris scans (90 to 99 percent) or fingerprints (up to 99 percent). As computing systems and tracking technologies develop, however, the gap will likely narrow. Even now the technology has clear benefits for home security and for protecting our devices: intruders would be locked out of your computer because the system would know from their eye movements that they were not the owner. The technology is also easier on the subject than iris scans, which require the user to hold still. The worry, however, is that an eye-tracking ID system is amenable to covert and invasive deployment.

No gadget has more potential for invasiveness and general spookiness than Google Glass, a wearable computer that projects images through a series of lenses onto the user’s retina. Like most portable devices, it will have a camera, too, facing outward. Although the current beta version of Glass does not have built-in eye tracking, Google has filed a patent to incorporate the technology into head-mounted devices. The patent, which covers the ability to track gaze and measure pupil size, suggests that Google plans to assess user engagement when people look at ads. Gaze tracking would tell Google what the users were seeing (ads, objects, people); pupillometry would measure their emotional response to these objects and to people in the environment. Armed with these data, Google could deploy a “pay per gaze” system in which advertisers paid the company for each look at one of their ads. The technology would work with anything in the user’s field of vision, including billboards, magazines and other print media, as well as images displayed on Glass.

The ethical concerns are obvious: the device could potentially identify not only where people were when they were wearing it but also what, and whom, they encountered. Google’s patent addresses privacy issues by making the data collection anonymous and letting users opt out of this form of tracking. Yet will these assurances hold if someone on the NSA watch list happens to pass before your gaze?

In the end, even when eye-tracking technology lets us control the devices that are tracking us, our sense of command may be illusory. If the eyes are the windows to our souls, we need to know who else is looking through them.

 

References:

  • The Moving Tablet of the Eye: The Origins of Modern Eye Movement Research. Nicholas Wade and Benjamin Tatler. Oxford University Press, 2005.
  • The Oxford Handbook of Eye Movements. Edited by Simon Liversedge, Iain Gilchrist and Stefan Everling. Oxford University Press, 2011.
  • High-Throughput Classification of Clinical Populations from Natural Viewing Eye Movements. Po-He Tseng et al. in Journal of Neurology, Vol. 260, No. 1, pages 275–284; January 2013.
  • Look Into My Eyes. Arryn Robbins and Michael C. Hout in Scientific American Mind, Vol. 26, No. 1, pages 54–61; January/February 2015.

 

How to Beat Burnout

Posted in Jayne's blog

DOWNLOAD THE ENTIRE JANUARY 2015 NEWSLETTER including this month’s freebie.

You lie in bed in the morning, reluctant to get out from under the warm embrace of your duvet. After several bleary minutes, you finally rouse yourself, throw on some clothes and head to the office. Having arrived at your desk, you stare blankly as e-mail loads on your screen. When you first started this job, you derived deep satisfaction from addressing the day’s challenges efficiently and artfully. Yet the optimism that used to buoy you is long gone. Now your morning coffee gives you the only jolt of energy you’ll feel all day….

The details differ by profession, but this state of being is the essence of burnout. It undoes a person’s ability to pursue a happy, healthy and productive professional life. Given that many of us spend the bulk of our waking hours at work, burnout can pose a real threat to overall well-being.

Often it begins with pure exhaustion. When you are worn out, you invest less in your job. As a result, you accomplish fewer things and feel less effective than you did before. Because work has ceased to offer the same psychological rewards, you start to feel cynical about your role. These emotions (exhaustion, feelings of inefficacy and cynicism) feed off one another, producing a vicious cycle of deepening burnout.

So do you just quit? Quitting is probably not the answer, although you might want to look for a different job. To recover a professional joie de vivre, it helps to understand the basics of burnout from a psychological perspective. Decades of research have revealed several core truths about the syndrome. First, banish the idea that it arises from a personal failing. People who face burnout do not lack some essential quality, such as work ethic, resilience or self-confidence. When all goes well, we naturally tend to bring dedication and pride to our work. Burnout represents the erosion of these noble qualities. Research has consistently pointed to management practices and poor job designs as the leading causes. The ways supervisors lead, and the structure of employees’ workdays, fail to bring out the best in people.

If you suffer from burnout, your relationship with your job has gone sour. Just as a fight with a partner or close friend can exhaust you and cause you to pull away from that person, so can a soured relationship with your job sap your enthusiasm and alienate you. Relationships are complicated things, however, so there is no single solution, no magic bullet, no “one size fits all” approach. Yet with patience and optimism, anyone can find a path back to engagement.

The Rise of Burnout

The use of the term “burnout” began gaining popularity in the 1970s, especially among people working in human services. Herbert Freudenberger, a psychologist at an alternative mental health agency, and Christina Maslach, professor of psychology at the University of California, Berkeley, wrote early articles describing idealistic young professionals in health care and social work who were overextending themselves. These workers felt discouraged because they did not have sufficient resources to do their jobs well. Instead of building a better world, they felt they were marking time in a dysfunctional system.

Psychologists’ understanding of burnout has since broadened to include any job and a wide range of causes. The most familiar reason for burnout is exhaustion from working too hard with insufficient rest. Yet that condition alone does not cause burnout, nor is it the only route. New entrants to the workforce can find their hopes dashed on entering jobs incompatible with the values they have been taught. Midcareer employees can feel disappointed that they have not advanced as they had hoped. People in service jobs are susceptible because of the high tedium and inflexibility of their workdays. Any environment where conflict or incivility is rampant can also produce burnout.

Dozens of studies support the idea that burnout has three main components: exhaustion, cynicism and inefficacy. Experiencing one of these dimensions alone is a risk factor, but qualities of the workplace can conspire to produce the other two facets, pushing a person into true burnout. The three feelings tend to be related—for example, you would not stay in a purely cynical state over the long term. Either you would start to feel exhausted and ineffective, or something breaks the trend and you find a way to reengage with your work.

Ultimately the true culprit is a mismatch between a person and a job. You might not have the resources you need, or your bosses might expect you to complete a task in a way that clashes with your principles. For example, health care providers in surveys often cite tensions between their professional ideals—to be emotionally supportive to their patients—and the constraints that undermine that goal, namely insufficient staffing and outsized workloads. The quantity of work is important, but the real trouble arises from an employee’s perception of his or her performance.

Another type of mismatch stems from lack of control. Letting people make decisions about how they spend their days is vital to a healthy work arrangement, but a sense of control can be easily eroded. Managers who set unrealistic expectations for an employee contribute to its loss. So do colleagues who do not communicate well. We all rely on others while doing our jobs, and poor communication can make our workdays more difficult and unpredictable than they need to be. When people feel that they lack control over their own work, they are particularly prone to feeling cynical and ineffectual.

Bosses who fail to express their appreciation also contribute to workers’ feelings of inefficacy. Indeed, research has shown that negative interactions with a supervisor incline a person toward burnout. Yet not all praise is created equal. In one organisation studied, for example, employees resented an employee-of-the-year award: the rank and file perceived the accolade as an indicator of who was in the good graces of company leaders and little else. Seemingly inequitable promotions can similarly harm engagement. In a 2014 survey of people’s feelings about burnout conducted by psychologist Michael Leiter, one respondent wrote, “It is difficult to watch the randomness of why some are promoted and others are ignored. It drains the spirit from you.” That respondent directly linked a feeling of being unappreciated with a loss of energy—a strong indication of burnout.

Early Warning Signs

The emotional distress of this syndrome can persist for years. Because it can become chronic, researchers have investigated whether it is possible to predict—and thus potentially ward off—the emergence of burnout. In a study published in 2008, researchers surveyed 446 employees from an administrative department at a large university, first at the beginning of the investigation and then again a year later, probing numerous areas of their work life to assess burnout. The researchers were curious to see how people who scored high on only one of the three dimensions—say, only high cynicism or only high exhaustion—would rate a year later. If those people became more burned out, the investigators wanted to know what tipping point might have sent them in that direction.

As it turned out, one such indicator was found: workplace fairness. People who perceived favouritism, cheating or other inequities were more likely to be burned out by the end of the study. Conversely, employees who viewed the workplace as a just environment tilted back toward engagement. A coincidental event during the year of the study brought the issue of fairness into stark relief. Investigators uncovered members of the department who were stealing from it. The final survey occurred soon after a few employees were apprehended and dismissed, so the researchers could assess how disruptive this event had been. The thefts undermined trust among colleagues, weakened employees’ sense of job security and, as a result, deepened burnout.

In a follow-up study of more than 4,000 forestry workers, published in 2013, Leiter and colleagues at the Finnish Institute of Occupational Health found that other factors could also be triggers for burnout. In this case, employees experiencing cynicism (but who retained high energy and efficacy) were more likely to re-engage if they felt their organisations communicated with them well, keeping them in the loop on important developments. Those without this belief slid into exhaustion and solidified their incipient burnout. For workers whose early-warning sign was feeling ineffective, the key factors were whether they could exercise diverse abilities and whether the job was predictable. What these results tell us is that there is not just one recipe for burnout. The context or culture of a workplace plays a major role.

The data further suggest that symptoms of burnout should be taken seriously. As part of the Finnish study, data was collected on participants’ purchase of psychotropic drugs, mainly antidepressants, between 2000 and 2008. The people who experienced increasing burnout were more likely than others to use these drugs in the subsequent decade. So if you notice yourself feeling chronically cynical, exhausted or ineffective at work, take a careful look at the characteristics of your job. It might be time to take some preventive action.

The Social Solution

Because burnout depends heavily on the specific relationship between a person and job, broad guidelines for recovery are hard to come by. Nevertheless, it now seems that improving the quality of workplace relationships may be one general way to intervene. Social exchanges between colleagues play a role in many facets of burnout.

First, your co-workers have the skills, information, materials and influence needed to get things done. As you might expect, people share more readily with individuals they admire and trust. Conversely, hostile workplaces eat away at a person’s ability to focus on his or her work. Consider, for example, this anecdote, also from Leiter’s 2014 survey, in which one person articulates the energy tax of negative interactions: “I love my work. I am an avid learner and a very positive person. But I work in a toxic workplace. This is a highly political environment that encourages competition between colleagues, backstabbing, gossiping and hiding information. I find going to work very difficult and come home exhausted.” Other participants in studies have similarly cited the emotional toll of unpleasant interactions. They describe feeling upset for days following a few rude words from a colleague and losing sleep over the incident—both factors that make it difficult to engage in what otherwise might be pleasurable tasks.

An opportunity to try to alleviate burnout arose in a meeting with the leaders of a hospital in 2008. These executives had taken reasonable action to strengthen the sense of community in struggling work groups, including changing team leaders and reassigning or dismissing identified troublemakers. Managers had brought in inspirational speakers and conducted team-building exercises, with minimal success.

To tackle this problem, Leiter and his research team made use of a promising approach already in place within the Veterans Health Administration (VHA). In response to similar problems throughout the VHA’s hospitals, a team led by Sue Dyrenforth, then director of the agency’s National Center for Organization Development, devised an intervention called CREW, which stands for civility, respect and engagement in the workplace. Knowing that burnout has a social angle, Leiter decided to deploy a version of CREW in several units of the hospital. Some of these units had a long history of problems, others were uncommunicative, and some functioned well but aspired to collaborate more.

Employees were divided into groups of 10 to 15 people from the same unit, and in each group one person agreed to act as facilitator. Because every team had its own sources of tension, the groups were given a collection of activities to choose from rather than being instructed to follow a single script. Before the experiment began, the researchers surveyed all the participants on their perceptions of civility in their unit as well as their own conduct, so that their impressions could be compared at the beginning and the end of the program.

Over six months the teams met about once a week. The facilitator might kick off a session by asking a question such as “How do we show respect (or disrespect) for one another here?” Then attendees might do an exercise to help settle a dispute between two people. The meetings gave employees an opportunity to work through strained relationships and practice more productive ways of defusing emotions. During the rest of the week, participants were encouraged to practice specific civility behaviors and log any acts of kindness they witnessed.

In 2011 the results of applying CREW to a group of Canadian hospitals were published. These confirmed that improving workplace civility decreases burnout. Even more encouraging, the same researchers have since found that the gains remained at a one-year follow-up. The results suggest that CREW had established new, self-sustaining patterns of social interaction.

Yet the reviews were not all glowing. The hospitals found the personnel cost of implementing CREW to be a burden. Participants had to go out of their way to fit the sessions into their workdays. Applying the lessons to their day-to-day work life also required sustained effort. Given the occasionally irksome nature of the program, it is actually pretty impressive that CREW can be effective at all ;=)

Finding Engagement

Given that not every company is about to start implementing CREW, what is an individual worker to do? Many corporations may see squeezing every bit of effort out of their employees as being to their advantage. Organisations by and large do not expect to retain their employees forever, so they are unlikely to serve their workers’ long-term interests. Employees thus must shoulder the responsibility of maintaining a sustainable work environment themselves.

The “company of one” perspective encourages individuals to think of themselves as independent contractors even when they are in an employment situation. Employees’ primary focus should remain on preparing themselves for the next career opportunity that may arise. Doing so will require establishing work habits that depart from an employer’s vision. In short, thriving in today’s work world—where cost cutting is a prime objective and employees are routinely stretched too thin—requires serious self-management. You will need to stick to a routine, even when pressured to behave otherwise.

Because burnout is a relationship issue, the individual has some, but not complete, control over circumstances. What follows are a few basic strategies for improving your contribution to the relationship. The good news is that many of these suggestions happen to be good for life in general, so you will benefit in many ways from developing these habits.

First up is fitness. A healthy way of life increases your resilience. A combination of sufficient exercise, nutrition and sleep will reduce your vulnerability to exhaustion. Although the job will not change, you will increase your endurance—and maybe even learn to thrive.

Closely related to fitness is a habit of integrating recovery cycles into your life. Demanding work depletes your physical, emotional and cognitive resources. As the saying goes, there is a reason it is called work. Your personal life should afford opportunities to enjoy relationships, catch up on sleep and take time for reflection. To reverse a trend toward burnout, a key step is to establish a firm structure for recovery activities. Lacking a structure, you will not make time for recovery in the course of a busy life.

You can incorporate small amounts of exercise and recovery into the workday, too. The strategy here is simple: get off your arse! Set an alarm every 30 minutes as a signal to get up and walk around. You can devise some activities that would convince an observer that this meandering is a necessary part of your work.

Now let us incorporate the social angle. As demonstrated with CREW, improving the quality of day-to-day exchanges among colleagues reduces burnout. You do not need your entire team to join you on this journey, but if you can recruit a friend or two to share a burnout-reduction project (a short midday walk, perhaps) the mutual support can be powerful.

Receiving good vibes from others is an uplifting experience, but so, too, is expressing them to others. Keep a tally of your own acts of kindness toward colleagues. To whom did you express appreciation today? Collaborating with a companion will, again, help you get the most out of this project.

Last, consider something the Americans call job crafting. You very likely have more latitude in your work than you think. Job crafting is an analytic approach that involves identifying the duties you find tedious and the aspects you find fulfilling. You can develop a plan to spend a bit more of your day on the good parts. Those increments can add up over time. Just ensure that the additional time you are spending on the fulfilling tasks makes a meaningful contribution, so you keep aligned with your colleagues and supervisor/boss.

These ideas may sound like a big investment, but the truth is that burnout can be hard to shake. Once the syndrome has set in, you must commit to a deliberate practice to find your way back to a healthy, fulfilling relationship with work. Yet it can be done, so let’s put that out as a New Year’s Resolution.

References:

Early Predictors of Job Burnout and Engagement. Christina Maslach and Michael P. Leiter in Journal of Applied Psychology, Vol. 93, No. 3, pages 498–512; May 2008.

The Impact of Civility Interventions on Employee Social Behavior, Distress, and Attitudes. Michael P. Leiter, Heather K. Spence Laschinger, Arla Day and Debra Gilin Oore in Journal of Applied Psychology, Vol. 96, No. 6, pages 1258–1274; November 2011.

Organizational Predictors and Health Consequences of Changes in Burnout: A 12-year Cohort Study. Michael P. Leiter et al. in Journal of Organizational Behavior, Vol. 34, No. 7, pages 959–973; October 2013.

Conquering Burnout. Michael P. Leiter and Christina Maslach in Scientific American Mind, Vol. 26, No. 1, pages 30-35; January/February 2015.

 

Give Your Brain a Buzz at the Electric Pharmacy

Posted in Jayne's blog

Treating the brain with small doses of electric current may sound a bit far-fetched on a cold morning in December, but many researchers believe that within the next few years this ‘electrifying’ form of treatment could become commonplace. In fact, some scientists suspect that it could launch a new era in treatment that could rival traditional medicines. The technique, called transcranial direct-current stimulation (tDCS), is being investigated for dozens of applications, including helping people recover from brain injury, treating depression, enhancing vigilance and managing pain.

Using electricity to tinker with the brain is nothing new. We have long known that neurons send electrical signals to communicate. In response, scientists have sought to hack these missives to alter or heal the brain.

Decades have been spent struggling to develop pharmaceutical drugs that can enter the brain and heal it, with only mixed success. Now a growing community of brain scientists hope that electricity might succeed where chemicals have largely failed. At least one company—GlaxoSmithKline—is already funding research into electrical therapies. Clinics and hospitals around the world have begun to offer the technique to patients in need of rehabilitation. The Department of Defense in the United States has invested tens of millions of dollars into investigating techniques that can boost cognition in healthy individuals. The media buzz surrounding these developments has inspired a community of hobbyists who are eschewing caution and attempting tDCS at home.

To develop tDCS into a credible therapy and enhancer, scientists will need to answer many lingering questions. They must determine the proper doses of electric current that work best with different ailments. And they must dramatically scale up experiments to clarify the safety and efficacy of these treatments across large populations. Those caveats aside, the growing interest in tDCS suggests patients and physicians are eager for new methods to treat the brain. In a few years’ time, small doses of electric current may be just what the doctor orders.

The Brain Electric

The basic components of this technology are straightforward: a power source and a way to transfer electricity into the brain. So simple is the technique that even humans of antiquity explored rudimentary forms of it. In the first century A.D., for example, Roman emperor Claudius’s physician applied torpedo fish (electric rays) to the skull to treat headaches. Eighteenth-century electrical discoveries led to more sophisticated experimentation. By 1802 Italian physicist Giovanni Aldini had proposed a treatment for depression that involved applying direct current from a battery to a patient’s head.

The 20th century brought more complex techniques for electrical healing onto the scene. In the 1930s doctors began treating mental illness by inducing seizures with electric shocks. Fifty years later scientists demonstrated that brain regions could be electrically activated either from the outside using large magnets, a technique called transcranial magnetic stimulation (TMS), or with the aid of surgically implanted electrodes. These methods, which involve expensive equipment and in some cases serious side effects, were generally considered procedures of last resort. Drug discoveries, not electrical interventions, were believed to hold the most promise for treating conditions of the brain.

A few researchers nonetheless kept tinkering with a simple form of electrical brain stimulation throughout the 20th century that would ultimately evolve into modern tDCS. The challenge for these pioneers was to find a way to demonstrate and measure the physiological changes this technique could induce.

In 2000 neurophysiologists Michael Nitsche and Walter Paulus of the Georg August University of Göttingen in Germany developed an ingenious means of doing this. Rather than administering tDCS by itself, they used it to alter the brain’s response to TMS, a more established technology. The researchers first positioned a large magnetic coil over each participant’s motor cortex. This device induced electrical activity in the area that controls movement of the little (pinky) finger of the right hand, causing that finger to wiggle.

Nitsche and Paulus also placed two electrodes over portions of the motor cortex and linked them to a battery. Turning it on sent a weak current in through one electrode, through the skull and into the brain, and out through the other electrode. Depending on how they configured the setup, Nitsche and Paulus could change the direction the current travelled through the brain, which they discovered would either intensify or diminish the little-finger twitching initiated by TMS. In essence, they could use tDCS to fine-tune the effects of the magnetic coil.

At the time researchers already had a strong understanding of the way TMS altered the brain. As a result, this experiment was able to demonstrate convincingly that tDCS had a real effect on neural activity. Somehow the weak current influenced the behavior of neurons above and beyond the changes expected from TMS.

Nitsche and Paulus suspected that tDCS was altering the way in which brain cells could respond to electric signals. Neurons send and receive information by releasing a spike of electricity through connections called synapses. The signals themselves are created by the movement of charged ions in and out of brain cells. In TMS, the powerful magnet can cause neurons to expel and admit ions, forcing them to fire.

But the current created by tDCS has a subtler effect. It produces a river of charged ions that wends its way through the head and brain in a circuit. As this current flows around and through neurons, it can alternately blunt or enhance their responses to other electric messages. Neurons near the electrode that introduces current into the brain become more sensitive to electric signals. But those near the electrode that removes current are less responsive. Subsequent experiments revealed that this approach could manipulate not only the electric signals from a magnetic coil but also those produced naturally in the brain.

In the 14 years since Paulus and Nitsche’s seminal experiment, numerous scientists have confirmed not only that tDCS can change the brain’s activity but also that it offers therapeutic benefits. Whereas other electrical approaches effectively kick certain neurons into action, tDCS serves as a volume dial. The flow of electric current in this technique can either make neurons more voluble or quiet them down.

Jump-Starting Health

The ability to dial up or down neuronal chatter offered neuroscientists an abundance of options for treating disorders. The technique could be used to increase signaling that helps to heal the brain or dampen activity that contributes to dysfunction, or both.

The first challenge in using tDCS as a therapy is identifying the best neurons to target for a given problem. Typically multiple brain regions contribute to a disease, which means many different stimulation setups could bring a patient relief. Consider the case of migraine pain. Brain scans of people afflicted with these severe headaches have revealed that before an attack, part of their visual cortex becomes more active. So Paulus and his colleagues decided to see if quieting that area with tDCS could prevent headaches.

In the study, which was published in 2011, 12 migraine sufferers received six weeks of electric stimulation: for 15 minutes a day, three days a week, the scientists directed current to discourage signaling in the migraineurs’ visual cortex. The remaining individuals in the study received a sham treatment, in which they merely experienced a tingling sensation. Eight weeks later the researchers followed up with their participants. Subjects who had received the full course of tDCS had shorter migraines, with significantly less pain, than those in the sham condition did. The treatment was not a cure-all (it did not make migraines less frequent), but it seemed to at least soften their blow.

Other experiments, however, have hinted at a different way to assuage migraine pain. Namely, stimulating parts of the motor cortex is known to trigger the release of natural painkillers called opioids. A year after Paulus’s migraine study, a second group of scientists, led by University of Michigan pain neuroscientist Alexandre DaSilva, opted to rev up the motor cortex with electric current. During the course of four weeks, seven migraine patients received 10 sessions of tDCS, 20 minutes each, and five others received sham stimulation. The recipients of tDCS did not experience immediate relief, but in the weeks that followed, they reported less pain than individuals in the sham group, and their relief persisted for months. Ultimately these different approaches may allow clinicians to craft customised treatments for patients.

As both experiments demonstrated, the full effects of tDCS can materialise slowly and endure for weeks or months. This is in part because tweaking the rate at which neurons fire can alter the architecture of the brain in lasting ways. When brain cells activate together, the connections among them grow stronger and more numerous. Cells that seldom fire in concert gradually lose their linkages. Adding tDCS can therefore heighten the brain’s ability to rewire itself—its plasticity.

A boost in plasticity can have powerful implications for repairing the brain after damage. During a stroke, for example, the blood supply to a part of the brain becomes blocked, starving neurons and damaging their connections. Through months of rehabilitation, people can relearn lost skills as their plastic brain builds new connections among surviving neurons. Incorporating tDCS could potentially speed up their recovery.

In 2011 Harvard Medical School neuroscientist Felipe Fregni and his collaborators investigated this idea while working with 14 patients who spent two weeks engaging in regular exercises to regain motor control. In addition, the subjects received either 40 minutes of daily tDCS or a sham treatment. Fregni and his colleagues aimed current at injured brain areas in the motor cortex to encourage the growth of new connections. All the patients gained some motor function after two weeks of therapy. Yet the group receiving tDCS recuperated the most. In a sense, tDCS seems to help the brain to help itself.

Rehabilitation and pain management are just two promising avenues of tDCS research. As we learn more about the networks in the brain associated with neurological and psychiatric disorders, many scientists believe the technique could quell symptoms from a host of conditions, including dementia, epilepsy and schizophrenia.

The small, proof-of-concept studies that have occupied scientists for the past decade are now giving way to large-scale, long-term trials. In 2013, for example, University of São Paulo physiologist Andre Brunoni led a six-week, 120-patient trial in which tDCS had comparable benefits to a commonly prescribed antidepressant in treating depression. Brunoni is now expanding his efforts with a pool of 240 patients and other medications. Studies of this kind will be the best way to illuminate the technique’s full potential going forward.

Zap to Attention

Aside from tDCS’s promise as a therapy, many people have become fascinated by how stimulation could change the lives of healthy individuals. If we can suss out the core networks linked with such abilities as critical thinking and creativity, the logic goes, scientists could give those areas a boost. Or they could dial down unwanted negative emotions and bad habits.

Indeed, growing evidence supports the idea that tDCS could enhance the brain’s abilities in domains as diverse as curbing junk food cravings and modifying mood. Several research groups have proposed that tDCS can accelerate an individual’s ability to master new words and grammatical rules, as well as complex motor tasks such as memorising an intricate finger-pinching pattern.

These studies are still relatively small in size and their effects are modest. Any attempts to elevate performance using tDCS would almost certainly need to occur in tandem with—not instead of—old-fashioned approaches to learning. Just as Fregni’s rehabilitation study coupled stimulation with traditional physical therapy, users seeking cognitive enhancement would still need to avail themselves of lots of practice, exercise and rest.

Yet tDCS might exceed conventional approaches in a few choice areas. In 2014 biomedical engineer Andy McKinley of the U.S. Air Force Research Laboratories published his findings on the use of tDCS in improving vigilance. McKinley’s team worked with 30 military recruits who had to endure 30 hours of wakefulness while taking a series of attention tests. Four hours in, the researchers gave 10 recruits a 40-minute session of tDCS and some plain chewing gum. In this case, the team aimed electric current to stimulate activity in the dorsolateral prefrontal cortex, an area supporting many functions, among them attention and working memory.

The remaining recruits had a session of sham stimulation at the four-hour mark, and 10 of these subjects also received a stick of caffeinated gum. In this way, McKinley could pit tDCS head-to-head with caffeine, one of humanity’s favorite vigilance enhancers. At the end of the trial, the researchers discovered that the group receiving tDCS showed pronounced improvements in their test scores, and this boost in alertness lasted for six hours. Caffeine’s kick, in contrast, was less potent and persisted for only two hours. Nothing beats a good night’s rest—but perhaps swapping java for a jolt of electricity could someday help the sleep-deprived.

Ready for Prime Time?

Scientists working with tDCS are now at a crossroads. On one hand, researchers are still unraveling the basic mechanisms of tDCS, and on the other, patients, government agencies and companies are angling to bring this technique to the real world.

One of the major remaining questions to address is dosage. Thus far studies have used very weak levels of stimulation to produce modest findings. Scientists now need to figure out where the electrodes should ideally be placed for a given condition, as well as the optimal intensity of stimulation. As with drugs, increasing dosage may improve results—but only up to a point. The conditions for this will be unique to every application, but it is possible that practitioners will at some point face trade-offs. For example, raising current to treat migraines might further diminish a patient’s attacks, but the stimulation itself could become increasingly uncomfortable. Without larger trials, we cannot fully appreciate either the limitations or potential of this technique.

Given these known unknowns, the research community is deeply concerned about individuals trying to replicate these studies at home. Although the findings in this field are tremendously exciting, there is still much more we have to learn about electric stimulation. In the interim, the public and scientists alike must approach the hype surrounding tDCS with care.

As long as we balance our optimism with caution, however, this research offers enormous benefits. The surge of interest in tDCS has even sparked studies of other bioelectric therapies, such as techniques that apply alternating or pulsed current, which can offer unique benefits to the brain. All these advances could pave new routes to self-improvement. Millions of people who struggle with conditions that have long eluded treatment, such as chronic pain and depression, may finally be aided. To achieve this outcome, clinicians will have to adopt electrical stimulation in their practice. A little further out, putting on a cap for healing and thinking may become as commonplace as popping a pill or sipping your morning coffee.

References:
■ Excitability Changes Induced in the Human Motor Cortex by Weak Transcranial Direct Current Stimulation. Michael Nitsche and Walter Paulus in Journal of Physiology, Vol. 527, No. 3, pages 633–639; September 2000.
■ Your Electric Pharmacy. Marom Bikson and Peter Toshev in Scientific American Mind, Vol. 25, No. 6, pages 56–61; November/December 2014.
■ Neurophysiological and Behavioral Effects of tDCS Combined with Constraint-Induced Movement Therapy in Poststroke Patients. Nadia Bolognini et al. in Neurorehabilitation and Neural Repair, Vol. 25, No. 9, pages 819–829; November/December 2011.
■ The Sertraline vs. Electrical Current Therapy for Treating Depression Clinical Study. Andre R. Brunoni et al. in JAMA Psychiatry, Vol. 70, No. 4, pages 383–391; April 2013.

Have You Heard About ‘The Social Cure’?

Posted in Jayne's blog

You can probably remember some morning you struggled to get out of bed. Maybe you kept thinking about the exam you failed, the party you were not invited to or the job you didn’t get. If you are clinically depressed, every day is like this—but worse. Nothing you used to enjoy is fun anymore, and you lack the will to do what it takes—to exercise or reach out to a loved one—to pull yourself out of your gloom.

Depression is the leading cause of disability worldwide, according to the World Health Organisation. About 20 percent of people worldwide will experience it during their lifetime. This risk is highest for women, young adults and those living in disadvantaged communities or developing countries. If you let down your boss or your child because your misery overwhelms you, depression spreads outward to others and affects society.

Treating depression is tricky. Antidepressant medications have side effects such as drowsiness, sexual dysfunction and weight gain that cause many patients to stop taking them. Nearly one third of patients do not respond to their initial treatment, and of those who do find relief, four out of five will become depressed again later. On average, people relapse about four times over the course of their life. New strategies for treating the illness are desperately needed, especially in places where medication and psychotherapy may be unaffordable or unavailable. Accumulating evidence now supports a simple, inexpensive approach that may fill a large part of the treatment gap: research now shows that joining a group, or several groups, can both prevent and cure depression. The type of group is irrelevant as long as it matters to you. It must become an integral part of who you are.

The Ache of Isolation

The American Psychiatric Association recommends two kinds of first-line treatments for most cases of depression: antidepressant medication and psychotherapy. Both therapies can work quite effectively, either by changing brain chemistry or by altering one’s perspective on life events. Both rest on the assumption that depression is a problem within an individual. Yet evidence suggests that the disorder has potent external triggers. In particular, 60 to 90 percent of people who become depressed have recently suffered some kind of loss—of a job, friendship or romance, for example. In addition, depression preferentially strikes those who live alone. And in recent years researchers have discovered that a sense of social isolation, often arising when you stop participating in activities you used to enjoy, heralds depression within a year. In a study of 229 middle-aged and older adults published in 2010, social neuroscientist John Cacioppo of the University of Chicago and his colleagues found that individuals who reported being lonely at some point over a five-year period were far more likely to develop depression symptoms a year later than were those who scored low on a measure of loneliness, independent of age, gender and initial depression severity.

In fact, loneliness often precedes the most devastating consequence of depression—suicide. In a study published in 2012 psychologists investigated various long-term predictors of suicidal thoughts—including psychological factors, family, social networks and availability of social support—in 1,356 people living in rural New South Wales (Australia). The researchers found that those with the lowest level of social support were the most likely to be thinking about killing themselves one year later.

The Social Cure

The more we learn about depression, the more social isolation seems to be a key factor in its expression. Interactions with others, then, might logically guard against the illness. Such contact works only when a person develops a sense of belonging, however. In another study from 2012 social psychologists surveyed 194 adults about how much they saw and spoke to members of their immediate family. They also asked these people how much they thought of their family as an important part of who they are. The amount of contact with family was only weakly related to whether people showed symptoms of depression, but identifying with their family was highly protective. The same result held for a different type of “family.” Among 150 members of an army unit from an Eastern European country, feeling closely associated with their unit seemed to stave off depression far better than simply spending time with other soldiers.

A number of other researchers have replicated this result. In an analysis of 16 studies including more than 2,600 participants, researchers examined whether depression was related to how much a person identifies with a group. The groups ranged from support groups for patients recovering from heart surgery in Norway to students in secondary schools in Australia. The common finding across all the studies was that the more someone identified with a group, the less severe his or her depression symptoms were. Thus, a sense of connection to a group, rather than mere contact with individuals, is what protects mental health.

Groups can also serve as effective treatment for depression. A 2013 study examined data from more than 4,000 English adults older than 50 relating to their current group memberships and depression symptoms; the surveys were completed several times over eight years. Researchers found that group membership not only enabled nondepressed people to avoid the disorder but also powerfully aided recovery over time for people who had been depressed. Depressed respondents with no group memberships who joined a single group reduced their risk of relapse from 41 to 31 percent; among those who joined three groups, the risk of relapse dropped to 15 percent.
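As a rough back-of-the-envelope illustration (my own arithmetic, not part of the study’s analysis), those reported figures translate into sizeable relative reductions in relapse risk; a minimal sketch:

```python
# Back-of-the-envelope illustration only; the figures below are the relapse
# risks reported in the 2013 study of English adults described above.
baseline = 0.41       # relapse risk with no group memberships
one_group = 0.31      # relapse risk after joining one group
three_groups = 0.15   # relapse risk after joining three groups

def relative_reduction(baseline_risk: float, new_risk: float) -> float:
    """Fraction by which relapse risk falls relative to the baseline risk."""
    return (baseline_risk - new_risk) / baseline_risk

print(f"One group:    {relative_reduction(baseline, one_group):.0%} lower relapse risk")
print(f"Three groups: {relative_reduction(baseline, three_groups):.0%} lower relapse risk")
# Roughly a 24% and a 63% relative reduction, from 10 and 26 percentage-point absolute drops.
```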

To be effective as therapy, however, the group you join must be important to you. In a study published this year, a group-based intervention was tested in individuals at risk for depression as well as in those already diagnosed with it. Researchers measured depression symptoms in 52 socioeconomically disadvantaged people at high risk for mental illness immediately after they joined a recreational group and again three months later. The scientists also asked individuals how much they identified with their group. Although just attending group meetings—to play soccer, make art, sew or do yoga—did not significantly lower depression scores, identifying with the group was associated with a marked decline in symptoms. Similarly, in a study of 92 people diagnosed with depression or anxiety who joined a therapy group at a psychiatric hospital clinic, those who strongly identified with the therapy group were more than twice as likely to recover as those who felt only weakly connected to it.

Equal-Opportunity Remedy

Groups exert these powerful psychological effects because humans are social beings. We have evolved to act as part of a team. Indeed, researchers have found that just thinking about your social groups can make you less likely to get sick after being exposed to a virus, less apt to lash out at those who have wronged you and more tolerant of physical pain. Groups provide a sense of belonging. They also can give life meaning—something that is lost in depression—in part because we are better able to achieve goals when we work with others. Rates of depression and suicide drop markedly in wartime, for example, because people find meaning in working together to defeat an enemy. And of course, other members of your in-group can supply both emotional support and practical assistance in times of need.

Not all groups influence their members in positive ways, though. For instance, studies show that teenagers are much more likely to harm themselves if they hang out with others who self-harm. Breaking away from substance-using social groups is associated with a reduced risk of relapse among those in treatment for drug or alcohol abuse. And a 2010 study of First Nations adults in Canada concluded that adopting a social identity associated with perceived discrimination might make a person more vulnerable to depression.

In general, however, social groups are antidotes to unhappiness, and joining them is a cost-effective adjunct to other depression treatments. Engagement with groups might also serve as a stand-alone strategy for those who cannot afford standard therapies or where there is a shortage of mental health professionals. Inexpensive treatments are critical given that the illness disproportionately affects those who are already socially and economically disadvantaged. Receiving therapy in a group can also help people unite to challenge prejudices against mental illness and to work out ways of moving forward together. Groups, then, are not only an effective shield against depression but also a sword that can puncture the stigma that accompanies it.

References:
■ Mental Health Support Groups, Stigma, and Self-Esteem: Positive and Negative Implications of Group Identification. Jason W. Crabtree, S. Alexander Haslam, Tom Postmes and Catherine Haslam in Journal of Social Issues, Vol. 66, No. 3, pages 553–569; September 2010.

■ The Social Cure: Identity, Health and Well-Being. Edited by Jolanda Jetten, Catherine Haslam and S. Alexander Haslam. Psychology Press, 2012.

■ Social Group Memberships Protect against Future Depression, Alleviate Depression Symptoms and Prevent Depression Relapse. Tegan Cruwys, Genevieve A. Dingle, Catherine Haslam, S. Alexander Haslam, Jolanda Jetten and Thomas A. Morton in Social Science and Medicine, Vol. 98, pages 179–186; December 2013.

■ The New Group Therapy. Tegan Cruwys, S. Alexander Haslam and Genevieve A. Dingle in Scientific American Mind, Vol. 25, No. 5, pages 61–63; September/October 2014.

■ Feeling Connected Again: Interventions That Increase Social Identification Reduce Depression Symptoms in Community and Clinical Settings. Tegan Cruwys, S. Alexander Haslam, Genevieve A. Dingle, Jolanda Jetten, Matthew J. Hornsey, E. M. Desdemona Chong and Tian P. S. Oei in Journal of Affective Disorders, Vol. 159, pages 139–146; April 20, 2014.

Can Acupuncture Stop Killer Immune Reactions?

Posted in Jayne's blog

The ST36 Zusanli acupuncture point is located just below the knee joint. This spot in mice—and, it is hoped, perhaps in humans—may be a critical entryway to gaining control over the often fatal inflammatory reactions that accompany systemic infections. Sepsis is one such condition. It arises when the body’s response to an infection damages its own tissues and organs, and it can lead to shock, multiple organ failure and death, especially if it is not recognised early and treated promptly. Between one-third and one-half of all sepsis patients die, and sepsis remains the leading cause of death from infection worldwide. Antibiotics can control the underlying infection, but no drug currently has FDA approval in the USA for counteracting the runaway immune response.

Researchers at Rutgers University Medical School reported online in the top scientific journal Nature Medicine on February 23 that stimulating ST36 Zusanli with an electric current passed through an acupuncture needle activated two nerve tracts in mice, leading to the production of a biochemical that quieted an induced sepsis-like inflammatory reaction.

The finding, which also involved the collaboration of several institutions, raises the possibility that knowledge derived from alternative medicine may provide a means of discovering new nerve pathways that can regulate a variety of immune disorders, from rheumatoid arthritis to Crohn’s disease. If future studies achieve similar results, acupuncture might be integrated into the growing field of bioelectronic medicine—also called electroceuticals—which is generating intense interest among both academics and drug companies.

Clues from Acupuncture

Luis Ulloa, who headed the study at the Center for Immunity and Inflammation at Rutgers, has spent more than 10 years researching how nerve signals control immune function. Following the suggestion of a Mexican colleague, he realised that acupuncture might be worth testing as a way to uncover some of these much sought-after neuroimmune pathways.

Ulloa and his team used electroacupuncture to stimulate the ST36 Zusanli acupuncture point in 20 mice exposed to lipids and carbohydrates from the outer membrane of bacteria, producing an inflammatory response that mimics sepsis. Another 20 rodents received “sham” electroacupuncture in which nonacupuncture points were stimulated. Half of the mice in the first group survived, whereas all the sham-treated rodents died. A similar survival difference was noted with two groups of mice exposed to a cocktail of microbes in the gut.

The researchers then began to analyse the nerves and organs involved. They traced a pathway beginning in a branch of the sciatic nerve, not far from ST36 Zusanli, that relayed a signal to the spinal cord and then the brain. Once processed there, the signal was sent down the vagus nerve, finally reaching the adrenal glands, which produced the key anti-inflammatory agent, the neurotransmitter dopamine. Ulloa’s team confirmed the parts of this biological wiring diagram by independently removing sections of the key nerves or the adrenal glands entirely. Cutting any one of these links in the newly discovered neuroimmune circuit abolished electroacupuncture’s anti-inflammatory effects.

The researchers also succeeded in stopping inflammation by using a drug called fenoldopam (Corlopam), which acted as a stand-in for adrenal-produced dopamine in mice whose glands had been surgically removed. Having such a drug at hand might be essential because the adrenals in many sepsis patients function poorly, which makes those patients unsuitable candidates for acupuncture therapy.

The Rutgers work suggests that acupuncture might offer a relatively non-invasive means of performing neuroimmune stimulation and of researching the interaction between the nervous and immune systems. Studies such as this one could help establish a physiological mechanism to explain why acupuncture might work as a treatment.

Testing Ancient Treatments

Acupuncture still has its critics at various ends of the medical spectrum. Some acupuncture supporters perceive a study on sepsis as a case of Western medicine finally conferring a belated blessing on techniques that have been accepted treatments for thousands of years. Skeptics of alternative medicine, meanwhile, criticise any investigation of acupuncture as a waste of limited research money on a folk remedy for which a firm scientific basis will never be found. Steven Novella, president of the New England Skeptical Society, characterises the sepsis study as having merely shown that a nerve responds to the application of an electric current. “Electroacupuncture itself is not a real entity, in my opinion,” he says. “It is just electrical stimulation. Doing stimulation through an ‘acupuncture needle’ is meaningless—it’s just a thin needle. There’s nothing that makes it an acupuncture needle. And there is no evidence that acupuncture points exist at all.”

For his part, Ulloa had no intention of trying to determine whether flows of vital energy, or qi, were making their way through the body’s “meridians”, as traditional Chinese medicine interprets how acupuncture works. In fact, he agrees with Novella’s argument about nerve stimulation: in the study, the researchers found no anti-inflammatory effect when a toothpick was used to probe ST36 Zusanli, in a manner similar to the way acupuncture needles had been inserted for centuries before the advent of electroacupuncture.

As a prospector for neuroimmune pathways, Ulloa insists his interest in exploring acupoints has not flagged. He has found that it is no coincidence that all acupoints but one—360 of the 361 described in humans—are located in the proximity of a major nerve. In his study, ST36 Zusanli led directly to the discovery of one of the most intricate neuroimmune circuits found to date. Rather than testing millions of potential points at random, his team reasoned that acupoints may provide an efficient starting place for stimulating neuronal networks.

A few days after the acupuncture paper in Nature Medicine appeared, a study published in Science Translational Medicine documented that a component of the herb Salvia miltiorrhiza (red sage), another hand-me-down from the Chinese traditional medicine pharmacopeia, also turned out to have potent anti-inflammatory properties. The researchers from leading institutions who wrote that paper were taking the same path as Ulloa and his team, attempting to test whether an ancient treatment had through trial and error turned up some biological effect or therapeutic potential that could be subject to a rigorous testing regimen in the laboratory.

In both reports, the authors were following the dictates that top-flight journal editors, article reviewers—and the skeptics themselves—endorse for evidence-based medicine. This type of study will probably remain more the exception than the rule. These same journals might never publish on feng shui, and acupuncture entries in their pages might still be relatively scarce, but if scientists studying acupuncture or herbs can clear the high bars set by the scientific establishment, then the future looks bright for the increasingly scientific anchoring of several areas of alternative medicine.

References:

  • Dopamine Mediates Vagal Modulation of the Immune System by Electroacupuncture. Rafael Torres-Rosas et al. in Nature Medicine. Published online February 23, 2014.
  • A Zebrafish Compound Screen Reveals Modulation of Neutrophil Reverse Migration as an Anti-inflammatory Mechanism. Anne L. Robertson et al. in Science Translational Medicine, Vol. 6, No. 225, pages 225–229; February 26, 2014.
  • Can Acupuncture Stop Killer Immune Reactions? Gary Stix in Scientific American Mind, Vol. 25, No. 3, pages 24–25; May/June 2014.

Regenerating the Brain

Posted in Jayne's blog

Neurosurgeon Ivar Mendez of the University of Saskatchewan often shows a video clip to demonstrate his work treating Parkinson’s disease. It features a middle-aged man with this caption: “Off medications.” The man’s face has the dull stare typical of Parkinson’s. Asked to lift each hand and open and close his fingers, he barely manages. He tries but fails to get up from a chair without using his hands. When he walks, it is with the slow, shuffling gait that is another hallmark of Parkinson’s, a progressive neurological disorder that afflicts an estimated one million Americans, most of them older than 60. Then the video jumps forward in time. The same man appears, still off medications. It is now eight years since Mendez transplanted dopamine cells from a fetus into the patient’s brain. These neurons, which live in a midbrain region called the substantia nigra and secrete the neurotransmitter dopamine to initiate movement, are the ones that die off in Parkinson’s. The man has aged, but his energy and demeanor are characteristic of a much younger man. Asked to do the same tasks, he smoothly raises his arms high and flicks his fingers open and shut rapidly. Arms crossed on his chest, he rises from a chair with apparent ease. Then he struts down the hall.

In the 25 years since the first few patients received transplants as part of a clinical trial at University Hospital in Lund, Sweden, hopes of using cell-based therapy as a treatment for Parkinson’s have repeatedly risen and then been dashed. Stem cells are a biological raw material of enormous potential because they can divide indefinitely and give rise to specialised cells. These cells can then be used to repair brain damage from degenerative disorders such as Parkinson’s. Stem cells, however, have been hard to come by. So far the cells transplanted in humans have been derived from aborted fetal tissue, although scientists have also transplanted stem cells derived from human embryos into animals. Thorny political and ethical issues limit access to both fetal cells and embryonic stem cells, and fetal cells are in particularly short supply. Two large clinical trials using fetal tissue, published in 2001 and 2003, were considered failures because of their widely variable results; not enough patients improved by the study end points, and some developed serious side effects. Many scientists gave up on cell therapy.

But a handful of laboratories persevered. Now new evidence showing that transplantation can work well, as in Mendez’s patient, and possible new sources of cells free of ethical concerns have sparked a fresh optimism. This year neurologist Roger Barker of the University of Cambridge will lead the first large clinical trial of cell therapy for Parkinson’s in a decade.

The momentum most likely will propel cell therapies for other disorders as well. Researchers are trying to apply the technique to more than a dozen diseases, including diabetes, spinal cord injury and several forms of cancer. In addition to Parkinson’s, the most significant progress has been made with retinal diseases. Clinical trials are under way to use retinal pigment epithelial cells for treatment of macular degeneration. According to the California Institute for Regenerative Medicine, theoretically there is no disease to which stem cell therapy could not be applied. In each case, the requirements depend on the difficulties inherent in generating the specific type of cell scientists hope to replace.

Progress in Parkinson’s has been particularly promising. The debilitating movement difficulties characteristic of the disease have a relatively straightforward cause: dopamine loss. And researchers were able to generate dopamine neurons from stem cells quite quickly. Cell therapy typically leads to restored mobility and function—improving patients’ gait, for instance, and reducing tremor—but does not ameliorate every aspect of Parkinson’s. Patients may still suffer from dementia, gastrointestinal problems and sleep disorders, for instance. Yet in the best-case scenario, patients could gain 20 to 30 years of excellent quality of life with a single intervention and require virtually no medications. The disease itself is not cured, but the natural history of Parkinson’s disease is transformed.

 

Delivering Dopamine

A mild tremor in the hand or some other extremity is often the first sign of Parkinson’s. Tremors are followed by rigidity in the muscles, a stooped posture and the distinctive difficulty walking first described by James Parkinson in 1817. The movement difficulties relate to the loss of a type of dopamine neuron, called A9, in the substantia nigra, a region that among other things controls the initiation of motion. By the time the first tremor appears, patients have already lost about 70 percent of those A9 neurons.

Since the 1960s Parkinson’s has been treated with medications that replace missing dopamine in the brain. L-dopa is a dopamine precursor, and doses of this small molecule cross the blood-brain barrier and enter brain cells, which convert L-dopa into dopamine and release it. Other drugs, known as dopamine agonists, stimulate dopamine receptors in the absence of the neurotransmitter, thereby mimicking its effects. The medications improve parkinsonian symptoms, but their benefits diminish over time, and they carry side effects such as alternating periods of mobility and immobility and the emergence of additional jerky movements.

In the 1990s clinicians developed an alternative therapy called deep-brain stimulation (DBS), the surgical insertion of an electrode that delivers electrical pulses to directly alter neuronal activity in a specific area of the brain. The treatment can work well. At the University of California, San Francisco, for example, 45 to 70 percent of patients who receive DBS for Parkinson’s improve. Yet over time, patients begin to decline again because the electrode stimulation can no longer compensate for the continuing loss of dopamine. Cell-based therapy, in contrast, is designed to directly restore the cells lost in the disease process.

The earlier large clinical trials of cell therapy suffered from multiple problems. For example, it now appears that some of the patients selected were too old and their disease too advanced to get good results. Instead of infusing a substance containing a single type of cell, surgeons transplanted chunks of tissue, which included other material that triggered immune reactions. The procedure itself was conducted differently by every team. Moreover, the end points for the studies were too short—neither was more than two years—for the transplanted cells to take full effect.

Of the patients who have received cell-based therapy for Parkinson’s, those transplanted by Mendez’s team have done best. Mendez began transplanting fetal cells into patients in the late 1990s, when he was at Dalhousie University in Nova Scotia. He improved the preparation of the cells by treating them to encourage growth and by creating pure cell suspensions instead of transplanting chunks of tissue. Using a computerised injector that he developed to standardise the process, Mendez targeted two brain areas instead of one—the substantia nigra, where dopamine cells naturally originate, and the putamen, which their axons need to reach. All 10 of his patients improved significantly on the standard Parkinson’s rating scale, which measures the course of the disease. In a separate postmortem analysis of five patients published in 2008, Mendez and neurobiologist Ole Isacson of Harvard Medical School, who have been collaborating for about 10 years, found that the grafted neurons survived without signs of degeneration for as long as 14 years.

 

New Kinds of Cells

The biggest remaining challenge is obtaining enough viable stem cells. The fetal cells implanted to date have been harvested from the midbrain of an aborted fetus aged six to nine weeks. Such stem cells have already differentiated into dopamine neurons yet retain the capacity to generate more new neurons after transplantation. Still, fetal cells are not the answer. Politics aside, there will never be enough for all the patients who would need them.

Another possibility emerged in 1998, when cell biologist James Thomson of the University of Wisconsin–Madison and his colleagues derived the first embryonic stem cell line. They were working with the blastocyst of a human embryo, a brief early developmental stage when the ball of cells contains an inner clump of 20 to 30 cells that are capable of growing into any of the more than 200 types of adult cells in the body. Unlike fetal tissue cells that have started down the path to differentiation, these so-called pluripotent stem cells have the potential to produce any type of tissue in the body.

Thomson’s team removed those cells and nurtured them in the lab so that they divided. The result was an infinitely renewable lab-maintained source of stem cells—a cell line—that would not require further new embryos. The ethics were still complicated by the original use of embryos, but suddenly large-scale cell-based therapy seemed achievable. The challenge was to coax those embryonic stem cells to develop into the specific cells needed to treat a disease—dopamine neurons for Parkinson’s, for instance, or insulin-producing cells for diabetes.

Also in 1998 Isacson’s group reported that it had done just that in mice. The researchers differentiated A9 neurons from mouse blastocysts. When they injected those cells into a mouse brain, they found that the cells lived and formed connections with the other neurons in the brain. In 2002 his group showed that the same procedure restored movement and mobility in a rat with a drug-induced version of Parkinson’s. Several other groups achieved similar recovery in rodents. Immediately researchers tried to create A9 neurons from human embryonic stem cells—but that step proved more difficult.

A breakthrough with an alternative approach came in 2007, when the team of biologist Shinya Yamanaka of Kyoto University in Japan figured out how to create stem cells from an adult’s own tissues. Beginning with adult mouse skin cells, Yamanaka’s team “reprogrammed” the cells biochemically, driving them back to something resembling an embryonic stem cell, which could then be used as a basis for deriving a totally different kind of body cell, such as a neuron.

In essence, Yamanaka’s group had found a way to create a limitless supply of stem cells from adult skin cells, thereby sidestepping the political and ethical issues that surround research with embryos. The accomplishment won Yamanaka the 2012 Nobel Prize in Physiology or Medicine. Furthermore, because these cells, which are called induced pluripotent stem cells, can originate with the individual patient being treated, the considerable risk of immune rejection would disappear. The advance solved a very big problem.

 

Newly Nimble Monkeys

A year after Yamanaka’s discovery, Isacson’s team showed that it could create A9 dopamine neurons from such reprogrammed adult rodent cells. The scientists soon began putting the new cells in mice and rats with signs of Parkinson’s, and in 2008 they reported improved function. Then they turned to nonhuman primates. Working with a monkey with drug-induced Parkinson’s, Isacson’s group harvested the monkey’s skin cells, drove them back to an embryonic state, then differentiated them into dopamine neurons and put them into the monkey’s brain. For two years, they monitored the monkey. In results presented at conferences late in 2013, they showed that according to PET scans the grafted dopamine neurons had survived and grown. About eight months after the transplant, the monkey’s motor disorder ceased. A postmortem analysis showed that the new neurons had made connections with other neurons throughout the brain area where they had been grafted.

The same year two other groups also reported success with adult-derived stem cells in monkeys: the lab of cell biologist Su-Chun Zhang of the University of Wisconsin–Madison, and Yamanaka working with his colleague Jun Takahashi. All three groups have demonstrated that the graft can survive, can differentiate into the right type of cells and can then integrate structurally into the brain.

Isacson’s monkey is the only one to be observed for a longer period—two years—and to have shown functional recovery. The researchers are pursuing longer-term studies with more monkeys to convincingly show both safety and efficacy. Clinical trials could follow, possibly within a few years, say Mendez and Isacson, who are convinced that these adult-derived cells are the future.

Others are still betting on embryonic stem cells. In 2011 the lab of stem cell biologist Lorenz Studer of Memorial Sloan Kettering Cancer Center successfully differentiated human embryonic stem cells into dopamine neurons. When grafted into a mouse, rat or monkey with parkinsonian symptoms, these cells now survive and lead to recovery of function. Studer recently received a $15-million grant to perfect his technique and generate cell lines based on GMP (good manufacturing practice) guidelines. In parallel with the work of manufacturing large batches of cells, he plans to begin lining up patients for a clinical trial, most likely the first to use embryonic stem cells.

One reason for sticking with embryonic stem cells is regulatory. Cells are not a drug, and they are not a device. So what are they? To date, stem cells have been regulated by line—a set of renewable cells that are cultured in one lab and deemed safe. If stem cells are produced for individual patients—using the full potential of the newest technology—and are still required to follow the same approval process as existing stem cell lines, the therapy would be cost-prohibitive. One solution is to approve a generic induced stem cell process rather than separate lines. Another answer, which sacrifices some of the immune benefits, would be to create a bank of as many as 500 regulated stem cell lines derived from adult tissue, which could, according to Isacson, be genetically matched to 75 to 90 percent of the population.

 

Revolutionary Treatment

In his upcoming trial, Barker’s team of collaborators will implant fetal cells into the brains of 20 patients in Europe and follow 130 other patients whose Parkinson’s is progressing naturally. Learning from past procedural mistakes, the scientists are still using fetal tissue but have tightened the selection of patients, improved tissue preparation and placement, and rethought the length and follow-up of the multicentre trial. The TransEuro study is intended to provide proof of the principle that cell therapy can consistently repair the brain. What matters most is the process, which is the stepping-stone to the next generation of cell-based therapies.

Despite its theoretical superiority, populating the brain with new dopamine cells is not yet obviously better than existing treatments such as DBS, which brings faster results. In addition, other treatments in development may prove feasible. For example, in early 2014 researchers at Imperial College London reported promising results from the first gene therapy trials for Parkinson’s patients. In this treatment, doctors insert genes for dopamine-producing enzymes into the striatum, a part of the brain that helps control movement.

Many researchers believe, however, that the remaining hurdles in producing and validating stem cell therapy can be cleared for Parkinson’s. These are the first steps in what could be a revolutionary treatment.

 

References:

  • Fetal Dopaminergic Transplantation Trials and the Future of Neural Grafting in Parkinson’s Disease. R. A. Barker et al. in Lancet Neurology, Vol. 12, No. 1, pages 84–91; January 2013.
  • Induced Pluripotent Stem Cell–Derived Neural Cells Survive and Mature in the Nonhuman Primate Brain. M. E. Emborg et al. in Cell Reports, Vol. 3, No. 3, pages 646–650; March 28, 2013.
  • Therapeutic Application of Stem Cell Technology toward the Treatment of Parkinson’s Disease. K. Nishimura and J. Takahashi in Biological and Pharmaceutical Bulletin, Vol. 36, No. 2, pages 171–175; 2013.
  • The Regenerating Brain. L. Denworth in Scientific American Mind, Vol. 25, No. 3, pages 59–65; May/June 2014.
  • California Institute for Regenerative Medicine: www.cirm.ca.gov

Melody as Remedy?

Posted in Jayne's blog

‘Music soothes the savage beast’, as the old saying goes. Music is an intrinsic part of our lives, from the song playing when you had your first kiss to the music you rev yourself up with (or calm yourself down with) before a stressful meeting. As many of you are packing up to go on holiday this month (the schools have just broken up for the summer in the Amsterdam area), I thought I’d bring you some interesting new research about the effects music can have. But first, a quickie lesson in the brain areas involved in music….

 

Music and Language, Intertwined

The brain activity for music and language is enormously complicated, and researchers are still trying to determine how the brain handles each process. Below is a sampling of what we do know: Areas in the frontal lobe (orange) help us learn the rules that govern language and music, such as those for syntax and harmony. Regions in the temporal lobe (green) help us perceive and understand sounds, such as the meaning of words and melodies.

The auditory cortex (blue) appears to have distinct music and language roles: the left auditory cortex is important for decoding and discriminating different aspects of speech, whereas the right auditory cortex is more involved in perceiving the pitch and frequency of sound. The insula (red) processes rhythm, perhaps in subtly different ways, for both music and speech. And the corpus callosum (grey) is larger in the brains of musicians, suggesting that musicians require greater communication between the two hemispheres.

 

Singing Your Way to Fitness

Chain-gang chants, military cadences, sea shanties: humans have long paired music making with intense physical exercise. Now research confirms the power of the combination: working out seems easier while producing music, according to a small study published in the Proceedings of the National Academy of Sciences USA.

In the study, half of the participants made music while working out, using software that turned their movements into tunes. These exercisers exerted just as much force while pumping iron as people who merely listened to music during exercise. Yet the music makers used less oxygen during their routine—a measure of exertion—and they also felt they were working less hard than those who just listened.

Music production may make exercise easier by activating so-called emotional motor control, posits Thomas Fritz, a postdoctoral fellow at the Max Planck Institute for Human Cognitive and Brain Sciences in Leipzig and the study’s lead researcher. Emotional motor control is responsible for spontaneous actions such as a genuine smile; deliberate motor control, in contrast, implements purposeful action (such as a fake smile). Activating this more efficient system, Fritz says, may be as easy as singing along or pumping iron in rhythm with the tunes in your exercise playlist. So time to hum along with your playlist while you are jogging…..

 

Pain Presto Relief

Forget grapes and ‘Get Well Soon’ cards—hospitals may soon begin handing their patients MP3 players to speed their recovery. A study at Our Lady of the Lake Regional Medical Center in Baton Rouge determined that ambient music therapy had a positive effect on postoperative patients’ recovery by improving pain management and decreasing the negative effects of environmental noise.

In this study, patients who had undergone surgery for cancer all received standard nursing care. Half of them also got a preprogrammed MP3 player with ambient music—songs without words, played at less than 60 decibels—and were encouraged by nurses to listen for at least half an hour after they took their twice-daily medication. Before treatment, all the patients had similar levels of anxiety, pain and irritation at the amount of environmental noise. Three days later patients who listened to the ambient music said they were able to better manage their pain and were less annoyed by hospital noise, whereas patients without music experienced no change, according to the study in Nursing last fall. Many of us already turn to music to help with emotional pain; these findings suggest we might want to try listening as a salve for physical pain, too.

 

Music Helps Children Read

Today a symphony of research trumpets the many links among language, reading and music, including several that reveal a connection between rhythm and reading skills. Nina Kraus of Northwestern University has discovered a possible explanation: the brains of good beat keepers respond to speech more consistently than the brains of people whose toes do not tap in time. After testing 124 adolescents for beat-keeping ability, the researchers used an electroencephalogram (EEG) to eavesdrop on teen brains as the consonant sound “da” was played repeatedly. With every “da,” the brains of beat keepers responded consistently, even when there was background noise or while they watched television. The brain waves of poor beat keepers, however, were all over the place. The study helps to explain why music may hold a key to improved reading. Because reading ability, in general, relies on making a connection between the sounds of letters and symbols on a page, music provides another avenue into learning. Through music, you learn to pay attention to important sounds; the inconsistent sound processing shown by the poor beat keepers makes that difficult. An auditory system that can automatically and efficiently pull out meaningful sounds is important not just for music but for speech.
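To give a feel for what “responding consistently” means in practice, here is a minimal sketch of one common way to quantify trial-to-trial consistency: average one half of the recorded responses, average the other half, and correlate the two. The data, seed and numbers below are hypothetical stand-ins, not Kraus’s actual analysis pipeline.

```python
import numpy as np

# Hypothetical data standing in for one EEG response per presentation of "da"
# (rows = trials, columns = time points); real pipelines are far more involved.
rng = np.random.default_rng(seed=0)
waveform = np.sin(np.linspace(0, 6 * np.pi, 500))        # idealised neural response
trials = waveform + rng.normal(0, 1.5, size=(200, 500))  # 200 noisy repetitions

def split_half_consistency(trials: np.ndarray) -> float:
    """Correlate the average response of one half of the trials with the
    average of the other half. Values near 1 mean the brain responds the
    same way every time; values near 0 mean the responses are all over
    the place."""
    half_a = trials[0::2].mean(axis=0)   # even-numbered trials
    half_b = trials[1::2].mean(axis=0)   # odd-numbered trials
    return float(np.corrcoef(half_a, half_b)[0, 1])

print(f"Split-half response consistency: {split_half_consistency(trials):.2f}")
```

With a reliable underlying response, the two half-averages line up and the correlation approaches 1; a listener whose responses vary from trial to trial would score much lower.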

 

Fighting Poverty with Pianos

Scientists have observed that reading ability links with socioeconomic status. Yet music might help close the gap, according to Nina Kraus (the same lady as above) and her colleagues at Northwestern University.

Kraus’s team tested the auditory abilities of teenagers aged 14 or 15, grouped by socioeconomic status (as indexed by their mother’s level of education, a commonly used surrogate measure). The researchers recorded the kids’ brain waves with EEG as they listened to a repeated syllable against soft background sound and when they heard nothing. They found that children of mothers with a lower education had noisier, weaker and more variable neural activity in response to sound and greater activity in the absence of sound. The children also scored lower on tests of reading and working memory.

Kraus thinks music training is worth investigating as a possible intervention for such auditory deficits. The brains of trained musicians differ from those of non-musicians, and musicians also enjoy a range of auditory advantages, including better speech perception in noise, according to research from Kraus’s laboratory. The researchers admit that this finding could be the result of preexisting differences that predispose some people to choose music as a career or hobby, but they point out that some experimental studies show that musical training, whether via one-on-one lessons or in group sessions, enhances people’s response to speech.

Most recently Kraus’s group has shown that these effects may last. Kraus surveyed 44 adults aged 55 to 76 and found that four or more years of musical training in childhood was linked to faster neural responses to speech, even for the older adults who had not picked up an instrument for more than 40 years. Maybe time to go and pick up my clarinet and oboe again during the summer months…..?

 

Alleviating Alzheimer’s

Many studies have found that familiar songs enhance mood, relieve stress and reduce anxiety in patients with Alzheimer’s, perhaps because musical memory is often spared even when a patient has declined to a low level of cognition. Two new studies find that familiar music also improves cognitive symptoms in the disease.

Familiar music may be a safe and effective way to help patients with Alzheimer’s become more self-aware (what the researchers term self-consciousness), which improves overall mental processing and leads to a more accurate examination of the world.

In a study published in September 2013 by Eva Arroyo-Anlló of the University of Salamanca in Spain and her colleagues, patients listened to either familiar or unfamiliar music three times a week for three months. Those who heard tunes they knew showed an immediate improvement in identity, mood, moral judgment and body awareness—elements of self-consciousness that are adversely affected by Alzheimer’s. Those who listened to unfamiliar music scored worse on all measures except body awareness.

The researchers also administered a common exam for dementia to test the patients’ overall cognition. The group who heard familiar music sustained their scores over time, whereas the group who listened to unfamiliar music faltered significantly. According to the investigators, these findings are yet one more reason that caregivers should provide patients with music from their past.

One of the most devastating aspects of Alzheimer’s is its effect on patients’ ability to recall life events. Several studies have found that music helps to strengthen these individuals’ autobiographical memories, and a paper in the November 2013 Journal of Neurolinguistics builds on these findings by exploring the linguistic quality of those recollections.

Researchers instructed 18 patients with Alzheimer’s and 18 healthy control subjects to tell stories from their lives either in a silent room or while listening to the music of their choice. Among the Alzheimer’s patients, the music-cued stories contained a greater number of meaningful words, were more grammatically complex and conveyed more information per word. Music may enhance narrative memories because music and language processing share a common neural basis.

Is Depression Just Bad Chemistry?

Posted in Jayne's blog

The general hypothesis in medicine is that a deficiency of certain neurotransmitters (chemical messengers) at synapses, or tiny gaps, between neurons interferes with the transmission of nerve impulses, causing or contributing to depression. One of these neurotransmitters, serotonin, has attracted the most attention, but many others, including norepinephrine and dopamine, have also been granted supporting roles in the story.

Much of the general public seems to have accepted the chemical imbalance hypothesis uncritically. For example, a 2007 survey of 262 undergraduates at Cleveland State University found that 84.7 percent of participants considered it “likely” that chemical imbalances cause depression. In reality, however, depression cannot be boiled down to an excess or deficit of any particular chemical or even a suite of chemicals. “Chemical imbalance is sort of last-century thinking. It’s much more complicated than that,” neuroscientist Joseph Coyle of Harvard Medical School was quoted as saying in a blog by National Public Radio’s Alix Spiegel.

Indeed, it is very likely that depression stems from influences other than neurotransmitter abnormalities. Among the problems correlated with the disease are irregularities in brain structure and function, disturbances in neural circuitry, and various psychological contributions, such as life stressors. Of course, all these influences ultimately operate at the level of physiology, but understanding them requires explanations from other vantage points.

 

Are Your Chemicals out of Balance?

Perhaps the most frequently cited evidence in support of the chemical imbalance hypothesis is the effectiveness of antidepressants, many of which increase the amounts of serotonin and other neurotransmitters at synapses. Zoloft, Prozac and similar selective serotonin reuptake inhibitors (SSRIs) result in such an increase and can often relieve depression, at least when it is severe. As a result, many believe that a deficiency in serotonin and other neurotransmitters causes the disorder. But just because a drug reduces symptoms of a disease does not mean that those symptoms were caused by a chemical problem the drug corrects. Aspirin alleviates headaches, but headaches are not caused by a deficiency of aspirin.

Evidence against the hypothesis comes from the efficacy of the antidepressant Stablon (tianeptine), which decreases levels of serotonin at synapses. Indeed, in different experiments, activation or blockage of certain serotonin receptors has improved or worsened depression symptoms in an unpredictable manner. A further challenge to the chemical imbalance hypothesis is that many depressed people are not helped by SSRIs. In a 2009 review article psychiatrist Michael Gitlin of the University of California, Los Angeles, reported that one third of those treated with antidepressants do not improve and that a significant proportion of the remainder get somewhat better but remain depressed. If antidepressants corrected a chemical imbalance underlying depression, all or most depressed people should get better after taking them. That they do not suggests that we have only barely begun to understand the disorder at a molecular level. As a result, we must consider other, nonchemical leads.

 

This Is Your Brain on Depression

A possible clue lies in brain structures. Imaging studies have revealed that certain brain areas differ in size between depressed and mentally healthy individuals. For example, the amygdala, which responds to the emotional significance of events, tends to be smaller in depressed people than in those without the disorder. Other emotional regulatory centres that appear to be reduced in volume are the hippocampus, an interior brain region involved in emotional memory, the anterior cingulate cortex, which helps to govern impulse control and empathy, and certain sections of the prefrontal cortex, which plays an important role in emotional regulation. Nevertheless, the effects of these shrinkages on depression, if any, remain an open question.

Neuroimaging studies have revealed that the amygdala, hypothalamus and anterior cingulate cortex are often less active in depressed people. Some parts of the prefrontal cortex also show diminished activity, whereas other regions display the opposite pattern. The subcallosal cingulate gyrus, a region near the anterior cingulate, often shows abnormal activity levels in depressed individuals. These differences may contribute to depression, but if they do, scientists are not sure how.

In 2012 neurosurgeon Andres Lozano of the University of Toronto and his associates studied the effects of deep brain stimulation of the subcallosal cingulate gyrus in depressed patients who had not benefited from standard treatments. The intervention led to a significant reduction in symptoms of depression, supporting the idea that a dysfunction in this brain area may be involved in the illness.

Findings also point to a crucial role for psychosocial factors such as stress, especially when it arises from the loss of someone close to you or a failure to meet a major life goal. When someone is under a good deal of stress, a hormone called cortisol is released into the bloodstream by the adrenal glands. Over the short term, cortisol helps humans cope with dangers by mobilising energy stores for fight or flight. But chronically high cortisol levels can harm some bodily systems. For example, at least in animals, excess cortisol reduces the volume of the hippocampus, which in turn may contribute to depression. Despite such data, we still do not know whether stress alters the human brain in ways that can lead to depression.

 

Seeing the Elephant

Throughout this article, associations are described between various brain changes and depression. The “causes” are not addressed because no studies have established a cause-and-effect relation between any brain or psychosocial dysfunction and the disorder. In addition, depression almost certainly does not result from just one change in the brain or one environmental factor. A focus on one piece of the depression puzzle—be it brain chemistry, neural networks or stress—is shortsighted.

The tunnel-vision approach is reminiscent of a classic story in which a group of blind men touch an elephant to learn what the animal looks like. Each one feels a different part, such as the trunk or the tusk. The men then compare notes and learn that they are in complete disagreement about the animal’s appearance. To understand the causes of depression, we have to see the entire elephant—that is, we must integrate what we know at multiple scales, from molecules to the mind to the world we live in. So….to be continued…..!

 

References:

  • The “Chemical Imbalance” Explanation for Depression: Origins, Lay Endorsement, and Clinical Implications. Christopher M. France et al. in Professional Psychology: Research and Practice, Vol. 38, No. 4, pages 411–420; August 2007.
  • A Multicenter Pilot Study of Subcallosal Cingulate Area Deep Brain Stimulation for Treatment-Resistant Depression. Andres M. Lozano et al. in Journal of Neurosurgery, Vol. 116, No. 2, pages 315–322; February 2012.
  • Facts & Fictions in Mental Health. Hal Arkowitz & Scott O. Lilienfeld in Scientific American Mind, Vol. 25, No. 2, pages 66–67.

Why Standing Out From the Crowd Is Good

Posted in Jayne's blog

In spite of The Netherlands being an equal-opportunities and very tolerant nation, the Dutch themselves have a saying about ‘not sticking your head above the cornfield’ because it will get chopped off. In English we talk about ‘sticking our necks out’. In both cases, standing out in some way seems to have definite disadvantages (usually involving loss of both head and life!). Looking into the research on uniqueness, however, it seems that standing out from the crowd is a good thing: it shapes distinctive behaviour and creative thinking.

 

Who am I?

The question seems so simple, yet it cuts to the heart of everything we do. Without an answer, we lack the inner compass that guides us through life. Decisions become arbitrary. Relationships dangle by a tenuous thread.

Introspection offers partial insight into this nebulous yet vital question. A fuller account, however, emerges from our interactions with the social environment. As we move through the world, certain people, ideas and activities resonate more than others. This mix of allegiances is ultimately what makes you you.

A defining force in the shaping of identity is a person’s drive to be different and special. Psychologists define this facet of personality as the need for uniqueness. Their research has revealed that every one of us seeks uniqueness to some degree. Those who have little need for uniqueness tend to find comfort in familiarity. Others strive to be extreme outliers. Most of us fall somewhere in between.

Even for the most exotic among us, the need for uniqueness is counterbalanced by a desire to fit in. Consider, for example, the hypothetical case of a very successful businesswoman with a thoroughly pierced face and a Mohawk. Most likely she feels very much at home around others with a similar look. In a corporate boardroom, however, she probably feels ill at ease. The reason is context: in the first case, she surrounds herself with like-minded people, a group to which she feels she belongs. Because these two social circles—those who embrace a punk aesthetic and those who sit in boardrooms—rarely overlap, we almost never encounter such edgy executives. Herein lie the yin and yang of uniqueness: somewhat paradoxically, we set ourselves apart by affiliating with groups of people more like us. Uniqueness emerges from the distinct combination of alliances that only you seek out.

The natural drive to be unique has broad effects. It informs purchasing decisions. It affects appearance, for example, through hairstyles and tattoos. And it is an important driver of innovation. Many major discoveries emerged from the minds of scientific outsiders. Think of Albert Einstein, the patent office clerk who chafed under the strictures of academia but thrived once he could pursue his interests autonomously. Or consider Marie Curie, the first woman to win a Nobel Prize and the first person to win two. Had she conformed to the social expectations for her gender, the world would have been deprived of her many contributions. In short, uniqueness enhances creativity. So let your true self shine through—the world might thank you for it.

 

Fitting in vs. Sticking Out

The idea of a need for uniqueness has a long history in psychology, originating with the study of its counterpart, conformity. Psychologist Solomon Asch attained renown in the 1950s for demonstrating that a person’s views are vulnerable to the opinions of the majority. In his now classic experiment, a participant sat in a room with several other people, all of whom had been secretly hired by Asch and his colleagues. The task was to look at a line and then pick which one of three other lines most closely matched it in length. Given the way the task was designed, identifying the proper line ought to have been exceedingly simple.

But the experimenters set up the situation so that the actors they had hired all responded before the real participant, and they all gave the same wrong answer. When the participants’ turn came around, about a third responded just as the actors did—an astounding fraction, given that the correct choice was crystal clear. Later, when they were asked why they gave the wrong answer, the subjects recalled the uncertainty they had felt at the time. Although they had initially arrived at the proper response, they began to doubt themselves and concluded that the group was probably right.

Variations on Asch’s initial study revealed that factors such as the size of the group, the presence of a dissenter or two, and the group’s overall status could alter how many participants ultimately go against the grain. Nevertheless, as Asch concluded, “that we have found the tendency to conformity in our society so strong that reasonably intelligent and well-meaning young people are willing to call white black is a matter of concern. It raises questions about our ways of education and the values that guide our conduct.”

The matter of why and when people strike out on their own captured the interest of two other psychologists, Howard Fromkin and Charles Snyder. In the 1970s they developed a theory that everyone craves uniqueness to some extent. They discovered that relatively simple questions can gauge the intensity of this need in a person, and so they devised a uniqueness scale. In it, respondents rate how strongly certain statements apply to them, such as “I tend to express my opinions openly, regardless of what others say,” “I like to go my own way,” and “I always try to live according to the rules and standards of society.”
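As a toy illustration only (the published scale has its own items and scoring rules), this is how a Likert-style self-report of this kind can be scored: items that run against uniqueness, such as the conformity statement above, are reverse-keyed so that a higher total always means a stronger need for uniqueness. All item wording, rating ranges and numbers below are hypothetical stand-ins.

```python
# Hypothetical mini-questionnaire inspired by the kinds of items described above.
# Ratings run from 1 ("does not apply to me") to 5 ("applies to me strongly").
ITEMS = [
    ("I tend to express my opinions openly, regardless of what others say", False),
    ("I like to go my own way", False),
    ("I always try to live according to the rules and standards of society", True),  # reverse-keyed
]

def uniqueness_score(ratings, items=ITEMS, scale_max=5):
    """Sum the ratings, flipping reverse-keyed items so that a higher total
    always indicates a stronger need for uniqueness."""
    total = 0
    for rating, (_, reverse) in zip(ratings, items):
        total += (scale_max + 1 - rating) if reverse else rating
    return total

# A respondent who endorses the first two items but also strongly endorses conformity:
print(uniqueness_score([4, 5, 5]))   # 4 + 5 + (6 - 5) = 10 out of a possible 15
```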

Using Fromkin and Snyder’s scale, social psychologist Hans-Peter Erb and his colleagues looked at how the need for uniqueness maps onto the “big five” personality traits, the basic human characteristics recognised by most psychologists. (The five traits are extroversion, openness to experience, neuroticism, agreeableness and conscientiousness.) In a survey of approximately 150 students, they found that three of these traits are closely connected with the need for uniqueness. Individuals with a strong need for uniqueness tend on average to be extroverted. They are sociable and optimistic about life. They also tend to be open to new experiences. In addition, a pronounced need for uniqueness is associated with low neuroticism; such people generally are more satisfied with their life and have fewer mood fluctuations.

Despite their convivial nature, people who are high in their need for uniqueness also tend to care less about others’ opinions, and they typically engage in creative activities more frequently than their mainstream counterparts. The other two dimensions of the big five, agreeableness and conscientiousness, do not appear to be linked with either a strong or weak need for uniqueness.

 

Manipulating Uniqueness

Although a person’s propensity to seek uniqueness is generally stable throughout life, certain situations can shift it temporarily. In 2009 psychologists Roland Imhoff and Hans-Peter Erb investigated how making someone feel average might affect his or her subsequent behaviour. To do so, they asked subjects to fill out a personality test and then gave them bogus feedback: half the participants were told they had very pronounced individual traits, while the other half learned that their personality was simply normal.

Next they were asked how they felt about a debate regarding restaurant-buffet cars on trains. To test whether the personality test results altered their desire to stand out in the crowd, they were shown a chart that claimed that either 79 or 21 percent of respondents believed that restaurant-buffet cars should be dropped from German Federal Railway trains.

The researchers discovered that the subjects who had been told they were average were much more likely to opt for the minority opinion. In contrast, those who had been told they had notably unique traits tended to agree with the majority. This was interpreted as meaning that the people who had been led to believe they were unremarkable had felt that their individuality was threatened and thus offered a dissenting opinion as a way to differentiate themselves. People will express their individuality even in something as mundane as a debate on German restaurant-buffet cars.

The realisation that the desire both to fit in and to stick out can drive decision making has not been lost on retailers and product designers. People who wish to be seen as tough, for example, are more likely to sport a leather jacket. To come across as shrewd in business, a person might acquire a custom-tailored suit. These behaviours may seem like common sense, but the underlying motivation is to signal an individual’s inner self to the outer world.

To understand how this need motivates consumer behaviour, consider a study published in 2012 by Ph.D. student Cindy Chan and her colleagues. They examined how purchasing decisions reflect a person’s attempts to juggle identifying with a social group and maintaining individuality.

Chan and her co-workers suspected that consumers satisfy their competing motives on different dimensions of a given product. To test this idea, the researchers recruited college students who belonged to one of their university’s eating clubs. Similar to fraternities, the eating clubs differ in their social identities, with one club attracting athletes, another drawing science and engineering students, and so on. The researchers took pictures of participants from three clubs and blurred the images so only the clothing remained visible. The students also filled out a questionnaire to measure their need for uniqueness.

Then a group of students drawn from those same three clubs viewed the photographs and guessed the subject’s club. They also rated the uniqueness of that person’s look as compared with others in his or her club.

As it turned out, the observers were good at their jobs. They were highly accurate when identifying a subject’s club from his or her clothing in the photographs. They likewise guessed correctly which students had higher or lower needs for uniqueness. The finding suggests two things: that our taste in clothing broadcasts our identity to the people around us and that we can signal group membership and uniqueness simultaneously through choices of clothing.

But these results do not yet tell us how a person’s choices can accomplish these two goals. Thus, in a set of follow-up experiments, Chan and her collaborators manipulated whether a participant felt like he or she was an insider or an outsider. They did so by asking their subjects to write about a group that they either did or did not feel a part of, such as an athletic team, a fraternity or a student council. As before, they also measured their participants’ need for uniqueness.

Then they examined the participants’ purchasing preferences. Similar to the setup of the German restaurant-buffet car experiment, these researchers showed subjects a set of products, revealed the preference of the group they had described, and asked them what they would choose. But the decision scenarios were multidimensional. For example, participants might choose not only between a BMW and a Mercedes but also between colours or models of the respective brands.

Those who had been made to feel like outsiders did not reveal any preferences. After all, they were not motivated to try to either join or reject the social group they had been thinking about. The participants who felt like insiders, however, were significantly more likely to select the brand that their group had opted for. They successfully communicated their membership in that social circle. But the insiders who ranked higher in the need for uniqueness did not follow the majority all the way. The desire to separate oneself from the herd exerted its influence not in the brand but at the level of the product, through the choice of a model or colour. People do not simply assimilate or differentiate—they can do both simultaneously along different dimensions of a decision.

 

A Matter of Culture

Not only do individuals differ from one another in their need for uniqueness; entire cultures do as well. The most striking, well-supported divide between the cultures of the world is that of individualism versus collectivism. Individualist cultures emphasise personal freedom and reward achievements that make a person stand out. The U.S., the U.K. and the Netherlands are prime examples.

Collectivism emphasises community cohesiveness. These cultures—think Pakistan, Nigeria and Peru, as well as many countries in Asia— encourage members to strive toward shared goals. In a collectivist society, uniqueness has negative connotations, akin to deviance, whereas conformity is linked with harmony. It is a small step to translate these differing cultural priorities into divergent needs for uniqueness. In a study that compared the need for uniqueness of Malaysians and Americans, for example, researchers found considerably lower scores among the former.

In one 1999 experiment that explored the effects of cultural attitudes toward uniqueness, psychologists Heejung Kim and Hazel Markus recruited Americans and East Asians from the waiting areas at San Francisco International Airport. To disguise the true purpose of the study, the participants were asked to fill out a short survey in exchange for a free pen. On completion, the experimenter reached into a bag and pulled out five green or orange pens, such that one or two of the pens were always a different colour from the rest. Which colour a person selected was the real test. As it turned out, Americans opted for the rarer choice: they chose a pen of the minority colour three times out of four, whereas only one in four East Asians chose the less common colour.

Given the pronounced effect they saw, Kim and Markus wondered whether advertisers emphasise these cultural themes in their efforts to entice buyers. In a survey of almost 300 advertisements, they found that Korean ads were twice as likely to highlight conformity as uniqueness, whereas American advertisers more commonly underscored how a product makes someone stand out.

If a need for uniqueness is linked with creativity, then a culture’s orientation toward individualism could enhance that society’s overall innovativeness. At the same time, the every-man-for-himself mentality that accompanies individualism could undercut a culture’s ability to capitalise on its inventive thinking. Aligning a team’s members toward a common goal—an easy task in a collectivist group—might be significantly harder to achieve.

To investigate this question, economists Yuriy Gorodnichenko and Gérard Roland compared data across countries and found strong positive correlations between a country’s individualism and its measures of innovation. They also noted in their study, published in 2010, that increasing individualism enhanced a country’s standard of living considerably. Thus, an increase in individualism of one standard deviation—say, from Venezuela to Greece or Brazil to Luxembourg—was linked with a 60 to 87 percent increase in income. This trend suggests that, one way or another, countries of independent thinkers find a way to rally others to bring their ideas to life.

Contemporary Western society can sometimes seem to take uniqueness to its logical extreme: people pursue personal goals, advance individual careers and strive for independence from others. Yet it is important to remember that humans evolved as a group-living species. Over the course of evolution, humans have become so adapted to group living that a person is unlikely to survive without the aid of others. Shared resources, mutual protection and division of labour are all major advantages of belonging to a group.

It is clear that two opposing forces are at work in shaping a person’s identity—a need for uniqueness and a desire to assimilate. For any one of us, the identity we settle on satisfies both constraints. But keep this in mind as you go through the rest of your day: it is only by standing out that a person can be outstanding.

 

References:

Abnormality as a Positive Characteristic: The Development and Validation of a Scale Measuring Need for Uniqueness. Charles R. Snyder and Howard L. Fromkin in Journal of Abnormal Psychology, Vol. 86, No. 5, pages 518–527; October 1977.

What Motivates Nonconformity? Uniqueness Seeking Blocks Majority Influence. Roland Imhoff and Hans-Peter Erb in Personality and Social Psychology Bulletin, Vol. 35, No. 3, pages 309–320; March 2009.

Identifiable but Not Identical: Combining Social Identity and Uniqueness Motives in Choice. Cindy Chan, Jonah Berger and Leaf Van Boven in Journal of Consumer Research, Vol. 39, No. 3, pages 561–573; October 2012.

Uniquely You. Hans-Peter Erb and Susanne Gebert in Scientific American Mind, Vol. 35, No. 2, pages 26–33.

What a Headache!

Posted in Jayne's blog

Halos, auras, flashes of light, pins and needles running down your arms, the sudden scent of sulphur—many symptoms of a migraine have vaguely mystical qualities, and experts remain puzzled by the debilitating headaches’ cause. Researchers at Harvard University, however, have come at least one step closer to figuring out why women are twice as likely to suffer from chronic migraines as men. The brain of a female migraineur looks so unlike the brain of a male migraineur, asserts Harvard scientist Nasim Maleki, that we should think of migraines in men and women as “different diseases altogether.”

Maleki studies pain and motor regions in the brain, which are known to be unusually excitable in migraine sufferers. In one notable study published in the journal Brain in 2012, she and her colleagues exposed male and female migraineurs to painful heat on the backs of their hands while imaging their brains with functional MRI. She found that the women had a greater response in areas of the brain associated with emotional processing, such as the amygdala, than did the men. Furthermore, she found that in these women, the posterior insula and the precuneus—areas of the brain responsible for motor processing, pain perception and visuospatial imagery—were significantly thicker and more connected to each other than in male migraineurs or in those without migraines.

In Maleki’s most recent work, presented in June at the International Headache Congress, her team imaged the brains of migraineurs and healthy people between the ages of 20 and 65, and it made a discovery that she characterises as “very, very weird.” In women with chronic migraines, the posterior insula does not seem to thin with age, as it does for everyone else, including male migraineurs and people who do not have migraines. The region starts thick and stays thick.

It is not yet known whether the thickening of the insula is something the brain is doing to protect itself or something that worsens women’s migraines. Yet the evidence is mounting that when it comes to migraines, men’s and women’s brains are structurally and functionally different. For treatment, that knowledge could make a huge impact: not only should researchers be better about testing potential migraine drugs on men and women separately, Maleki says, but they may be able to design new treatments based on these brain differences—giving both sexes a better chance at relief.

Migraines can be debilitating, and most people know somebody who regularly suffers from them. Here are the top (scientifically proven) triggers:

■ Overuse of painkillers

■ Foods:

    • Processed, fermented, pickled or marinated foods
    • Chocolate, nuts, peanut butter and dairy products
    • Foods containing tyramine: red wine, aged cheese, smoked fish, chicken livers
    • Fruits and vegetables: avocados, bananas, citrus fruits, onions
    • Meats containing nitrates: bacon, hot dogs, salami, cured meats

■ Changes in hormones, such as menstruation

■ Sex

■ Atypical sleep patterns

 

Deciphering Cluster Headaches

Migraines are not the only culprits when it comes to extraordinary head pain. Cluster headaches have long puzzled researchers, too, although studies are slowly revealing the parts of the brain involved when those punctuated bursts of pain occur.

The excruciating headaches tend to turn up in bouts lasting six to eight weeks. During these cycles, afflicted individuals—more often men—experience intense daily headaches on one side of the head, each lasting an hour or two.

In the late 1990s neurologist Peter Goadsby and his colleagues linked cluster headaches to heightened synaptic activity in or near the hypothalamus, a brain region that mediates hunger, thirst, sleep, sex drive and more. Yet researchers are still trying to understand how activity in this hypothalamus-adjacent area could conjure the condition—and to determine what other glitches in brain structure, metabolism or interactions contribute to sufferers’ throbbing heads.

At least one study suggests that in cluster headache sufferers this hypothalamus-adjoining region may differ not only in its electrical activity but also in its interactions with other parts of the brain. In February 2013, a Beijing-based team imaged the brains of a dozen men in the midst of cluster headache bouts. The researchers traced blood flow—and, with it, functional connections—between the hypothalamus and other parts of the brain. Compared with unaffected men, the cluster headache sufferers did have unusual hypothalamic connections. When headaches hit, these altered interactions often involved parts of the brain associated with pain processing. But hypothalamic connections were off-kilter between headaches, too, pointing to more persistent brain differences in those prone to cluster headaches.

How Lust and Love Work Together

Posted in Jayne's blog

People often think of love and lust as polar opposites—love as the binder of two souls, lust the transient devil on our shoulders, disturbing and disruptive. Now neuroscientists are discovering that lust and love work together more closely than we think. Indeed, the strongest relationships have elements of both.

The divided treatment of love and lust dates to antiquity. The study of love as an academic subject is nearly a century old, with the sentiment covered in introductory textbooks of social psychology. Psychologists, primatologists, neuroanatomists and neurophysiologists came to see love—defined as an intense and complex feeling of deep affection—as responsible for long-term coupling and close relationships. The first psychological tools for measuring love appeared in the 1940s. In a review of the literature published in 2011, psychologist Elaine Hatfield identified 33 scales for measuring love’s gradations.

In contrast, researchers have traditionally regarded lust as little more than uncontrolled sexual urges. The scientific study of lust remained verboten or limited to clinicians, psychiatrists and sex therapists dealing with social and behavioural problems. When the topic of lust did appear in the scientific literature, it was cast as an archaic emotion, a sinful feeling that needed to be suppressed or denied lest it challenge societal order, or an addiction that hijacked human thought, emotion and behaviour in insidious ways.

Now, though, neuroimaging investigations are beginning to flesh (pardon the pun) out the relationship between lust and love. Some research does support the Jekyll and Hyde dichotomy. Studies have revealed that lust and love both have unique brain signatures, suggesting they are separable, with the brain able to generate lust in the absence of love and vice versa. In one study of 500 individuals conducted in the mid-1960s by psychologist Dorothy Tennov, 53 percent of the women and 79 percent of the men agreed with the statement, “I have been sexually attracted without feeling the slightest trace of love”; 61 percent of the women and 35 percent of the men agreed with the statement, “I have been in love without feeling any need for sex.” Neuroimaging studies have also shown considerable overlap between the network for lust and the network underlying addiction, suggesting that the craving associated with lust brings with it impulsivity, lack of self-control and risk taking.

Other studies reveal a more complex and synergistic connection between lust and love. Both feelings can activate regions in the brain related to emotions, including euphoria, reward, motivation, addiction and body image. What is more, lust and love activate different parts of the same brain structures, the insula and the striatum.

A recent meta-analysis of 20 studies with a total of 429 participants revealed that the posterior region of the insula activates more for lust than love and the anterior region of the insula activates more for love than lust. This back-to-front distinction is in line with a broader principle of brain organisation: posterior regions are involved in current, concrete sensations, feelings and responses, and anterior regions are involved in the integration of abstract concepts ranging from the distant past to alternative futures. In this model, lust would be grounded in particular sensory and motor experiences, with love as a more abstract, future-oriented gloss on those experiences with another person.

Studies show that as lust progresses to love, activity cascades from the back of the insula to the front, with the pleasing sensations of lust (sparked at the back) joined by the abstract feelings of affection (triggered at the front). A similar pattern for lust and love emerges in the striatum, this time travelling from bottom to top.

The research suggests that the strongest relationship – passionate love – involves activation of the home bases of both love and lust. Passionate love builds on the neural circuitry for lust, adding regions associated with reward expectancy, habit formation, and abstract representation and control to those associated with rewards for sensations and the satisfaction of cravings.

For any two individuals, the strongest relationship is not necessarily the best outcome: some couplings are just meant to be one-night stands. Love and lust can exist in any combination, with either, both or neither emotion present, and present to any degree. The combinations result in a variety of affiliations. When both people feel the same emotions, the relationship can range from passionate love (high love, high lust) to acquaintanceship (a little of each), with one-night stands (high lust, little love) and companionate love (as in a friendly marriage) in the middle. When the feelings of two people diverge, the results may be unwanted attention for one and unrequited love or lust for the other. The ideal state in any pairing is when the two people agree on their love-lust formula, creating a healthy balance between love and desire and the best chance for a stable, satisfying, monogamous relationship. But whatever the end point, getting there is half the fun!

 

References:

◆Love and Sex: Cross-Cultural Perspectives. Elaine Hatfield and Richard L. Rapson. Reprint edition. University Press of America, 2005.

◆The Common Neural Bases between Sexual Desire and Love: A Multilevel Kernel Density fMRI Analysis. Stephanie Cacioppo et al. in Journal of Sexual Medicine, Vol. 9, No. 4, pages 1048–1054; April 2012.

◆Lust for Life. Stephanie Cacioppo and John T. Cacioppo in Scientific American Mind, Vol. 24, No. 5, pages 56–60; November/December 2013.

Harness the Power of Language

Posted in Jayne's blog

I have mentioned here before my love of writing and languages. In recent weeks, several articles have caught my attention, not only on the healing power of writing but also on how our choice of words & metaphors can influence both ourselves and others. In the vein of wanting to wish you all manner of good things for 2014, I thought I could not do any better than to put words to the research….

 

Metaphors in Our Surroundings Can Trigger Thinking and Behaviour

For the past few years I have taken myself out for a walk in the park, daily if I can and, if not, several times a week.

What started as a need to get out from behind my computer and into the daylight for fresh air during an intense scientific research project turned out to be one of the best daily habits I’ve ever formed. It also transpired that I had a vitamin D deficiency, so the daily boosts of sunlight were exactly what the doctor ordered, literally and metaphorically speaking.

As you look around you now, in this moment, what do you see? Do you see four walls or an expansive vista? The answer could influence your ability to think creatively. A growing body of research suggests that our sensory experiences can trigger metaphorical thinking, influencing our insights and behaviour without us even realising it. New research reveals ways we might be able to harness these subconscious forces.

Consider, for example, the metaphorical idea that the heart is warm and emotional and the head is cool and rational. In a study in August 2013 in the Journal of Personality and Social Psychology, researchers led their subjects to believe they were investigating how people answer questions when using their nondominant hand. To ensure they did not use their dominant hand, the participants were instructed to place their dominant index (pointer) finger either on their temple or on the left side of their chest. Participants who pointed at their head answered test questions more accurately, and those who pointed at their heart were more likely to let emotions sway their decisions in a moral dilemma. The finding adds to a rapidly growing list of metaphor effects: past studies have found that seeing forward motion can propel us to “move forward” in a metaphorical sense and that feeling smooth textures makes a difficult social interaction feel easier (or go more “smoothly”).

In all these studies, the influence of the embodied metaphors evaded conscious awareness—the study subjects did not notice the connection between their sensations and their subsequent decisions or feelings. Yet researchers think we might be able to use this effect by altering our surroundings and habits, such as choosing office art that evokes forward motion. “If you are actively touching an object with the expectation that it will change your view of a situation, it might not work right away,” explains Joshua Ackerman, a psychologist at the Massachusetts Institute of Technology and a co-author of the smoothness study. “But if you make such behavior a habit, you will gradually stop thinking about the connection, and it will then have a stronger effect.”

In a similar vein, freeing yourself from perceived constraints may indeed facilitate “thinking outside the box.” In a series of experiments published in May 2012 in Psychological Science, scientists tested participants’ creative thinking while they literally sat inside or outside a cardboard box. Other participants either walked freely or along the path of a rectangle. Subjects who were outside the box in either sense scored higher on standard measures of creative thinking. This suggests you might be able to encourage your own creativity by eliminating constraints on movement, such as by striding around a room or wandering through a park. The key is variety and spontaneity: if you want to be more creative, move around freely outside and vary things at random during the day. Get away from your typical route, time of day, music or even your pace.

In any situation, consider your surroundings, sensory perceptions and actions—they might be influencing your thought process via the subtle metaphors embedded in daily life.

 

Figurative Speech Sways Decisions

When pondering a decision or trying to convince others, think carefully about your metaphors. The implicit information may subtly influence decision making.

A study published very recently examined how reading different metaphors—“crime is a virus” and “crime is a beast”—affected participants’ reasoning when choosing solutions to a city’s crime problem. Those who read the beast metaphor were more likely to opt for a direct approach emphasising enforcement, whereas the virus metaphor elicited a preference for a systemic, reform-focused solution. A follow-up survey indicated that many participants did not remember the metaphor they read, and none thought a metaphor could have influenced their reasoning.

People don’t seem to consciously ponder the ways in which crime is like a virus or beast. Instead metaphors subtly structure the way the person understands the issue being described.

Previous brain-imaging research has shown that interpreting metaphors requires a variety of areas on both sides of the brain, compared with literal language, which is processed in known language areas in the left hemisphere.

Scientists do not yet know exactly how this pattern affects reasoning, but they suspect that the brain triggers related concepts when processing a metaphor’s meaning. So it is perhaps worth giving more thought to the metaphors you use and hear, especially when the stakes are high. Ask in what ways the metaphor seems appropriate and in what ways it might mislead. Our decisions may become sounder as a result!

 

Expressive Writing May Lead to Faster Recovery From Injury

Expressive writing is known to help ease psychological trauma and improve mood. Now studies suggest that such writing, characterised by descriptions of one’s deepest thoughts and feelings, also benefits physical health.

In a study published in July last year in Psychosomatic Medicine researchers in New Zealand investigated whether expressive writing could help older adults heal faster after a medically necessary biopsy. In the study, 49 healthy adults aged 64 to 97 years wrote about either upsetting events or daily activities for 20 minutes, three days in a row. After a time lag of two weeks, to make sure any initial negative feelings stirred up by recalling upsetting events had passed, all the subjects had a biopsy on the arm, and photographs over the next 21 days tracked its healing. On the 11th day, 76 percent of the group that did expressive writing had fully healed as compared with 42 percent of the control group.

The researchers concluded that writing about distressing events helped participants make sense of the events and reduce distress. Long-term emotional upset can increase the body’s levels of stress hormones such as cortisol, which impedes the immune system. A paper in September 2013 in the British Journal of Health Psychology indeed found that writing about an emotional topic lowered participants’ cortisol levels.

The writing in the New Zealand study may have also sped recovery by improving sleep. Participants who slept more in the week before the biopsy healed faster, perhaps because sleep enhances many of the bodily processes involved in healing.

 

A Change of Perspective Can Offer Solace

If a past ordeal continues to trouble you, try writing about it as if it happened to somebody else: “She crashed the car,” rather than “I crashed the car.” In a study that appeared in February 2013 in Stress and Health, doing so led to greater health gains for participants who struggled with trauma-related intrusive thinking, as measured by the number of days their normal activities were restricted by any kind of illness. It would seem that third-person expressive writing might provide a constructive opportunity to make sense of what happened but from a safe distance that feels less immediate and threatening.

This is interesting because several healing modalities use similar techniques: in some, for example, you re-envisage the trauma as though it were happening on a television screen, thereby creating distance.

I love it when science catches up with the alternative & natural medicine world!!