Being a Couch Potato May Change Your Personality

Posted in Jayne's blog

DOWNLOAD THE ENTIRE JULY 2018 NEWSLETTER including this month’s Freebie.

Have you heard?

Sitting is the new smoking.

A sedentary lifestyle has long been linked to poor health, and a growing body of evidence suggests it may also affect personality. Previous research found associations between a lack of exercise and declines in character traits such as conscientiousness, measured four to 10 years after initial surveys. Now the largest analysis of its kind to date has used longer follow-up periods to confirm these links and show that they persist for nearly two decades.

A team led by psychologist Yannick Stephan of the University of Montpellier in France reached this conclusion after combining data from two large, survey-based studies. The Wisconsin Longitudinal Study (WLS) followed people who had graduated from that state’s high schools in 1957, as well as some of their brothers and sisters. The Midlife in the United States (MIDUS) study recruited people from across the country. Participants in both had completed personality questionnaires when first recruited in the 1990s and answered questions about their exercise habits and health.

Nearly 20 years later a total of about 9,000 people took the same surveys again. Stephan and his team found that subjects who reported being less active had greater reductions on average in conscientiousness, openness, agreeableness and extroversion—four of the so-called Big Five personality traits—even after accounting for differences in baseline personality and health. No link was found with the fifth trait, neuroticism. The changes in traits were small, but the link with exercise was relatively strong. Physical activity predicted personality change better than disease burden did, for example. The findings were published in April this year in the Journal of Research in Personality.

Numerous mechanisms may be involved—from physiological factors such as stress response to changes in physical ability that can affect how much people socialise. Personality is, in part, what behaviours we repeatedly do, and changes in habits can consolidate into changes in personality.

Correlations do not prove causation, however. Additional factors, such as genetics or earlier life events, might be affecting both exercise levels and personality. The findings also need to be replicated in samples from different cultures and in studies using objective measures of an active lifestyle.

Nevertheless, the new analysis underscores the idea that personality is changeable and malleable throughout life. It also tallies with studies suggesting personality is linked to health. These findings further emphasise the need for physical activity your entire life!

And with that, I’m going to go out for a walk….

Does Parkinson’s Begin in the Gut?

Posted in Jayne's blog

DOWNLOAD THE ENTIRE JUNE 2018 NEWSLETTER including this month’s Freebie.

The earliest evidence that the gut might be involved in Parkinson’s emerged more than 200 years ago. In 1817, the English surgeon James Parkinson reported that some patients with a condition he termed “shaking palsy” experienced constipation. In one of the six cases he described, the movement-related problems associated with the disease improved after the gastrointestinal complaints were treated.

Since then, doctors have noted that constipation is one of the most common symptoms of Parkinson’s, appearing in approximately half the individuals diagnosed with the condition and often preceding the onset of movement-related impairments. Still, for many decades, research into the disease focused on the brain. Scientists initially concentrated on the loss of neurons producing dopamine, a molecule involved in many functions including movement. More recently, they have also focused on the aggregation of alpha-synuclein, a protein that twists into an abnormal shape in Parkinson’s patients. A shift came in 2003, when Heiko Braak, a neuroanatomist at the University of Ulm in Germany, and his colleagues proposed that Parkinson’s may actually originate in the gut rather than the brain.

Braak’s theory was grounded in the observation that in post-mortem samples of Parkinson’s patients, Lewy bodies (clumps of alpha-synuclein) appeared in both the brain and the gastrointestinal nervous system that controls the functioning of the gut. The work by Braak and his colleagues also suggested that the pathological changes in patients typically developed in predictable stages that start in the gut and end in the brain. At the time, the researchers speculated that this process was linked to a “yet unidentified pathogen” that travels through the vagus nerve—a bundle of fibres connecting major bodily organs to the brainstem, which joins the spinal cord to the brain.

The idea that the earliest stages of Parkinson’s disease may occur in the gastrointestinal tract has been gaining movement (pardon the pun). A growing body of evidence supports this hypothesis, but the question of how changes in the intestines drive neurodegeneration in the brain remains an active area of investigation. Some studies propose that aggregates of alpha-synuclein move from the intestines to the brain through the vagus nerve. Others suggest that molecules such as bacterial breakdown products stimulate activity along this channel, or that the gut influences the brain through other mechanisms, such as inflammation. Together, however, these findings add to the growing consensus that even if Parkinson’s is very much driven by brain abnormalities, it doesn’t mean that the process starts in the brain.

The Gut-Brain Highway

The vagus nerve, the bundle of fibres that originates in the brain stem and innervates major organs, including the gut, may be the primary route through which pathological triggers of Parkinson’s travel from the gastrointestinal tract to the brain. Recent examinations of patients whose vagus nerves were severed show that they have a lower risk of developing Parkinson’s. Researchers have also demonstrated that alpha-synuclein fibres, injected into the gastrointestinal tracts of rodents, can travel through the vagus into the brain.

If alpha-synuclein does travel from the intestines to the brain, the question still arises: Why does the protein accumulate in the gut in the first place? One possibility is that alpha-synuclein produced in the gastrointestinal nervous system helps fight off pathogens. Last year, Michael Zasloff, a professor at Georgetown University, and his colleagues reported that the protein appeared in the guts of otherwise healthy children after norovirus infections, and that, at least in a lab dish, alpha-synuclein could attract and activate immune cells.

Microbes themselves are another potential trigger for promoting the build-up of intestinal alpha-synuclein. Researchers have found that, in mice, bacterial proteins could trigger the aggregation of the alpha-synuclein in the gut and the brain. Some proteins made by bacteria may form small, tough fibres, whose shape could cause nearby proteins to misfold and aggregate in a manner similar to the prions responsible for mad cow disease.

The microbiome, the totality of microorganisms in the human body, has spurred intense interest among Parkinson’s researchers. A number of reports have noted that individuals with the disease harbour a unique composition of gut microbes, and scientists have also found that transplanting fecal (poo!) microbes from patients into rodents predisposed to develop Parkinson’s can worsen motor symptoms of the disease and increase alpha-synuclein aggregation in the brain.

But rather than bacterial proteins triggering misfolding, Sarkis Mazmanian, a Caltech microbiologist, believes that these microbes could be acting through the metabolites they produce, such as short-chain fatty acids. Mouse experiments from his lab have shown that these molecules appear to activate microglia, the immune cells of the brain. The metabolites, Mazmanian adds, may send a signal through the vagus nerve or bypass it completely through another pathway such as the bloodstream. Because studies find that vagus nerve removal does not completely eliminate the risk of Parkinson’s, other brain-gut routes may also be involved.

A Role for Inflammation?

Yet another idea holds that intestinal inflammation, possibly from gut microbes, could give rise to Parkinson’s disease. The latest evidence supporting this idea comes from a large study, in which Inga Peter, a genetic epidemiologist at the Icahn School of Medicine at Mount Sinai, and her colleagues scanned through two large U.S. medical databases to investigate the overlap between inflammatory bowel diseases and Parkinson’s.

Their analysis compared 144,018 individuals with Crohn’s or ulcerative colitis and 720,090 healthy controls. It revealed that the prevalence of Parkinson’s was 28 percent higher in individuals with the inflammatory bowel diseases than in those in the control group, supporting earlier findings from the same researchers that the two disorders share genetic links. In addition, the research team discovered that in people who received drugs used to reduce inflammation—tumour necrosis factor (TNF) inhibitors—the incidence of the neurodegenerative disease dropped 78 percent.

This study further validates the theory that gut inflammation could drive Parkinson’s development. The anti-TNF finding, in particular, suggests that the overlap between the two diseases might be primarily mediated by inflammation.

Intestinal inflammation might give rise to Parkinson’s in several ways. One possibility is that a chronically inflamed gut might elevate alpha-synuclein levels locally—as Zasloff’s investigation in children suggests—or trigger inflammation throughout the body, which in itself could increase the permeability of the gut and blood-brain barriers. It could also increase circulating cytokines, molecules that can promote inflammation. In addition, changes in the microbiome could be influencing gut inflammation.

There are probably multiple pathways that lead the gut to the brain. For now, Inga Peter and her team are focused on determining whether the protective effect of anti-TNF compounds is due to the lowering of inflammation throughout the body, which could result from other conditions, or whether they only benefit individuals with bowel disorders. Peter plans to investigate the prevalence of Parkinson’s in other patients who take these drugs, such as those with psoriasis or rheumatoid arthritis.

Because not all Parkinson’s patients have inflammatory bowel disorders, findings from the investigations into the co-occurrence of the two conditions might not generalise to everyone with the neurodegenerative disease. Still, these studies and many others that have emerged in recent years support the idea that the gut is involved in Parkinson’s. If this turns out to be true, it may allow researchers to devise treatments that target the gut instead of the brain.

Already, some researchers have started to test such treatments. In 2015, Zasloff and his colleagues launched a company, Enterin, that is currently testing a compound that slows alpha-synuclein aggregation in the gut. Although the treatment is intended to reduce non-motor symptoms of Parkinson’s, such as constipation, the researchers hope that by targeting early gut pathology, they will be able to restore—or prevent—the disease’s effects on the central nervous system.

While many lines of evidence support the gut origins of Parkinson’s, the question of how early the gastrointestinal changes occur remains. In addition, other scientists have suggested that it is still possible that the disease begins elsewhere in the body. In fact, Braak and his colleagues also found Lewy bodies in the olfactory bulb, which led them to propose the nose as another potential place of initiation. (Interestingly, from a natural medicine perspective, we’re taught that loss of the sense of smell is a very early warning sign for Parkinson’s.) It could turn out that there are multiple sites of origin for Parkinson’s disease. For some people, it might be the gut; for others, it might be the olfactory system—or it might just be something that occurs in the brain.

Researching all these new findings got me excited, because if inflammation is a trigger or cause for diseases like Parkinson’s, then an anti-inflammatory diet could be an avenue that researchers haven’t yet explored. This is a self-help action that people can easily do for themselves: lots of green vegetables, less gluten/dairy/alcohol, a good night’s sleep and stress reduction (for example, meditation). All that helps you poop better too! I think that Parkinson’s patients, with their arsenal of drugs, deep brain stimulation implants and decreasing quality of life, would sign up for that in droves. What do you think?

Can the ‘Date Rape’ Drug Rapidly Relieve Depression?

Posted in Jayne's blog

DOWNLOAD THE ENTIRE MAY 2018 NEWSLETTER including this month’s Freebie.

Ketamine has been called the biggest thing to happen to psychiatry in 50 years. The notorious party drug may act as an antidepressant by blocking neural bursts in a little-understood brain region that may drive depression. It improves symptoms in as little as 30 minutes, compared with weeks or even months for existing antidepressants, and is effective even for the roughly one third of patients with so-called treatment-resistant depression.

Although there are multiple theories, researchers do not quite know how ketamine combats depression. Now, new research has uncovered a mechanism that may, in part, explain ketamine’s antidepressant properties. Two studies recently published in Nature describe a distinctive pattern of neural activity that may drive depression in a region called the lateral habenula (LHb); ketamine, in turn, blocks this activity in depression-prone rats.

Originally licensed as an anesthetic in 1970, ketamine has since gained fame as a party drug for causing out-of-body experiences, hallucinations and other psychosis-like effects. Its antidepressant properties in humans were discovered almost 20 years ago. Ketamine does not directly influence the chemical messengers targeted by standard antidepressants, such as serotonin, but rather works via interaction with another chemical, glutamate—not usually associated with mood but rather with brain plasticity. One prominent idea is that it alleviates depression by promoting the growth of new neural connections. If this proves to be right, then Hailan Hu of Zhejiang University in China and her group may have identified multiple new lines of attack for treating a condition the World Health Organization calls the leading cause of disability worldwide.

Both new studies probe the workings of the LHb, a small, central brain region wedged between the stalk of the pineal gland and the thalamus that acts like the dark twin of the brain’s reward centres by processing unexpectedly unpleasant events. For example, if an animal has been trained to expect food when reaching the end of a maze and the reward is not there, the LHb activates, signalling a discrepancy between expectation and outcome. This has led to the LHb being dubbed the key part of a “disappointment circuit.” If the LHb is overactive, it could suppress rewards from normally pleasurable activities—a symptom known as anhedonia—leading to long-term apathy and hopelessness. Studies in animals suggest hyperactivity in the LHb contributes to depression, but the details have been murky.

The first study, led by neuroscientist Yan Yang, also at Zhejiang, discovered a distinctive pattern of rapid bursts in the LHb of rats that display depressionlike behaviours. More typical neural activity, where neurons fire at spaced intervals, was not related to depression, suggesting it is burst activity, rather than increased LHb activity per se, that is related to depression. Exactly why bursts matter is not clear, but the researchers think they may enhance communication with other regions: imagine machine-gun fire versus single shots, carrying information more forcefully to downstream brain areas. The team also provoked LHb neurons into burst firing using optogenetics, a technique that allows neurons to be activated with light. The results showed increased depressive behaviours, indicating the bursts actually cause depression rather than merely occur alongside it.

The researchers stumbled on ketamine after they injected a drug that blocks NMDA receptors (glutamate receptors that, when activated, allow calcium to flood into cells, causing them to fire) into the LHbs of depression-prone rats and saw strong antidepressant effects. Ketamine also blocks NMDA receptors, so the team repeated this with ketamine and again alleviated depression, within one hour. Infusion of ketamine into just one brain region was sufficient to cause rapid antidepressant effects (in rats). Studies of brain tissue samples showed that whereas ketamine silenced burst firing within minutes, the standard antidepressant fluoxetine hydrochloride, commonly known as Prozac, had no such effect at these timescales.

The second study, led by another Zhejiang neuroscientist, Yihui Cui, looked at what might cause burst firing in depression. The researchers found that a protein, Kir4.1, was present at higher levels in depressive rats. Kir4.1 is found in cells called astrocytes, which influence neuronal activity. The team showed this protein promotes burst firing in LHb neurons. Raising Kir4.1 levels increased depressionlike behaviours, whereas blocking its function reduced them.

The studies do not reveal how burst firing influences depression but the researchers have a hypothesis. The LHb connects to parts of the limbic system—which processes emotion—as well as reward centres that signal using chemical messengers associated with pleasure and mood, like dopamine and serotonin. The LHb inhibits activity in these regions, so burst firing may more effectively put the brakes on systems that produce reward signals from pleasurable activities.

Not everyone outside the work agrees the story can be this simple, however. Neuroscientist Jonathan Roiser of University College London found in his research that the habenula was underactive in depressed patients, which is inconsistent with the Chinese data. But if these discrepancies can be resolved, studying the LHb is a promising path toward entirely new approaches to treating severe depression.

The new findings have several implications for treatment. Understanding how ketamine acts so quickly could provide greater insight into the core mechanisms of depression and help to develop next-generation ketamine-based treatments that do not have the same side effects as the drug itself, such as dissociation and bladder problems. Several pharmaceutical companies have been pursuing this goal, but knowing what it is about ketamine that produces the desirable effects could, in principle, aid these efforts.

Researchers are still studying ketamine’s long-term effects, safety and optimum doses in clinical trials. Currently, patients are administered ketamine via infusions in a hospital, which, combined with the side effects, makes it unwieldy. It would be very interesting if ketamine’s rapid effects could be reproduced in a simple oral medication. Its most exciting benefit currently is in treating suicidal ideation (extensively thinking about suicide), for which there currently aren’t any fast-acting therapies. In itself this is an unmet clinical need that could save lives.

Why Everyone Is Insecure (and Why That’s Okay)

Posted in Jayne's blog

DOWNLOAD THE ENTIRE APRIL 2018 NEWSLETTER including this month’s Freebie.

We all know what it’s like to feel as insecure as an untethered tent in a force 10 gale.

We know we should ask that obvious question rattling around in our head during the meeting, but are afraid we’ll sound stupid.

We are secretly in love with the organic veg man at the market, but handing over that cucumber in his presence makes us blush profusely.

Call it social anxiety, self-doubt or inhibition. Whatever we call it, it’s insecurity, and it’s a universal part of the human condition.

This urge to hide starts with the perception that something is wrong with us—we’re awkward, annoying, boring, stupid, a big loser, incompetent or any of a million other not-good-enough traits. And we think unless we conceal our perceived flaw, it will become obvious to everyone, who will then judge and reject us.

The mental health profession has even codified insecurity: at some point in life, it is estimated that around 10-15 percent of us will cross the line into social anxiety disorder, meaning insecurity that gets in the way of living the life people want to live. We deliberately don’t go to the departmental dinner. We pass up promotions because they require public speaking. We turn down invitations because we suspect our friends are only including us out of pity.

Furthermore, some 40 percent of us identify as shy, which is simply the everyday way of saying that insecurity roars to life in social situations where we fear our perceived flaws will be revealed.

And then we kick ourselves: “This is stupid!” “Why can’t I do this?” “What is wrong with me?” The answer: nothing. Social anxiety is a disorder precisely because our perceived fatal flaw is just that: a perception.

If it causes all this misery and hand-wringing, why did insecurity stick around through millennia of evolution? What use does it have? Why didn’t it fall away with our tails or get traded for opposable thumbs?

It turns out insecurity isn’t an oversight of evolution. In fact, it’s necessary: a healthy dose of self-doubt spurs us to monitor ourselves and our interactions. It prompts introspection and helps us identify how to get along better with our fellow humans. In short, we doubt ourselves in order to check ourselves. And those doubts buy us at least three traceable benefits.

First, the biggie: propagation. In 1984, developmental psychologist Cynthia Garcia Coll named the inborn tendency to withdraw from unfamiliar situations, people and environments behavioural inhibition. This is our degree of caution when faced with new people, places or events. And it’s not just found in toddlers clinging to their mother’s leg or cats hiding under the bed when company arrives. In any organism, from bacteria to fish to modern human beings, behavioural inhibition wires us to look before we leap. It’s designed to keep us safe and, ultimately, alive, which helps ensure our genes will make it to the next generation.

To further illustrate the importance of behavioural inhibition, let’s turn it on its head. What’s the opposite of insecurity? Total confidence? Complete fearlessness? At first, that sounds amazing. But be careful what you wish for. Only 1 percent of the population has achieved this dubious goal: psychopaths. Turns out a total lack of insecurity is actually a sign of things gone wrong.

A study by Niels Birbaumer and his team at the University of Tübingen put individuals with social anxiety disorder and criminal psychopaths through an MRI scanner. In those with social anxiety, they found the neural signature of a hair-trigger social smoke alarm: an overactive frontolimbic circuit. In psychopaths, they found the exact opposite: an underactive frontolimbic circuit. Additional studies have strengthened the idea that psychopathy and social anxiety lie at opposite ends of the spectrum.

Therefore, in addition to the evolutionary jackpot of reproduction, the second thing insecurity buys us is group harmony. A little insecurity in each of us maintains social cohesion rather than letting rampant psychopaths drag down the whole group. A group that maintains harmony avoids burning its finite time and energy on internal conflict. Over time, a harmonious group will outcompete those weighed down by infighting and power grabs. Indeed, playing well with others is a smarter evolutionary strategy for the group, not to mention all the individuals within it.

And we need a group. Unlike solitary species like tigers or bears, we’re social animals, wired to live together. In ancient times, banishment was the worst possible punishment. Being cut off from the group meant certain death, and in some species—chimps, lions and wolves—it still does.

So the third thing insecurity buys us is actual security. Even if online shopping delivery has supplanted our reliance on the group to hunt and gather food, we still need a group for community, belonging and plain old love. A healthy dose of insecurity allows us to get along and stay safely in the fold.

There’s more: Behavioural inhibition and social anxiety are a package deal. They often come bundled with valuable skills, like conscientiousness, high standards, a strong work ethic, an ability to remember individual faces, empathy and a tendency to work hard at getting along with fellow humans—a skill that’s never been more valuable than in today’s fractious, divided world.

Therefore, from nature’s perspective, it’s better to have an overactive social smoke detector. It’s better to ring a false alarm when there is no threat than to miss a real threat. False alarms are annoying, but they’re much better than the house burning down around us.

Let’s wrap it up with a big ribbon and take it home. Insecurity persists because it buys us more than it costs us: self-awareness, safety, group harmony, belonging and a much better life than that of a psychopath. Maybe the shrinking violets and wallflowers of the world are actually the foundation of our beautiful bouquet of humanity.

Happy April – and see you again in May!

Bad News for the Highly Intelligent

Posted in Jayne's blog

DOWNLOAD THE ENTIRE MARCH 2018 NEWSLETTER including this month’s Freebie.

There are advantages to being smart.

People who do well on standardised tests of intelligence—IQ tests—tend to be more successful in the classroom and the workplace. Although the reasons are not fully understood, they also tend to live longer, healthier lives, and are less likely to experience negative life events such as bankruptcy.

But now there’s some bad news for people in the right tail of the IQ bell curve. In a study just published in the journal Intelligence, Pitzer College researcher Ruth Karpinski and her colleagues emailed a survey with questions about psychological and physiological disorders to members of Mensa. A “high IQ society,” Mensa requires that its members have an IQ in the top 2 percent. For most intelligence tests, this corresponds to an IQ of about 132 or higher. (The average IQ of the general population is 100.) The survey of Mensa’s highly intelligent members found that they were more likely to suffer from a range of serious disorders.
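If you’re curious where that “top 2 percent” cut-off comes from, it can be sanity-checked with a quick back-of-the-envelope calculation. A rough sketch (assuming IQ scores follow a normal distribution with mean 100 and standard deviation 15, the convention on most modern tests):

```python
from statistics import NormalDist

# IQ is conventionally scaled to a mean of 100 and a standard deviation of 15
iq = NormalDist(mu=100, sigma=15)

# Score at the 98th percentile (the "top 2 percent" Mensa cut-off)
cutoff = iq.inv_cdf(0.98)
print(round(cutoff, 1))  # roughly 131

# Fraction of the population at or above an IQ of 132
share_above_132 = 1 - iq.cdf(132)
print(round(share_above_132 * 100, 1))  # a little under 2 percent
```

The figure of 132 quoted above lines up with tests scaled to a standard deviation of 16 rather than 15, which is why published cut-offs vary by a point or two between tests.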

The survey covered mood disorders (depression, dysthymia and bipolar), anxiety disorders (generalised, social and obsessive-compulsive), attention-deficit hyperactivity disorder and autism. It also covered environmental allergies, asthma and autoimmune disorders. Respondents were asked to report whether they had ever been formally diagnosed with each disorder or suspected they suffered from it. With a return rate of nearly 75 percent, Karpinski and her colleagues compared the percentage of the 3,715 respondents who reported each disorder to the national average.

The biggest differences between the Mensa group and the general population were seen for mood disorders and anxiety disorders. More than a quarter (26.7 percent) of the sample reported that they had been formally diagnosed with a mood disorder, while 20 percent reported an anxiety disorder—far higher than the national averages of around 10 percent for each. The differences were smaller, but still statistically significant and practically meaningful, for most of the other disorders. The prevalence of environmental allergies was triple the national average (33 percent vs. 11 percent).

To explain their findings, Karpinski and her colleagues propose the hyper brain/hyper body theory. This theory holds that, for all of its advantages, being highly intelligent is associated with psychological and physiological “overexcitabilities,” or OEs. A concept introduced by the Polish psychiatrist and psychologist Kazimierz Dabrowski in the 1960s, an OE is an unusually intense reaction to an environmental threat or insult. This can include anything from a startling sound to confrontation with another person.

Psychological OEs include a heightened tendency to ruminate and worry, whereas physiological OEs arise from the body’s response to stress. According to the hyper brain/hyper body theory, these two types of OEs are more common in highly intelligent people and interact with each other in a “vicious cycle” to cause both psychological and physiological dysfunction. For example, a highly intelligent person may overanalyse a disapproving comment made by a boss, imagining negative outcomes that simply wouldn’t occur to someone less intelligent. That may trigger the body’s stress response, which may make the person even more anxious.

The results of this study must be interpreted cautiously because they are correlational. Showing that a disorder is more common in a sample of people with high IQs than in the general population doesn’t prove that high intelligence is the cause of the disorder. It’s also possible that people who join Mensa differ from other people in ways other than just IQ. For example, people preoccupied with intellectual pursuits may spend less time than the average person on physical exercise and social interaction, both of which have been shown to have broad benefits for psychological and physical health.

All the same, Karpinski and her colleagues’ findings set the stage for research that promises to shed new light on the link between intelligence and health. One possibility is that associations between intelligence and health outcomes reflect pleiotropy, which occurs when a gene influences seemingly unrelated traits. There is already some evidence to suggest that this is the case. In a 2015 study, Rosalind Arden and her colleagues concluded that the association between IQ and longevity is mostly explained by genetic factors.

From a practical standpoint, this research may ultimately lead to insights about how to improve people’s psychological and physical well-being. If overexcitabilities turn out to be the mechanism underlying the IQ-health relationship, then interventions aimed at curbing these sometimes maladaptive responses may help people lead happier, healthier lives.

Could A Sense of Purpose Give You a Better Night’s Sleep?

Posted in Jayne's blog

DOWNLOAD THE ENTIRE FEBRUARY 2018 NEWSLETTER including this month’s Freebie.

Despite its importance for health and well-being, many adults find it difficult to consistently get enough sleep. Sleep disturbances are particularly common in older adults and involve a variety of problems including difficulties falling or staying asleep, interrupted breathing and restless legs syndrome. A person’s racial background can influence his or her likelihood of developing a sleep disorder, with more black Americans reporting sleep disturbances than white Americans.

Beyond its effects on health, not getting enough sleep can lead to car accidents, medical errors or other mistakes on the job. To encourage better sleep, the medical community encourages adults to engage in good “sleep hygiene” such as limiting or avoiding caffeine and nicotine, avoiding naps during the day, turning off electronics an hour before bed, exercising and practising relaxation before bedtime. It is also well known that mental health is closely linked to sleep; insomnia is more common in people suffering from depression or anxiety.

A recent study now raises the possibility that sleep could be affected by the degree to which someone feels like his or her life is purposeful or meaningful. Arlener Turner, Christine Smith and Jason Ong of the Northwestern University School of Medicine found that people who reported having a greater sense of purpose in life also reported getting better sleep—even when taking into consideration age, gender, race and level of education.

To establish this link, the researchers recruited a sample of 825 older Americans to participate in a study where they reported on their sense of purpose in life along with the quality of their sleep. The majority of these participants were female (77 percent), and slightly more than half were African-American (54 percent). The participants were, on average, 79 years old. A sense of purpose in life was measured using a survey where participants rated how much they agreed with each of 10 statements, such as “Some people wander aimlessly through life, but I am not one of them.” The results showed that participants who reported having a greater sense of purpose in life also reported higher quality sleep on a regular basis, as well as fewer symptoms of sleep disorders. Importantly, the researchers found that their findings held true for both the white Americans and black Americans who participated in the study.

It is important to emphasise that this study only looked at the association between a sense of purpose and better sleep—the findings cannot say for sure that having a greater sense of purpose causes one to sleep better. An alternative interpretation for the findings is that people who have a greater sense of purpose also tend to have better physical and mental health, which in turn explains their higher quality sleep. Another important limitation of the study is that the findings rely entirely on people’s self-reported sleep symptoms. The researchers did not bring participants into a lab and actually monitor the quality of their sleep. Therefore, it is possible that people with a higher sense of purpose simply remember getting better sleep compared to people who do not report experiencing a sense of purpose in life.

Despite these limitations, this study is the first to suggest any kind of strong link between purpose in life and sleep. Given how common sleep problems are, anything that may suggest new avenues for treatment is important to explore. Perhaps developing a sense of purpose in life could be as effective at improving sleep as following healthy habits, such as limiting coffee. In addition to promoting good sleep hygiene, doctors may end up recommending mindfulness practices or exploring one's values as ways of helping older adults sleep better. Given how elusive a good night's sleep has become for many, it's well worth pursuing. The impact of poor sleep goes far beyond our own personal health, as the side effects have the potential to wreak havoc on other people's lives as well.

Developing a sense of purpose in life may simultaneously convey other benefits in addition to better sleep. Research has linked experiencing purpose in life to a variety of other positive outcomes including better brain functioning, reduced risk of heart attack and even a higher income. People with a greater sense of purpose in their life would surely be better off while also serving as a positive example in the lives of those they know.

Sound good?

Sleep well, sweet dreams and see you in March.

Could Acidity be a Cause of Mental Health Issues?

Posted in Jayne's blog

DOWNLOAD THE ENTIRE JANUARY 2018 NEWSLETTER including this month’s Freebie.

There is a growing body of research that suggests that for some people, even slight changes in the acid balance in their brain may be linked with panic disorder and other psychiatric conditions. Recent findings provide further evidence that such links are real – and suggest they may extend to schizophrenia and bipolar disorder.

The human brain frequently undergoes changes in acidity, with spikes from time to time. One main cause of these temporary surges is carbon dioxide gas, which is constantly released as the brain breaks down sugar to generate energy. Yet the overall chemistry in a healthy brain remains relatively neutral because processes such as respiration—which expels carbon dioxide—help to maintain the status quo. As a result, fleeting acid-base fluctuations usually go unnoticed.

There were earlier hints of the acid-disorder link: studies that directly measured pH—a metric of how acidic or basic something is—in dozens of postmortem human brains revealed lower pH (higher acidity) in patients with schizophrenia and bipolar disorder. Multiple studies in the past few decades have found that when people with panic disorder are exposed to air with a higher than normal concentration of carbon dioxide – which can combine with water in the body to form carbonic acid – they are more likely to experience panic attacks than healthy individuals are. Other research has revealed that the brains of people with panic disorder produce elevated levels of lactate, an acidic source of fuel that is constantly generated and consumed in the energy-hungry brain.
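The pH scale mentioned above is logarithmic, which is worth bearing in mind when reading about "lower pH" in postmortem brains: a small numerical drop conceals a substantial rise in hydrogen-ion concentration. A minimal sketch (the pH values here are illustrative, not taken from the studies):

```python
def hydrogen_ion_concentration(ph):
    """Convert pH to hydrogen-ion concentration in mol/L (pH = -log10[H+])."""
    return 10 ** (-ph)

# A seemingly small pH drop corresponds to a sizeable rise in acidity:
neutral = hydrogen_ion_concentration(7.0)   # roughly neutral tissue
lower = hydrogen_ion_concentration(6.9)     # a drop of just 0.1 pH units
increase = (lower / neutral - 1) * 100      # percent more hydrogen ions, ~26%
```

So a shift of a tenth of a pH unit, easily dismissed as noise, in fact means roughly a quarter more hydrogen ions in the tissue.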

Yet researchers have continued to puzzle over whether this acidity is truly disorder-related or stems from other factors, such as antipsychotic drug use or a person's physical condition just before death. For example, if a person dies slowly, there is a longer window in which their oxygen levels are likely to be low, and that affects their metabolism. In this situation, the body and brain begin to rely more heavily on an oxygen-independent pathway to produce energy. This can lead to higher than normal lactate levels that subsequently decrease pH (i.e., make the brain more acidic).

Such questions prompted Tsuyoshi Miyakawa, a neuroscientist at Fujita Health University in Japan, and his colleagues to scour 10 existing data sets from the postmortem brains of more than 400 patients with either schizophrenia or bipolar disorder. Their aim was to test each of the leading theories about the acid-disorder connection.

First, the researchers controlled for potential confounding factors such as a history of antipsychotic medication use and age at death. As they had suspected, brain pH levels in people with schizophrenia and bipolar disorder were significantly lower (i.e. more acidic) than in healthy individuals. The team also examined five mouse models—rodents with mutations in genes associated with these conditions—and found similar results: The pH levels in the brains of about two dozen drug-free mice were consistently lower, and their lactate levels higher, than those in comparable healthy animals. What is more, the researchers had euthanised all the mice in the same way—which suggests the pH differences cannot all be explained by how long it takes to die.

These findings, published last autumn in Neuropsychopharmacology, collectively provide the most convincing evidence to date that the link between brain acidity and psychiatric disorders is real.

Although the postmortem findings are intriguing, it is hard to know if they are related to the pH changes in the living brain. Live brain-imaging studies of people with bipolar disorder, schizophrenia and panic disorder provide much more direct evidence for the acidity hypothesis. Using magnetic resonance spectroscopy, a method that can detect biochemical changes in tissue, scientists have consistently found elevated levels of lactate in these individuals’ brains.

Even as it becomes clearer that brain acidity may be a key characteristic of schizophrenia and bipolar disorder, whether this could be a cause or effect remains an open question. According to Miyakawa, one possibility is that the increased acidity results from higher than normal neuronal activity in the brains of people with these disorders. Another popular theory is that the greater acidity could be the result of impairments in mitochondria, the powerhouses of cells. These two hypotheses may not be mutually exclusive.

The next big question will be whether low pH in the brain can lead to the cognitive or behavioural changes associated with these disorders. There are suggestions that this is the case. It is known that receptors that are activated by acid have prominent effects on behaviour in animals. That implies that there may be changes in brain pH in the awake and functioning brain that scientists have not appreciated – yet!

Does Living in Crowded Places Drive People Crazy?

Posted in Jayne's blog

DOWNLOAD THE ENTIRE NOVEMBER 2017 NEWSLETTER including this month’s Freebie.

You may be thinking: Yes, living under crowded conditions surely drives people crazy. And the reason may be traced back to some unfortunate rats.

In the mid-20th century ethologist John Calhoun wanted to see how overcrowding would influence social behaviour in rats. He placed rats in a confined space and allowed them to multiply with relatively little control. The results looked like scenes out of a horror movie: cannibalism, dead infants and complete social withdrawal, to name a few.

Calhoun’s rats captured public imagination and inspired a surge of research on the psychological effects of density in our own species. Some studies found that people living in crowded environments indeed showed a variety of social pathologies, just like Calhoun’s rats. But other studies did not. Reviews of the early research concluded that popular fears about overcrowding may be unfounded.

Now half a century has passed, and the world population has doubled. Meanwhile, research on the psychological effects of density has all but disappeared. Recently, however, a group of scientists headed by Douglas Kenrick at Arizona State University revisited this topic with a new tool called life history theory, which concerns how all animals allocate their limited time and energy across life's tasks, such as growing, mating and parenting. Aspects of the environment shape these allocation choices.

What does this have to do with density? One of life history theory’s earliest ideas was that environments of low density — where there are few individuals around— would favour organisms that adopt a “fast” life history strategy. This strategy focuses on quick reproduction and having many offspring but with little investment in each. Put simply, it is focused on the present and prioritises ‘quantity over quality.’

A low-density environment favours a fast strategy because it is presumed to have abundant resources with little social competition. Here fast reproduction would allow for full exploitation of the environment’s resources. Animals living in low-density environments also would not need to invest much in offspring, because it would be easy for those offspring to survive independently in such an environment.

But things change when the environment gets crowded and strong social competition for resources and territory emerges. To compete successfully, individuals now need to spend more time and energy building their own abilities. This often leads to a delay in reproduction. In a dense environment, one's offspring also face greater social competition. Hence, it may be more adaptive to focus time and energy on just a few offspring (to increase their abilities and competitiveness) instead of spreading resources over many offspring.

This approach is referred to as a “slow” life history strategy, and it prioritises ‘quality over quantity’. A slow life history also involves a psychology that plans for the future, given the need to build one’s abilities over time. The researchers had one simple question: Would higher densities also lead people to adopt a slower life history?
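The quantity-versus-quality logic above can be made concrete with a toy calculation. Everything here is invented for illustration (the energy budget, per-child costs and survival curves are hypothetical, not drawn from the study); the point is only that which strategy leaves more surviving offspring depends on how costly survival is in a given environment:

```python
def expected_survivors(budget, per_child_cost, survival_fn):
    """Expected surviving offspring when a fixed energy budget is split equally."""
    n = budget // per_child_cost        # offspring the budget can support
    investment = budget / n             # energy invested in each one
    return n * survival_fn(investment)

# Hypothetical survival curves for the two environments:
low_density = lambda inv: min(0.9, inv / 8)            # survival comes cheap
high_density = lambda inv: max(0.0, (inv - 10) / 30)   # heavy investment needed

fast_low = expected_survivors(100, 5, low_density)     # many cheap offspring
slow_low = expected_survivors(100, 25, low_density)    # few costly offspring
fast_high = expected_survivors(100, 5, high_density)
slow_high = expected_survivors(100, 25, high_density)
```

With these made-up numbers, the fast strategy wins in the low-density environment, while the slow strategy wins once heavy per-child investment is required to compete, mirroring the prediction that crowding favours a slow life history.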

The team examined this idea in a variety of ways. First, they gathered data on country-level population densities and on a variety of psychological traits and behaviours related to life history. They did the same for the 50 U.S. states, where equivalent data were available.

Indeed, they found that across countries and across U.S. states, individuals in regions with denser populations showed traits that corresponded to the psychological profile of a slower life history. They were more likely to plan for the future, preferred long-term, committed romantic relationships, married later, had fewer children, and were more likely to invest in both their own and their children’s education. These relationships held when taking into account alternative factors, such as economic development and urbanisation.

To see if there might be similar effects in short-term situations, the team conducted experiments in which they had both undergraduates and slightly older adults read an article that talked about increasing population growth in the U.S. After reading the article, participants reported both their romantic relationship and family-size preferences. It was found that the undergraduates who read the density article preferred having a few committed romantic relationships (instead of many casual ones). The older adults who read the same article preferred to have fewer children and to invest more in each child (instead of investing less in many children).

Thus, in experiments, individuals led to think about increasing population densities also seemed to shift toward a slower life history, characterised by quality over quantity.

Many of us have intuitions about the effects of crowdedness. It is therefore useful to anticipate some questions. For instance, will higher densities always lead to a slow life history? No. In fact, when high densities are paired with unpredictable death or disease, life history theory predicts that a faster life history will emerge. A second critical point to consider is the nature of social competition. The assumption is that humans typically compete for resources by building skills and abilities (for example, through education). But this might not always be the case. In environments where competition is carried out by forms of lethal violence, we would once again expect higher densities to lead to a faster life history.

These are just some of the many unanswered questions about density. That said, perhaps a crowded life does drive people a little crazy—but not in the frightening ways expected from Calhoun’s rats. Instead it may make people obsessed about planning for the future, getting a good education, waiting for that perfect romantic partner and putting everything they have into that one child who is going to make them proud.

REFERENCES

■ Song, O., Neuberg, S. L., Varnum, M. E. W., & Kenrick, D. T. (2017). The crowded life is a slow life: Population density and life history strategy. Journal of Personality and Social Psychology, 112(5), 736-754.

 

Mind over Matter: Brain over Bowel?

Posted in Jayne's blog

DOWNLOAD THE ENTIRE OCTOBER 2017 NEWSLETTER including this month’s Freebie.

In the 1960s, a surgical technique to reduce stomach size (called bariatric surgery) was introduced to help obese patients lose weight. Doctors considered this primarily a mechanical fix. A smaller stomach, the reasoning went, simply cannot hold and process as much food. Patients get full faster, eat less and therefore lose weight.

This idea is in part true. But now scientists know that it is not nearly that simple. Recent science has revealed that appetite, metabolism and weight are regulated through a complex dialogue between bowel and brain—one in which mechanical influences, hormones, bile acids and even the microbes living in our gut all interact with labyrinthine neurocircuitry. Bariatric surgery, scientists are discovering, engages and may change all these systems. In the process, it is helping researchers map how this complicated interplay manipulates our eating behaviours, cravings and frenzied search for calories during starvation. This work could also reveal new targets—including microbes and possibly the brain itself—that render the risky surgical procedure obsolete altogether.

Brain Meets Bowel

We have all felt the physical effects of the gut-brain communion: the gastric butterflies that come with love, the rumbles that arise before delivering a speech. These manifestations result from the brain signaling to the gastrointestinal tract, both through hormones and neuronal signals.

Conversely, the gut can send signals back to the brain, too. In fact, coursing through our abdomen is the enteric nervous system, colloquially known as the second brain. This neural network helps to control food digestion and propulsion through the 30 feet of our gastrointestinal tract. It also communicates directly with the brain through the vagus nerve, which connects the brain with many of our major organs.

Two primary gut-brain pathways regulate appetite. Both systems involve a small, central brain region called the hypothalamus, a hotbed of hormone production that helps to monitor numerous bodily processes.

The first system comes into play during fasting. The stomach secretes the hormone ghrelin, which stimulates a region within the hypothalamus called the arcuate nucleus. This structure then releases neuropeptide Y, a neurotransmitter that, in turn, revs up appetite centers in the cerebral cortex, the outer folds of the brain, driving us to seek out food. In anticipation of mealtime, our brain sends a signal to the stomach via the vagus nerve, readying it for digestion. This can occur simply at the sight, smell or thought of food as our brain prepares our body for a meal.

The second gut-brain pathway suppresses our appetite. As we eat, several other hormones, including leptin and insulin, are secreted from fat tissue, the pancreas and the gastrointestinal tract. Separately, these hormones play many roles in digestion and metabolism. Acting together, they signal to another area of the hypothalamus that we are getting full. Our brain tells us to stop eating.

The appetite and satiety loop constantly hums along. Yet hunger pathways also interact with brain regions such as the amygdala, involved in emotion, and the hippocampus, the brain's memory centre. Hence, our “gut feelings” and “comfort foods” are driven as much by moods and nostalgic recollections of Grandma’s apple pie as by mealtimes. As a result of higher thinking processes, food now has context. Food is culture. As playwright George Bernard Shaw put it, “There is no sincerer love than the love of food.”

Then there is the hedonistic thrill of sitting down to a meal. Eating also lights up our reward circuitry, pushing us to eat for pleasure independent of energy needs. It is this arm of the gut-brain axis that many scientists feel contributes to obesity.

Neuroimaging work confirms that, much like sex, drugs, gambling and other vices, food can cause a surge of dopamine release in the brain’s reward circuitry. This neurotransmitter’s activity serves as a powerful motivator, one that can reinforce dining for its own sake rather than just bodily survival. Researchers have found that for rats, sweetness surpasses even cocaine in its desirability. In humans, psychiatrist Nora Volkow, director of the National Institute on Drug Abuse, has confirmed what chocolate lovers everywhere already know: food’s effects on the reward system can override fullness and motivate us to keep eating. Such findings hint at a neurobiological overlap between addiction and overeating, although whether eating can be an outright addiction remains a controversial question.

The Surgical Solution

Thanks to the flow of messenger hormones and neurotransmitters, our mind and stomach are in constant communication. Disrupting this conversation, as bariatric procedures must do, will therefore have consequences.

Research has shown that in the days and weeks after bariatric surgery, sugary, fatty and salty foods become less palatable. One study, published in 2010 by Louisiana State University neurobiologist Hans-Rudolf Berthoud, found that rats lost their preference for a high-fat diet following gastric bypass surgery. In the 1990s multiple research teams had reported that after such surgery, patients often lose the desire to consume sweet and salty foods. More recently, a 2012 study by a team at Brown University found that adult patients had significantly reduced cravings for sweets and fast food following bariatric surgery. Similar findings in adolescent surgery patients also appeared in a 2015 study.

The alteration in cravings and taste may be caused by changes in the release and reception of neurotransmitters throughout the gut-brain system. In 2016 Berthoud and his colleagues found that in the short term—around 10 days postprocedure—bariatric surgery in mice caused additional meal-induced neural activity in brain regions known to communicate with the gut compared with brain activity before the surgery. Specifically, the boost in activity was seen in a connection leading from stomach-sensing neurons in the brain stem to the lateral parabrachial nucleus, part of the brain’s reward system, as well as the amygdala.

An expert in this area is biochemist Richard Palmiter of the University of Washington. In a 2013 study published in Nature, Palmiter’s group used complex genetic and cell-stimulation techniques — including optogenetics, a means of controlling living tissue using light—to activate or silence specific neurons in the brain stem parabrachial nucleus pathway in mice. He found that engaging this circuit strongly reduced food intake. But deactivating it left the brain insensitive to the cocktail of hormones that typically signaled satiety—such that mice would keep eating.

Palmiter’s work suggests that engagement of the brain stem parabrachial pathway helps us curb our appetite. Because it is this same pathway that becomes unusually active postsurgery, it is probable that the hyperactivation Berthoud discovered is part of the gut-brain’s effort to assess satisfaction postsurgery. The brain must relearn how to be satisfied with smaller portions.

In other words, bariatric surgery is certainly a mechanical change: with less space, the body needs to adjust. Still, there is clearly more to the story. After the procedure, more undigested food may reach the intestine, and, Berthoud speculates, it would then trigger a hormonal response that alerts the brain to reduce food intake. In the process, it would alter the brain’s activity in response to eating. If he is correct, the surgery’s success—at least in the short term—may have as much to do with its effects on the gut-brain axis as it does on the size of a person’s stomach.

The Microbial Mind

There is another player in the complex communications of mind and gut that might explain bariatric surgery’s effects. Experts have implicated the microbiota—the trillions of single-celled organisms bustling about our digestive system—in countless disorders, including many that affect the brain. Our co-denizens and their genome, the “microbiome,” are thought to contribute to autism, multiple sclerosis, depression and schizophrenia by communicating with the brain either indirectly via hormones and the immune system or directly through the vagus nerve.

Research by gastroenterologist Lee Kaplan, director of the Massachusetts General Hospital Weight Center, suggests that the microbiota may play a role in obesity. In a study published in 2013 in Science Translational Medicine, Kaplan and his colleagues transferred the gut microbiota from mice that had undergone gastric bypass surgery to those that had not. Whereas the surgery group lost nearly 30 percent of their body weight, the transplanted mice lost a still significant 5 percent of their body weight. (Meanwhile a control group that did not have surgery experienced no significant weight change.) The fact that rodents could lose weight without surgery, simply by receiving microbes from their postoperative fellows, suggests that these microbial populations may be at least partly responsible for the effectiveness of bariatric procedures.

A similar study, published in 2015 by biologist Fredrik Bäckhed of the University of Gothenburg in Sweden, found that two types of bariatric surgery—the Roux-en-Y gastric bypass and vertical banded gastroplasty—resulted in enduring changes in the human gut microbiota. These changes could be explained by multiple factors, including altered dietary patterns after surgery; acidity levels in the gastrointestinal tract; and the fact that the bypass procedure causes undigested food and bile (the swamp-green digestive fluid secreted by the liver) to enter the gut farther down the intestines.

As part of the same research, Bäckhed and his colleagues fed mice microbiota samples from obese human patients who either had or had not undergone surgery. All the rodents gained varying degrees of body fat, but mice colonised with postsurgical microbiota samples gained 43 percent less.

How might changes in our gut’s flora alter their interactions with the gut-brain axis and affect weight? Although the answer is still unclear, there are a few promising leads.

Specific gut microbial populations can trigger hormonal and neuronal signaling to the brain such that they influence the development of neural circuits involved in motor control and anxiety. Bäckhed suspects gut flora after bariatric surgery could have a comparable effect on brain regions associated with cravings and appetite.

The neurotransmitter serotonin could play a special role as well. About 90 percent of our body’s serotonin is produced in the gut, and in 2015 researchers at the California Institute of Technology reported that at least some of that production relies on microbes. Change the microbes; change the serotonin production. And that could make quite a difference because, as numerous studies have confirmed, stimulating the brain’s serotonin receptors can significantly reduce weight gain in rodents and humans.

Treating the Gut-Brain Axis

It is a welcome turn of fate that bariatric surgery is illuminating new directions in treating obesity—which affects more than 600 million people worldwide. Some of these avenues could render surgery obsolete or at least reserved for the most extreme cases. Hijacking the gut-brain axis may thus move to the forefront of the battle against excess weight.

In 2015, for example, the U.S. Food and Drug Administration approved a device that stimulates the vagus nerve to quell food cravings. A surgeon implants the device, made up of an electrical pulse generator and electrodes, in the abdomen so that it can deliver electric current to the vagus nerve. Although precisely how it works is unknown, the study leading to its approval found that patients treated for one year with this tool lost 8.5 percent more of their excess weight than those without the device.

That approach offers some patients a less invasive alternative to bariatric surgery, but for the moment, vagus nerve stimulators are not as effective as many other obesity therapies. Meanwhile a number of intrepid neurosurgeons are investigating the use of a technique called deep-brain stimulation. Approved for use in Parkinson’s disease and obsessive-compulsive disorder, the procedure involves stimulating specific brain regions using implanted electrodes. Although this research is in its infancy, numerous brain regions involved in appetite control are being explored as possible targets.

The Mayo Clinic believes that in the future the best approach to treating obesity will be highly personalised. It regards obesity as a disease of the gut-brain axis in which the abnormal part of the axis must be identified in each patient so that treatment can be tailored accordingly.

In 2015 Acosta Cardenas of the Mayo Clinic and his colleagues looked at numerous factors potentially related to obesity in more than 500 normal-weight, overweight and obese patients. Among the factors were how quickly the study subjects got full, how quickly their stomachs emptied, hormone levels in response to eating and psychological traits. Acosta Cardenas’s findings support the idea that there are clear subclasses of obesity and that the cause and ideal treatment of obesity is most likely unique to each patient. For example, 14 percent of the obese individuals in his study had a behavioural or emotional component that would steer his treatment recommendation away from surgery and medication and toward behavioural therapy. He can also foresee a future in which he might prescribe a probiotic or antibiotic for obesity patients with an abnormal microbiota.

REFERENCES

■ Conserved Shifts in the Gut Microbiota Due to Gastric Bypass Reduce Host Weight and Adiposity. Alice P. Liou et al. in Science Translational Medicine, Vol. 5, No. 178, Article 178ra41; March 2013.

■ Roux-en-Y Gastric Bypass and Vertical Banded Gastroplasty Induce Long-Term Changes on the Human Gut Microbiome Contributing to Fat Mass Regulation. Valentina Tremaroli et al. in Cell Metabolism, Vol. 22, pages 228–238; August 4, 2015.

■ Eating in Mice with Gastric Bypass Surgery Causes Exaggerated Activation of Brainstem Anorexia Circuit. Michael B. Mumphrey et al. in International Journal of Obesity, Vol. 40, No. 6, pages 921–928; June 2016.

■ Mind Over Meal. B. Stetka in Scientific American Mind, July/August 2017, pages 27–33.

■ Are Microorganisms Making You Moody?

https://jaynejubb.com/november2012article.htm

Rethinking Relief

Posted in Jayne's blog

DOWNLOAD THE ENTIRE SEPTEMBER 2017 NEWSLETTER including this month’s Freebie.

The United States is in the grip of an unprecedented public health crisis – and unfortunately one in which well-meaning doctors have played a part. Driven largely by chronic pain, prescription sales of opioid painkillers quadrupled between 1999 and 2014. In 2012 alone, doctors issued 259 million opioid prescriptions – enough to give a bottle of pills to every adult in the United States. And in 2015 more than half of all overdose deaths in the USA involved opioids, whether pain medications (such as OxyContin and Vicodin) or illicit substances, such as opium and heroin. To put that statistic in perspective, opioids claimed roughly as many lives that year as car crashes.

Addiction is undoubtedly part of the problem, but experts now agree that the real driver behind the opioid epidemic is chronic pain. According to a landmark study published in 2011 by the Institute of Medicine, an estimated 100 million American adults live with persistent or chronic pain. Many rely on opioids just to keep moving.

There is no question that these drugs provide an excellent defence against acute, short-term pain, which alerts us to an injury or disease and subsides during recovery. But chronic pain is fundamentally different. It lingers long after an injury has healed and can produce a variety of symptoms, from headaches to body aches to crippling fatigue. It may stem from an underlying condition, such as osteoarthritis or multiple sclerosis. But it can often have no obvious source.

For some, chronic pain begins with nerve damage from diabetes, chemotherapy, a virus, a car accident or some other occurrence. In these cases, injured nerve fibres mistakenly continue to send pain signals to the brain, causing what is known as neuropathic pain.

No matter how chronic pain starts, it often increases and spreads, leaving many people reaching for more pills. Unfortunately, higher doses of opioid drugs do not guarantee relief—and can actually make matters worse. For starters, patients build tolerance to these medications, so that over time, it takes more opioids to blunt the same levels of pain. Higher doses increase the risk of dangerous side effects, including addiction, coma and death. And recent research shows that even relatively low doses of opioids can also cause hyperalgesia, or an increased sensitivity to pain: sometimes these drugs intensify the very pain they are meant to suppress.

For these reasons, a significant number of chronic pain sufferers eventually find themselves caught in a delicate – and deadly – balancing act: they need to take more opioid medications to keep their disabling pain in check while somehow dodging the drugs’ serious and life-threatening side effects. Some succeed for decades. But those who lose their footing are flooding casualty departments and hospital beds, battling withdrawal, accidental overdose and a host of other opioid-related complications.

Last year medical authorities began to respond on several fronts. In March 2016 the Centers for Disease Control and Prevention issued stricter guidelines for prescribing opioids. Contrary to what has been common practice, it advised against treating chronic pain with these drugs unless the benefits clearly outweigh the risks. Surgeon General Vivek Murthy amplified that message five months later, when he wrote directly to all the nation’s health care providers – the first time any surgeon general has done so – urging those 2.3 million professionals to commit to “turn the tide on the opioid crisis.”

The message is being heard. At a handful of state-of-the-art pain centres around the United States, clinicians are exploring a range of nondrug alternatives, from psychological interventions to complementary therapies. Researchers are also working on next-generation opioid drugs, along with new nonopioid painkillers. These initiatives represent the one upside to the opioid crisis: it is forcing medical professionals to revisit how they care for people in pain.

 

A Different Kind of Pain

Many experts now view chronic pain as a disease in its own right. Over time it engages and changes patterns of activity in brain areas associated not only with physical sensations but with sleep, thought and emotion. No wonder that studies show that chronic pain is associated with higher rates of mortality, sleep disorders, depression and anxiety.

Many chronic pain patients take a cocktail of drugs that would be deadly for someone without chronic pain. According to a Washington Post/Kaiser Family Foundation survey conducted in Autumn 2016, among people taking prescription painkillers for at least two months, about a third said they did not receive information about the dangers of opioids from their doctor. Only a third said their doctor had outlined a plan to wean them off the drugs. And another third reported that their doctor had never discussed any complementary treatments beyond medications. To treat people more effectively will require an important shift in how we think about pain – something alternative/natural medicine practitioners are well-acquainted with. Scientists are catching up in their understanding that pain is not just a sensation but a brain state and that mind-body (or body-mind) interventions may be helpful.

A team at Stanford University brings together pain psychologists, pain management physicians, psychiatrists, neurologists, anesthesiologists, physical and occupational therapists, and nurse practitioners, who collaborate to help patients safely reduce their use of opioids and replace them with non-drug alternatives. The team members meet every week to fine-tune evolving treatment plans that might incorporate cognitive-behavioural therapy (CBT), physical therapy, mindfulness training, yoga, biofeedback and acupuncture. Above all, it is a customised approach to suit the individual patient.

 

Turning Within

Taking such a broad approach is neither simple nor cheap – and better insurance coverage (in the USA!) of nondrug therapies will be needed to make it widely practical. Experts say the complexity of chronic pain warrants it. Perhaps the complementary therapy that has garnered the most attention in recent years is mindfulness-based stress reduction (MBSR), a clinical and more mainstream adaptation of Buddhist meditation practices. Jon Kabat-Zinn, now a professor of medicine emeritus at the University of Massachusetts Medical School, developed MBSR in the 1970s. Since then, MBSR classes have become available in more than 30 countries. A growing body of evidence suggests that MBSR – which encourages nonjudgmental awareness of the present moment and fosters greater mind-body awareness – can mitigate a variety of ailments, from cancer and depression to drug addiction and chronic pain.

In 2016 Daniel Cherkin and his colleagues tested three treatments for chronic low back pain in 342 young and middle-aged adults: MBSR, cognitive-behavioural therapy – designed to change pain-related thoughts and behaviours – and standard pain care. They found that compared with participants who received standard pain care, more patients receiving MBSR or CBT showed a significant drop in “pain bothersomeness” after 26 weeks. In addition, the MBSR and CBT groups improved more in their functional abilities.

Other chronic pain sufferers are making gains with biofeedback. Using sensors to monitor bodily signals such as muscle tension and heart rate, they build awareness of physiological processes and learn to modulate their own pain. A 2017 meta-analysis evaluated biofeedback for chronic back pain in 1,062 patients and found that it not only lowered pain intensity but also improved patients’ coping abilities and reduced the incidence of pain-related depression. Others have tested a more sophisticated technique called neurofeedback, which provides patients with images of their own brain activity using electroencephalography or functional MRI. This kind of training can teach patients to control brain regions associated with pain processing.

Additional evidence suggests that acupuncture might help ease chronic pain in some cases. The practice remains controversial, in part because it is difficult to study. But a 2014 analysis of 29 clinical trials of acupuncture for chronic pain in nearly 18,000 patients showed that compared with treatment with no needles or misplaced needles, the traditional form – with needles placed according to centuries-old Chinese practice – produced greater pain relief. At the same time, a significant number of people in the control groups also saw benefits, suggesting a strong placebo effect.

That finding reinforces the idea that when it comes to pain, simply being under the care of a receptive health care professional can be palliative. Researchers are investigating how all these complementary treatments work. Thankfully, clinicians do not seem to be waiting for basic science to tell them the optimal way to treat pain. There is broad agreement that mindfulness, yoga, biofeedback and acupuncture may succeed by changing patients’ relationship to their pain rather than actually lowering the intensity of the physical sensation. If patients are suffering then it would seem logical (and human) to find what really works from the various diverging modalities available.

The NCCIH (National Center for Complementary and Integrative Health) recently conducted an extensive review of published clinical trials for a variety of complementary therapies with the aim of finding out which treatments might work best for which patients. It found that acupuncture and yoga benefited people with chronic back pain the most. Acupuncture and tai chi proved most helpful for those with chronic pain resulting from osteoarthritis. Massage therapy provided short-term benefits for neck pain, and relaxation techniques were most effective in those with severe headaches and migraines.

 

Feeling Your Pain

There is another reason why individualised care makes sense for chronic pain: different people can experience the same kind of pain in very different ways. In particular, researchers are discovering that how much chronic pain affects any one person depends heavily on so-called biopsychosocial factors – how someone reacts to pain emotionally, what other sources of stress exist, how much social support surrounds the person. Targeting these influences can not only reduce patients’ experience of pain but dramatically improve their quality of life. Indeed, chronic pain–related disabilities often leave people isolated and cut off from friends, which can, in turn, make the pain more intense.

To identify biopsychosocial factors up front, patients at the Stanford clinic fill out an extensive online questionnaire, capturing everything from work histories and adverse childhood experiences to sleep habits and anger levels. The practitioners there believe that collecting this type of data holds the key to matching patients with effective treatments. The questionnaire is part of a free, open-source repository created together with researchers at the NIH (National Institutes of Health). The system, called the Collaborative Health Outcomes Information Registry (CHOIR), is now in use at medical centres around the USA and soon will be in several other countries. It contains data from more than 15,000 patients. Health care providers can use the system to track patients’ progress over time and to compare their trajectories with similar cases.

This data set has revealed that one factor in particular – a mindset called catastrophising – predicts the impact of chronic pain on a person’s life far better than any other measure. At its core, catastrophising is a tendency to exaggerate or magnify the threat of pain, to fear the worst and remain focused on the experience of pain. For people trapped in this way of thinking, their pain feels overwhelming. They hold little hope that they will ever be well again. That leads to a very strong desire to escape the pain, and they reach for the pain medication. Because catastrophising is such a powerful force on the experience of pain, it seems like a stroke of genius to target it.

For many this sense of powerlessness is common – and doctors who dismiss chronic pain because they cannot explain it only compound that feeling. When surgeries or other treatments fail to help, patients learn to expect failure. They become very demoralised. When patients go to the Stanford clinic the practitioners’ first job is to ‘remoralise’ them! The initial step is giving patients back a sense of control, no matter how small. They often need to know that their pain is real, that it is not their ‘fault’ and that there are some ways that it can be addressed. The patients are invited to learn about how pain and biopsychosocial factors interact. They may receive a relaxation CD so that the auditory experience recorded on the CD works to calm the nervous system. And they are encouraged to think of listening to the CD as taking a dose of mind-body medicine!

 

REFERENCES

■ Acupuncture for Chronic Pain. Andrew J. Vickers and Klaus Linde in JAMA, Vol. 311, No. 9, pages 955–956; March 5, 2014.

■ The Effectiveness and Risks of Long-Term Opioid Therapy for Chronic Pain:

A Systematic Review for a National Institutes of Health Pathways to Prevention Workshop. Roger Chou in Annals of Internal Medicine, Vol. 162, No. 4, pages 276–286; February 17, 2015.

■ National Pain Strategy: A Comprehensive Population Health-Level Strategy for Pain. Interagency Pain Research Coordinating Committee. U.S. Department of Health and Human Services, 2016. https://iprcc.nih.gov/National_Pain_Strategy/NPS_Main.htm

■ Doctors are Breaking Away. Stephani Sutherland in Scientific American Mind, Vol. 28, No. 3, pages 28–35; May/June 2017.

■ Effect of Mindfulness-Based Stress Reduction vs Cognitive Behavioral Therapy or Usual Care on Back Pain and Functional Limitations in Adults with Chronic Low Back Pain: A Randomized Clinical Trial. Daniel C. Cherkin et al. in JAMA, Vol. 315, No. 12, pages 1240–1249; March 22, 2016.

■ Evidence-Based Evaluation of Complementary Health Approaches for Pain Management in the United States. Richard L. Nahin et al. in Mayo Clinic Proceedings, Vol. 91, No. 9, pages 1292–1306; September 2016.

■ Efficacy of Biofeedback in Chronic Back Pain: A Meta-analysis. Robert Sielski et al. in International Journal of Behavioral Medicine, Vol. 24, No. 1, pages 25–41; February 2017.

Hardwired for Humour

Posted in Jayne's blog

DOWNLOAD THE ENTIRE JULY 2017 NEWSLETTER including this month’s Freebie.

Laughter is universal.

It is a hardwired response that comes online early, in the first four months of life, regardless of culture or native language. Whether a child is raised in the Netherlands or Nigeria, Peru or Pakistan, his or her first laugh will delight his or her parents at about 14 to 18 weeks of age. A baby’s laugh is easily recognisable, partly because of its genuineness. Like crying, it is hard to fake and, like yawning, is contagious. Its authentic quality makes it hard for parents to ignore. Scientists, on the other hand, have only recently caught on to its significance.

Of course, laughter is not exclusively an expression of amusement. In adults, it can occur in many emotional contexts, including when people are nervous, as a response to others’ laughter or more simply when in the company of other people. But why do infants laugh? It is not so much a question of what they find funny. There is no universal joke for infants. Instead we must consider how infants extract humour from their environment.

In contrast to crying, which clearly urges an infant’s caregiver into action, laughter seems like an emotional luxury. The fact that a three-month-old can have access to this ability— long before other major milestones such as talking and walking—suggests that her chortles, sniggers and guffaws have an ancient and important origin. Laughter can reveal a considerable amount about infants’ understanding of the physical and social world.

Baby Darwin

Laughter precedes language both in infancy and in the evolutionary chain, having been prioritised and preserved by nature. Indeed, several species, including chimpanzees, other apes and squirrel monkeys, engage in vocalisations during play that resemble laughter. These mammals— especially juveniles—display signature breathy and rhythmic sounds while frolicking together.

Evolutionary neuropsychologist Jaak Panksepp has shown that the brains of all mammals contain the neural circuitry engaged in human laughter. These areas include emotional and memory centres, such as the amygdala and hippocampus. Laughter seems to bubble up from below the surface of the cortex as an involuntary response while activating the pleasure systems in the brain. Famously, Panksepp has even documented (by using technologies that allow humans to hear very high frequencies) that rats emit a rhythmic chirping sound when “tickled.”

In humans, infant laughter has gained the attention of a few prominent scholars. In the fourth century B.C., Aristotle posited that the first laugh marked an infant’s transition to humanness and served as primary evidence of that infant having acquired a soul. In 1872 Charles Darwin hypothesised that laughter, like other postural, facial and behavioural expressions of emotion, served as a social signal of “mere happiness or joy.” In his landmark volume, The Expression of the Emotions in Man and Animals, Darwin meticulously described the laughter of his own infant son, writing: “At the age of 113 days these little noises, which were always made during expiration, assumed a slightly different character, and were more broken or interrupted, as in sobbing; and this was certainly incipient laughter.”

Psychology, however, neglected the topic for decades. For most of its history, the discipline has primarily focused on negative emotions such as anger, depression, anxiety and major mental illness. This trend started to change about 40 years ago, when some psychologists began studying resilience to adversity, happiness and the psychology of well-being. A whole new subfield known as positive psychology was born.

Furthermore, it is only within the past 30 years that developmental psychologists have had methodologies for making inferences about infant cognition and emotion. One such method, the “gaze paradigm,” involves timing the duration of an infant’s stare. Several studies have demonstrated that babies will gaze longer at a novel object, which at its most basic level reveals that they can differentiate it from a familiar one.

In 1985 psychologists Elizabeth Spelke and Renée Baillargeon used the gaze paradigm to study infants’ conceptual knowledge. Spelke and Baillargeon began presenting infants with possible and impossible scenarios—for example, one object, in keeping with natural laws, would not penetrate a solid barrier, but a second, similar object would appear to do so. They found that babies gazed longer at unexpected events. These findings led researchers to deduce that infants come equipped with some simple expectations about how objects behave, which, when violated, results in their rapt attention. Such violations, it turns out, are powerful catalysts for humour.

Funny Business

Stand-up comedians often exploit expectations to make audiences laugh. They build suspense and push the boundaries of norms and acceptability to provoke our laughter, whether with puns, jokes or witty retorts. For something to be funny, the person telling a joke and the person hearing it need some common knowledge. Humour therefore requires at least some rudimentary understanding of the physical and social world. This understanding can be based on experience and observation, which provide the foundation for what is “ordinary.” With that baseline, we can differentiate the ordinary from the absurd.

Research from Gina Mireault’s lab shows that infants as young as five months, just a month after laughter comes online, can independently manage this basic perceptual difference. In 2014 she and her colleagues published findings from an experiment in which they presented 30 infants with ordinary and absurd events. For example, an experimenter might squish and roll a red foam ball as an ordinary scenario, then wear it as a nose in an absurd iteration of that event. Not only did infants distinguish between the two, they laughed at the latter. The key finding was that their laughter was not made in imitation; it occurred even when the experimenter and infants’ parents were instructed to remain emotionally neutral.

Just a few months later, at about eight months of age, infants can be effective comedians and understand how to make others laugh without using any words. Psychologist Vasudevi Reddy calls this nonverbal form of humour “clowning.” She has documented babies from eight to 12 months engaged in numerous forms of clowning—for example, exposing their naked tummy while shaking back and forth, attempting to put their toes in a caregiver’s mouth while lying supine, or snatching a clean diaper and feigning disgust, followed by a smile.

Infants this age also engage in teasing, such as smiling coyly as they intentionally disobey a parent’s directive not to climb the stairs or offering the dog a biscuit, only to snatch it quickly back with a cheeky grin. Such “fake outs” have been reported even earlier by parents of six-month-olds, at which point infants can employ fake laughter (or tears) to draw attention to themselves or be included in an interaction that others are enjoying without them. Recall that laughter is difficult to fake, so these displays are easily detected.

Most important, infants create these novel interactions. They decide when and with whom to employ these techniques. As such, these types of playful, teasing exchanges can give us a window into infants’ awareness. Teasing in particular requires at least a rudimentary understanding of others’ minds, a desire to engage, and a guess or prediction as to how to provoke the mind of someone else. To trick someone else means to know that someone else can, in fact, be tricked. This knowledge, referred to as a theory of mind, is a mature insight that has traditionally been credited only to children who are at least four years old. Although infants do not have the theory-of-mind sophistication of older children, their ability to effectively tease and provoke others suggests they have at least some level of awareness.

Great Expectations

Clowning and teasing reflect the primarily social nature of humour, but for something to make us laugh aloud in amusement, we need more than just the presence of other people. After all, infants spend most of their time with others, though little of their time laughing. This is because humour—whether for adults or infants—also requires a cognitive component: incongruity.

Incongruity refers to a situation that psychologist Elena Hoicka describes as misexpected, meaning it creates a misalignment between what the infant expects and what she or he experiences. Misexpected events are slightly out of the ordinary. In contrast, truly unexpected happenings are completely shocking or surprising— and, as such, can be perceived as more disturbing or amazing than humorous. For example, when a cup is worn as a hat, it does not match the infant’s prior experience with cups (or with hats). If the cup transformed into an antelope, the situation would be totally unexpected.

Adults, children and infants alike find unexpected events interesting but not necessarily funny. Multiple explanations arise from the research employing the violation of expectation paradigm. When infants are presented with violations of natural physical laws— such as gravity, solidity, inertia or quantity— they stare at these “magical” events, but they do not laugh. If we contextualise Hoicka’s ideas into the larger research on infant gaze and interest, we can speculate that perhaps humour relates to misexpectations of social behaviour. A toy flying through space and defying gravity is cause for wonderment. But Grandma wearing that toy on her head? Absolutely hilarious.

Humour theorists present one possible explanation through a phenomenon called incongruity resolution. To perceive an incongruity as humorous requires that the incongruity be resolved, which means understanding its cause or getting to the “punch line.” The aha moment at which a listener decodes the nuance or double entendre of a verbal joke, for example, is the moment of resolution. It is the point at which the incongruous nature of why “a guy walks into a bar” becomes humorous, whether or not it is accompanied by overt laughter.

Forty years ago many cognitive psychologists argued that infants were not sophisticated enough to resolve incongruity. By contrast, psychologists Diana Pien and Mary Rothbart proposed that humour perception does not necessarily require advanced cognitive skills. In a study published in 2012 Gina Mireault put that idea to the test.

When Mireault’s group asked 30 parents to “do whatever you normally do to get your baby to laugh or smile,” the parents resorted to wildly exaggerated “clowning”: blowing raspberries, making odd faces and walking like a penguin. These are major permutations of ordinary daily interactions. At the very least, such behaviour gets a baby’s attention. Starting when the children were three and four months of age, the researchers tracked these families through their first year and found that 40 percent of the youngest children laughed in response to their parents’ antics; by five and six months, 60 percent of the infants laughed.

Infants need not do much to resolve these misexpectations to find them funny. In fact, there are at least three clues available to them. Social context is one example: these absurd acts are performed by a social partner, which may be enough to bias the infant toward interpreting the behaviour as positive. Parents typically pair clowning with their own smiling or laughing about 65 percent of the time. This combination signals that the antics are safe, satisfying and joyful.

A second factor is familiarity. Social partners often repeat silly actions over and over again until the infant laughs, and then again because she or he has laughed. It is possible that the caregiver’s repetition allows the infant to either predict the action and its outcome— a resolution in itself— or infer the intentionality of the act. That Dad is balancing a spoon on his nose is not an accident if he repeats the act several times. Psychologist Amanda Woodward has shown that, by their first birthdays, infants can infer intention from others’ actions and speech.

A third element that may help babies differentiate between magical and humorous incongruities is that the latter are possible. Ultimately there is nothing magical about Mum wearing a cup as a hat. The nonmagical nature of humorous events may move infants, as well as children and adults, beyond that initial state of wonder to a final state of humour.

Whatever their strategy, experimental evidence shows that although infants begin to laugh at humorous events at about five months of age, they can detect such activities even earlier. Four-month-olds in one such study gazed at humorous events with intense interest, registering a significant heart rate deceleration. This physiological response is exhibited when they display the same interest in a stimulus, as well as when they smile.

Psychologist Stephen Porges proposes that heart rate deceleration does not necessarily reflect joy so much as prime the infant for it. When babies are confronted with something novel, they stare at it, a response that is accompanied by a heart rate deceleration. Porges suggests that this physiological calm acts as a kind of resource, allowing the infant to remain oriented toward a novel and nonthreatening stimulus. When this reaction is combined with their bias toward sociability, young infants may benefit from this calming response by finding pleasure in absurdity.

All Together Now

Gina Mireault’s work suggests that infants truly can perceive and create humour. But not all laughter relates to amusement. Although there is no evidence of infants laughing in discomfort, we know that adults can and do laugh without mirth. That observation may provide insight into its deeper purpose.

No matter how it is deployed, laughter is social. Robert Kraut and the late Robert Johnston ushered in the field of evolutionary psychology with a landmark 1979 study demonstrating that—among other things—bowlers were more likely to smile not after achieving a strike but after facing the audience following a strike. Psychologist Robert Provine found that laughter is 30 times more likely to occur in the company of other people, regardless of whether anything amusing is happening. Provine’s research shows that laughter usually follows banal comments such as, “I better be going!” or “Great to see you!” rather than comedic punch lines. In addition, people can be amused and not laugh at all.

For youngsters at play, laughter seems to signal both positive emotion and affiliation with one another. Evolutionary psychologists Robin Dunbar and Guillaume Dezecache have proposed that laughter keeps us connected and in harmony as adults when we have long given up rough-and-tumble romps. This idea is especially supported by the contagious quality of laughter in groups of people, including strangers.

Laughter, therefore, serves as a kind of social glue, with many possible meanings. Someone’s nervous giggle may prompt peers to provide comfort or assurance, and a mischievous chuckle can signal when roughhousing is meant purely in jest. Hoicka has described what she calls a “humorous frame,” in which social partners can interact in such a way that both actors interpret an interaction – such as teasing – as positive. Indeed, four- to six-month-old infants are poised for positive emotion. Not yet wary of strangers or of separation from primary caregivers, infants are ready for interaction with anyone, increasing their opportunities for play, smiling and laughter at just the moment when that new response is available to them. From an evolutionary perspective, this joint emergence of laughter and sociability is wise.

Laughter— it turns out— has a serious side. Its value as a social signal and mammalian superglue explains why it comes “factory-installed” as part of infants’ native hardware. At four months of age, infants’ laughter most likely is neurologically jump-started by their intense attention toward novelty and the salience of the broad social context. But within one month, babies have enough cognitive sophistication to detect and interpret new, nonthreatening social events as funny, all by themselves. A few months later they can produce such events, too, much to the joy of everyone.

REFERENCES

Infant Clowns: The Interpersonal Creation of Humour in Infancy. Vasudevi Reddy in Enfance, Vol. 53, No. 3, pages 247–256; 2001.

Laughing Matters. Gina C. Mireault in Scientific American Mind, Vol. 28, No. 3, pages 44–49; May/June 2017.

How Infants Know Minds. Vasudevi Reddy. Harvard University Press, 2008.

Humor in Infants: Developmental and Psychological Perspectives. Gina C. Mireault and Vasudevi Reddy. Springer, 2016.

The Science of Funny

Posted in Jayne's blog

DOWNLOAD THE ENTIRE JUNE 2017 NEWSLETTER including this month’s Freebie.

Laughter comes in many flavours: the giddy giggle, the mild chuckle, the lusty guffaw, the sarcastic “ha!” Its meaning is just as varied, signalling everything from amusement to discomfort to disdain. For researchers, understanding how our brain interprets this complex behaviour is serious business.

 

Real or false hilarity: our brain knows the difference

Most of us will laugh at a good joke, but we also laugh when we are not actually amused. Fake chuckles are common in social situations—such as during an important interview or a promising first date. Laughter is interesting because we observe it across all human cultures and in other species, and because it is an important social signal.

In a 2013 study Carolyn McGettigan and her colleagues scanned the brains of 21 participants while they passively listened to clips of laughter elicited by funny YouTube videos or laughter produced on command (with instructions to sound as natural as possible). Subjects whose medial prefrontal cortex “lit up” more when hearing the posed laughter were better at detecting whether laughs were genuine or not in a subsequent test. (The medial prefrontal cortex is involved in understanding the viewpoint of others.) If you hear a laugh whose meaning seems ambiguous, it makes sense that you will try to work out why the person is laughing.

In a follow-up study in 2016, McGettigan and her colleagues recruited a fresh set of participants to rate the laugh tracks on various qualities, such as authenticity and positivity. They compared these findings with the original brain data and found that the activity in the medial prefrontal cortex was negatively correlated with the genuineness of the laughs: the more active the medial prefrontal cortex, the less genuine the laugh. Their analyses also revealed that both types of laughter engaged the auditory cortices, although activity in these brain regions increased as the laughs became happier, more energetic and more authentic. These results suggest that the brain is working hard not to classify laughs but to figure out the vocaliser’s intention. This is important in terms of evolution: it is good to be able to detect if someone is authentically experiencing an emotion versus if they’re not, so that you are not fooled.

Teaching Robots to Laugh

Did you know that expressing humour is a key part of being human?

When the robot Nao laughs, he does so with his whole body: slapping his knees, shaking his head. But the adorable android, made by SoftBank Robotics, is not merely good at expressing mirth; he can correctly identify as much as 65 percent of happy laughter outbursts in humans, according to a study presented in 2015 at a nonverbal language workshop in The Netherlands. Once robots like Nao master human laughter, they will make far more likable and realistic companions.

Nao’s creators and other scientists are studying the minutiae of human laughter—acoustics, breath, body movements and vibrations—to translate them into algorithms that robots and avatars can learn. And that includes learning how to be funny. In 2016 researchers in South Korea and Singapore showed that Nao is already quite good at telling jokes. When he did a stand-up routine alongside an experienced actor, his taped performance was consistently rated just half a point below the human’s on a scale of 1 to 7. Moreover, people were less disgusted by disparaging jokes when the robot told them. Taezoon Park, an industrial engineer, says that in the future scientists will optimise the robot’s tone of voice, facial expressions and subtle gestures to fine-tune his comedy.

Robots still have a long way to go to fully understand human laughter, which can signify anything from happiness and amusement to sexual interest, embarrassment or anger. Also baffling to machines is the fact that laughter can vary: there is the classic ha-ha-ha laughter, speech laughter (when you speak while laughing) and smile speech (talking while smiling). Distinguishing among these types will be vital for better human-robot interactions. Since laughter is such a crucial part of what it means to be human, artificial intelligence will not be convincing until machines can laugh along with us…

Further, for robots to laugh convincingly with humans, they must be able to tell when a person wants such an interaction. Apparently an ‘inviting laugh’ is longer and louder and has a higher pitch than an ‘isolated laugh’. According to computer scientist Khiet Truong, humans respond to an inviting laugh within half a second on average. Robots will need to do the same—otherwise the interaction no longer feels natural (robots = natural??).

If these efforts succeed, we may soon have humorous robots and avatars that can assist the elderly, cheer up hospital patients, play with kids and help keep us amused. I wonder how long it will take researchers to realise that human contact and laughter are irreplaceable, even by a laughingly-accurate android?

No Laughing Matter

Humour has been touted as a panacea that boosts the immune system, smooths the way to success at work and even helps us to live longer. But for some people, chuckles are no laughing matter.

Those who suffer from gelotophobia, or fear of being laughed at (but it sounds more to me like a fear of ice-cream), dread even well-intentioned jokes. They don’t trust friendly laughter or accept that someone might simply be enjoying themselves: any laughter is bad laughter. Psychologist Willibald Ruch recalls one case he observed in his laboratory: the person would always wait for the next bus if no seat in the last row was free. He could not stand the idea that someone might sit behind him and laugh.

Like most phobias, this one exists on a spectrum from mild to severe. To assess the extent of the problem, scientists ask people to rate how much they identify with statements such as “It takes me very long to recover from having been laughed at” or “When others laugh in my presence, I get suspicious.” Studies across the globe suggest that anywhere between 1.6 and 13 percent of people suffer from gelotophobia. The lowest numbers are seen in countries where people are more equal, such as Denmark and the Netherlands, and the highest in countries where honour is particularly important and shame is used for social control, such as some Asian countries.

Researchers are just beginning to understand how gelotophobia develops. In addition to culture, parenting may play a role. In a study of 100 families, mothers and fathers who were prone to punishment and control were more likely to have kids who feared laughter. Several studies have shown that gelotophobes were often victims of bullying. Also, a 2012 study suggested a partial overlap with social anxiety, finding that 36 percent of gelotophobes meet the criteria for the disorder.

Brain-imaging studies show that gelotophobes process humour differently from other people. A 2016 electroencephalographic (EEG) study revealed that when gelotophobes listen to the sounds of laughter or angry shouting, they show more activity along pathways linking their prefrontal and posterior cortices. The study’s lead author, Ilona Papoušek, a psychologist at the University of Graz in Austria, believes this linkage shows they are more sensitive to actual or supposed malicious aspects of laughter.

Another experiment published in 2016 showed that compared with a control group, gelotophobes have lower activation in their brain’s reward circuits when listening to jokes. It remains unclear, though, what comes first: gelotophobia or atypical processing of laughter in the brain.

The good news, Ruch suspects, is that gelotophobia should respond to the same kind of therapies used for other phobias. The bad news is it might be hard to convince someone who dreads laughter to visit a therapist, who might smile at patients to put them at ease.