A Momentary Flow

Updating Worldviews one World at a time

Each morning, we wake up and experience a rich explosion of consciousness — the bright morning sunlight, the smell of roast coffee and, for some of us, the warmth of the person lying next to us in bed. As the slumber recedes into the night, we awake to become who we are. The morning haze of dreams and oblivion disperses and lifts as recognition and recall bubble up the content of our memories into our consciousness. For the briefest of moments we are not sure who we are and then suddenly ‘I,’ the one that is awake, awakens. We gather our thoughts so that the ‘I’ who is conscious becomes the ‘me’ — the person with a past. The memories of the previous day return. The plans for the immediate future reformulate. The realization that we have things to get on with reminds us that it is a workday. We become a person whom we recognize. The call of nature tells us it is time to visit the bathroom and en route we glance at the mirror. We take a moment to reflect. We look a little older, but we are still the same person who has looked in that same mirror every day since we moved in. We see our self in that mirror. This is who we are. The daily experience of the self is so familiar, and yet the brain science shows that this sense of the self is an illusion. Psychologist Susan Blackmore makes the point that the word ‘illusion’ does not mean that it does not exist — rather, an illusion is not what it seems. We all certainly experience some form of self, but what we experience is a powerful depiction generated by our brains for our own benefit.

The Self Illusion: How Our Social Brain Constructs Who We Are | Brain Pickings

Google makes us all dumber: The neuroscience of search engines
-
As search engines get better, we become lazier. We’re hooked on easy answers and undervalue asking good questions
-
Ian Leslie

In 1964, Pablo Picasso was asked by an interviewer about the new electronic calculating machines, soon to become known as computers. He replied, “But they are useless. They can only give you answers.”

We live in the age of answers. The ancient library at Alexandria was believed to hold the world’s entire store of knowledge. Today, there is enough information in the world for every person alive to be given three times as much as was held in Alexandria’s entire collection —and nearly all of it is available to anyone with an internet connection. This library accompanies us everywhere, and Google, chief librarian, fields our inquiries with stunning efficiency. Dinner table disputes are resolved by smartphone; undergraduates stitch together a patchwork of Wikipedia entries into an essay. In a remarkably short period of time, we have become habituated to an endless supply of easy answers. You might even say dependent. Google is known as a search engine, yet there is barely any searching involved anymore. The gap between a question crystallizing in your mind and an answer appearing at the top of your screen is shrinking all the time. As a consequence, our ability to ask questions is atrophying. Google’s head of search, Amit Singhal, asked if people are getting better at articulating their search queries, sighed and said: “The more accurate the machine gets, the lazier the questions become.” Google’s strategy for dealing with our slapdash questioning is to make the question superfluous. Singhal is focused on eliminating “every possible friction point between [users], their thoughts and the information they want to find.” Larry Page has talked of a day when a Google search chip is implanted in people’s brains: “When you think about something you don’t really know much about, you will automatically get information.” One day, the gap between question and answer will disappear. I believe we should strive to keep it open. That gap is where our curiosity lives. We undervalue it at our peril.

go read this..

(via Google makes us all dumber: The neuroscience of search engines - Salon.com)

'Love hormone' controls sexual behavior in mice
-
A small group of neurons that respond to the hormone oxytocin are key to controlling sexual behaviour in mice, a team has discovered. The researchers switched off these cells, which meant they were no longer receptive to oxytocin. This “love hormone” is already known to be important for many intimate social situations. Without it, female mice were no more attracted to a mate than to a block of Lego, the team report in the journal Cell. These neurons are situated in the prefrontal cortex, an area of the brain important for personality, learning and social behaviour. Both when the hormone was withheld and when the cells were silenced, the females lost interest in mating during oestrus, which is when female mice are sexually active. At other times in their cycle they responded to the males with normal social behaviour. The results were “pretty fascinating because it was a small population of cells that had such a specific effect”, said co-author of the work Nathaniel Heintz of the Rockefeller University in New York. “This internal hormone gets regulated in many different contexts; in this particular context, it works through the prefrontal cortex to help modulate social and sexual behaviour in female mice.” (via BBC News - ‘Love hormone’ controls sexual behaviour in mice)

Free will might have nothing to do with the universe outside and everything to do with how the brain enables or disables our behaviour and thoughts. What if free will relies on the internal, on how successfully the brain generates and sustains the physiological, cognitive and emotional dimensions of our bodies and minds – and has nothing to do with the external at all?

How new brain implants can boost free will – Walter Glannon – Aeon

Belief in Free Will Not Threatened by Neuroscience
-
A key finding from neuroscience research over the last few decades is that non-conscious preparatory brain activity appears to precede the subjective feeling of making a decision. Some neuroscientists, like Sam Harris, have argued that this shows our sense of free will is an illusion, and that lay people would realize this too if they were given a vivid demonstration of the implications of the science. Books have even started to appear with titles like My Brain Made Me Do It: The Rise of Neuroscience and the Threat to Moral Responsibility by Eliezer J. Sternberg. However, in a new paper, a team led by Eddy Nahmias counters such claims. They believe that Harris and others (who they dub “willusionists”) make several unfounded assumptions about the basis of most people’s sense of free will. Using a series of vivid hypothetical scenarios based on Harris’ own writings, Nahmias and his colleagues tested whether people’s belief in free will really is challenged by “neuroprediction” – the idea of neuroscientists using brain activity to predict a person’s choices – and by the related notion that mental activity is no more than brain activity. The research involved hundreds of undergrads at Georgia State University in Atlanta. They were told about a piece of wearable brain imaging technology – a cap – available in the future that would allow neuroscientists to predict a person’s decisions before they made them. They also read a story about a woman named Jill who wore the cap for a month, and how scientists predicted her every choice, including her votes in elections.

Most of the students (80 per cent) agreed that this future technology was plausible, but they didn’t think it undermined Jill’s free will. Most of them only felt her free will was threatened if they were told that the neuroscientists manipulated Jill’s brain activity to alter her decisions. Similar results were found in a follow-up study in which the scenario descriptions made clear that “all human mental activity just is brain activity”, and in another that swapped the power of brain imaging technology for the mind reading skills of a psychic. In each case, students only felt that free will was threatened if Jill’s decisions were manipulated, not if they were merely predicted via her brain activity or via her mind and soul (by the psychic).

Nahmias and his team said their results showed that most people have a “theory-lite” view of free will – they aren’t bothered by claims about mental activity being reduced to neural activity, nor by the idea that such activity precedes conscious decision-making and is readable by scientists. “Most people recognise that just because ‘my brain made me do it,’ that does not mean that I didn’t do it of my own free will,” the researchers said.

As neuroscience evidence increasingly enters the courtroom, these new findings have important implications for understanding how such evidence might influence legal verdicts about culpability. An obvious limitation of the research is its dependence on students in Atlanta. It will be interesting to see if the same findings apply in other cultures.

(via Belief in Free Will Not Threatened by Neuroscience | WIRED)

Neuroscientists identify key role of language gene
-
Neuroscientists have found that a gene mutation that arose more than half a million years ago may be key to humans’ unique ability to produce and understand speech. Researchers from MIT and several European universities have shown that the human version of a gene called Foxp2 makes it easier to transform new experiences into routine procedures. When they engineered mice to express humanized Foxp2, the mice learned to run a maze much more quickly than normal mice. The findings suggest that Foxp2 may help humans with a key component of learning language — transforming experiences, such as hearing the word “glass” when we are shown a glass of water, into a nearly automatic association of that word with objects that look and function like glasses, says Ann Graybiel, an MIT Institute Professor, member of MIT’s McGovern Institute for Brain Research, and a senior author of the study. “This really is an important brick in the wall saying that the form of the gene that allowed us to speak may have something to do with a special kind of learning, which takes us from having to make conscious associations in order to act to a nearly automatic-pilot way of acting based on the cues around us,” Graybiel says. Wolfgang Enard, a professor of anthropology and human genetics at Ludwig-Maximilians University in Germany, is also a senior author of the study, which appears in the Proceedings of the National Academy of Sciences this week. The paper’s lead authors are Christiane Schreiweis, a former visiting graduate student at MIT, and Ulrich Bornschein of the Max Planck Institute for Evolutionary Anthropology in Germany. All animal species communicate with each other, but humans have a unique ability to generate and comprehend language. Foxp2 is one of several genes that scientists believe may have contributed to the development of these linguistic skills. 
The gene was first identified in a group of family members who had severe difficulties in speaking and understanding speech, and who were found to carry a mutated version of the Foxp2 gene. In 2009, Svante Pääbo, director of the Max Planck Institute for Evolutionary Anthropology, and his team engineered mice to express the human form of the Foxp2 gene, which encodes a protein that differs from the mouse version by only two amino acids. His team found that these mice had longer dendrites — the slender extensions that neurons use to communicate with each other — in the striatum, a part of the brain implicated in habit formation. They were also better at forming new synapses, or connections between neurons. (via Neuroscientists identify key role of language gene — ScienceDaily)

Woman of 24 found to have no cerebellum in her brain
-
DON’T mind the gap. A woman has reached the age of 24 without anyone realising she was missing a large part of her brain. The case highlights just how adaptable the organ is. The discovery was made when the woman was admitted to the Chinese PLA General Hospital of Jinan Military Area Command in Shandong Province complaining of dizziness and nausea. She told doctors she’d had problems walking steadily for most of her life, and her mother reported that she hadn’t walked until she was 7 and that her speech only became intelligible at the age of 6. Doctors did a CAT scan and immediately identified the source of the problem – her entire cerebellum was missing. The space where it should be was empty of tissue. Instead it was filled with cerebrospinal fluid, which cushions the brain and provides defence against disease. The cerebellum – sometimes known as the “little brain” – is located underneath the two hemispheres. It looks different from the rest of the brain because it consists of much smaller and more compact folds of tissue. It represents about 10 per cent of the brain’s total volume but contains 50 per cent of its neurons. Although it is not unheard of to have part of your brain missing, either congenitally or from surgery, the woman joins an elite club of just nine people who are known to have lived without their entire cerebellum. A detailed description of how the disorder affects a living adult is almost non-existent, say doctors from the Chinese hospital, because most people with the condition die at a young age and the problem is only discovered on autopsy (Brain, doi.org/vh7). (via Woman of 24 found to have no cerebellum in her brain - health - 10 September 2014 - New Scientist)

Following fast on the heels of the Baumeister paper, the psychologists Paul Rozin and Edward Royzman of the University of Pennsylvania invoked the term ‘negativity bias’ to reflect their finding that negative events are especially contagious. The Penn researchers give the example of brief contact with a cockroach, which ‘will usually render a delicious meal inedible’, as they say in a 2001 paper. ‘The inverse phenomenon – rendering a pile of cockroaches on a platter edible by contact with one’s favourite food – is unheard of. More modestly, consider a dish of a food that you are inclined to dislike: lima beans, fish, or whatever. What could you touch to that food to make it desirable to eat – that is, what is the anti-cockroach? Nothing!’ When it comes to something negative, minimal contact is all that’s required to pass on the essence, they argue.

Praise feels good, but negativity is stronger – Jacob Burak – Aeon
read of the day: Outlook: gloomy
-
Humans are wired for bad news, angry faces and sad memories. Is this negativity bias useful or something to overcome?
-
I have good news and bad news. Which would you like first? If it’s bad news, you’re in good company – that’s what most people pick. But why? Negative events affect us more than positive ones. We remember them more vividly and they play a larger role in shaping our lives. Farewells, accidents, bad parenting, financial losses and even a random snide comment take up most of our psychic space, leaving little room for compliments or pleasant experiences to help us along life’s challenging path. The staggering human ability to adapt ensures that joy over a salary hike will abate within months, leaving only a benchmark for future raises. We feel pain, but not the absence of it. Hundreds of scientific studies from around the world confirm our negativity bias: while a good day has no lasting effect on the following day, a bad day carries over. We process negative data faster and more thoroughly than positive data, and they affect us longer. Socially, we invest more in avoiding a bad reputation than in building a good one. Emotionally, we go to greater lengths to avoid a bad mood than to experience a good one. Pessimists tend to assess their health more accurately than optimists. In our era of political correctness, negative remarks stand out and seem more authentic. People – even babies as young as six months old – are quick to spot an angry face in a crowd, but slower to pick out a happy one; in fact, no matter how many smiles we see in that crowd, we will always spot the angry face first. 

go read it..

(via Praise feels good, but negativity is stronger – Jacob Burak – Aeon)
