A Momentary Flow

Updating Worldviews one World at a time

Tag Results

341 posts tagged Brain

More recently and perhaps most importantly, it’s been found that people who learn a second language, even in adulthood, can better avoid cognitive decline in old age. In fact, when everything else is controlled for, bilinguals who come down with dementia and Alzheimer’s do so about four-and-a-half years later than monolinguals. Dr. Thomas Bak, a lecturer in the philosophy, psychology, and language sciences department at the University of Edinburgh, conducted the study and found that level of education and intelligence mattered less than learning a second language when it came to delaying cognitive decline. “It’s not the good memory that bilinguals have that is delaying cognitive decline,” Bak told me. “It’s their attention mechanism. Their ability to focus in on the details of language.”
Polyglots tend to be good at paying attention in a wide variety of ways, especially when performing visual tasks (like searching a scene or a list for a specific name or object) and when multitasking, which, according to Bak’s theory, is likely improved thanks to the practice of mentally switching between one’s native and foreign language while learning the foreign language.

For a Better Brain, Learn Another Language - The Atlantic

“Cognitive traps,” or simple mistakes in spelling or comprehension that our brains tend to make when taking linguistic shortcuts (such as how you can easily read “tihs senetcne taht is trerilby msispleld”), are better avoided when one speaks multiple languages. Multilinguals might also be better decision-makers. According to a new study, they are more resistant to conditioning and framing techniques, making them less likely to be swayed by such language in advertisements or political campaign speeches. Those who speak multiple languages have also been shown to be more self-aware spenders, viewing “hypothetical” and “real” money (the perceived difference between money on a credit card and money in cold, hard cash) more similarly than monolinguals.

For a Better Brain, Learn Another Language - The Atlantic

For a Better Brain, Learn Another Language
-
The cognitive benefits of multilingualism
-
There’s a certain sinking feeling one gets when thinking of the perfect thing to say just a moment too late. Perhaps a witty parting word could have made all the difference. There is no English word to express this feeling, but the French have the term l’esprit de l’escalier — translated, “stairwell wit” — for this very phenomenon. Nor is there an English word to describe the binge eating that follows an emotional blow, but the Germans have kummerspeck — “grief-bacon” — to do just that. If we had the Swedish word lagom — which means something is just right — the English explanation of Goldilocks’ perfectly temperate soup could have been a lot more succinct. Or the term koi no yokan, a poetic Japanese turn of phrase that expresses the feeling of knowing that you will soon fall in love with the person you have just met. It’s not love at first sight so much as an understanding that love is inevitable. Keats and Byron could have really used a word like that.

There are many words that English speakers don’t have. Sometimes Anglophones take from other languages, but often, we have to explain our way around a specific feeling or emotion that doesn’t have its own word, never quite touching on it exactly. “The reason why we borrow words like savoir faire from French is because it’s not part of the culture [in the United States] and therefore that word did not evolve as part of our language,” says George Lakoff, a professor of cognitive science and linguistics at the University of California at Berkeley.

(via For a Better Brain, Learn Another Language - The Atlantic)

Each morning, we wake up and experience a rich explosion of consciousness — the bright morning sunlight, the smell of roast coffee and, for some of us, the warmth of the person lying next to us in bed. As the slumber recedes into the night, we awake to become who we are. The morning haze of dreams and oblivion disperses and lifts as recognition and recall bubble up the content of our memories into our consciousness. For the briefest of moments we are not sure who we are and then suddenly ‘I,’ the one that is awake, awakens. We gather our thoughts so that the ‘I’ who is conscious becomes the ‘me’ — the person with a past.

The memories of the previous day return. The plans for the immediate future reformulate. The realization that we have things to get on with reminds us that it is a workday. We become a person whom we recognize. The call of nature tells us it is time to visit the bathroom and en route we glance at the mirror. We take a moment to reflect. We look a little older, but we are still the same person who has looked in that same mirror every day since we moved in. We see our self in that mirror. This is who we are.

The daily experience of the self is so familiar, and yet the brain science shows that this sense of the self is an illusion. Psychologist Susan Blackmore makes the point that the word ‘illusion’ does not mean that it does not exist — rather, an illusion is not what it seems. We all certainly experience some form of self, but what we experience is a powerful depiction generated by our brains for our own benefit.

The Self Illusion: How Our Social Brain Constructs Who We Are | Brain Pickings

Google makes us all dumber: The neuroscience of search engines
-
As search engines get better, we become lazier. We’re hooked on easy answers and undervalue asking good questions
-
Ian Leslie

In 1964, Pablo Picasso was asked by an interviewer about the new electronic calculating machines, soon to become known as computers. He replied, “But they are useless. They can only give you answers.”

We live in the age of answers. The ancient library at Alexandria was believed to hold the world’s entire store of knowledge. Today, there is enough information in the world for every person alive to be given three times as much as was held in Alexandria’s entire collection — and nearly all of it is available to anyone with an internet connection. This library accompanies us everywhere, and Google, chief librarian, fields our inquiries with stunning efficiency. Dinner table disputes are resolved by smartphone; undergraduates stitch together a patchwork of Wikipedia entries into an essay. In a remarkably short period of time, we have become habituated to an endless supply of easy answers. You might even say dependent.

Google is known as a search engine, yet there is barely any searching involved anymore. The gap between a question crystallizing in your mind and an answer appearing at the top of your screen is shrinking all the time. As a consequence, our ability to ask questions is atrophying. Google’s head of search, Amit Singhal, asked if people are getting better at articulating their search queries, sighed and said: “The more accurate the machine gets, the lazier the questions become.”

Google’s strategy for dealing with our slapdash questioning is to make the question superfluous. Singhal is focused on eliminating “every possible friction point between [users], their thoughts and the information they want to find.” Larry Page has talked of a day when a Google search chip is implanted in people’s brains: “When you think about something you don’t really know much about, you will automatically get information.” One day, the gap between question and answer will disappear. I believe we should strive to keep it open. That gap is where our curiosity lives. We undervalue it at our peril.

Go read this.

(via Google makes us all dumber: The neuroscience of search engines - Salon.com)

Nancy Kanwisher: A neural portrait of the human mind

Brain imaging pioneer Nancy Kanwisher, who uses fMRI scans to see activity in brain regions (often her own), shares what she and her colleagues have learned: The brain is made up of both highly specialized components and general-purpose “machinery.” Another surprise: There’s so much left to learn. 

Free will might have nothing to do with the universe outside and everything to do with how the brain enables or disables our behaviour and thoughts. What if free will relies on the internal, on how successfully the brain generates and sustains the physiological, cognitive and emotional dimensions of our bodies and minds – and has nothing to do with the external at all?

How new brain implants can boost free will – Walter Glannon – Aeon

Belief in Free Will Not Threatened by Neuroscience
-
A key finding from neuroscience research over the last few decades is that non-conscious preparatory brain activity appears to precede the subjective feeling of making a decision. Some neuroscientists, like Sam Harris, have argued that this shows our sense of free will is an illusion, and that lay people would realize this too if they were given a vivid demonstration of the implications of the science (see below). Books have even started to appear with titles like My Brain Made Me Do It: The Rise of Neuroscience and the Threat to Moral Responsibility by Eliezer J. Sternberg.

However, in a new paper, a team led by Eddy Nahmias counters such claims. They believe that Harris and others (whom they dub “willusionists”) make several unfounded assumptions about the basis of most people’s sense of free will. Using a series of vivid hypothetical scenarios based on Harris’ own writings, Nahmias and his colleagues tested whether people’s belief in free will really is challenged by “neuroprediction” – the idea of neuroscientists using brain activity to predict a person’s choices – and by the related notion that mental activity is no more than brain activity. The research involved hundreds of undergrads at Georgia State University in Atlanta. They were told about a piece of wearable brain imaging technology – a cap – available in the future that would allow neuroscientists to predict a person’s decisions before they made them. They also read a story about a woman named Jill who wore the cap for a month, and how scientists predicted her every choice, including her votes in elections.

Most of the students (80 per cent) agreed that this future technology was plausible, but they didn’t think it undermined Jill’s free will. Most of them only felt her free will was threatened if they were told that the neuroscientists manipulated Jill’s brain activity to alter her decisions. Similar results were found in a follow-up study in which the scenario descriptions made clear that “all human mental activity just is brain activity”, and in another that swapped the power of brain imaging technology for the mind reading skills of a psychic. In each case, students only felt that free will was threatened if Jill’s decisions were manipulated, not if they were merely predicted via her brain activity or via her mind and soul (by the psychic).

Nahmias and his team said their results showed that most people have a “theory-lite” view of free will – they aren’t bothered by claims about mental activity being reduced to neural activity, nor by the idea that such activity precedes conscious decision-making and is readable by scientists. “Most people recognise that just because ‘my brain made me do it,’ that does not mean that I didn’t do it of my own free will,” the researchers said.

As neuroscience evidence increasingly enters the courtroom, these new findings have important implications for understanding how such evidence might influence legal verdicts about culpability. An obvious limitation of the research is its dependence on students in Atlanta. It will be interesting to see if the same findings apply in other cultures.

(via Belief in Free Will Not Threatened by Neuroscience | WIRED)