Neuroscientists identify key role of language gene
Neuroscientists have found that a gene mutation that arose more than half a million years ago may be key to humans’ unique ability to produce and understand speech. Researchers from MIT and several European universities have shown that the human version of a gene called Foxp2 makes it easier to transform new experiences into routine procedures. When they engineered mice to express humanized Foxp2, the mice learned to run a maze much more quickly than normal mice.

The findings suggest that Foxp2 may help humans with a key component of learning language — transforming experiences, such as hearing the word “glass” when we are shown a glass of water, into a nearly automatic association of that word with objects that look and function like glasses, says Ann Graybiel, an MIT Institute Professor, member of MIT’s McGovern Institute for Brain Research, and a senior author of the study.

“This really is an important brick in the wall saying that the form of the gene that allowed us to speak may have something to do with a special kind of learning, which takes us from having to make conscious associations in order to act to a nearly automatic-pilot way of acting based on the cues around us,” Graybiel says.

Wolfgang Enard, a professor of anthropology and human genetics at Ludwig-Maximilians University in Germany, is also a senior author of the study, which appears in the Proceedings of the National Academy of Sciences this week. The paper’s lead authors are Christiane Schreiweis, a former visiting graduate student at MIT, and Ulrich Bornschein of the Max Planck Institute for Evolutionary Anthropology in Germany.

All animal species communicate with each other, but humans have a unique ability to generate and comprehend language. Foxp2 is one of several genes that scientists believe may have contributed to the development of these linguistic skills. The gene was first identified in a group of family members who had severe difficulties in speaking and understanding speech, and who were found to carry a mutated version of the Foxp2 gene.

In 2009, Svante Pääbo, director of the Max Planck Institute for Evolutionary Anthropology, and his team engineered mice to express the human form of the Foxp2 gene, which encodes a protein that differs from the mouse version by only two amino acids. His team found that these mice had longer dendrites — the slender extensions that neurons use to communicate with each other — in the striatum, a part of the brain implicated in habit formation. They were also better at forming new synapses, or connections between neurons.

(via Neuroscientists identify key role of language gene — ScienceDaily)
Woman of 24 found to have no cerebellum in her brain
DON’T mind the gap. A woman has reached the age of 24 without anyone realising she was missing a large part of her brain. The case highlights just how adaptable the organ is.

The discovery was made when the woman was admitted to the Chinese PLA General Hospital of Jinan Military Area Command in Shandong Province complaining of dizziness and nausea. She told doctors she’d had problems walking steadily for most of her life, and her mother reported that she hadn’t walked until she was 7 and that her speech only became intelligible at the age of 6.

Doctors did a CAT scan and immediately identified the source of the problem – her entire cerebellum was missing. The space where it should be was empty of tissue. Instead it was filled with cerebrospinal fluid, which cushions the brain and provides defence against disease.

The cerebellum – sometimes known as the “little brain” – is located underneath the two hemispheres. It looks different from the rest of the brain because it consists of much smaller and more compact folds of tissue. It represents about 10 per cent of the brain’s total volume but contains 50 per cent of its neurons.

Although it is not unheard of to have part of your brain missing, either congenitally or from surgery, the woman joins an elite club of just nine people who are known to have lived without their entire cerebellum. A detailed description of how the disorder affects a living adult is almost non-existent, say doctors from the Chinese hospital, because most people with the condition die at a young age and the problem is only discovered on autopsy (Brain, doi.org/vh7).

(via Woman of 24 found to have no cerebellum in her brain - health - 10 September 2014 - New Scientist)
read of the day: Outlook: gloomy
Humans are wired for bad news, angry faces and sad memories. Is this negativity bias useful or something to overcome?
I have good news and bad news. Which would you like first? If it’s bad news, you’re in good company – that’s what most people pick. But why?

Negative events affect us more than positive ones. We remember them more vividly and they play a larger role in shaping our lives. Farewells, accidents, bad parenting, financial losses and even a random snide comment take up most of our psychic space, leaving little room for compliments or pleasant experiences to help us along life’s challenging path. The staggering human ability to adapt ensures that joy over a salary hike will abate within months, leaving only a benchmark for future raises. We feel pain, but not the absence of it.

Hundreds of scientific studies from around the world confirm our negativity bias: while a good day has no lasting effect on the following day, a bad day carries over. We process negative data faster and more thoroughly than positive data, and they affect us longer. Socially, we invest more in avoiding a bad reputation than in building a good one. Emotionally, we go to greater lengths to avoid a bad mood than to experience a good one. Pessimists tend to assess their health more accurately than optimists.

In our era of political correctness, negative remarks stand out and seem more authentic. People – even babies as young as six months old – are quick to spot an angry face in a crowd, but slower to pick out a happy one; in fact, no matter how many smiles we see in that crowd, we will always spot the angry face first.
go read it…
So a team of neuroscientists sent a message from the brain of one person in India to the brains of three people in France, using brainwave-reading equipment and the Internet. Yes, really.

The process is slow and cumbersome. It also doesn’t make use of any bleeding-edge technology. Instead, it puts together neurorobotics software and hardware that have been developed by several labs in recent years. We’re not predicting that this will have practical applications, or society-changing implications, any time soon. Still, it’s pretty amusing that somebody did this, and we’re here to give you the step-by-step instructions on how. To wit:

The emitter—we’re using the vocab and italics from the original paper because they are awesome—wears an EEG cap on her scalp that records the electrical activity in her brain. The cap communicates wirelessly with a laptop that shows, on its screen, a white circle on a black background.

The emitter translates the message she wants to send into an obscure five-bit binary system called Bacon’s cipher, which is more compact than the binary code that computers use.

The emitter now has to enter that binary string into the laptop using her thoughts. She does this by using her thoughts to move the white circle on-screen to different corners of the screen. (Upper right corner for “1,” bottom right corner for “0.”) This part of the process takes advantage of technology that several labs have developed to allow people with paralysis to control computer cursors or robot arms.

The emitter’s binary message gets sent over the Internet, yay.

The receivers sit inside a transcranial magnetic stimulation machine that’s able to send electromagnetic pulses through people’s skulls. The pulses make the receivers see flashes of light in their peripheral vision that aren’t actually there. In addition, the machine has a robotic arm that’s able to aim at different places on the receivers’ skulls. The results are phantom flashes (called phosphenes) that seem to show up in different positions in the air, which is not spooky at all, no.

As soon as the receivers’ machine gets the emitter’s binary message over the Internet, the machine gets to work. It moves its robotic arm around, sending phosphenes to the receivers at different positions on their skulls. Flashes appearing in one position correspond to 1s in the emitter’s message, while flashes appearing in another position correspond to 0s. We don’t know how the receivers keep track of all that flashing. Perhaps they take notes using a pen and paper.

Whew, that’s a lot of work to give your friends a holler. The research team, including neuroscientists and engineers from universities and startups in Europe and the U.S., understandably sent only two messages in this manner: “hola” and “ciao.” Imagine trying to send “bonjour” or “good morning.”
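If you’re curious what the emitter’s encoding step actually looks like, here’s a minimal sketch of turning “hola” into the bit string she would have to enter. It assumes the plain 26-letter variant of Bacon’s cipher (A = 00000 … Z = 11001); the study’s exact letter table may differ.

```python
# Toy encoder for the emitter's step. Assumes the 26-letter variant of
# Bacon's cipher (A=00000 ... Z=11001); the paper's table may differ.
def bacon_encode(message: str) -> str:
    bits = []
    for ch in message.lower():
        if "a" <= ch <= "z":                   # skip anything outside a-z
            index = ord(ch) - ord("a")         # letter position, 0..25
            bits.append(format(index, "05b"))  # five bits per letter
    return "".join(bits)

# "hola" is 4 letters x 5 bits = 20 bits; 8-bit ASCII would need 32,
# which is why the five-bit cipher is the more compact choice here.
print(bacon_encode("hola"))  # -> 00111011100101100000
```

Twenty circle-steering motions for a four-letter greeting: you can see why they stopped at “hola” and “ciao.”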
The recent release of Susan Greenfield’s new book and the film Lucy, both of which are dependent on tired misconceptions or dubious theories about the brain, suggest one worrying conclusion: we are running out of myths about the brain. So here are some new ones, to keep things ‘mysterious’
One of the best things about being a neuroscientist used to be the aura of mystery around it. It was once so mysterious that some people didn’t even know it was a thing. When I first went to university and people asked what I studied, they thought I was saying I was a “Euroscientist”, which is presumably someone who studies the science of Europe. I’d get weird questions such as “what do you think of Belgium?” and I’d have to admit that, in all honesty, I never think of Belgium. That’s how mysterious neuroscience was, once. Of course, you could say this confusion was due to my dense Welsh accent, or the fact that I only had the confidence to talk to strangers after consuming a fair amount of alcohol, but I prefer to go with the mystery.

It’s not like that any more. Neuroscience is “mainstream” now, to the point where the press coverage of it can be studied extensively. When there’s such a thing as Neuromarketing (well, there isn’t actually such a thing, but there’s a whole industry that would claim otherwise), it’s impossible to maintain that neuroscience is “cool” or “edgy”. It’s a bad time for us neurohipsters (which are the same as regular hipsters, except the designer beards are on the frontal lobes rather than the jaw-line).

One way that we professional neuroscientists could maintain our superiority was by correcting misconceptions about the brain, but lately even that avenue looks to be closing to us. The recent film Lucy is based on the most classic brain misconception: that we only use 10% of our brain. But it’s had a considerable amount of flak for this already, suggesting that many people are wise to this myth.

We also saw the recent release of Susan Greenfield’s new book Mind Change, all about how technology is changing (damaging?) our brains. This is a worryingly evidence-free but very common claim by Greenfield. Depressingly common, as this blog has pointed out many times. But now even the non-neuroscientist reviewers aren’t buying her claims.
Neurons reveal the brain’s learning limit
Carnegie Mellon University, Stanford University, University of Pittsburgh
Scientists have discovered a fundamental constraint in the brain that may explain why it’s easier to learn a skill related to an ability you already have. For example, a trained pianist can learn a new melody more easily than they can learn to hit a tennis serve.

As reported in Nature, the researchers found for the first time that there are limits on how adaptable the brain is during learning, and that these restrictions are a key determinant of whether a new skill will be easy or difficult to learn. Understanding how the brain’s activity can be “flexed” during learning could eventually be used to develop better treatments for stroke and other brain injuries.

Lead author Patrick T. Sadtler, a Ph.D. candidate in the University of Pittsburgh department of bioengineering, compared the study’s findings to cooking. “Suppose you have flour, sugar, baking soda, eggs, salt, and milk. You can combine them to make different items—bread, pancakes, and cookies—but it would be difficult to make hamburger patties with the existing ingredients,” Sadtler says. “We found that the brain works in a similar way during learning. We found that subjects were able to more readily recombine familiar activity patterns in new ways relative to creating entirely novel patterns.”

(via Neurons reveal the brain’s learning limit - Futurity)
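To get a feel for what “recombining familiar activity patterns” means geometrically, here’s a toy sketch (our own illustration, not the study’s actual analysis): treat the familiar patterns as basis vectors in a neural activity space. A new target pattern is easy to reach if it lies in their span, like bread made from the ingredients on hand, and hard if it falls outside it, like the hamburger patties.

```python
# Toy illustration (not the study's analysis): familiar activity patterns
# span a subspace; targets inside that span are "recombinations", while
# targets outside it would require genuinely novel activity.
import numpy as np

rng = np.random.default_rng(0)
familiar = rng.standard_normal((10, 3))  # 3 familiar patterns in a 10-D activity space

def distance_to_span(target, basis):
    """Residual norm after projecting target onto the span of the basis columns."""
    coeffs, *_ = np.linalg.lstsq(basis, target, rcond=None)
    return np.linalg.norm(target - basis @ coeffs)

recombined = familiar @ np.array([0.5, -1.0, 2.0])  # a mix of familiar patterns
novel = rng.standard_normal(10)                     # an arbitrary new pattern

print(distance_to_span(recombined, familiar))  # ~0: reachable by recombining
print(distance_to_span(novel, familiar))       # large: outside the familiar span
```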
Mouse memories ‘flipped’ from fearful to cheerful
By artificially activating circuits in the brain, scientists have turned negative memories into positive ones. They gave mice bad memories of a place, then made them good - or vice versa - without ever returning to that place. Neurons storing the “place” memory were re-activated in a different emotional context, modifying the association. Although unlikely to be applied in humans with traumatic memories, the work sheds new light on the details of how emotional memories form and change. The research is published in the journal Nature. (via BBC News - Mouse memories ‘flipped’ from fearful to cheerful)