Neuroscientists monitor inhibitory neurons in mice that link the sense of smell with memory and cognition, revealing how experience shapes perception
Odors have a way of connecting us with moments buried deep in our past. But researchers have long wondered how the process works in reverse: how do our memories shape the way sensory information is collected? In work published in Nature Neuroscience, scientists from Cold Spring Harbor Laboratory (CSHL) demonstrate for the first time a way to observe this process in awake animals.

The team, led by Assistant Professor Stephen Shea, was able to measure the activity of a group of inhibitory neurons that links the odor-sensing area of the brain with brain areas responsible for thought and cognition. This connection provides feedback so that memories and experiences can alter the way smells are interpreted.

The inhibitory neurons that forge the link are known as granule cells. They are found in the core of the olfactory bulb, the area of the mouse brain responsible for receiving odor information from the nose. Granule cells receive inputs from areas deep within the brain involved in memory formation and cognition, and relay that information back within the olfactory bulb, where they inhibit the neurons that receive sensory inputs. In this way, “the granule cells provide a way for the brain to ‘talk’ to the sensory information as it comes in,” explains Shea. “You can think of these cells as conduits which allow experiences to shape incoming data.”

Why might an animal want to inhibit or block out specific parts of a stimulus, like an odor? Every scent is made up of hundreds of different chemicals, and “granule cells might help animals to emphasize the important components of complex mixtures,” says Shea. For example, an animal might have learned through experience to associate a particular scent, such as a predator’s urine, with danger. But each encounter with the smell is likely to be different. Maybe it is mixed with the smell of pine on one occasion and seawater on another. Granule cells give the brain an opportunity to filter away the less important odors and to focus sensory neurons only on the salient part of the stimulus.

Now that it is possible to measure the activity of granule cells in awake animals, Shea and his team are eager to look at how sensory information changes when the expectations and memories associated with an odor change. “The interplay between a stimulus and our expectations is truly the merger of ourselves with the world. It is exciting to see just how the brain mediates that interaction,” says Shea.

This work was supported by the Klingenstein fellowship and a fellowship from the Natural Sciences and Engineering Research Council of Canada.
What is consciousness and how can a brain, a mere collection of neurons, create it? Michael Graziano, on the neuroscience faculty at Princeton University, is developing a theoretical and experimental approach to these questions. The theory begins with the ability to attribute awareness to others. The human brain has a complex circuitry that allows it to be socially intelligent. One function of this circuitry is to attribute a state of awareness to others: to build the intuition that person Y is aware of thing X. In Graziano’s hypothesis, the machinery that attributes awareness to others also helps attribute the property to oneself. The theory also draws on the relationship between awareness and attention (the brain’s data-handling method of focusing resources on a limited set of signals). Awareness may act as though it were the brain’s cartoon sketch of its own state of attention. That cartoon sketch is sometimes inaccurate, and it is those moments of inaccuracy — when awareness and attention become dissociated — that reveal most about the underlying mechanisms. Through these perspectives Graziano hopes to understand awareness and consciousness as part of the information-processing toolkit used by brains. One possible ultimate benefit from this type of research, perhaps decades in the future, is an artificial intelligence that has the human-like social capability to attribute awareness to itself and to others – a machine that understands what it means to have a mind.
Many have written of the experience of mathematical beauty as being comparable to that derived from the greatest art. This makes it interesting to learn whether the experience of beauty derived from such a highly intellectual and abstract source as mathematics correlates with activity in the same part of the emotional brain as that derived from more sensory, perceptually based, sources. To determine this, we used functional magnetic resonance imaging (fMRI) to image the activity in the brains of 15 mathematicians when they viewed mathematical formulae which they had individually rated as beautiful, indifferent or ugly. Results showed that the experience of mathematical beauty correlates parametrically with activity in the same part of the emotional brain, namely field A1 of the medial orbito-frontal cortex (mOFC), as the experience of beauty derived from other sources.
Your Brain in Love
Cupid’s arrows, laced with neurotransmitters, find their marks
Men and women can now thank a dozen brain regions for their romantic fervor. Researchers have revealed the fonts of desire by comparing functional MRI studies of people who indicated they were experiencing passionate love, maternal love or unconditional love. Together, the regions release neurotransmitters and other chemicals in the brain and blood that prompt euphoric sensations such as attraction and pleasure. Psychiatrists might someday help individuals who become dangerously depressed after a heartbreak by adjusting those chemicals. Passion also heightens several cognitive functions as the brain regions and chemicals surge. “It’s all about how that network interacts,” says Stephanie Ortigue, an assistant professor of psychology at Syracuse University, who led the study. The cognitive functions, in turn, “are triggers that fully activate the love network.” Tell that to your sweetheart on Valentine’s Day. (via Your Brain in Love - Scientific American)
USC neuroscientists have systematically created the first map of the core white-matter “scaffold” (connections) of the human brain — the critical communications network that supports brain function.
Their work, published Feb. 11 in the open-access journal Frontiers in Human Neuroscience, has major implications for understanding brain injury and disease, the researchers say.
By detailing the connections that have the greatest influence over all other connections, the researchers offer a landmark first map of core white matter pathways and also show which connections may be most vulnerable to damage.
“We coined the term white matter ‘scaffold’ because this network defines the information architecture which supports brain function,” said senior author John Darrell Van Horn of the USC Institute for Neuroimaging and Informatics and the Laboratory of Neuro Imaging.
“While all connections in the brain have their importance, there are particular links which are the major players,” Van Horn said.
Using MRI data from a large sample of 110 individuals, lead author Andrei Irimia, also of the USC Institute for Neuroimaging and Informatics, and Van Horn systematically simulated the effects of damaging each white matter pathway.
A new brain region that appears to help humans identify whether they have made bad decisions has been discovered by researchers. About the size and shape of a large Brussels sprout, this ball of neural tissue seems to be crucial for the kind of flexible thought that allows us to consider switching to a more promising course of action. While other brain parts keep track of how well, or not, our decisions are working for us, the new structure is more outward-looking, and mulls over what we might have done instead. Scientists spotted the region, named the lateral frontal pole, after scanning the brains of healthy humans in two different ways. Further scans failed to find any comparable region in monkeys, suggesting the area is exclusive to humans. “We know there are differences between humans and monkeys. But it is surprising how many similarities there can be, and how a couple of differences can mean our behaviour is so far removed from them,” said Matthew Rushworth, a professor of cognitive neuroscience, who led the study at Oxford University. “There are a few brain areas that monitor how good our choices are, and that is a very sensible thing to have. But this region monitors how good the choices are that we didn’t take. It tells us how green the grass is on the other side of the fence.” The remarkable finding highlights how much scientists have to learn about the human brain and how cutting-edge lab techniques are redrawing the map of the most complex organ in the known universe. One expert who spoke to the Guardian said the work was “stunning” and could pave the way for fresh advances in understanding psychiatric diseases. Details of the work are published in the journal Neuron.
MIT neuroscientists have discovered how two neural circuits in the brain work together to control the formation of time-linked memories, such as the sound of skidding tires followed by a car crash. This is a critical ability that helps the brain to determine when it needs to take action to defend against a potential threat, says Susumu Tonegawa, the Picower Professor of Biology and Neuroscience and senior author of a paper describing the findings in the Jan. 23 issue of Science. “It’s important for us to be able to associate things that happen with some temporal gap,” says Tonegawa, who is a member of MIT’s Picower Institute for Learning and Memory. “For animals it is very useful to know what events they should associate, and what not to associate.” The interaction of these two circuits allows the brain to maintain a balance between becoming too easily paralyzed with fear and being too careless, which could result in being caught off guard by a predator or other threat. The paper’s lead authors are Picower Institute postdocs Takashi Kitamura and Michele Pignatelli.
In the blink of an eye
MIT neuroscientists find the brain can identify images seen for as little as 13 milliseconds.
Imagine seeing a dozen pictures flash by in a fraction of a second. You might think it would be impossible to identify any images you see for such a short time. However, a team of neuroscientists from MIT has found that the human brain can process entire images that the eye sees for as little as 13 milliseconds — the first evidence of such rapid processing speed. That speed is far faster than the 100 milliseconds suggested by previous studies. In the new study, which appears in the journal Attention, Perception, and Psychophysics, researchers asked subjects to look for a particular type of image, such as “picnic” or “smiling couple,” as they viewed a series of six or 12 images, each presented for between 13 and 80 milliseconds. “The fact that you can do that at these high speeds indicates to us that what vision does is find concepts. That’s what the brain is doing all day long — trying to understand what we’re looking at,” says Mary Potter, an MIT professor of brain and cognitive sciences and senior author of the study. This rapid-fire processing may help direct the eyes, which shift their gaze three times per second, to their next target, Potter says. “The job of the eyes is not only to get the information into the brain, but to allow the brain to think about it rapidly enough to know what you should look at next. So in general we’re calibrating our eyes so they move around just as often as possible consistent with understanding what we’re seeing,” she says. Other authors of the paper are former MIT postdoc Brad Wyble, now at Pennsylvania State University, postdoc Carl Hagmann, and research assistant Emily McCourt. (via In the blink of an eye - MIT News Office)
There’s a quiet revolution underway in theoretical physics. For as long as the discipline has existed, physicists have been reluctant to discuss consciousness, considering it a topic for quacks and charlatans. Indeed, the mere mention of the ‘c’ word could ruin careers. That’s finally beginning to change thanks to a fundamentally new way of thinking about consciousness that is spreading like wildfire through the theoretical physics community. And while the problem of consciousness is far from being solved, it is finally being formulated mathematically as a set of problems that researchers can understand, explore and discuss. Today, Max Tegmark, a theoretical physicist at the Massachusetts Institute of Technology in Cambridge, sets out the fundamental problems that this new way of thinking raises. He shows how these problems can be formulated in terms of quantum mechanics and information theory. And he explains how thinking about consciousness in this way leads to precise questions about the nature of reality that the scientific process of experiment might help to tease apart. Tegmark’s approach is to think of consciousness as a state of matter, like a solid, a liquid or a gas. “I conjecture that consciousness can be understood as yet another state of matter. Just as there are many types of liquids, there are many types of consciousness,” he says. He goes on to show how the particular properties of consciousness might arise from the physical laws that govern our universe. And he explains how these properties allow physicists to reason about the conditions under which consciousness arises and how we might exploit it to better understand why the world around us appears as it does.