A Momentary Flow

Updating Worldviews one World at a time

Tag Results

891 posts tagged Science

Belief in Free Will Not Threatened by Neuroscience
-
A key finding from neuroscience research over the last few decades is that non-conscious preparatory brain activity appears to precede the subjective feeling of making a decision. Some neuroscientists, like Sam Harris, have argued that this shows our sense of free will is an illusion, and that lay people would realize this too if they were given a vivid demonstration of the implications of the science. Books have even started to appear with titles like My Brain Made Me Do It: The Rise of Neuroscience and the Threat to Moral Responsibility by Eliezer J. Sternberg. However, in a new paper, a team led by Eddy Nahmias counters such claims. They believe that Harris and others (whom they dub “willusionists”) make several unfounded assumptions about the basis of most people’s sense of free will. Using a series of vivid hypothetical scenarios based on Harris’s own writings, Nahmias and his colleagues tested whether people’s belief in free will really is challenged by “neuroprediction” – the idea of neuroscientists using brain activity to predict a person’s choices – and by the related notion that mental activity is no more than brain activity. The research involved hundreds of undergrads at Georgia State University in Atlanta. They were told about a piece of wearable brain imaging technology – a cap – available in the future that would allow neuroscientists to predict a person’s decisions before they made them. They also read a story about a woman named Jill who wore the cap for a month, and how scientists predicted her every choice, including her votes in elections.
Most of the students (80 per cent) agreed that this future technology was plausible, but they didn’t think it undermined Jill’s free will. Most of them only felt her free will was threatened if they were told that the neuroscientists manipulated Jill’s brain activity to alter her decisions. Similar results were found in a follow-up study in which the scenario descriptions made clear that “all human mental activity just is brain activity”, and in another that swapped the power of brain imaging technology for the mind reading skills of a psychic. In each case, students only felt that free will was threatened if Jill’s decisions were manipulated, not if they were merely predicted via her brain activity or via her mind and soul (by the psychic).
Nahmias and his team said their results showed that most people have a “theory-lite” view of free will – they aren’t bothered by claims about mental activity being reduced to neural activity, nor by the idea that such activity precedes conscious decision-making and is readable by scientists. “Most people recognise that just because ‘my brain made me do it,’ that does not mean that I didn’t do it of my own free will,” the researchers said.
 
As neuroscience evidence increasingly enters the courtroom, these new findings have important implications for understanding how such evidence might influence legal verdicts about culpability. An obvious limitation of the research is its dependence on students in Atlanta. It will be interesting to see if the same findings apply in other cultures.
(via Belief in Free Will Not Threatened by Neuroscience | WIRED)


Artificial intelligence program that learns like a child
-
Artificial intelligence programs may already be capable of specialized tasks like flying planes, winning Jeopardy, and giving you a hard time in your favorite video games, but even the most advanced offerings are no smarter than a typical four-year-old child when it comes to broader insights and comprehension. It makes sense, then, that researchers at the University of Gothenburg have developed a program that imitates a child’s cognitive development. “We have developed a program that can learn, for example, basic arithmetic, logic, and grammar without any pre-existing knowledge,” says Claes Strannegård. Starting from a set of simple and broad definitions meant to provide a cognitive model, the program gradually builds new knowledge based on previous knowledge. From that new knowledge it draws conclusions about the rules and relations that govern the world, and it identifies new patterns that connect its insights to other domains. The process is similar to how children develop intelligence. A child can intuit, for example, that if 2 x 0 = 0 and 3 x 0 = 0, then 5 x 0 will also equal 0, or that the next number in the series “2, 5, 8” will be 11. The same kinds of intuition carry across to other areas, such as grammar, where it is easy to identify rules for standard verb conjugations from examples like sing becoming sang and run becoming ran in the past tense. “We postulate that children learn everything based on experiences and that they are always looking for general patterns,” Strannegård says. (via Artificial intelligence program that learns like a child)
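The kind of pattern induction described above can be sketched in a few lines. This is a toy illustration only, not the Gothenburg program itself (whose internals are not described here): hypothesize a general rule from examples, then apply it to a new case.

```python
# Toy sketch of child-like pattern induction (NOT the Gothenburg system):
# hypothesize a rule from examples, then use it to predict a new case.

def predict_next(seq):
    """Guess the next term by testing for a constant common difference."""
    diffs = {b - a for a, b in zip(seq, seq[1:])}
    if len(diffs) == 1:            # one repeated difference => arithmetic rule
        return seq[-1] + diffs.pop()
    return None                    # no rule found in this tiny hypothesis space

print(predict_next([2, 5, 8]))     # the "+3" pattern generalizes the series
```

A richer learner would search a larger space of candidate rules — multiplicative patterns, conjugation analogies — in the same generate-and-test spirit.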

What Is the Universe? Real Physics Has Some Mind-Bending Answers
-
Science says the universe could be a hologram, a computer program, a black hole or a bubble—and there are ways to check
-
The questions are as big as the universe and (almost) as old as time: Where did I come from, and why am I here? That may sound like a query for a philosopher, but if you crave a more scientific response, try asking a cosmologist. This branch of physics is hard at work trying to decode the nature of reality by matching mathematical theories with a bevy of evidence. Today most cosmologists think that the universe was created during the big bang about 13.8 billion years ago, and it is expanding at an ever-increasing rate. The cosmos is woven into a fabric we call space-time, which is embroidered with a cosmic web of brilliant galaxies and invisible dark matter. It sounds a little strange, but piles of pictures, experimental data and models compiled over decades can back up this description. And as new information gets added to the picture, cosmologists are considering even wilder ways to describe the universe—including some outlandish proposals that are nevertheless rooted in solid science:

The universe is a hologram

Look at a standard hologram, printed on a 2D surface, and you’ll see a 3D projection of the image. Decrease the size of the individual dots that make up the image, and the hologram gets sharper. In the 1990s, physicists realized that something like this could be happening with our universe.

Classical physics describes the fabric of space-time as a four-dimensional structure, with three dimensions of space and one of time. Einstein’s theory of general relativity says that, at its most basic level, this fabric should be smooth and continuous. But that was before quantum mechanics leapt onto the scene. While relativity is great at describing the universe on visible scales, quantum physics tells us all about the way things work on the level of atoms and subatomic particles. According to quantum theories, if you examine the fabric of space-time closely enough, it should be made of teeny-tiny grains of information, each a hundred billion billion times smaller than a proton.
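As a rough scale check (the numerical values below are standard physics figures, assumed here rather than taken from the article): “a hundred billion billion” is 10^20, and shrinking a proton-scale length by that factor lands near the Planck length, the scale usually associated with quantum gravity.

```python
# Back-of-envelope check of "a hundred billion billion times smaller
# than a proton". Radii are standard approximate values (assumptions,
# not figures from the article).
proton_radius = 0.84e-15        # meters (approximate proton charge radius)
factor = 1e20                   # a hundred billion billion = 100 * 1e9 * 1e9
grain_size = proton_radius / factor

planck_length = 1.6e-35         # meters (approximate)
print(f"grain ~ {grain_size:.1e} m, Planck length ~ {planck_length:.1e} m")
```

The grain scale comes out within a factor of a few of the Planck length, which is why such information grains are usually discussed in quantum-gravity terms.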

Stanford physicist Leonard Susskind and Nobel Prize winner Gerard ‘t Hooft have each presented calculations showing what happens when you try to combine quantum and relativistic descriptions of space-time. They found that, mathematically speaking, the fabric should be a 2D surface, and the grains should act like the dots in a vast cosmic image, defining the “resolution” of our 3D universe. Quantum mechanics also tells us that these grains should experience random jitters that might occasionally blur the projection and thus be detectable. Last month, physicists at the U.S. Department of Energy’s Fermi National Accelerator Laboratory started collecting data with a highly sensitive arrangement of lasers and mirrors called the Holometer. This instrument is finely tuned to pick up minuscule motion in space-time and reveal whether it is in fact grainy at the smallest scale. The experiment should gather data for at least a year, so we may know soon enough if we’re living in a hologram.

The universe is a computer simulation

Just like the plot of The Matrix, you may be living in a highly advanced computer program and not even know it. Some version of this thinking has been debated since long before Keanu uttered his first “whoa”. Plato wondered whether the world as we perceive it is an illusion, and modern mathematicians grapple with the reason math is universal — why is it that no matter when or where you look, 2 + 2 must always equal 4? Maybe because that is a fundamental part of the way the universe was coded.

In 2012, physicists at the University of Washington in Seattle said that if we do live in a digital simulation, there might be a way to find out. Standard computer models are based on a 3D grid, and sometimes the grid itself generates specific anomalies in the data. If the universe is a vast grid, the motions and distributions of high-energy particles called cosmic rays may reveal similar anomalies—a glitch in the Matrix—and give us a peek at the grid’s structure. A 2013 paper by MIT engineer Seth Lloyd builds the case for an intriguing spin on the concept: If space-time is made of quantum bits, the universe must be one giant quantum computer. Of course, both notions raise a troubling quandary: If the universe is a computer program, who or what wrote the code?

(via What Is the Universe? Real Physics Has Some Mind-Bending Answers | Science | Smithsonian)

Diversity is not only about bringing different perspectives to the table. Simply adding social diversity to a group makes people believe that differences of perspective might exist among them and that belief makes people change their behavior. Members of a homogeneous group rest somewhat assured that they will agree with one another; that they will understand one another’s perspectives and beliefs; that they will be able to easily come to a consensus. But when members of a group notice that they are socially different from one another, they change their expectations. They anticipate differences of opinion and perspective. They assume they will need to work harder to come to a consensus. This logic helps to explain both the upside and the downside of social diversity: people work harder in diverse environments both cognitively and socially. They might not like it, but the hard work can lead to better outcomes.

How Diversity Makes Us Smarter - Scientific American

The key to understanding the positive influence of diversity is the concept of informational diversity. When people are brought together to solve problems in groups, they bring different information, opinions and perspectives. This makes obvious sense when we talk about diversity of disciplinary backgrounds—think again of the interdisciplinary team building a car. The same logic applies to social diversity. People who are different from one another in race, gender and other dimensions bring unique information and experiences to bear on the task at hand. A male and a female engineer might have perspectives as different from one another as an engineer and a physicist—and that is a good thing.

How Diversity Makes Us Smarter - Scientific American
Neuroscientists identify key role of language gene
-
Neuroscientists have found that a gene mutation that arose more than half a million years ago may be key to humans’ unique ability to produce and understand speech. Researchers from MIT and several European universities have shown that the human version of a gene called Foxp2 makes it easier to transform new experiences into routine procedures. When they engineered mice to express humanized Foxp2, the mice learned to run a maze much more quickly than normal mice. The findings suggest that Foxp2 may help humans with a key component of learning language — transforming experiences, such as hearing the word “glass” when we are shown a glass of water, into a nearly automatic association of that word with objects that look and function like glasses, says Ann Graybiel, an MIT Institute Professor, member of MIT’s McGovern Institute for Brain Research, and a senior author of the study. “This really is an important brick in the wall saying that the form of the gene that allowed us to speak may have something to do with a special kind of learning, which takes us from having to make conscious associations in order to act to a nearly automatic-pilot way of acting based on the cues around us,” Graybiel says. Wolfgang Enard, a professor of anthropology and human genetics at Ludwig-Maximilians University in Germany, is also a senior author of the study, which appears in the Proceedings of the National Academy of Sciences this week. The paper’s lead authors are Christiane Schreiweis, a former visiting graduate student at MIT, and Ulrich Bornschein of the Max Planck Institute for Evolutionary Anthropology in Germany. All animal species communicate with each other, but humans have a unique ability to generate and comprehend language. Foxp2 is one of several genes that scientists believe may have contributed to the development of these linguistic skills. 
The gene was first identified in a group of family members who had severe difficulties in speaking and understanding speech, and who were found to carry a mutated version of the Foxp2 gene. In 2009, Svante Pääbo, director of the Max Planck Institute for Evolutionary Anthropology, and his team engineered mice to express the human form of the Foxp2 gene, which encodes a protein that differs from the mouse version by only two amino acids. His team found that these mice had longer dendrites — the slender extensions that neurons use to communicate with each other — in the striatum, a part of the brain implicated in habit formation. They were also better at forming new synapses, or connections between neurons. (via Neuroscientists identify key role of language gene — ScienceDaily)

John Wilkins - Philosophy of Evolutionary Biology
-
The philosophy of biology is a subfield of philosophy of science, which deals with epistemological, metaphysical, and ethical issues in the biological and biomedical sciences. Although philosophers of science and philosophers generally have long been interested in biology (e.g., Aristotle, Descartes, and even Kant), philosophy of biology only emerged as an independent field of philosophy in the 1960s and 1970s. Philosophers of science then began paying increasing attention to biology, from the rise of Neodarwinism in the 1930s and 1940s to the discovery of the structure of DNA in 1953 to more recent advances in genetic engineering. Other key ideas include the reduction of all life processes to biochemical reactions, and the incorporation of psychology into a broader neuroscience.

Evolution’s Random Paths Lead to One Place
-
A massive statistical study suggests that the final evolutionary outcome — fitness — is predictable.
-
In his fourth-floor lab at Harvard University, Michael Desai has created hundreds of identical worlds in order to watch evolution at work. Each of his meticulously controlled environments is home to a separate strain of baker’s yeast. Every 12 hours, Desai’s robot assistants pluck out the fastest-growing yeast in each world — selecting the fittest to live on — and discard the rest. Desai then monitors the strains as they evolve over the course of 500 generations. His experiment, which other scientists say is unprecedented in scale, seeks to gain insight into a question that has long bedeviled biologists: If we could start the world over again, would life evolve the same way? Many biologists argue that it would not, that chance mutations early in the evolutionary journey of a species will profoundly influence its fate. “If you replay the tape of life, you might have one initial mutation that takes you in a totally different direction,” Desai said, paraphrasing an idea first put forth by the biologist Stephen Jay Gould in the 1980s. Desai’s yeast cells call this belief into question. According to results published in Science in June, all of Desai’s yeast varieties arrived at roughly the same evolutionary endpoint (as measured by their ability to grow under specific lab conditions) regardless of which precise genetic path each strain took. It’s as if 100 New York City taxis agreed to take separate highways in a race to the Pacific Ocean, and 50 hours later they all converged at the Santa Monica pier. The findings also suggest a disconnect between evolution at the genetic level and at the level of the whole organism. Genetic mutations occur mostly at random, yet the sum of these aimless changes somehow creates a predictable pattern.
The distinction could prove valuable, as much genetics research has focused on the impact of mutations in individual genes. For example, researchers often ask how a single mutation might affect a microbe’s tolerance for toxins, or a human’s risk for a disease. But if Desai’s findings hold true in other organisms, they could suggest that it’s equally important to examine how large numbers of individual genetic changes work in concert over time. “There’s a kind of tension in evolutionary biology between thinking about individual genes and the potential for evolution to change the whole organism,” said Michael Travisano, a biologist at the University of Minnesota. “All of biology has been focused on the importance of individual genes for the last 30 years, but the big take-home message of this study is that’s not necessarily important.” (via Yeast Study Suggests Genetics Are Random but Evolution Is Not | Simons Foundation)
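The convergence result can be illustrated with a toy simulation — emphatically not Desai’s actual protocol, just a cartoon with assumed parameters: replicate populations mutate at random, only the fittest fraction survives each cycle, and despite independent mutational paths the replicates end at similar mean fitness.

```python
import random

def evolve(generations=500, pop=100, seed=None):
    """Cartoon of serial selection: mutate, keep the fittest, repopulate.

    Each cycle, every variant gets a small random fitness mutation and
    only the top 10% survive to found the next generation -- a toy
    version of picking the fastest-growing yeast every 12 hours.
    """
    rng = random.Random(seed)
    fitness = [1.0] * pop
    for _ in range(generations):
        fitness = [f + rng.gauss(0, 0.01) for f in fitness]  # random mutations
        fitness.sort(reverse=True)
        survivors = fitness[: pop // 10]                     # truncation selection
        fitness = [rng.choice(survivors) for _ in range(pop)]
    return sum(fitness) / pop                                # endpoint mean fitness

# Independent "worlds": different random paths, similar endpoints.
endpoints = [evolve(seed=s) for s in range(5)]
print([round(e, 2) for e in endpoints])
```

Changing the mutation distribution or the survivor fraction changes the speed of adaptation but not the qualitative convergence, echoing the study’s point that endpoint fitness is far more predictable than the genetic path taken to reach it.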
