86 posts tagged AI
It’s time to build a bionic brain for smarter research
The structure of the brain reveals a network of massively interconnected, electrochemically active cells. It is known that information can be represented by changes of state within this network, but that statement falls far short of revealing how the brain supports thought, feeling, memory, intention and action. How, then, to solve this problem? The physicist Richard Feynman famously said, “What I cannot create, I do not understand”. A report published today by the Australian Academy of Science proposes applying this approach to the study of the brain by simulating the biological thought process within a new computer system. In short: build a bionic brain.

The device could be truly revolutionary. A bionic brain built on biological principles could suggest entirely new approaches to artificial intelligence. It would be a new computing resource inspiring new solutions for fail-safe smart machines. Simulating thought in a bionic brain would also provide a whole new tool with which to investigate the operation of neural circuits.

A bionic brain would offer a whole new approach to the study of not just normal mental function, but also mental disorders such as psychosis, addiction and anxiety. It would provide a new resource for examining the causes of these disorders and even testing proposed therapies. Ultimately, a bionic brain may even provide a solution for victims of brain damage or stroke by outsourcing some aspects of brain function to a prosthetic device. (via It’s time to build a bionic brain for smarter research)
Artificial intelligence: How to turn Siri into Samantha
“Siri, why do you struggle with conversations?”
“I don’t know what you mean - how about a web search for it?”
If you want the latest football scores, to add meetings to your calendar or to launch an app, today’s virtual assistants are relatively good at understanding your voice and doing what’s asked. But try to have the type of natural conversation seen in sci-fi movies featuring artificial intelligence systems - from HAL in 2001 to the sultry-voiced operating system Samantha in Spike Jonze’s Her - and you’ll find your device about as smart as a waterproof teabag.

“Google and Apple are painfully aware that their systems are not getting better fast enough, because right now Siri and Google Now and the other personal assistant type applications are all programmed by hand,” says Steve Young, professor of information engineering at the University of Cambridge. “If you speak to Siri about baseball it seems relatively intelligent, but if you ask it something much less common it doesn’t really do anything except for a web search.

“That’s an indication that the programmers have been busy trying to anticipate what people want to ask about baseball but haven’t thought about people who ask about, for example, GPU chips, because you don’t get many queries about that.” (via BBC News - Artificial intelligence: How to turn Siri into Samantha)
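Young’s point is easier to see in miniature. The sketch below is a deliberately crude caricature of the hand-coded approach he describes, not anything Apple or Google actually ships: every recognised topic needs its own hand-written rule, and anything the programmers did not anticipate drops through to a web search. All names in it are invented.

```python
# Hypothetical caricature of a hand-coded assistant: each anticipated
# domain gets its own rule; everything else falls back to web search.

def handle_query(query: str) -> str:
    q = query.lower()
    if "baseball" in q:                      # a domain the team anticipated
        return lookup_scores("baseball")     # hand-written handler
    if "meeting" in q or "calendar" in q:    # another anticipated domain
        return add_calendar_entry(query)
    # Everything unanticipated (e.g. "GPU chips") hits the fallback:
    return f"How about a web search for '{query}'?"

def lookup_scores(sport: str) -> str:
    return f"Here are the latest {sport} scores..."

def add_calendar_entry(details: str) -> str:
    return f"Added to your calendar: {details}"

print(handle_query("What were last night's baseball scores?"))
print(handle_query("Tell me about GPU chips"))  # falls back to search
```

Scaling this up means anticipating every topic by hand, which is exactly the bottleneck Young identifies.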
Supercomputer Takes 40 Minutes To Model 1 Second of Brain Activity
Despite rumors, the singularity, or point at which artificial intelligence can overtake human smarts, still isn’t quite here. One of the world’s most powerful supercomputers is still no match for the humble human brain, taking 40 minutes to replicate a single second of brain activity.

Researchers in Germany and Japan used K, the fourth-most powerful supercomputer in the world, to simulate brain activity. With more than 700,000 processor cores and 1.4 million gigabytes of RAM, K simulated the interplay of 1.73 billion nerve cells and more than 10 trillion synapses, or junctions between brain cells. Though that may sound like a lot of brain cells and connections, it represents just 1 percent of the human brain’s network.

The long-term goal is to make computing so fast that it can simulate the mind, brain cell by brain cell, in real time. That may be feasible by the end of the decade, researcher Markus Diesmann, of the University of Freiburg, told the Telegraph. (via Supercomputer Takes 40 Minutes To Model 1 Second of Brain Activity | LiveScience)
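The quoted figures invite a quick back-of-envelope check. Assuming simulation cost scales roughly linearly with network size (a big assumption; in practice communication costs tend to grow faster), here is what they imply:

```python
# Back-of-envelope arithmetic from the figures quoted above.
# Assumes (optimistically) that cost scales linearly with network size.

sim_wall_time_s = 40 * 60        # 40 minutes of compute...
sim_biological_s = 1             # ...to model 1 second of activity
fraction_of_brain = 0.01         # the model covered ~1% of the network

slowdown = sim_wall_time_s / sim_biological_s
print(f"Slowdown at 1% scale: {slowdown:.0f}x real time")        # 2400x

full_brain_slowdown = slowdown / fraction_of_brain
print(f"Implied slowdown at full scale: {full_brain_slowdown:.0f}x")  # 240000x
```

So reaching real-time, whole-brain simulation would mean closing a gap of roughly 240,000x under these assumptions, which puts the end-of-decade target in perspective.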
The age of artificial intelligence is here
Computers can now learn from their mistakes, and this will usher the digital world into a new era in 2014, according to today’s N.Y. Times print edition. The vision of artificial intelligence is now real. The first commercial version of the new kind of computer chip is scheduled to be released in 2014. Not only can it automate tasks that now require painstaking programming, for example moving a robot’s arm smoothly and efficiently, but it can also sidestep and even tolerate errors, potentially making the term “computer crash” obsolete.

This all relates to the technology that would come when systems are self-aware: systems that perceive their environments and take actions to maximize their chances of success. The new computing approach, already in use by some large technology companies, is based on the biological nervous system, specifically on how neurons react to stimuli and connect with other neurons to interpret information. It allows computers to absorb new information while carrying out a task, and adjust what they do based on the changing signals.

A new generation of artificial intelligence systems will perform some functions that humans do with ease: see, speak, listen, navigate, manipulate and control. That could have enormous consequences for tasks like facial and speech recognition, navigation and planning; the biometrics age is fast developing facial, iris and palm sensory recognition and voice characteristics…

“We’re moving from engineering computing systems to something that has many of the characteristics of biological computing,” said Larry Smarr, an astrophysicist who directs the California Institute for Telecommunications and Information Technology, one of many research centers devoted to developing these new kinds of computer circuits. (via The age of artificial intelligence is here - San Diego Technology | Examiner.com)
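As a toy illustration of “adjusting while carrying out a task”, here is a minimal online-learning sketch: a single estimate that corrects itself after every mistake. It captures the flavour of adaptation the article describes, not the neuromorphic chips themselves.

```python
# Minimal online-learning toy: one parameter tracks a drifting signal
# by correcting itself after each error. Illustrates the "learn from
# mistakes mid-task" idea only; no specific chip or product implied.

signal = [1.0, 1.1, 0.9, 3.0, 3.2, 2.9]  # the target drifts mid-stream
estimate = 0.0
learning_rate = 0.5

for observed in signal:
    error = observed - estimate        # the "mistake"
    estimate += learning_rate * error  # adjust immediately, mid-task
    print(f"observed={observed:.1f}  estimate={estimate:.2f}")
```

Note that nothing is reprogrammed when the signal jumps from around 1.0 to around 3.0; the system simply keeps adjusting, which is the contrast with painstakingly hand-programmed behaviour.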
An Equation for Intelligence: Alex Wissner-Gross at TEDxBeaconStreet
What is the most intelligent way to behave? Dr. Wissner-Gross explains how the latest research findings in physics, computer science, and animal behavior suggest that the smartest actions — from the dawn of human tool use all the way up to modern business and financial strategy — are all driven by the single fundamental principle of keeping future options as open as possible. Consequently, he argues, intelligence itself may be viewed as an engine for maximizing future freedom of action. With broad implications for fields ranging from management and investing to artificial intelligence, Dr. Wissner-Gross’s message reveals a profound new connection between intelligence and freedom.
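The “equation for intelligence” of the title is the causal entropic force from Wissner-Gross and Freer’s 2013 Physical Review Letters paper “Causal Entropic Forces”, which formalises “keeping future options open” as a force pointing toward states with the most reachable futures:

```latex
% Causal entropic force (Wissner-Gross & Freer, PRL 110, 168702, 2013).
% T_c is a "causal temperature" setting the strength of the drive;
% S_c(X, \tau) is the entropy of the paths available to macrostate X
% over a future horizon \tau.
F(X_0, \tau) = T_c \, \nabla_X S_c(X, \tau) \Big|_{X_0}
```

Loosely: a system governed by this force acts so as to maximise the diversity of futures still reachable within time τ.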
Dr. Alexander D. Wissner-Gross is an award-winning scientist, inventor, and entrepreneur. He serves as an Institute Fellow at the Harvard University Institute for Applied Computational Science and as a Research Affiliate at the MIT Media Laboratory.
The unwinnable game
Two humans - one Norwegian and one Indian - have been competing for the World Chess Championship. Neither of them would fancy their chances against the best computers. The machines have come a long way, and their progress has taken us closer to achieving artificial intelligence.
In 1968 chess master David Levy made a bet that by 1978 no computer could beat him in a series of games. He won the bet. In fact, it took most of the 1980s before he was finally beaten. “After I won the first bout, I made a second bet for a period of five years. I stopped betting after that. At that point I could see what was coming.”

In 1997, the best player in the world, Garry Kasparov, was beaten by the IBM computer Deep Blue in a controversial series. Today, the world’s best player, Magnus Carlsen, would be foolish to make a Levy-style bet. The best computers would beat him.

But the progress that computers have made against one task - beating the best humans at chess - offers a lesson for the whole way people think about the future of artificial intelligence. (via BBC News - The unwinnable game)
A chatbot named Mitsuku has won the Loebner Prize 2013, announced over the weekend, beating out three other contestants for the top prize of a bronze medal and $4,000. Mitsuku is the work of its botmaster, Steve Worswick.

But wait a minute. What is a chatbot? A chatbot is a humanlike character with conversational skills, simulated through artificial intelligence. Eliza, developed between 1964 and 1966, was the first step into programmed chatterbots, designed to simulate a conversation with one or more human users. The Eliza program was based on a human mode of interaction typified by a Rogerian therapist, trained not to make any creative input to a conversation but only to keep it going so that patients could explore their own feelings. “Talking to a Rogerian therapist is very like talking to a brick wall with a slightly clever echo,” wrote Mike James in iProgrammer.

But wait another minute. What is the Loebner Prize? It is an annual competition created by businessman Hugh Loebner and built around the Turing test: the chatbots try to fool the judges into believing their answers come from humans. The contest stages an event around mathematician Alan Turing’s suggestion, made in the 1950s, that if a computer answered questions as convincingly as a human could, then the machine could reasonably be said to be thinking. The Turing test emerged as a way to assess the intelligence of computer programs.

Loebner has offered a prize of $100,000 for the computer program that meets Turing’s standard for artificial intelligence, but no chatbot creator has ever achieved that level and the top-tier cash has gone unclaimed. The four finalists at the 2013 event in Northern Ireland had to undergo four rounds of questioning with the competition judges. Mitsuku was declared the most convincing conversationalist.
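To make the Rogerian “clever echo” concrete, here is a minimal Eliza-style sketch: pattern matching plus pronoun reflection, which is the whole trick. It adds nothing to the conversation; it only turns your statements back into questions. This illustrates the technique, not Weizenbaum’s original code or Mitsuku’s engine.

```python
import re

# Minimal Eliza-style "Rogerian echo": match a pattern, reflect the
# pronouns, and hand the user's own words back as a question.

REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "I", "your": "my"}

RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)",   "How long have you been {0}?"),
    (r"my (.*)",     "Tell me more about your {0}."),
    (r"(.*)",        "Please go on."),          # fallback keeps it going
]

def reflect(fragment: str) -> str:
    # Swap first- and second-person words so the echo makes sense.
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(statement: str) -> str:
    text = statement.lower().strip(".!")
    for pattern, template in RULES:
        match = re.match(pattern, text)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
    return "Please go on."

print(respond("I feel ignored by my computer"))
# -> "Why do you feel ignored by your computer?"
```

Modern prize-winning bots like Mitsuku use far larger rule bases, but the judge-fooling strategy descends directly from this echo trick.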
IBM takes another step towards brain-like computing
Researchers at IBM have developed a programming model for the company’s theoretical chip architecture, based on the functions of the brain
IBM has today claimed a significant breakthrough in what it calls “cognitive computing” - the development of fundamental computing components that mimic the functions of the brain. Back in 2011, the IT giant said that it had successfully built a simulation of a theoretical chip architecture based on the brain’s system of neurons and synapses.

In conventional computers, memory and processing are handled by separate components. In IBM’s theoretical chip architecture, these functions are performed by a network of simulated neurons and synapses. This, the company claims, will allow computers to process large volumes of sensory data input much more efficiently than is possible today.

“Increasingly, computers will gather huge quantities of data, reason over the data and learn from their interactions with information and people,” wrote Dr Dharmendra S. Modha, principal investigator and senior manager at IBM Research. “These new capabilities will help us penetrate complexities and make better decisions about everything from how to manage cities to how to solve confounding business problems.” (via IBM takes another step towards brain-like computing | Information Age)
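For a feel of what “neurons and synapses instead of separate memory and processing” means, here is a toy leaky integrate-and-fire neuron in a few lines: each unit’s membrane potential is its memory, and its update rule is its processor. This is a generic textbook model, not IBM’s architecture.

```python
# Toy leaky integrate-and-fire neuron: a standard textbook model, not
# IBM's design. State (membrane potential) and computation (the update
# rule) live in the same unit, unlike a conventional CPU/RAM split.

def simulate(input_current, threshold=1.0, leak=0.9):
    potential = 0.0
    spikes = []
    for t, current in enumerate(input_current):
        potential = potential * leak + current  # integrate input, leak charge
        if potential >= threshold:              # fire on crossing threshold
            spikes.append(t)
            potential = 0.0                     # reset after the spike
    return spikes

# A steady drip of input makes the neuron fire periodically:
print(simulate([0.3] * 20))   # -> [3, 7, 11, 15, 19]
```

Wire many such units together, with spikes feeding other neurons through weighted synapses, and you get the kind of event-driven network IBM’s simulated architecture is built around.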