81 posts tagged Ai
An Equation for Intelligence: Alex Wissner-Gross at TEDxBeaconStreet
What is the most intelligent way to behave? Dr. Wissner-Gross explains how the latest research findings in physics, computer science, and animal behavior suggest that the smartest actions — from the dawn of human tool use all the way up to modern business and financial strategy — are all driven by the single fundamental principle of keeping future options as open as possible. Consequently, he argues, intelligence itself may be viewed as an engine for maximizing future freedom of action. With broad implications for fields ranging from management and investing to artificial intelligence, Dr. Wissner-Gross’s message reveals a profound new connection between intelligence and freedom.
Dr. Alexander D. Wissner-Gross is an award-winning scientist, inventor, and entrepreneur. He serves as an Institute Fellow at the Harvard University Institute for Applied Computational Science and as a Research Affiliate at the MIT Media Laboratory.
Two humans - one Norwegian and one Indian - have been competing for the World Chess Championship. Neither of them would fancy their chances against the best computers. The machines have come a long way and their progress has taken us closer to achieving artificial intelligence.
In 1968 chess master David Levy made a bet that by 1978 no computer could beat him in a series of games. He won the bet. In fact, it took most of the 1980s before he was finally beaten. “After I won the first bout, I made a second bet for a period of five years. I stopped betting after that. At that point I could see what was coming.” In 1997, the then world’s best player, Garry Kasparov, was beaten by the IBM computer Deep Blue in a controversial series. Today, the world’s best player, Magnus Carlsen, would be foolish to make a Levy-style bet. The best computers would beat him. But the progress that computers have made against one task - beating the best humans at chess - offers a lesson for the whole way people think about the future of artificial intelligence. (via BBC News - The unwinnable game)
A chatbot named Mitsuku has won the Loebner Prize 2013, announced over the weekend, beating out three other contestants for the top prize of a bronze medal and $4,000. Mitsuku was created by its botmaster, Steve Worswick.

But wait a minute. What is a chatbot? A chatbot is a humanlike character with conversational skills, simulated through artificial intelligence. Eliza, developed between 1964 and 1966, was the first step into programmed chatterbots, designed to simulate a conversation with one or more human users. The Eliza program was based on a human mode of interaction typified by a Rogerian therapist, trained not to make any creative input to a conversation but only to keep it going so that patients could explore their own feelings. “Talking to a Rogerian therapist is very like talking to a brick wall with a slightly clever echo,” wrote Mike James in iProgrammer.

But wait another minute. What is the Loebner Prize? This is an annual competition created by businessman Hugh Loebner, and an embodiment of the Turing test: the chatbots try to fool the judges into assessing their answers as coming from humans. The contest stages an event around mathematician Alan Turing’s suggestion, made in the 1950s, that if a computer answered questions as convincingly as a human could, then the machine could reasonably be said to be thinking. The Turing Test emerged as a way to assess the intelligence of computer programs. Loebner has offered a prize of $100,000 for the first computer program that meets Turing’s standard for artificial intelligence, but no chatbot creator has ever achieved that level and the top-tier cash has gone unclaimed.

The four finalists at the 2013 event in Northern Ireland had to undergo four rounds of questioning with the competition judges. Mitsuku was declared the most convincing conversationalist.
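The Rogerian “clever echo” technique is easy to sketch. The following is an illustrative toy, not Weizenbaum’s original Eliza script or Mitsuku’s code: match a keyword pattern, reflect pronouns back at the speaker, and otherwise just keep the conversation going.

```python
import re

# Pronoun reflections: "my exams" becomes "your exams" when echoed back.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

# A couple of hypothetical keyword rules (a real Eliza script has many).
RULES = [
    (re.compile(r"i am (.*)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"i feel (.*)", re.I), "Tell me more about feeling {0}."),
]

def reflect(fragment: str) -> str:
    """Swap first-person words for second-person ones."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(text: str) -> str:
    for pattern, template in RULES:
        m = pattern.match(text.strip())
        if m:
            return template.format(reflect(m.group(1)))
    # No keyword matched: the "brick wall" fallback that keeps things going.
    return "Please go on."

print(respond("I am worried about my exams"))
# → Why do you say you are worried about your exams?
```

No creative input is ever generated; everything the bot “says” is either a canned prompt or the user’s own words reflected back, which is exactly what James’s brick-wall remark describes.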
IBM takes another step towards brain-like computing
Researchers at IBM have developed a programming model for its theoretical chip architecture based on the functions of the brain
IBM has today claimed a significant breakthrough in what it calls “cognitive computing” - the development of fundamental computing components that mimic the functions of the brain. Back in 2011, the IT giant said that it had successfully built a simulation of a theoretical chip architecture based on the brain’s system of neurons and synapses. In conventional computers, memory and processing are handled by separate components. In IBM’s theoretical chip architecture, these functions are performed by a network of simulated neurons and synapses. This, the company claims, will allow computers to process large volumes of sensory data input much more efficiently than is possible today. “Increasingly, computers will gather huge quantities of data, reason over the data and learn from their interactions with information and people,” wrote Dr Dharmendra S. Modha, principal investigator and senior manager at IBM Research. “These new capabilities will help us penetrate complexities and make better decisions about everything from how to manage cities to how to solve confounding business problems.” (via IBM takes another step towards to brain-like computing | Information Age)
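The contrast with conventional computers can be made concrete with a toy spiking neuron. This is a generic leaky integrate-and-fire sketch, not IBM’s actual chip design: the point is that each neuron both stores state (synaptic weights and a membrane potential) and computes (integrating inputs and firing at a threshold), so memory and processing are co-located rather than split into separate components.

```python
class Neuron:
    """Toy leaky integrate-and-fire neuron (illustrative only)."""

    def __init__(self, weights, threshold=1.0, leak=0.9):
        self.weights = weights      # synaptic state - the "memory"
        self.threshold = threshold
        self.leak = leak
        self.potential = 0.0        # membrane state, also stored locally

    def step(self, inputs):
        # Integrate weighted input spikes with leaky decay - the "processing"
        self.potential = self.leak * self.potential + sum(
            w * x for w, x in zip(self.weights, inputs))
        if self.potential >= self.threshold:
            self.potential = 0.0    # reset after firing
            return 1                # emit a spike
        return 0

n = Neuron([0.4, 0.3])
spikes = [n.step([1, 1]) for _ in range(4)]
print(spikes)
# → [0, 1, 0, 1]
```

Note there is no separate memory bus here: the neuron’s past is encoded entirely in its own potential and weights, which is the property the post attributes to IBM’s neuron-and-synapse network.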
Artificial Superintelligence: A Futuristic Approach, by Roman V. Yampolskiy
Indiegogo fundraiser for Roman V. Yampolskiy’s book. The book will present research aimed at making sure that emerging superintelligence is beneficial to humanity. Many philosophers, futurologists and artificial intelligence researchers have conjectured that in the next 20 to 200 years a machine capable of at least human-level performance on all tasks will be developed. Since such a machine would, among other things, be capable of designing the next generation of even smarter intelligent machines, it is generally assumed that an intelligence explosion will take place shortly after such a technological self-improvement cycle begins. While specific predictions regarding the consequences of such an intelligence singularity vary from potential economic hardship to the complete extinction of humankind, many of the involved researchers agree that the issue is of utmost importance and needs to be seriously addressed. This book, “Artificial Superintelligence: A Futuristic Approach,” will directly address this issue and consolidate research aimed at making sure that emerging superintelligence is beneficial to humanity. (via Artificial Superintelligence: A Futuristic Approach | KurzweilAI)
In the world of self-driving cars and autonomous vehicle technology, Google gets most of the attention, but it’s far from being the only player in the field. Earlier this month, Mobileye, the Israeli and Dutch maker of advanced driver assistance technologies, claimed that self-driving cars “could be on the road by 2016.” Rather than Google cars’ array of radar, cameras, sensors and laser-based range finders, Mobileye wants to offer autonomous driving capability at a more affordable price point by using mainstream cameras that cost only a few hundred dollars. While cars using Mobileye’s systems, like the Audi A7, aren’t quite as “autonomous” as Google vehicles, they could help advanced driver assistance technology make it onto the road long before 2025 — the date industry experts expect driverless cars to go mainstream. With its intelligent, camera-based “traffic assist” technology expected to begin arriving this summer thanks to partnerships with five major automakers, the automotive A.I. company is looking to take advantage while its stock is still high, so to speak.
Almost as soon as it arrived as a concept, artificial intelligence has occupied a hefty portion of humans’ technological anxieties. We worry about machines taking over our jobs (and/or our emotions, and/or our lives). Even as we appreciate the ease that AI has brought to our lives — the commercial recommendations that recognize our desires, the language processing that understands our curiosities, the information indexing that satisfies them — we have been conditioned to be suspicious of intelligence that doesn’t come in the form most familiar to us: the folds of an organic brain. But what happens 10 or 20 or 50 years down the road, when artificial intelligence has expanded its capabilities — and, presumably, its role in our lives? What will that mean for humans, as a culture and as a species? In the video above, PBS’s Off Book series explores those questions. While humans have long turned to their tools to expand their capabilities, what will happen when those tools are themselves intelligent — when those tools, perhaps, have consciousness and consciences of their own? “Once somebody develops a good AI program,” NYU’s Gary Marcus says, “it doesn’t just replace one worker. It might replace millions of workers.” And that, he continued, may bring another concern when it comes to our relationship with our notional robot overlords: “What happens if they decide that we’re not useful anymore? I think we do need to think about how to build machines that are ethical. The smarter the machines get, the more important that is.”
Those who saw IBM’s Watson defeat former winners on Jeopardy! in 2011 might be forgiven for thinking that artificially intelligent computer systems are a lot brighter than they are. While Watson was able to cope with the highly stylized questions posed during the quiz, AI systems are still left wanting when it comes to common sense. That shortfall helps explain why researchers found that one of the best available AI systems has the average IQ of a four-year-old.
To see just how intelligent AI systems are, a team of artificial and natural knowledge researchers at the University of Illinois at Chicago (UIC) subjected ConceptNet 4 to the verbal portions of the Wechsler Preschool and Primary Scale of Intelligence Test, which is a standard IQ test for young children. ConceptNet 4 is an AI system developed at MIT that relies on a commonsense knowledge base created from facts contributed by thousands of people across the Web.
While the UIC researchers found that ConceptNet 4 is on average about as smart as a four-year-old child, the system performed much better at some portions of the test than others. While it did well on vocabulary and in recognizing similarities, its overall score was brought down dramatically by a bad result in comprehension, or commonsense “why” questions.
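That lopsided profile follows from how such a knowledge base works. The following is a minimal sketch with hypothetical facts, not the real ConceptNet 4 data or API: assertions are stored as (concept, relation, concept) triples, so a similarity question reduces to finding overlapping assertions, while an open-ended “why” question has no direct lookup at all.

```python
# Hypothetical commonsense assertions in ConceptNet-style triple form.
facts = {
    ("apple", "IsA", "fruit"),
    ("banana", "IsA", "fruit"),
    ("apple", "UsedFor", "eating"),
    ("knife", "UsedFor", "cutting"),
}

def properties(concept):
    """All (relation, object) assertions made about a concept."""
    return {(rel, obj) for subj, rel, obj in facts if subj == concept}

def similar(a, b):
    # "In what way are an apple and a banana alike?" -> shared assertions.
    return properties(a) & properties(b)

print(similar("apple", "banana"))
# → {('IsA', 'fruit')}
```

Set intersection answers the similarity question directly, but nothing in the store supports chaining assertions into an explanation, which is roughly the comprehension gap the UIC test exposed.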
“If a child had scores that varied this much, it might be a symptom that something was wrong,” said Robert Sloan, professor and head of computer science at UIC, and lead author on the study. “We’re still very far from programs with common sense: AI that can answer comprehension questions with the skill of a child of eight.” (via Top notch AI system about as smart as a four-year-old, lacks commonsense)