70 posts tagged AI
Kevin Drum on why the robots will rise up and take all our jobs
We’ve had technologies that save labor and increase productivity for years. What makes artificial intelligence different?
Kevin Drum: The difference is that, in the Industrial Revolution, we got big productivity increases from steam engines but there were still people required to run those machines. We had a huge increase in the amount of stuff you could make, but you needed people to design the machines, and make the machines, and use the machines.
With the digital revolution, the difference is that smart machines provide both power and intelligence. You don’t need human beings for anything anymore. You don’t need them for power, or for the intelligence to use the power. It puts everyone out of work eventually. Because smart machines will become as smart as human beings, there simply is not a job that a machine can’t do on its own. (via Kevin Drum on why the robots will rise up and take all our jobs)
When Garry Kasparov lost his second match against the IBM supercomputer Deep Blue in 1997, people predicted that computers would eventually destroy chess, both as a contest and as a spectator sport. Chess might be very complicated but it is still mathematically finite. Computers that are fed the right rules can, in principle, calculate ideal chess variations perfectly, whereas humans make mistakes. Today, anyone with a laptop can run commercial chess software that will reliably defeat all but a few hundred humans on the planet. Isn’t the spectacle of puny humans playing error-strewn chess games just a nostalgic throwback? (via Steven Poole – On algorithms)
THE MACHINE - Clip - Tribeca Film Festival - director Caradog James - Midnight Section
We live in an age where smartphones can tell us when we need to leave for the airport and Turing test competitors inch ever closer to a passing grade, but true artificial intelligence remains out there on the horizon, frustratingly out of reach. At the Tribeca Film Festival a new sci-fi action film imagines one way we might finally achieve that goal — and some of the moral and ethical problems we might not see coming. It’s called The Machine, and you’re going to want to see it.
The second feature from writer and director Caradog James, the film tells the story of Dr. Vincent McCarthy (Toby Stephens). It’s the near-future. A cold war with China has pushed the Western world into a continued economic depression, and building the first intelligent machines has become the new space race. McCarthy works for the United Kingdom’s Ministry of Defense, designing implants for brain-damaged soldiers. He’s a brilliant and driven man seemingly doing noble work — but there’s something darker there pushing him on. There’s also the matter of how well his research is going; there have been accidents along the way, and he’s treading in a particularly grey area of the moral spectrum.
Artificial intelligence is arguably the most useless technology that humans have ever aspired to possess. Actually, let me clarify. It would be useful to have a robot that could make independent decisions while, say, exploring a distant planet, or defusing a bomb. But the ultimate aspiration of AI was never just to add autonomy to a robot’s operating system. The idea wasn’t to enable a computer to search data faster by ‘understanding patterns’, or communicate with its human masters via natural language. The dream of AI was — and is — to create a machine that is conscious. AI means building a mechanical human being. And this goal, as supposedly rational technological projects go, is deeply strange.
Consider the ramifications of a conscious machine: one that thinks and feels like a human, an ‘electronic brain’ that dreams and ponders its own existence, falls in and out of love, writes sonnets under the moonlight, laughs when happy and cries when sad. What exactly would it be good for? What could be the point of spending billions of dollars and countless hours of precious research time in order to arrive at a replica of oneself?
Go read it.
Machine learning is how a computer (yellow) carries out a new task (red). The program adds its prior training (green), makes predictions, and completes the task. The result: the machine gets smarter. (Illustration: Darpa)
So It Begins: Darpa Sets Out to Make Computers That Can Teach Themselves
The Pentagon’s blue-sky research agency is readying a nearly four-year project to boost artificial intelligence systems by building machines that can teach themselves — while making it easier for ordinary schlubs like us to build them, too.
When Darpa talks about artificial intelligence, it’s not talking about modeling computers after the human brain. That path fell out of favor among computer scientists years ago as a means of creating artificial intelligence; we’d have to understand our own brains first before building a working artificial version of one. But the agency thinks we can build machines that learn and evolve, using algorithms — “probabilistic programming” — to parse through vast amounts of data and select the best of it. After that, the machine learns to repeat the process and do it better.
But building such machines remains really, really hard: The agency calls it “Herculean.” Development tools are scarce, which means “even a team of specially-trained machine learning experts makes only painfully slow progress.” So on April 10, Darpa is inviting scientists to a Virginia conference to brainstorm. What will follow are 46 months of development, along with annual “Summer Schools,” bringing the scientists together with “potential customers” from the private sector and the government. (via So It Begins: Darpa Sets Out to Make Computers That Can Teach Themselves | Danger Room | Wired.com)
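The core idea behind “probabilistic programming” is that you declare a model (a prior belief plus a likelihood for the data) and let a generic inference routine grind through the observations, so the machine’s estimate sharpens as it sees more. A minimal toy sketch of that loop, using grid approximation to infer a coin’s bias — an illustrative stand-in, not Darpa’s actual system:

```python
# Toy probabilistic program: declare a prior and a likelihood, then let a
# generic inference routine update beliefs as observations arrive.
# Grid approximation over a coin's heads-probability; purely illustrative.

def infer_bias(flips, grid_size=101):
    """Posterior mean of a coin's heads-probability given observed flips
    (1 = heads, 0 = tails), starting from a uniform prior."""
    grid = [i / (grid_size - 1) for i in range(grid_size)]
    posterior = [1.0 / grid_size] * grid_size       # uniform prior
    for flip in flips:                              # update on each observation
        posterior = [p * (theta if flip else 1 - theta)
                     for p, theta in zip(posterior, grid)]
        total = sum(posterior)
        posterior = [p / total for p in posterior]  # renormalise
    # point estimate: posterior mean
    return sum(p * theta for p, theta in zip(posterior, grid))

# The "machine gets smarter" loop: same model, more data, sharper estimate.
print(infer_bias([1, 1, 0]))        # loose estimate from 3 flips
print(infer_bias([1, 1, 0] * 50))   # tighter estimate from 150 flips
```

The point of the paradigm is the division of labor: the modeler writes down the prior and likelihood, and the inference machinery (here, a crude grid update) is reusable across problems — which is exactly what would make such systems buildable by “ordinary schlubs.”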
Mind of its own: building a human brain
A machine capable of thinking for itself and expressing emotion is being developed in Switzerland
At the end of last year a group of academics at the University of Cambridge asked a simple question: which developments in human technology pose ‘new, extinction-level risks to our species as a whole’?
The group, which included the philosopher Huw Price, the cosmologist Martin Rees and the founder of Skype, Jaan Tallinn, were setting up a research centre, the Cambridge Project for Existential Risk, to work out the answer, and to study those potential one-off species-ending events that are the stuff of scientists’ nightmares.
To whet the public’s appetite for destruction, they drew up a shortlist of man-made apocalyptic scenarios, which included climate change, biotechnology and nuclear war.
But it was the final item on the list, artificial intelligence (AI), that caught the imagination. ‘What happens if computers reach and exceed human capacities to write computer programs?’ Price and Tallinn asked. ‘The moment that machines are able to develop even more intelligent machines would result in an “intelligence explosion”.’ (The man who first realised this, Jack Good, who worked with Alan Turing at Bletchley Park, suggested that the creation of a machine of such sophistication would be ‘our last invention’, as ever-smarter robots left humanity far behind.)
The headlines were dramatic: “Killer robots? Cambridge brains to assess AI risk.” It was all wonderful publicity – unless, of course, you happened to be leading a scientific project of unprecedented ambition, aiming to develop supercomputers of hitherto unseen power to model the entire human brain in all its intelligent, emotional complexity.
Go read: Mind of its Own
As artificial intelligence continues to evolve in all sorts of freakish ways, people are coming up with odd ideas to entertain themselves before these robots inevitably become self-aware and destroy us all (thanks, Terminator movies, for making us forever neurotic about our future!). The website Cleverbot is a somewhat confusing online artificial intelligence that you can ask questions and chat with. One guy thought it’d be fun to use Cleverbot to help him create a short film, inserting its answers into the script as he went along.
This is the final outcome, entitled Do You Love Me? It was directed by Chris R. Wilson, who set up the short film by noting, “What follows is a movie written by a machine. I tried to talk to Cleverbot just like I would with a human writing partner. I set up scenarios and Cleverbot provided all of the dialogue content for the scene.” (via Watch a Short Film Cowritten by a Robot | Movie News | Movies.com)
When learning new stories, the software perceives intersections with other tales in its memory. When it finds connections, Xapagy uses the previous experience to predict what will occur in the rest of the story. In this way, the computer adds new material to the story based upon predictions and memories. This is the closest its developers have come to instilling creativity in AI.
When Xapagy is confronted with missing words, it fills them in with its own language based upon what makes grammatical and contextual sense. Researchers in AI see this as a tremendous advance, and with enough stories in its memory, they believe Xapagy will be able to generate unique stories of its own.
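The fill-in-the-gap behavior described here can be sketched with a toy word-prediction model: learn which words followed which in previously “read” stories, then use that experience to supply a missing word. This bigram stand-in is purely illustrative and is not Xapagy’s actual mechanism:

```python
# Toy sketch of experience-based gap filling: count word bigrams across
# previously seen stories, then predict a missing word from context.
# An illustrative n-gram stand-in, not Xapagy's actual algorithm.
from collections import Counter, defaultdict

def train(stories):
    """Build a map from each word to a Counter of the words that followed it."""
    model = defaultdict(Counter)
    for story in stories:
        words = story.lower().split()
        for prev, nxt in zip(words, words[1:]):
            model[prev][nxt] += 1
    return model

def fill_gap(model, context_word):
    """Fill a missing word with the most frequent continuation seen so far."""
    if context_word not in model:
        return None                      # no prior experience to draw on
    return model[context_word].most_common(1)[0][0]

stories = [
    "the dragon attacked the village",
    "the knight attacked the dragon",
    "the dragon burned the village",
]
model = train(stories)
print(fill_gap(model, "attacked"))  # -> the
print(fill_gap(model, "the"))       # -> dragon
```

With more stories in memory, the counts become richer and the predictions more plausible — a crude analogue of the claim that, given enough stories, the system could start generating its own.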
An interesting and worthwhile read:
…I think assigning values like “good” and “bad” to the various possible outcomes of an evolutionary process makes no sense. Evolution happens and we don’t have control over it. Whatever rules some well-wishers might put into place to prevent certain outcomes, others will find ways to work around them. It’s what Yale computer scientist David Gelernter calls “the Orwell Law of the Future,” and it goes like this: Any new technology that can be tried will be.
We are on a path, and there is no stopping it. This is neither good nor bad, it just is. Was it bad when single-celled organisms evolved into more complex organisms and then got eaten by them? I suppose the single-celled organisms weren’t psyched about it. But without that process, we humans wouldn’t be here. And if now it is our turn to be erased by evolution, so what? From the perspective of the universe, who cares if humans cease to exist?
The great irony in all this is that we can’t stop pushing forward with dangerous technologies (AI, bioengineering) because evolution has hard-wired our brains in such a way that we cannot resist pushing forward, even if the consequence of this ever-upward march of evolution is that we end up rendering ourselves extinct. We humans like to believe that we among all living creatures are special and unique. And we are, if only because we are the first species that will knowingly create something superior to ourselves. We will engineer our own replacements. Which when you think about it is both brilliant and phenomenally stupid at the same time. In other words, perfectly human.