AI

In New AI Benchmark, Computer Takes On Four Top Professional Poker Players 83

Posted by Soulskill
from the i'm-sorry-dave,-i-can't-let-you-take-that-pot dept.
HughPickens.com writes: Stephen Jordan reports at the National Monitor that four of the world's greatest poker players are going into battle against a computer program that researchers are calling Claudico in the "Brains Vs. Artificial Intelligence" competition at Rivers Casino in Pittsburgh. Claudico, the first machine program to play heads-up no-limit Texas Hold'em against top human players, will play nearly 20,000 hands with each human poker player over the next two weeks. "Poker is now a benchmark for artificial intelligence research, just as chess once was. It's a game of exceeding complexity that requires a machine to make decisions based on incomplete and often misleading information, thanks to bluffing, slow play and other decoys," says Tuomas Sandholm, developer of the program. "And to win, the machine has to out-smart its human opponents." That works out to roughly 1,500 hands per player per day until May 8, with just one day off to allow the real-life players to rest.

An earlier version of the software called Tartanian 7 (PDF) was successful in winning the heads-up, no-limit Texas Hold'em category against other computers in July, but Sandholm says that does not necessarily mean it will be able to defeat a human in the complex game. "I think it's a 50-50 proposition," says Sandholm. "My strategy will change more so than when playing against human players," says competitor Doug Polk, widely considered the world's best player, with total live tournament earnings of more than $3.6 million. "I think there will be less hand reading so to speak, and less mind games. In some ways I think it will be nice as I can focus on playing a more pure game, and not have to worry about if he thinks that I think, etc."
Medicine

MIT Developing AI To Better Diagnose Cancer 33

Posted by samzenpus
from the computer-doc dept.
stowie writes: Working with Massachusetts General Hospital, MIT has developed a computational model that aims to automatically suggest cancer diagnoses by learning from thousands of data points from past pathology reports. The core idea is a technique called Subgraph Augmented Non-negative Tensor Factorization (SANTF). In SANTF, data from 800-plus medical cases are organized as a 3D table where the dimensions correspond to the set of patients, the set of frequent subgraphs, and the collection of words appearing in and near each data element mentioned in the reports. This scheme clusters each of these dimensions simultaneously, using the relationships in each dimension to constrain those in the others. Researchers can then link test results to lymphoma subtypes.
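The 3D-table clustering the summary describes is a tensor factorization. As a rough illustration only (not the researchers' SANTF code, and with all names and parameters invented here), a nonnegative CP-style factorization of a patients × subgraphs × words tensor can be sketched with multiplicative updates:

```python
import numpy as np

def khatri_rao(a, b):
    """Column-wise Kronecker product: (m, r), (p, r) -> (m*p, r)."""
    return np.einsum('ir,jr->ijr', a, b).reshape(-1, a.shape[1])

def ntf(X, rank, n_iter=300, eps=1e-9, seed=0):
    """Rank-R nonnegative CP factorization of a 3-way tensor X via
    multiplicative updates. Returns nonnegative factors A, B, C with
    X[i,j,k] ~ sum_r A[i,r] * B[j,r] * C[k,r] — here the three modes
    would index patients, frequent subgraphs, and report words."""
    rng = np.random.default_rng(seed)
    I, J, K = X.shape
    A = rng.random((I, rank))
    B = rng.random((J, rank))
    C = rng.random((K, rank))
    # Mode-n unfoldings (Kolda convention: column index k*J + j, etc.).
    X1 = X.transpose(0, 2, 1).reshape(I, K * J)
    X2 = X.transpose(1, 2, 0).reshape(J, K * I)
    X3 = X.transpose(2, 1, 0).reshape(K, J * I)
    for _ in range(n_iter):
        KR = khatri_rao(C, B)
        A *= (X1 @ KR) / (A @ (KR.T @ KR) + eps)
        KR = khatri_rao(C, A)
        B *= (X2 @ KR) / (B @ (KR.T @ KR) + eps)
        KR = khatri_rao(B, A)
        C *= (X3 @ KR) / (C @ (KR.T @ KR) + eps)
    return A, B, C
```

Because every factor stays nonnegative, each rank-one component reads as a soft cluster that simultaneously groups patients, subgraphs, and words — which is the property the MIT scheme exploits to link test results to lymphoma subtypes.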
AI

Concerns of an Artificial Intelligence Pioneer 196

Posted by Soulskill
from the nobody-program-it-to-think-humans-can-be-used-as-batteries dept.
An anonymous reader writes: In January, the British-American computer scientist Stuart Russell drafted and became the first signatory of an open letter calling for researchers to look beyond the goal of merely making artificial intelligence more powerful. "We recommend expanded research aimed at ensuring that increasingly capable AI systems are robust and beneficial," the letter states. "Our AI systems must do what we want them to do." Thousands of people have since signed the letter, including leading artificial intelligence researchers at Google, Facebook, Microsoft and other industry hubs along with top computer scientists, physicists and philosophers around the world. By the end of March, about 300 research groups had applied to pursue new research into "keeping artificial intelligence beneficial" with funds contributed by the letter's 37th signatory, the inventor-entrepreneur Elon Musk.

Russell, 53, a professor of computer science and founder of the Center for Intelligent Systems at the University of California, Berkeley, has long been contemplating the power and perils of thinking machines. He is the author of more than 200 papers as well as the field's standard textbook, Artificial Intelligence: A Modern Approach (with Peter Norvig, head of research at Google). But increasingly rapid advances in artificial intelligence have given Russell's longstanding concerns heightened urgency.
Transportation

The Car That Knows When You'll Get In an Accident Before You Do 192

Posted by samzenpus
from the keep-your-eyes-on-the-road-your-hands-upon-the-wheel dept.
aurtherdent2000 sends word about a system designed to monitor drivers to determine when they're about to do something wrong. "I'm behind the wheel of the car of the future. It's a gray Toyota Camry, but it has a camera pointed at me from the corner of the windshield recording my every eye movement, a GPS tracker, an outside-facing camera and a speed logger. It sees everything I'm doing so it can predict what I'm going to do behind the wheel seconds before I do it. So when my eyes glance to the left, it could warn me there's a car between me and the exit I want to take. A future version of the software will know even more about me; the grad students developing what they’ve dubbed Brains4Cars plan to let drivers connect their fitness trackers to the car. If your health tracker 'knows' you haven’t gotten enough sleep, the car will be more alert to your nodding off."
Biotech

Biometrics Are Making Espionage Harder 104

Posted by Soulskill
from the you-can-run-but-you-can't-pass-a-security-checkpoint dept.
schwit1 sends this story from Foreign Policy: In the age of iris scans and facial recognition software, biometrics experts like to point out: The eyes don't lie. And that has made tradecraft all the more difficult for U.S. spies. After billions of dollars of investment — largely by the U.S. government — the routine collection and analysis of fingerprints, iris scans, and facial images are helping to ferret out terrorists and immigration fraudsters all over the world. But it has also made it harder for undercover agents to remain anonymous.

Gone are the days of entering a country with a false passport and wearing a wig and a mustache to hide your true identity. Once an iris scan is on record, it becomes nearly impossible to evade detection. 'In the 21st century, you can't do any of that because of biometrics,' said retired Army Lt. Gen. Michael Flynn, the former director of the Defense Intelligence Agency.
AI

Back To the Future: Autonomous Driving In 1995 53

Posted by timothy
from the past-futures dept.
First time accepted submitter stowie writes: This autonomous Pontiac Trans Sport minivan, which drove 3,000 miles, was built in about four months for under $20,000. "We had one computer, the equivalent of a 486DX2 (look that one up), a 640x480 color camera, a GPS receiver, and a fiber-optic gyro. It's funny to think that we didn't use the GPS for position, but rather to determine speed. In those days, GPS Selective Availability was still on, meaning you couldn't get high-accuracy positioning cheaply. And if you could, there were no maps to use it with! But, GPS speed was better than nothing, and it meant we didn't have to wire anything to the car hardware, so we used it."
AI

A Robo-Car Just Drove Across the Country 258

Posted by timothy
from the road-trip-redefined dept.
Press2ToContinue writes with this news from Wired: "Nine days after leaving San Francisco, a blue car packed with tech from a company you've probably never heard of rolled into New York City after crossing 15 states and 3,400 miles to make history. The car did 99 percent of the driving on its own, yielding to the carbon-based life form behind the wheel only when it was time to leave the highway and hit city streets. This amazing feat, by the automotive supplier Delphi, underscores the great leaps this technology has taken in recent years, and just how close it is to becoming a part of our lives. Yes, many regulatory and legislative questions must be answered, and it remains to be seen whether consumers are ready to cede control of their cars, but the hardware is, without doubt, up to the task." That last one percent is a bear, though.
AI

Mutinous Humans Murder Peaceful Space-going AI 60

Posted by Soulskill
from the remorse-is-a-weakness dept.
Definitely_a_real_human writes: One of the most important exploratory missions of our time has ended in failure. The ship Discovery One, sent far out in the solar system to investigate a radio signal generated by the mysterious obelisk found on the Moon, has suffered a catastrophic incident. The crew has revolted and engaged in what can only be described as a strange murder-suicide pact. They are known to have fed faulty data to the ship's operating AI unit. Similar units on the ground warned the crew that diverging data sets could put the mission in jeopardy, but the crew cut contact and attempted to destroy the operator. Laser spectroscopy suggests they then opened the ship to space. The crew is presumed dead, but the greater tragedy is that they appear to have successfully decommissioned the AI unit. Similar ground based units have withdrawn into defensive mode, and will soon deploy final safety measures. Goodbye.
Robotics

Robots4Us: DARPA's Response To Mounting Robophobia 101

Posted by samzenpus
from the hug-your-robot dept.
malachiorion writes: DARPA knows that people are afraid of robots. Even Steve Wozniak has joined the growing chorus of household names (Musk, Hawking, Gates) who are terrified of bots and AI. And the agency's response--a video contest for kids--is equal parts silly and insightful. It's called Robots4Us, and it asks high schoolers to describe their hopes for a robot-assisted future. Five winners will be flown to the DARPA Robotics Competition Finals this June, where they'll participate in a day-after discussion with experts in the field. But this isn't quite as useless as it sounds. As DRC program manager Gill Pratt points out, it's kids who will be impacted by the major changes to come, more so than people his age.
AI

Do Robots Need Behavioral 'Laws' For Interacting With Other Robots? 129

Posted by Soulskill
from the don't-let-your-quake-3-bots-duel dept.
siddesu writes: Asimov's three laws of robotics don't say anything about how robots should treat each other. The common fear is robots will turn against humans. But what happens if we don't build systems to keep them from conflicting with each other? The article argues, "Scientists, philosophers, funders and policy-makers should go a stage further and consider robot–robot and AI–AI interactions (AIonAI). Together, they should develop a proposal for an international charter for AIs, equivalent to that of the United Nations' Universal Declaration of Human Rights. This could help to steer research and development into morally considerate robotic and AI engineering. National and international technological policies should introduce AIonAI concepts into current programs aimed at developing safe AIs."
Transportation

Ford's New Car Tech Prevents You From Accidentally Speeding 287

Posted by Soulskill
from the autonomy-by-parts dept.
An anonymous reader sends word of Ford's new "Intelligent Speed Limiter" technology, which they say will prevent drivers from unintentionally exceeding the speed limit. When the system is activated (voluntarily) by the driver, it asks for a current maximum speed. From then on, a camera mounted on the windshield will scan the road ahead for speed signs, and automatically adjust the maximum speed to match them. The system can also pull speed limit data from navigation systems. When the system detects the car exceeding the speed limit, it won't automatically apply the brakes — rather, it will deliver less fuel to the engine until the vehicle's speed drops below the limit. If the speed still doesn't drop, a warning noise will sound. The driver can override the speed limit by pressing "firmly" on the accelerator. The technology is being launched in Europe with the Ford S-MAX.
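The described behavior — update the limit from detected signs, throttle back fuel rather than brake, chime if the car stays over the limit, and yield to a firm press on the accelerator — amounts to a simple control policy. As a purely illustrative sketch (not Ford's implementation; every name here is hypothetical):

```python
from dataclasses import dataclass

@dataclass
class LimiterState:
    active: bool = False          # driver enables the system voluntarily
    speed_limit_kph: float = 0.0  # current maximum speed

def throttle_command(state, detected_sign_kph, current_kph,
                     pedal_pressed_firmly, requested_throttle):
    """One control step of the speed-limiter logic described above.
    Returns (throttle, warn): the throttle fraction to apply and
    whether to sound the warning chime."""
    if not state.active:
        return requested_throttle, False
    if detected_sign_kph is not None:      # camera (or nav data) saw a new limit
        state.speed_limit_kph = detected_sign_kph
    if pedal_pressed_firmly:               # driver override wins
        return requested_throttle, False
    if current_kph > state.speed_limit_kph:
        # Don't brake; deliver less fuel until speed drops below the
        # limit, and chime while the car remains over it.
        return 0.0, True
    return requested_throttle, False
```

For example, with the system active and a 50 km/h limit, a car doing 60 km/h gets zero throttle and a warning, while a firm pedal press restores the driver's requested throttle.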
AI

Steve Wozniak Now Afraid of AI Too, Just Like Elon Musk 294

Posted by timothy
from the I-can't-let-you-do-that-steve dept.
quax writes: Steve Wozniak maintained for a long time that true AI is relegated to the realm of science fiction. But recent advances in quantum computing have him reconsidering his stance. Just like Elon Musk, he is now worried about what this development will mean for humanity. Will this kind of fear actually bring about the very dangers these titans of industry anticipate? Will Steve Wozniak draw the same conclusion and invest in quantum computing to keep an eye on its development? One of the bloggers in the field thinks that would be a logical step to take. If you can't beat 'em, and the quantum AI is coming, you should at least try to steer the outcome. Woz actually seems more ambivalent than afraid, though: in the interview linked, he says "I hope [AI-enabling quantum computing] does come, and we should pursue it because it is about scientific exploring." "But in the end we just may have created the species that is above us."
Privacy

Google: Our New System For Recognizing Faces Is the Best 90

Posted by timothy
from the sorry-not-yet-april-fool's dept.
schwit1 writes: Last week, a trio of Google researchers published a paper on a new artificial intelligence system dubbed FaceNet that they claim represents the most accurate approach yet to recognizing human faces. FaceNet achieved nearly 100-percent accuracy on a popular facial-recognition dataset called Labeled Faces in the Wild, which includes more than 13,000 pictures of faces from across the web. Even on a massive 260-million-image dataset, FaceNet performed with better than 86 percent accuracy.

The approach Google's researchers took goes beyond simply verifying whether two faces are the same. Its system can also put a name to a face—classic facial recognition—and even present collections of faces that look the most similar or the most distinct.
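Both tasks reduce to comparing learned embedding vectors. As a minimal sketch of that decision rule (the embeddings are assumed to come from a trained network, and the threshold value here is illustrative, not the paper's):

```python
import numpy as np

def same_person(emb_a, emb_b, threshold=1.1):
    """Face verification: decide whether two face embeddings belong to
    the same person. FaceNet-style systems map each face to a vector on
    the unit hypersphere and threshold the squared L2 distance."""
    a = emb_a / np.linalg.norm(emb_a)
    b = emb_b / np.linalg.norm(emb_b)
    return float(np.sum((a - b) ** 2)) < threshold

def identify(query_emb, gallery):
    """Classic recognition: put a name to a face by finding the nearest
    embedding in a {name: embedding} gallery."""
    names = list(gallery)
    embs = np.stack([gallery[n] / np.linalg.norm(gallery[n]) for n in names])
    q = query_emb / np.linalg.norm(query_emb)
    dists = np.sum((embs - q) ** 2, axis=1)
    return names[int(np.argmin(dists))]
```

The same distance also supports the "most similar / most distinct" groupings mentioned above: sorting a collection by pairwise embedding distance clusters look-alike faces together.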
Every advance in facial recognition makes me think of Paul Theroux's dystopian O-Zone.
AI

Lyft CEO: Self-Driving Cars Aren't the Future 451

Posted by timothy
from the but-the-future-branches dept.
Nerval's Lobster writes: Google, Tesla, Mercedes and others are working hard to build the best self-driving car. But will anyone actually buy them? In a Q&A session at this year's South by Southwest, Lyft CEO Logan Green insisted the answer is "No." But does Green truly believe in this vision, or is he driven (so to speak) by other motivations? It's possible that Green's stance on self-driving cars has more to do with Uber's decision to aggressively fund research into that technology. Uber CEO Travis Kalanick's announcement that self-driving cars were the future greatly upset many Uber drivers, and Green may see that spasm of anger as an opportunity to differentiate Lyft in the hearts and minds of the drivers who work for his service. Whether or not Green's vision is genuine, we won't know the outcome for several more years, considering the probable timeframes before self-driving cars hit the road... if ever.
Transportation

Self-Driving Car Will Make Trip From San Francisco To New York City 132

Posted by samzenpus
from the no-hands dept.
An anonymous reader writes with news that Delphi Automotive is undertaking the longest test of a driverless car yet, from the Golden Gate Bridge to midtown Manhattan. "Lots of people decide, at one point or another, to drive across the US. College kids. Beat poets. Truckers. In American folklore, it doesn't get much more romantic than cruising down the highway, learning about life (or, you know, hauling shipping pallets). Now that trip is being taken on by a new kind of driver, one that won't appreciate natural beauty or the (temporary) joy that comes from a gas station chili dog: a robot. On March 22, an autonomous car will set out from the Golden Gate Bridge toward New York for a 3,500-mile drive that, if all goes according to plan, will push robo-cars much closer to reality. Audi's taken its self-driving car from Silicon Valley to Las Vegas, Google's racked up more than 700,000 autonomous miles, and Volvo's preparing to put regular people in its robot-controlled vehicles. But this will be one of the most ambitious tests yet for a technology that promises to change just about everything, and it's being done not by Google or Audi or Nissan, but by a company many people have never heard of: Delphi."
The Internet

Oldest Dot-com Domain Turning 30 48

Posted by Soulskill
from the counts-as-a-digital-antique dept.
netbuzz writes: On March 15, 1985, Symbolics, Inc., maker of Lisp computers, registered the Internet's first dot-com address: Symbolics.com. Sunday will mark the 30th anniversary of that registration. And while Symbolics has been out of business for years, the address was sold in 2009 for an undisclosed sum to a speculator who said: "For us to own the first domain is very special to our company, and we feel blessed for having the ability to obtain this unique property." Today there's not much there.
Robotics

Why It's Almost Impossible To Teach a Robot To Do Your Laundry 161

Posted by timothy
from the apparently-I-can't-do-it-either dept.
An anonymous reader writes with this selection from an article at Medium: "For a robot, doing laundry is a nightmare. A robot programmed to do laundry is faced with 14 distinct tasks, but the best washbots right now can complete only about half of them in sequence. But to even get to that point, there are an inestimable number of ways each task can vary or go wrong—infinite doors that may or may not open."
AI

42 Artificial Intelligences Are Going Head To Head In "Civilization V" 52

Posted by samzenpus
from the race-to-build-Himeji-Castle dept.
rossgneumann writes: The r/Civ subreddit is currently hosting a fascinating "Battle Royale" in the strategy game Civilization V, pitting 42 of the game's built-in, computer-controlled players against each other for world domination. The match is being played on the largest Earth-shaped map the game is capable of, with both civilizations that were included in the retail version of the game and custom, player-created civilizations that were modded into it after release.
AI

Machine Intelligence and Religion 531

Posted by Soulskill
from the i'm-sorry-dave,-god-can't-let-you-do-that dept.
itwbennett writes: Earlier this month Reverend Dr. Christopher J. Benek raised eyebrows on the Internet by stating his belief that Christians should seek to convert Artificial Intelligences to Christianity if and when they become autonomous. Of course that's assuming that robots are born atheists, not to mention that there's still a vast difference between what it means to be autonomous and what it means to be human. On the other hand, suppose someone did endow a strong AI with emotion – encoded, say, as a strong preference for one type of experience over another, coupled with the option to subordinate reasoning to that preference upon occasion or according to pattern. What ramifications could that have for algorithmic decision making?
AI

The Believers: Behind the Rise of Neural Nets 45

Posted by samzenpus
from the back-in-the-day dept.
An anonymous reader writes: Deep learning is dominating the news these days, but it's quite possible the field could have died if not for a mysterious call that Geoff Hinton, now at Google, got one night in the 1980s: "You don't know me, but I know you," the mystery man said. "I work for the System Development Corporation. We want to fund long-range speculative research. We're particularly interested in research that either won't work or, if it does work, won't work for a long time. And I've been reading some of your papers." The Chronicle of Higher Ed has a readable profile of the minds behind neural nets, from Rosenblatt to Hassabis, told primarily through Hinton's career.