AI

Upgrading the Turing Test: Lovelace 2.0 54

Posted by Soulskill
from the just-make-sure-to-skip-version-9.0 dept.
mrspoonsi tips news of further research into updating the Turing test. As computer scientists have expanded their knowledge about the true domain of artificial intelligence, it has become clear that the Turing test is somewhat lacking. A replacement, the Lovelace test, was proposed in 2001 to strike a clearer line between true AI and an abundance of if-statements. Now, Professor Mark Riedl of Georgia Tech has updated the test further (PDF). He said, "For the test, the artificial agent passes if it develops a creative artifact from a subset of artistic genres deemed to require human-level intelligence and the artifact meets certain creative constraints given by a human evaluator. Creativity is not unique to human intelligence, but it is one of the hallmarks of human intelligence."
AI

Google Announces Image Recognition Advance 29

Posted by timothy
from the what-does-a-grue-look-like? dept.
Rambo Tribble writes Using machine learning techniques, Google claims to have produced software that generates better natural-language descriptions of images. This has ramifications for applications such as improved image search and describing images for the blind. As the Google people put it, "A picture may be worth a thousand words, but sometimes it's the words that are the most useful ..."
United States

US Intelligence Unit Launches $50k Speech Recognition Competition 62

Posted by samzenpus
from the unseen-mechanized-ear dept.
coondoggie writes The $50,000 challenge comes from researchers at the Intelligence Advanced Research Projects Activity (IARPA), within the Office of the Director of National Intelligence. The competition, known as Automatic Speech recognition in Reverberant Environments (ASpIRE), hopes to get industry, universities, and other researchers to build automatic speech recognition technology that can handle a variety of acoustic environments and recording scenarios on natural conversational speech.
Robotics

Robots Put To Work On E-Waste 39

Posted by Soulskill
from the robots-disassembling-robots dept.
aesoteric writes: Australian researchers have programmed industrial robots to tackle the vast array of e-waste thrown out every year. The research shows robots can learn and memorize how various electronic products — such as LCD screens — are designed, enabling those products to be disassembled for recycling faster and faster. The end goal is less than five minutes to dismantle a product.
AI

Magic Tricks Created Using Artificial Intelligence For the First Time 77

Posted by samzenpus
from the pick-a-circuit-any-circuit dept.
An anonymous reader writes Researchers working on artificial intelligence at Queen Mary University of London have taught a computer to create magic tricks. The researchers gave a computer program the outline of how a magic jigsaw puzzle and a mind-reading card trick work, as well as the results of experiments into how humans understand magic tricks, and the system created completely new variants on those tricks which can be delivered by a magician.
AI

A Worm's Mind In a Lego Body 200

Posted by timothy
from the with-very-few-exceptions-is-not-a-worm dept.
mikejuk writes The nematode worm Caenorhabditis elegans (C. elegans) is tiny and has only 302 neurons. These have been completely mapped, and one of the founders of the OpenWorm project, Timothy Busbice, has taken the connectome and implemented it as an object-oriented neuron program. The neurons communicate by sending UDP packets across the network. The software works with sensors and effectors provided by a simple LEGO robot. The sensors are sampled every 100ms. For example, the sonar sensor on the robot is wired up as the worm's nose; if anything comes within 20cm of the 'nose', UDP packets are sent to the sensory neurons in the network. The motor neurons are wired up to the left and right motors of the robot.

The robot is claimed to behave in ways similar to an observed C. elegans: stimulation of the nose stopped forward motion, touching the anterior and posterior touch sensors made the robot move forward and back accordingly, and stimulating the food sensor made the robot move forward. The key point is that no programming or learning was involved in creating the behaviors; the connectome of the worm was mapped and implemented as a software system, and the behaviors emerge. Is the robot a C. elegans in a different body, or is it something quite new? Is it alive? These are questions for philosophers, but it does suggest that the ghost in the machine is just the machine. The important question is: does it scale?
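The scheme is simple enough to caricature in a few lines. Below is a minimal Python sketch of the approach as described above: neurons as objects that fire by sending UDP packets downstream, with a sensor loop polling every 100ms. The class, ports, weights, and thresholds are invented for illustration and are not taken from the OpenWorm code.

```python
import json
import socket
import time

HOST = "127.0.0.1"

class Neuron:
    """One connectome node; firing = one UDP datagram per downstream synapse.
    (The listening side of each neuron is omitted for brevity.)"""

    def __init__(self, name, targets, threshold=30):
        self.name = name            # a C. elegans neuron label, e.g. "AVAL"
        self.targets = targets      # list of (udp_port, weight) connections
        self.threshold = threshold  # invented firing threshold
        self.accumulated = 0
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

    def receive(self, weight):
        # Accumulate weighted input; fire when the threshold is crossed.
        self.accumulated += weight
        if self.accumulated >= self.threshold:
            self.fire()
            self.accumulated = 0

    def fire(self):
        for port, weight in self.targets:
            msg = json.dumps({"from": self.name, "weight": weight}).encode()
            self.sock.sendto(msg, (HOST, port))

def sonar_loop(read_sonar_cm, nose_neurons):
    """Poll the sonar every 100ms; stimulate the 'nose' neurons when an
    obstacle is within 20cm, as the summary describes."""
    while True:
        if read_sonar_cm() < 20:
            for n in nose_neurons:
                n.receive(weight=10)  # invented stimulus weight
        time.sleep(0.1)
```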
AI

Does Watson Have the Answer To Big Blue's Uncertain Future? 67

Posted by Soulskill
from the i'm-sorry-dave,-i-need-those-TPS-reports-right-now dept.
HughPickens.com writes: IBM has delivered a string of disappointing quarters and recently announced that it would take a multibillion-dollar hit to offload its struggling chip business. But Will Knight writes at MIT Technology Review that Watson may have the answer to IBM's uncertain future. IBM's vast research department was recently reorganized to ramp up efforts related to cognitive computing. The push began with the development of the original Watson, but has expanded to include other areas of software and hardware research aimed at helping machines provide useful insights from huge quantities of often-messy data. "We're betting billions of dollars, and a third of this division now is working on it," says John Kelly, director of IBM Research, of cognitive computing, a term the company uses to refer to artificial intelligence techniques related to Watson. The hope is that the Watson Business Group, a division aimed at making its Jeopardy!-winning cognitive computing application more of a commercial success, will be able to answer more complicated questions in all sorts of industries, including health care, financial investment, and oil discovery, and that it will help IBM build a lucrative new computer-driven consulting business.

But Watson is still a work in progress. Some companies and researchers testing Watson systems have reported difficulties in adapting the technology to work with their data sets. "It's not taking off as quickly as they would like," says Robert Austin. "This is one of those areas where turning demos into real business value depends on the devils in the details. I think there's a bold new world coming, but not as fast as some people think." IBM needs software developers to embrace its vision and build services and apps that use its cognitive computing technology. In May of this year it announced that seven universities would offer computer science classes in cognitive computing, and last month IBM revealed a list of partners that have developed applications by tapping into application programming interfaces that access versions of Watson running in the cloud. Big Blue said it will invest $1 billion in the Watson division, including $100 million to fund startups developing cognitive apps. "I very much admire the end goal," says Boris Katz, adding that business pressures could encourage IBM's researchers to move more quickly than they would like. "If the management is patient, they will really go far."
Transportation

What Will It Take To Make Automated Vehicles Legal In the US? 320

Posted by samzenpus
from the johnny-cab dept.
ashshy writes Tesla, Google, and many other companies are working on self-driving cars. When these autopilot systems become perfected and ubiquitous, the roads should be safer by orders of magnitude. So why doesn't Tesla CEO Elon Musk expect to reach that milestone until 2023 or so? Because the legal framework that supports American road rules is incredibly complex, and actually handled on a state-by-state basis. The Motley Fool explains which authorities Musk and his allies will have to convince before autopilot cars can hit the mainstream, and why the process will take another decade.
AI

Machine Learning Expert Michael Jordan On the Delusions of Big Data 145

Posted by samzenpus
from the listen-up dept.
First time accepted submitter agent elevator writes In a wide-ranging interview at IEEE Spectrum, Michael I. Jordan skewers a bunch of sacred cows, basically saying that: the overeager adoption of big data is likely to result in catastrophes of analysis comparable to a national epidemic of collapsing bridges; hardware designers creating chips based on the human brain are engaged in a faith-based undertaking likely to prove a fool's errand; and despite recent claims to the contrary, we are no further along with computer vision than we were with physics when Isaac Newton sat under his apple tree.
Google

Will the Google Car Turn Out To Be the Apple Newton of Automobiles? 287

Posted by samzenpus
from the flop-or-not dept.
An anonymous reader writes The better question may be whether it will ever be ready for the road at all. The car has fewer capabilities than most people seem to be aware of, and the notion that it will be widely available any time soon is a stretch. From the article: "Noting that the Google car might not be able to handle an unmapped traffic light might sound like a cynical game of 'gotcha.' But MIT roboticist John Leonard says it goes to the heart of why the Google car project is so daunting. 'While the probability of a single driver encountering a newly installed traffic light is very low, the probability of at least one driver encountering one on a given day is very high,' Leonard says. The list of these 'rare' events is practically endless, said Leonard, who does not expect a full self-driving car in his lifetime (he's 49)."
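Leonard's observation is just the complement rule applied at fleet scale. A quick sketch with invented numbers shows how a rare per-driver event becomes a near-certainty across many drivers:

```python
# Back-of-the-envelope check of Leonard's point, with invented numbers:
# even if any one driver rarely meets a newly installed traffic light,
# a whole fleet meets one almost surely.
p_single = 1e-5          # assumed per-driver, per-day probability
drivers = 1_000_000      # assumed fleet size

p_at_least_one = 1 - (1 - p_single) ** drivers
print(f"{p_at_least_one:.6f}")   # ~0.999955: near-certain across the fleet
```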
Graphics

Ubisoft Claims CPU Specs a Limiting Factor In Assassin's Creed Unity On Consoles 338

Posted by timothy
from the bottlenecks-shift dept.
MojoKid (1002251) writes A new interview with Assassin's Creed Unity senior producer Vincent Pontbriand has some gamers seeing red and others crying "told you so," after the developer revealed that the game's 900p resolution and 30 fps target on consoles are a result of weak CPU performance rather than GPU compute. "Technically we're CPU-bound," Pontbriand said. "The GPUs are really powerful, obviously the graphics look pretty good, but it's the CPU that has to process the AI, the number of NPCs we have on screen, all these systems running in parallel. We were quickly bottlenecked by that and it was a bit frustrating, because we thought that this was going to be a tenfold improvement over everything AI-wise..." This has been read by many as a rather damning referendum on the capabilities of the AMD APU that's under the hood of Sony's and Microsoft's new consoles. To some extent, that's justified; the Jaguar CPU inside both the Sony PS4 and Xbox One is a modest chip with a relatively low clock speed. Both consoles may offer eight CPU threads on paper, but games can't access all that headroom. One thread is reserved for the OS and a few more cores will be used for processing the 3D pipeline. Between the two, Ubisoft may have had only 4-5 cores for AI and other calculations — scarcely more than last gen, and the Xbox 360 and PS3 CPUs were clocked much faster than the 1.6 / 1.73GHz frequencies of their replacements.
AI

Outsourced Tech Jobs Are Increasingly Being Automated 236

Posted by timothy
from the ban-farm-equipment dept.
Jason Koebler writes Yahoo announced [Tuesday] it would be laying off at least 400 workers in its Indian office, and back in February, IBM cut roughly 2,000 jobs there. Meanwhile, tech companies are beginning to see that many of the jobs they have outsourced can be automated instead. Labor in India and China is still cheaper than it is in the United States, but it's not the obvious economic move that it was just a few years ago: "The labor costs are becoming significant enough in China and India that there are very real discussions about automating jobs there now," Mark Muro, an economist at Brookings, said. "Companies are seeing that automated replacements are getting to be 'good enough.'"
AI

Michigan Builds Driverless Town For Testing Autonomous Cars 86

Posted by timothy
from the stepford-michigan dept.
HughPickens.com writes Highway driving, which is less complex than city driving, has proved easy enough for self-driving cars, but busy downtown streets—where cars and pedestrians jockey for space and behave in confusing and surprising ways—are more problematic. Now Will Knight reports that Michigan's Department of Transportation and 13 companies involved with developing automated driving technology are constructing a 30-acre, $6.5 million driverless town near Ann Arbor to test self-driving cars in an urban environment. Complex intersections, confusing lane markings, and busy construction crews will be used to gauge the aptitude of the latest automotive sensors and driving algorithms, and mechanical pedestrians will even leap into the road from between parked cars so researchers can see whether they trip up onboard safety systems. "I think it's a great idea," says John Leonard, a professor at MIT who led the development of a self-driving vehicle for a challenge run by DARPA in 2007. "It is important for us to try to collect statistically meaningful data about the performance of self-driving cars. Repeated operations—even in a small-scale environment—can yield valuable data sets for testing and evaluating new algorithms." The testing facility is part of broader work by the University of Michigan's Mobility Transformation Facility that will include putting up to 20,000 vehicles on southeastern Michigan roads. By 2021, Ann Arbor could become the first American city with a shared fleet of networked, driverless vehicles. "Ann Arbor will be seen as the leader in 21st century mobility," says Peter Sweatman, director of the U-M Transportation Research Institute. "We want to demonstrate fully driverless vehicles operating within the whole infrastructure of the city within an eight-year timeline and to show that these can be safe, effective and commercially successful."
AI

One In Three Jobs Will Be Taken By Software Or Robots By 2025, Says Gartner 405

Posted by Soulskill
from the they-took-our-jobs! dept.
dcblogs writes: "Gartner predicts one in three jobs will be converted to software, robots and smart machines by 2025," said Peter Sondergaard, Gartner's research director, at its big Orlando conference. "New digital businesses require less labor; machines will make sense of data faster than humans can," he said. Smart machines are an emerging "super class" of technologies that perform a wide variety of work, both the physical and the intellectual kind. Machines, for instance, have been grading multiple-choice tests for years, but now they are grading essays and unstructured text. This cognitive capability in software will extend to other areas, including financial analysis, medical diagnostics, and data analytics jobs of all sorts, says Gartner. "Knowledge work will be automated."
The Military

US Navy Develops Robot Boat Swarm To Overwhelm Enemies 142

Posted by samzenpus
from the angry-bees dept.
HughPickens.com writes "Jeremy Hsu reports that the U.S. Navy has been testing a large-scale swarm of autonomous boats designed to overwhelm enemies. In the test, a large ship that the Navy sometimes calls a high-value unit, HVU, is making its way down the river's thalweg, escorted by 13 small guard boats. Between them, they carry a variety of payloads, loud speakers and flashing lights, a .50-caliber machine gun and a microwave direct energy weapon or heat ray. Detecting the enemy vessel with radar and infrared sensors, they perform a series of maneuvers to encircle the craft, coming close enough to the boat to engage it and near enough to one another to seal off any potential escape or access to the ship they are guarding. They blast warnings via loudspeaker and flash their lights. The HVU is now free to safely move away.

Rear Adm. Matthew Klunder, chief of the Office of Naval Research, points out that a maneuver that once required 40 people has dropped down to just one. "Think about it as replicating the functions that a human boat pilot would do. We've taken that capability and extended it to multiple [unmanned surface vehicles] operating together; within that, we've designed team behaviors," says Robert Brizzolara. The timing of the briefing happens to coincide with the 14-year anniversary of the bombing of the USS Cole off the coast of Yemen that killed 17 sailors. It's an anniversary that Klunder observes with a unique sense of responsibility. "If we had this capability there on that day, we could have saved that ship. I never want to see the USS Cole happen again."
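For the curious, the encircling maneuver can be caricatured as simple geometry: assign each guard boat a waypoint evenly spaced on a circle around the intruder. The Python sketch below, with invented coordinates and radius, illustrates the idea; it is not the Navy's actual control software.

```python
import math

def encircle_waypoints(intruder_xy, n_boats=13, radius_m=50.0):
    """Place n_boats waypoints evenly around a circle centered on the
    intruder, sealing off escape routes. Radius is an invented value."""
    cx, cy = intruder_xy
    waypoints = []
    for k in range(n_boats):
        theta = 2 * math.pi * k / n_boats
        waypoints.append((cx + radius_m * math.cos(theta),
                          cy + radius_m * math.sin(theta)))
    return waypoints

# 13 guard boats, matching the test described in the summary:
for wp in encircle_waypoints((0.0, 0.0)):
    print(f"{wp[0]:7.1f}, {wp[1]:7.1f}")
```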
AI

Artificial General Intelligence That Plays Video Games: How Did DeepMind Do It? 93

Posted by timothy
from the they-can't-let-you-do-that-dave dept.
First time accepted submitter Hallie Siegel writes Last December, a paper titled 'Playing Atari with Deep Reinforcement Learning' was uploaded to arXiv by employees of a small AI company called DeepMind. Two months later Google bought DeepMind for 500 million euros, and this paper is almost the only thing we know about the company. A research team from the Computational Neuroscience Group at the University of Tartu's Institute of Computer Science is trying to replicate DeepMind's work and describe its inner workings.
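At the heart of the paper is Q-learning; DeepMind's contribution was to approximate the Q-function with a convolutional network fed raw screen pixels, stabilized by experience replay. A minimal tabular sketch of the underlying update (the table standing in for the network, with a gym-style environment assumed) looks like this:

```python
import random

# Minimal tabular Q-learning sketch of the update at the heart of the
# DeepMind paper. The paper replaces this lookup table with a deep
# convolutional network trained on screen pixels, plus experience replay.

alpha, gamma, epsilon = 0.1, 0.99, 0.1
Q = {}  # maps (state, action) -> estimated return

def q(s, a):
    return Q.get((s, a), 0.0)

def act(state, actions):
    """Epsilon-greedy action selection: mostly exploit, sometimes explore."""
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: q(state, a))

def update(s, a, reward, s_next, actions):
    """Move Q(s, a) toward the Bellman target r + gamma * max_a' Q(s', a')."""
    target = reward + gamma * max(q(s_next, a2) for a2 in actions)
    Q[(s, a)] = q(s, a) + alpha * (target - q(s, a))
```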
AI

CIA Tested Primitive Chatbots For Interrogation In the 1980s 65

Posted by Soulskill
from the after-their-human-interrogators-couldn't-pass-the-turing-test dept.
New submitter ted_pikul writes: Newly declassified documents reveal that, 30 years ago, the CIA pitted one of its own agents against an artificial intelligence interrogator to see whether the technology would be useful. The documents, written in 1983, describe a series of experimental tests (PDF) in which the CIA repeatedly interrogated its own agent using a primitive AI called Analiza. The intelligence on display in the transcript is clearly undeveloped, and seems to be a mixed bag of predetermined threats made to goad interrogation subjects into spilling their secrets and open-ended lines of questioning.
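Chatbots of Analiza's era typically worked by ELIZA-style pattern matching: canned responses keyed to regular expressions over the subject's last line. The toy Python sketch below illustrates that family of technique; the rules here are invented, not drawn from the declassified transcripts.

```python
import random
import re

# Toy ELIZA-style pattern matcher, illustrating the general technique of
# 1980s chatbots. These rules are invented for the sketch.
RULES = [
    (r"\bI (?:will|won't) (.+)", ["Why do you say you {0}?",
                                  "What happens if you {0}?"]),
    (r"\bI know (.+)",           ["How do you know {0}?",
                                  "Tell me more about {0}."]),
    (r".*",                      ["Go on.",
                                  "We have ways of continuing this discussion.",
                                  "Why do you refuse to answer?"]),
]

def respond(line):
    """Return the first rule whose pattern matches, filling in captures."""
    for pattern, replies in RULES:
        m = re.search(pattern, line, re.IGNORECASE)
        if m:
            return random.choice(replies).format(*m.groups())

while True:
    print(respond(input("> ")))
```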
AI

New Long-Range RFID Technology Helps Robots Find Household Objects 38

Posted by samzenpus
from the follow-the-signal dept.
HizookRobotics writes Georgia Tech researchers announced a new way robots can "sense" their surroundings through the use of small ultra-high frequency radio-frequency identification (UHF RFID) tags. Inexpensive self-adhesive tags can be stuck on objects, allowing an RFID-equipped robot to search a room for the correct tag's signal, even when the object is hidden out of sight. Once the tag is detected, the robot knows the object it's trying to find isn't far away. The researchers' methods, summarized over at IEEE: "The robot goes to the spot where it got the hottest signal from the tag it was looking for, zeroing in on it based on the signal strength that its shoulder antennas are picking up: if the right antenna is getting a stronger signal, the robot yaws right, and vice versa."
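The quoted behavior amounts to a simple differential comparison between the two shoulder antennas. Below is a minimal Python sketch of that control loop, assuming hypothetical hardware-access functions rather than the researchers' actual API.

```python
def seek_tag(read_rssi_left, read_rssi_right, yaw_left, yaw_right, forward,
             deadband=2.0):
    """One step of tag-seeking: steer toward the stronger antenna signal,
    drive forward when the two are roughly balanced. All five callbacks
    (signal readers and motion commands) are assumed hardware interfaces;
    the deadband value is invented."""
    left = read_rssi_left()
    right = read_rssi_right()
    if right - left > deadband:
        yaw_right()          # right antenna stronger -> turn right
    elif left - right > deadband:
        yaw_left()           # left antenna stronger -> turn left
    else:
        forward()            # roughly balanced -> head straight
```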
AI

The Challenges and Threats of Automated Lip Reading 120

Posted by Soulskill
from the surgical-masks-become-high-fashion-in-2018 dept.
An anonymous reader writes: Speech recognition has gotten pretty good over the past several years. It's reliable enough to be ubiquitous in our mobile devices. But now we have an interesting, related dilemma: should we develop algorithms that can lip read? It's a more challenging problem, to be sure. Sounds can be translated directly into words, but deriving meaning from the movement of a person's face is much more complex. "During speech, the mouth forms between 10 and 14 different shapes, known as visemes. By contrast, speech contains around 50 individual sounds known as phonemes. So a single viseme can represent several different phonemes. And therein lies the problem. A sequence of visemes cannot usually be associated with a unique word or sequence of words. Instead, a sequence of visemes can have several different solutions." Beyond the computational aspect, we also need to decide, as a society, whether this is a technology that should exist. The privacy implications extend beyond those of simple voice recognition.
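The ambiguity is easy to demonstrate. In the toy Python sketch below, one viseme sequence expands to several dictionary words; the viseme-to-phoneme mapping and the lexicon are invented for illustration, and real viseme inventories differ.

```python
from itertools import product

# Toy illustration of viseme ambiguity: one mouth-shape sequence maps to
# many phoneme strings, several of which are real words.
VISEME_TO_PHONEMES = {
    "bilabial": ["p", "b", "m"],   # lips pressed together look identical
    "open":     ["a", "ah"],
    "dental":   ["t", "d", "n"],
}

LEXICON = {("p", "a", "t"): "pat", ("b", "a", "d"): "bad",
           ("m", "a", "t"): "mat", ("b", "a", "t"): "bat"}

def candidate_words(visemes):
    """Expand a viseme sequence into every phoneme string it could be,
    then keep the ones that are actual words."""
    options = [VISEME_TO_PHONEMES[v] for v in visemes]
    return [LEXICON[seq] for seq in product(*options) if seq in LEXICON]

# One viseme sequence, several valid words -- the lip reader's problem:
print(candidate_words(["bilabial", "open", "dental"]))
# -> ['pat', 'bat', 'bad', 'mat']
```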
Google

The Documents From Google's First DMV Test In Nevada 194

Posted by samzenpus
from the showing-their-work dept.
An anonymous reader writes "IEEE Spectrum contributor Mark Harris obtained a copy of the DMV test Google's autonomous car passed in Nevada in 2012 and associated documents. What has not been revealed until now, is that Google chose the test route; that it set limits on the road and weather conditions that the vehicle could encounter; and that its engineers had to take control of the car twice during the drive.
