AI News

Predator Outdoes Kinect At Object Recognition

mikejuk writes "A real breakthrough in AI allows a simple video camera and almost any machine to track objects in its view. All you have to do is draw a box around the object you want to track, and the software learns what it looks like at different angles and under different lighting conditions as it tracks it. This means no training phase — you show it the object and it tracks it. And it seems to work really well! The really good news is that the software has been released as open source so we can all try it out! This is how AI should work."
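As a concrete illustration of the draw-a-box-and-track workflow described above, here is a minimal sketch using the TLD ("Predator") implementation that ships in OpenCV's contrib package rather than the author's original release; the cv2.legacy module path, the webcam index, and the window name are assumptions about a particular OpenCV build.

# Minimal sketch of the "draw a box, then track" workflow, assuming an
# opencv-contrib-python build that exposes the legacy TLD tracker.
import cv2

cap = cv2.VideoCapture(0)                       # assumed webcam at index 0
ok, frame = cap.read()

# Draw the initial bounding box around the object to track.
bbox = cv2.selectROI("Select object", frame, fromCenter=False)

tracker = cv2.legacy.TrackerTLD_create()        # TLD = Tracking-Learning-Detection
tracker.init(frame, bbox)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    found, bbox = tracker.update(frame)         # keeps learning new appearances
    if found:
        x, y, w, h = map(int, bbox)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("TLD demo", frame)
    if cv2.waitKey(1) == 27:                    # Esc quits
        break

cap.release()
cv2.destroyAllWindows()

The TLD approach also learns a detector alongside the tracker so the object can be re-acquired after it leaves the frame; the sketch above only shows the basic tracking loop.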
  • by Anon-Admin ( 443764 ) on Thursday April 14, 2011 @01:32PM (#35820450) Journal
    1) Integrate this with a physical tracking system to move the camera to follow the target.
    2) A simple program to actuate a solenoid when on target.
    3) Add gun.
    4) Train with photo.
    5) Leave somewhere days before target arrives.
    6) Profit.
    • While possible, it would be more complex than that. It would also have to account for wind, distance, speed, windows, etc.
      • by OzPeter ( 195038 )

        While possible, it would be more complex than that. It would also have to account for wind, distance, speed, windows, etc.

        That depends on the size of the gun you use!

      • It would also have to account for wind, distance, speed, windows etc.

        Well, it is also available for Linux and OS X, so Windows shouldn't be a problem

        • by cgenman ( 325138 )

          We all know your automatic evil guns of death should run on QNX. But after the corporate committee is done with it, it will probably just be a Silverlight plug-in.

      • http://www.lockheedmartin.com/news/press_releases/2010/052610_LM_DARPA_rifle-scope.html

        Easy.
      • Yeah, Windows would be a real problem -- it would probably blue screen before the target was acquired!
      • by Hadlock ( 143607 )

        People have trained email spam filters to play chess [sourceforge.net]. This is well within the realm of possibility.

      • by CAIMLAS ( 41445 )

        ... which is all readily available in one fashion or another. The hard work has been done; someone would just need to piece it together with this new software.

        * Temperature and humidity sensors: http://www.digitemp.com/
        * Windage: http://nslu2windsensor.sfe.se/
        * Roll it all together with ballistics calculation: http://sourceforge.net/projects/balcomp/

        That said, it's not like windage, temperature, and humidity really come into play all that much at under 200 yards. I suspect the software and servo precision a

    • Oh, I'm sure someone is profiting...

      http://www.youtube.com/watch?v=xZ7FKuYPsQ0 [youtube.com]

    • Why bother with all this when a Bluetooth (cell phone) listener with a ranged weapon is so much less complex?

  • I heard you like Predators, so we put a Predator on your Predator so you can spy while you spy.

    Watched TFV on TFA, very interesting. Something to play with soon I think.
  • Yeah, that's how it starts off. First you're like "ooh ahh, look at the cute little robot, isn't he pretty walking around by himself," then later there's running and screaming, and then it's all like "Newsflash: Bombing in Midtown, USA - Cyborg Liberation Front demands equal rights for robots".
    • No kidding, and then the debate really heats up:

      1. Robots want to be able to marry > Marriage is between a fleshling and a fleshling (cyborgs or flesh-covered robots allowed too in Massachusetts)
      2. FemBots want to be able to choose to have an EMP burst > EMPs are nuclear-based malicious malfunctions!
      3. Robots want to "open-source" themselves; no debate ensues, but it's only legal in the outskirts around Las Vegas.

      Won't someone think of the child-bots?

  • by Animats ( 122034 ) on Thursday April 14, 2011 @01:36PM (#35820498) Homepage

    Very nice.

    There are other systems which do this, though. This looks like an improvement on the LK tracker in OpenCV.

    This could be used to handle focus follow in video cameras. Many newer video cameras recognize faces as focus targets, but don't stay locked onto the same face. A better lock-on mechanism would help.
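For comparison, a rough sketch of the pyramidal Lucas-Kanade point tracking that OpenCV ships (the baseline the parent mentions); the input file name and the feature-detection parameters are placeholders, and unlike TLD this approach drifts and never re-detects a lost target.

# Rough sketch of OpenCV's pyramidal Lucas-Kanade (LK) point tracking,
# for comparison with TLD. "input.mp4" and the parameters are placeholders.
import cv2

cap = cv2.VideoCapture("input.mp4")
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

# Pick corner features to follow (here: anywhere in the first frame).
pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                              qualityLevel=0.01, minDistance=7)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Estimate where each feature moved between the previous and current frame.
    new_pts, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
    good = new_pts[status.flatten() == 1]
    for x, y in good.reshape(-1, 2):
        cv2.circle(frame, (int(x), int(y)), 3, (0, 255, 0), -1)
    cv2.imshow("LK tracker", frame)
    if cv2.waitKey(1) == 27:
        break
    prev_gray, pts = gray, good.reshape(-1, 1, 2)

cap.release()
cv2.destroyAllWindows()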

  • by symes ( 835608 )

    That was a very nice demonstration, and well done to Zdenek Kalal. That said, there's a bunch of trackers out there, and what I find is that none of them do well in a noisy environment where there's a bunch of similar items. Security cameras have to work in rain, snow, fog, and low-light conditions. So Zdenek, if you are listening, how real-world can you go with this?

    • by Amouth ( 879122 )

      Towards the end there is an example of it tracking a car on the freeway - I think that might fit the bill.

    • The face-tracking bit had it tracking his face for a while; once it'd learned enough example images, he held up what looked like a class-list sheet with a bunch of small black-and-white mug shots on it, and it picked his face out. That seemed like a fair test for rejecting false positives (it was actually looking for HIS face, not just A face).
  • by ackthpt ( 218170 ) on Thursday April 14, 2011 @01:44PM (#35820584) Homepage Journal

    Shouldn't we be developing AI to use two? I mean, we have two eyes (most of us, condolences to those who do not, no disrespect intended) and we recognize objects, depth of field, and rates of change within three dimensions, using them.

    • by Ruke ( 857276 )
      The only thing two cameras really net you is more reliable depth perception; however, this requires regular calibration, as minute shifts in the cameras (say, from being jostled around while moving) can translate to large errors if your focal points aren't exactly where you think they are. It's often easier to track movement using the change in size of your object, and have a separate specialized depth sensor (sonar, laser, etc.) to perform depth measurements when you need them to be exact.
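A back-of-the-envelope sketch of the change-in-size depth estimate the parent describes, using a pinhole camera model; the focal length and object width below are made-up numbers, and in practice you would calibrate them.

# Sketch of depth from apparent size under a pinhole model: Z = f * W / w,
# where f is the focal length in pixels, W the real object width in metres,
# and w the tracked box width in pixels. All numbers are illustrative.

def distance_from_width(pixel_width: float,
                        real_width_m: float,
                        focal_length_px: float) -> float:
    return focal_length_px * real_width_m / pixel_width

f_px, real_w = 800.0, 0.5     # hypothetical calibration: 0.5 m wide object

print(distance_from_width(100.0, real_w, f_px))   # ~4.0 m away
print(distance_from_width(200.0, real_w, f_px))   # ~2.0 m: twice as wide, half as far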
    • Shouldn't we be developing AI to use two?

      Why? One camera is cheaper to purchase and maintain than two and this software seems to do just fine with one.

      • by Hadlock ( 143607 )

        4 billion years of evolution, and 99% of living creatures have a pair of eyes. Even flies, with compound eyes, have a pair of them. There seems to be something useful - such as a wider field of view - to having two rather than one. Humans and most primates have stereoscopic vision, but that's a relatively rare trait in nature.

        • by JanneM ( 7445 )

          "99% of living creatures have a pair of eyes."

          Most of those eyes - flies included - are not used for stereoscopic perception. They have two eyes because one eye typically covers less than half the visual field. Most animals' eyes are pointed away from each other, with very little or no visual overlap anywhere.

          Depth perception mostly does not need stereoscopy. If it did, one-eyed people would hardly be able to walk or feed themselves, never mind drive a car or do other things.

          Stereovision is good mainly for pre

          • by Hadlock ( 143607 )

            Did you want to Drop Some Knowledge on slashdot, or did you not read the last sentence in full? What part of "relatively rare" was unclear?

    • We really only use our binocular vision for depth perception at fairly small distances (less than 10 meters); more than that, and we're just relying on things like relative size, perspective, and motion parallax (which assist at smaller distances, too). If we're designing robot surgeons or something else that needs equally fine sensitivity, then yes two cameras would be the way to go, but for most purposes they're just unnecessary.
    • You're clinging to the piano top in the sea of ignorance. The human form (or even the mammalian one) is not the best to emulate.
      • Okay, let's go with fish... two eyes. Birds... still two. Insects? Reptiles? Spiders? What only has one eye?

        True, some of them don't use them for stereo vision, but so far as natural selection is concerned, two or more eyes seems to be the winner if you're going to have eyes at all.

    • by tixxit ( 1107127 )
      Lots of robots do use 2 eyes (cameras) for 3D vision. http://opencv.willowgarage.com/documentation/camera_calibration_and_3d_reconstruction.html [willowgarage.com]
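Along the lines of the OpenCV documentation linked above, a minimal sketch of getting depth from two cameras: compute a disparity map from an already-rectified stereo pair. The file names and block-matcher settings are placeholders, and a real rig would first need cv2.stereoCalibrate / cv2.stereoRectify.

# Minimal sketch of two-camera depth: disparity from a rectified stereo pair.
# "left.png"/"right.png" and the matcher settings are placeholders.
import cv2

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Block matcher; numDisparities must be a multiple of 16.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right)

# Larger disparity means a closer object; with a calibrated rig,
# depth = focal_length * baseline / disparity.
vis = cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX).astype("uint8")
cv2.imwrite("disparity.png", vis)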
  • Not a breakthrough (Score:5, Informative)

    by Dachannien ( 617929 ) on Thursday April 14, 2011 @01:51PM (#35820644)

    This isn't a breakthrough. Much of the technology for tracking objects in this way has been out for about a decade. See this Wikipedia article for one technique for doing this:

    http://en.wikipedia.org/wiki/Scale-invariant_feature_transform [wikipedia.org]
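For reference, a quick sketch of the SIFT-style matching the parent points to: find a known object in a new frame by matching scale-invariant keypoints. It assumes an OpenCV build that includes SIFT, and the file names and ratio-test threshold are placeholders.

# Quick sketch of SIFT keypoint matching (the decade-old technique the parent
# links). Assumes an OpenCV build with SIFT; file names are placeholders.
import cv2

template = cv2.imread("object.png", cv2.IMREAD_GRAYSCALE)   # object to find
scene = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)       # a video frame

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(template, None)
kp2, des2 = sift.detectAndCompute(scene, None)

# Brute-force matching with Lowe's ratio test to drop ambiguous matches.
matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]

print(len(good), "confident keypoint matches")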

    • I think the breakthrough is the speed improvements that let it do this in real time on reasonable commodity hardware?
      • by gl4ss ( 559668 )

        So the breakthrough is... cheap, fast PCs? That's not a breakthrough. Seems like nice code, though, looking for an application (for example, the problem with a Minority Report UI isn't actually that it's hard to do, it's that it's hard to imagine any PC work where it would be the way to go).

    • by bughunter ( 10093 ) <bughunter@@@earthlink...net> on Thursday April 14, 2011 @02:23PM (#35821016) Journal

      Indeed. I've worked on some military programs that track and intercept, umm... things... for various purposes... that use this very same image-based tracking algorithm. But instead of painting a red dot or drawing trails, it steers a, umm... vehicle... that... uh... delivers candy.

      Yea. Candy.

      Euphemism aside, he's done a very nice job of integrating it with commercial hardware and software. It's still impressive.

      • by durrr ( 1316311 ) on Thursday April 14, 2011 @03:45PM (#35822014)
        Unless you're a ballistic missile or an insurgent, you're likely never to see any of those military systems. If we invented a matter replicator and only used it for creating delicious toppings for ice cream, it still wouldn't be as much of a waste as the military's hush-hush, superadvanced, fancypants-never-for-good-use systems.
  • How usefully open-source can it be with a commercial library requirement?

    • It's not that bad; the code can be ported to a useful language and distributed. It's an extra step but it's far from worthless (as far as software goes).

  • Kinect is how you feed data to an image recognition/tracking algorithm; Predator is that algorithm. The software side of Kinect has support for efficiently tracking items, but that is so you have the most CPU left for a game. That was the trade-off.

    Kinect hardware can do something very useful that Predator can't -- it can tell how far away something is (and thus judge position or size more accurately).

    The Predator algorithm (and other ones no doubt under development) using the two sets of data from a Kinect camera will still be superior to an algorithm using just one set of data.

    • The Predator algorithm (and other ones no doubt under development) using the two sets of data from a Kinect camera will still be superior to an algorithm using just one set of data.

      This is what I'm thinking as well. I've done a bit of Kinect data stream/parsing experimentation with other input types (like adding a touch screen to record "impact" data while the Kinect detects telemetry), and I think adding Predator will be pretty damn useful.

      I can't really go into the really killer Kinect tracking shit I've been working on (NDA), but Predator might solve a few issues I've been having.

      Exciting!
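A rough sketch of the kind of combination the parent and grandparent describe: run a 2D tracker on the Kinect's RGB stream and sample depth inside the tracked box. It assumes the libfreenect Python bindings (freenect.sync_get_video / sync_get_depth) and an OpenCV contrib tracker; the median-depth heuristic is only an illustration.

# Rough sketch: 2D tracking on the Kinect RGB stream plus depth readout at the
# tracked box. Assumes libfreenect Python bindings and opencv-contrib-python.
import cv2
import numpy as np
import freenect

rgb, _ = freenect.sync_get_video()              # 640x480 RGB frame
frame = cv2.cvtColor(rgb, cv2.COLOR_RGB2BGR)

bbox = cv2.selectROI("Select object", frame, fromCenter=False)
tracker = cv2.legacy.TrackerTLD_create()
tracker.init(frame, bbox)

while True:
    rgb, _ = freenect.sync_get_video()
    depth, _ = freenect.sync_get_depth()        # raw 11-bit depth, same resolution
    frame = cv2.cvtColor(rgb, cv2.COLOR_RGB2BGR)

    found, bbox = tracker.update(frame)
    if found:
        x, y, w, h = map(int, bbox)
        patch = depth[y:y + h, x:x + w]
        valid = patch[patch < 2047]             # 2047 is the "no reading" sentinel
        if valid.size:
            print("box at (%d, %d), median raw depth %d" % (x, y, int(np.median(valid))))
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

    cv2.imshow("Kinect + tracker", frame)
    if cv2.waitKey(1) == 27:                    # Esc quits
        break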

  • Nothing new or great (Score:2, Interesting)

    by Anonymous Coward

    As a person who does research on object tracking on a day-to-day basis, and who has seen the implementations and performance of many trackers (including this one) on real-world problems (including gaming), I can say this is nowhere near a new approach, nor an approach that outperforms the many others published in recent computer vision conferences.

    From TFA:
    "It is true that it isn't a complete body tracker, but extending it do this shouldn't be difficult."

    Going from this to body tracking is a HUGE step, it's not a really ea

  • It may not really be open source. The author says it's available under the GPLv3 [github.com]. But the author also says something completely contradictory:
    • by pmontra ( 738736 )
      It means that if you want to use that code in a closed-source program, you can do it if you buy a license from him. He owns the copyright, so he can multi-license the code. The GPL doesn't prevent that, and it is quite common [wikipedia.org].
    • It's not contradictory, it's just incorrect terminology.

      By "commercial" they mean "non-GPLv3 compliant", which is wrong since GPLv3 licenses can be used for commercial products just fine. And you could not want to use the GPLv3 for a non-commercial project...

      But it's a common error, since the overlap is rather large.

    • The author originally released it as open source under the GPL but then withdrew the link from his site when he realised how much attention it was getting. Some people have released the original GPL copy on github for others to use and distribute.

  • Now why did you have to go and say that? Don't you know they hate it when you tell them what they're supposed to do?

    Wouldn't be surprised if the robot uprising took place tonight. At least I'd know who pushed them over the edge.

  • but I'm just going to take it to Fark.

  • I work at a karaoke bar. http://www.justin.tv/7bamboo [justin.tv] I'd really like to use this to track singers as they move around the room, or have spotlights follow.
