Using Photographs To Enhance Videos

seussman71 writes with a link to some very interesting research out of the University of Washington that employs "a method of using high quality photographs to enhance a video taken of the same subject. The project page gives a good overview of what they are doing and the video on the page gives some really nice examples of how their technology works. Hopefully someone can take the technology and run with it, but one thing's for sure: this could make amateur video-making look even better than it does now." And if adding mustaches would improve your opinion of the people in amateur videos, check out the unwrap-mosaics technique from Microsoft Research.
This discussion has been archived. No new comments can be posted.

  • by Swizec ( 978239 ) on Thursday August 14, 2008 @06:51PM (#24607737) Homepage
    That's because they're all renders! None of it is real.

    Pics or it didn't happen. Or in this case, apps or it happened only in photoshop/whatever.
  • by Hays ( 409837 ) on Thursday August 14, 2008 @06:55PM (#24607781)

    The publication is supposed to contain enough information to recreate the results.

    Question 4 on the SIGGRAPH review form -
    "4. Could the work be reproduced by one or more skilled graduate students? Are all important algorithmic or system details discussed adequately? Are the limitations and drawbacks of the work clear?"

    If you or a company wants it badly enough, the information is there, unless the review process failed (which does happen).

    This wasn't a SIGGRAPH paper, but the ability to reproduce results is nonetheless a standard prerequisite for academic publication.

    It's certainly not as convenient as releasing source code, but that's sometimes a big challenge for an academic researcher because the last thing they want is to have to support buggy, poorly documented research code for random people on the internet.

  • Fractal compression (Score:1, Informative)

    by IdeaMan ( 216340 ) on Thursday August 14, 2008 @06:56PM (#24607803) Homepage Journal

    Combine this with fractal compression [wikipedia.org] and we could store all the videos we've ever seen on one hard disk.

  • That would greatly lower the cost of doing special effects, if you didn't have to do them frame by frame.

  • Re:A better use? (Score:2, Informative)

    by sirkha ( 1015441 ) on Thursday August 14, 2008 @07:36PM (#24608281)

    You see it on TV all the time: CCTV footage of robberies and the like. Couldn't this technology be used to effectively map out a 3D image of the perpetrator? I know it won't be perfect, and most CCTV is probably too low quality to be used, but it would certainly be pretty cool (and useful) to have a vaguely accurate 3D model of the guy, giving you height, build, etc., and, with the help of supplementary images, a really easy way to adjust its appearance.

    Yes, like, you could adjust the appearance to look exactly like someone else! Not saying that one would or should do this, but now that they can, they probably will.

  • by samkass ( 174571 ) on Thursday August 14, 2008 @07:40PM (#24608331) Homepage Journal

    Takeo Kanade's lab [cmu.edu] at Carnegie Mellon's Robotics Institute did this in the mid-'90s [cmu.edu]...

  • by mo ( 2873 ) on Thursday August 14, 2008 @07:50PM (#24608435)

    like the way they "stereoscopically" create a depth-map from a _single_ still photograph

    TFV said they were using video frames to do stereoscopic depth-mapping. Since the source footage changes perspective, they can build a depth map based on the relative shift of each object in the video, and then project the high-quality photograph on top of the derived 3D structure. (A rough sketch of that depth-from-shift relationship follows the comments below.)

  • by shidarin'ou ( 762483 ) on Thursday August 14, 2008 @10:10PM (#24609809) Homepage

    This is a 3D track of the shot, which generates a point cloud of 3D points that can then be used to build an automatic 3D model of the scene. They then project the still photos onto the 3D model (projection is a texturing method that paints a model from a point of projection, the way an image lands on you when you stand in front of a projector), recreating all aspects of the texture and geometry, but instead of SD resolution you now have gigapixel resolution built into the model. (A minimal sketch of that projection step also follows the comments below.)

    The reason it looks like a cheap video game is exactly that: they're trying to prove how sharp it is, so instead of anything being anti-aliased it's all crisp, which doesn't look like real life.

    Solution: get a better video camera, learn how to expose your shots properly.

    Oh, and the tree thing? Same thing, except instead of projecting the texture on, you just place the texture in the 3D scene where the tree is and render. It's even easier.

    Solution: Don't film a beat up tree. Don't film flowers with a giant sign in the middle of them.

    This wasn't at SIGGRAPH this week, either as a paper or as a poster (and there are PLENTY of student posters).

    The solution is NOT to fix it in post. The solution is to spend 5 minutes, think it through, and fix it while you're filming.
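
A rough illustration of the depth-mapping idea mo describes above: once you know how the camera moved between two video frames, the apparent shift (disparity) of a tracked feature between those frames is inversely proportional to its depth. A minimal Python sketch of that textbook relationship, not the paper's actual pipeline; the focal length, baseline, and feature positions are assumed values for illustration:

    # Depth from the relative shift (disparity) of tracked features between
    # two rectified views. Focal length and baseline are assumed values,
    # purely for illustration.
    import numpy as np

    focal_px = 800.0    # assumed focal length, in pixels
    baseline_m = 0.15   # assumed camera movement between the two frames, in metres

    # x-coordinates (pixels) of the same tracked features in frame A and frame B
    x_frame_a = np.array([412.0, 103.5, 640.2, 298.7])
    x_frame_b = np.array([402.0, 101.0, 620.2, 290.7])

    disparity = x_frame_a - x_frame_b            # bigger shift = closer object
    depth_m = focal_px * baseline_m / disparity  # classic Z = f * B / d

    for d, z in zip(disparity, depth_m):
        print(f"disparity {d:5.1f} px  ->  depth {z:6.2f} m")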
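
And a minimal sketch of the projection step shidarin'ou describes: each 3D point recovered by the track is pushed through the still camera's matrix to find which photo pixel lands on it, so the high-resolution photo colours the geometry. The intrinsics, pose, point cloud, and stand-in photo below are all invented for illustration; the real system also has to handle occlusion, blending between multiple photos, and so on.

    # Projective texturing sketch: project 3D points through an assumed
    # still-camera model to look up which photo pixel colours each point.
    import numpy as np

    # Assumed intrinsics of the high-resolution still camera (pixels).
    K = np.array([[3000.0,    0.0, 2000.0],
                  [   0.0, 3000.0, 1500.0],
                  [   0.0,    0.0,    1.0]])

    # Assumed pose of the still camera: identity rotation, small translation.
    R = np.eye(3)
    t = np.array([0.1, 0.0, 0.0])

    # A few 3D points standing in for the tracked point cloud.
    points = np.array([[ 0.0,  0.0, 5.0],
                       [ 1.0, -0.5, 6.0],
                       [-2.0,  1.0, 8.0]])

    # Stand-in for the photo: random RGB values in a 3000x4000 image.
    photo = np.random.randint(0, 256, size=(3000, 4000, 3), dtype=np.uint8)

    cam = (R @ points.T).T + t         # world -> camera coordinates
    proj = (K @ cam.T).T
    uv = proj[:, :2] / proj[:, 2:3]    # perspective divide -> pixel coordinates

    for (u, v), p in zip(uv, points):
        ui, vi = int(round(u)), int(round(v))
        if 0 <= vi < photo.shape[0] and 0 <= ui < photo.shape[1]:
            colour = photo[vi, ui]     # the photo pixel that lands on this point
            print(f"point {p} -> pixel ({ui}, {vi}) -> colour {colour}")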
