Google Unveils 'Gigapixel' Camera To Preserve and Archive Art (thestack.com) 71

An anonymous reader writes: The Google Cultural Institute has developed an ultra-high-resolution gigapixel Art Camera that automatically recomposes hundreds of close-up photos into single images of extraordinary detail. The first thousand images are being released today and include works by Rembrandt and Van Gogh. A gigapixel contains over one billion pixels, providing a level of detail unavailable even to the naked eye. The Art Camera has increased the number of available gigapixel art images from 200 to 1,000 since 2011. It consists of a robot camera that automatically takes hundreds of high-resolution close-up photos of the details of a work, using laser and sonar technology to ensure that each shot is in focus. Software then combines the hundreds of individual close-ups into one whole image. With this technology, one can view photos produced by classical artists from a computer or mobile device without needing to travel around the world to do so. These digital gigapixel images are intended to be available for viewing and study for years to come. In the future, we may see Google use machine-learning algorithms to analyze influential classical painters and create new masterpieces.
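Google hasn't described its stitching pipeline beyond the summary above, so purely as an illustrative sketch of the "shoot many close-ups, then merge them" step, here is roughly how the combining could be done with OpenCV's off-the-shelf Stitcher (the filenames and tile count below are placeholders, not anything Google has published):

import cv2

# Load the individual close-up shots (hypothetical filenames; the real rig
# captures hundreds per painting).
tiles = [cv2.imread(f"closeup_{i:03d}.jpg") for i in range(12)]

# OpenCV's high-level Stitcher matches overlapping features between shots,
# estimates their relative geometry, and blends them into one composite.
# SCANS mode assumes a flat subject such as a painting.
stitcher = cv2.Stitcher.create(cv2.Stitcher_SCANS)
status, composite = stitcher.stitch(tiles)

if status == cv2.Stitcher_OK:
    cv2.imwrite("composite.jpg", composite)
else:
    print("Stitching failed with status", status)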
  • Only people intending to pirate these works would need that kind of resolution. I expect Getty to file suit by Monday.

    • by Anonymous Coward

      https://en.wikipedia.org/wiki/Bridgeman_Art_Library_v._Corel_Corp.

    • Only people intending to pirate these works would need that kind of resolution. I expect Getty to file suit by Monday.

      Great artists of today study the brushwork of past masters in excruciating detail.

      This is not for painting clones of famous pieces; that can already be done.

      This is for world-class artists to study, up-way-close and in-detail, the many layers of paint, or whatever, used to create classic works.

      You don't think Rubens got that realistic, translucent-looking skin by just swabbing on one layer of paint, do you?

      • So you agree - modern artists blatantly stealing the methods and means of other pioneering researchers in the visual arts who are likely not to receive a single cent of remuneration for their discoveries? I'll bet as a result of this rampant piracy and IP theft, not a single one of the old masters will have the money - or will - to create any more great works. And it will be the fault of the pirate corporation Google, leading and encouraging mass - illegal - appropriation of the IP of others for personal, p

    • Ahh yes... they'll also use the highest level of JPEG compression possible to get the file size down to about 100K, and since it looks good on their phone they'll think it's super, then upload it and call it the highest-definition picture ever...

  • What is the deal with Slashdot and machine learning? Real AI ain't gonna happen no matter how much you want it.
    • Sure, as long as you can always redefine "Real AI" to mean whatever we haven't yet done.

  • In the future, we may see Google use machine-learning algorithms to analyze influential classical painters and create new masterpieces.

    Nope, nope, and hell, no. At best they might create 'in the style of such-and-such classical artist', but they won't be 'masterpieces'. Google's hubris, arrogance, and lack of taste, apparently, know no bounds. AI fanbois can bite me.

    • Since not a single [slashdot.org] one [slashdot.org] of [slashdot.org] you [slashdot.org] is anything more than an Anonymous Coward, I guess I need to comment on my own comment.

      The reason you don't 'get' art is that you're not emotionally available to it. Art is one of the most 'human' things that humans have ever invented, because the whole point of art is to evoke an emotional response in the person viewing it. That being said, no two people are going to have exactly, precisely the same emotional response to a particular piece of art. Then there are people who Just
  • It is not a "real" gigapixel camera...
    (today's largest are about 250Mp)

    • It's a camera, and it produces gigapixel images of artworks. No, you can't take it out and take gigapixel snapshots, but no one claimed otherwise.

        • But it's no more a gigapixel camera than an iPhone. Well, a high-end Android phone at least - iPhones don't have laser-based focus detection.

        • If your phone had a lens capable of optically zooming right into a tiny part of the artwork, and a motorised mount that could move it all over the whole work, and software that could match and stitch them all into a seamless larger image, sure.

  • A gigapixel is exactly one billion pixels, not "made of over one billion pixels". We should not let marketroids get away with this sort of crap.
    • It's pretty common usage. A 5 megapixel camera belongs to the set of "megapixel" cameras.

      Anyway, the sensor in this thing has less than a billion pixels, so it is a bit of a lie, but for different reasons.

    • by cdrudge ( 68377 )

      But is that a base-2 billion or a base-10 billion?

  • Stitching artifacts (Score:4, Interesting)

    by ortholattice ( 175065 ) on Tuesday May 17, 2016 @08:56PM (#52131825)

    While the overall result is impressive, the "stitching" isn't perfect. On most pictures it's hard to tell since the brushstrokes have lower resolution than the photography. But on one picture in particular, called "O Livro (os Cem)" by Jac Leirner (1987), the stitching irregularities are easy to find. Type "O Livro" in the search box to find this image.

    Essentially this picture is a giant canvas of words in Portuguese. (I speak Portuguese, and it starts off as a bunch of rambling thoughts on money and love, degenerating into what to me makes no sense).

    Anyway, to pick an easy-to-locate spot where stitching apparently took place, find the line about 2/3 of the way down that consists of a giant hexadecimal number (what the hell is that, anyway?). The line starts "D21D22C23..." Blow it up to maximum resolution. The first and second D, and the second 2, have alignment artifacts, and the lower portion of this starting string is slightly blurrier than the top portion. This even gives some insight into the algorithm, where you can see that it's desperately trying to align the top and bottom portions, even distorting or shifting some in-between parts to achieve the result.

    • Studying the "hexadecimal" number some more, it doesn't have hex digits E and F. Instead, it seems that the artist took numbers from 1 to 99 and followed each of them with a letter from A to D, like the answers in a multiple-choice quiz. Then he took this long list and cut it into sections, some with the numbers increasing and some decreasing, and re-concatenated them. Beyond that, your guess is as good as mine.
    • Good catch, that particular work of art is extremely good at showcasing the problems with their current stitching program. And, although examples of extremely egregious stitching errors like the one you cited (on that weird number) are somewhat rare, more subtle errors can be found all around the piece. A quick look at the first few lines tells me that almost half of the words contain a subtle stitching error. For example here is a transcription of the beginning of the first line of text where I highlighted the

  • Many companies have been stitching photos together to make a single larger high-resolution photo for a long time, and by the looks of it many of them do a better job as well.
  • Bets for how long it is until this technology is used for porn, anyone?
  • by necro81 ( 917438 ) on Wednesday May 18, 2016 @07:37AM (#52133913) Journal
    This brings back into my mind the photo-stitching work done by the Chudnovsky brothers about 15 years ago. Photo-stitching large mosaics has been around for a long time, but the work by these two mathematicians on the Unicorn Hunt [wikipedia.org] tapestries rises to a much higher level.

    The tapestries had been hanging for a very long time. During a restoration they were taken down, soaked clean, and photographed on both sides. (The back side, being against the wall and with a fabric backing on it, had much more vivid color.) But the resulting images were completely un-stitchable by conventional techniques - nothing lined up! The tapestry, being a textile, had relaxed and subtly distorted after being laid horizontal and cleaned. The tapestry was not a static image, but rather a dynamic, breathing object. The Chudnovskys applied serious math and computing power to subtly distort each image in the mosaic, cross-referencing the front and back sides, in order to get the threads to line up.

    TL;DR. See this article [newyorker.com] for more details.

    This Google camera, I'm sure, has very sophisticated stitching algorithms. But in the end, it is probably assuming that it is capturing images of a static object. I wonder how it would handle a similar challenge.
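(A rough sketch of the conventional, rigid alignment step that mosaicking pipelines start from, and which is exactly what breaks down on a relaxed textile; this is standard OpenCV, not the Chudnovskys' method, and the filenames are placeholders:)

import cv2
import numpy as np

# Two overlapping tiles from a mosaic (placeholder filenames).
tile_a = cv2.imread("tile_a.png", cv2.IMREAD_GRAYSCALE)
tile_b = cv2.imread("tile_b.png", cv2.IMREAD_GRAYSCALE)

# Detect and match local features in the overlap region.
orb = cv2.ORB_create(5000)
kp_a, des_a = orb.detectAndCompute(tile_a, None)
kp_b, des_b = orb.detectAndCompute(tile_b, None)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)[:200]

src = np.float32([kp_a[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp_b[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

# A single global homography is adequate for a rigid, planar subject, but a
# fabric that has sagged and stretched between shots needs local elastic
# warps instead; that is the hard part the Chudnovskys solved.
H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
aligned = cv2.warpPerspective(tile_a, H, (tile_b.shape[1], tile_b.shape[0]))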
  • "a level of detail unavailable even to the naked eye."

    Even? You make that sound as if the 'naked eye' was the highest possible resolution one could get before.

  • Slightly off topic question for the Slashdot crowd:

    With the rise of large-megapixel cameras, and with seemingly every photo editing program now including some form of panorama stitching function, creating a gigapixel image is getting trivial. How do you display such images? How do you present them on the PC? How do you present them online?

    I've had some success with using the Google Maps API and an open source tiling solution. However that has a serious problem in that it only works with JPEGs which have limite
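For what it's worth, the usual answer is a tile pyramid, the same layout that slippy-map viewers (the Google Maps API, OpenSeadragon, Leaflet) consume. A rough sketch with Pillow, assuming the source image still fits in memory and writing PNG tiles to sidestep JPEG's limitations; the filenames and directory layout are made up for illustration:

import os
from PIL import Image

Image.MAX_IMAGE_PIXELS = None   # Pillow otherwise refuses very large images
TILE = 256                      # the tile size most map-style viewers expect

img = Image.open("stitched_scan.png")   # placeholder filename
level = 0   # level 0 = full resolution here; some viewers number from the other end

while True:
    for y in range(0, img.height, TILE):
        for x in range(0, img.width, TILE):
            tile = img.crop((x, y, min(x + TILE, img.width), min(y + TILE, img.height)))
            path = f"tiles/{level}/{x // TILE}_{y // TILE}.png"
            os.makedirs(os.path.dirname(path), exist_ok=True)
            tile.save(path)
    if img.width <= TILE and img.height <= TILE:
        break
    # Halve the resolution for the next, more zoomed-out level.
    img = img.resize((max(1, img.width // 2), max(1, img.height // 2)))
    level += 1

A truly gigapixel source won't fit in RAM with Pillow; pyvips (or the vips dzsave command) builds the same kind of pyramid in a streaming fashion.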

  • "With this technology, one can view photos produced by classical artists from a computer or mobile device without needing to travel around the world to do so" Er...
  • Okay, really curious on this one. I've looked at a few of these now at fully zoomed-in detail. WHY did they use a camera with cheaper glass to do such a high-detail scan of these works of art? Zoomed in at 100%, the images are quite soft, similar to a budget or mid-tier DSLR lens. Basically, give a DSLR a few motors to move it around the painting, and it could do the same exact thing, only with sharper detail because of the higher-quality glass available for them. Considering that their intent was visual im

  • "one can view photos produced by classical artists from a computer or mobile device without needing to travel around the world to do so." On what monitor do you expect to see a perfect representation of the original colors? Certainly not your mobile device...
