
Firefox Plugin Annodex For Searching Audio, Video

loser in front of a computer writes "ZDNet Australia reports that 'Australia's CSIRO research organisation has developed a Firefox plugin named Annodex that allows browsing through time-continuous media such as audio and video in the same way that HTML allows browsing through text.' I've just checked Annodex out and it's very cool. The sample video from the Perl conference is way funny too." The catch is, the media to be searched has to be prepped first.
This discussion has been archived. No new comments can be posted.

  • astonishing (Score:5, Funny)

    by rich42 ( 633659 ) on Tuesday February 15, 2005 @05:53AM (#11675971) Homepage
    the implications for porn surfing are mind numbing.
  • Of course (Score:4, Insightful)

    by shreevatsa ( 845645 ) <shreevatsa DOT slashdot AT gmail DOT com> on Tuesday February 15, 2005 @05:54AM (#11675976)
    The catch is, the media to be searched has to be prepped first.
    Isn't that obvious? It's too much to expect it to be able to search video without knowing what it is.
    • Re:Of course (Score:3, Insightful)

      by jokumuu ( 831894 )
      well, to be revolutionary it would require that capability. As it is now, it is simply a toy to play around with and then forget about.
      • Re:Of course (Score:5, Insightful)

        by bogado ( 25959 ) <bogado.bogado@net> on Tuesday February 15, 2005 @06:19AM (#11676050) Homepage Journal
        You understand that to be able to search you must read the content beforehand, right? Google does read all the pages to index them; this is a preprocessing stage. I don't see why this requirement is an impediment. Sure, video processing is time-consuming, but downloading videos is also time- and bandwidth-consuming, so in general searching videos is harder, much harder than text.
        • But when the time comes that available bandwidth dwarfs any realtime video bitrate, video searching and viewing will become as ubiquitous as email and Google are now.
          • Re:Of course (Score:3, Interesting)

            by bogado ( 25959 )
            Even if you have the largest bandwidth you can imagine, local indexes are still the way to go. I can't imagine any movie search engine that will not pre-process the movie info to fit the data into an index first. This pre-processing could be done externally to aid the search engine, and kept in a separate file with the metadata for the movie. (I didn't read the article due to the Slashdot effect, but I imagine it's something like that.)
    • Exactly. However, this means a huge load of work for SOMEBODY or the entire community to watch every video and categorise it. Also, you'll need to make sure people read over submissions - it would be very easy (in a wikipedia sense) to simply put bogus information in.
    • I don't think so. We now have image recognition software and speech input for the computer. Neither is very advanced but both work. With some clever code a program could be written to identify videos and even place them in context. Add in logic and a database to keep track of what's been "learned" and you'd have the beginnings of videos.google.com. This is the kind of interesting project I envision Google PhDs working on, on that weekly free day of theirs. If any one does end up doing this I expect it'll
    • Searching anything on the server requires the server to index it beforehand, or parse it in realtime. The client can't actually search the content unless it downloads it. There's little economy in every client downloading all the content in the archive just to search it for a fragment, no matter how much bandwidth and storage it has (short of near-infinite), and much economy in searching at the server, where the data is stored (even in a distributed network, like BitTorrent). And the copyright control, as well as us
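The preprocessing these comments describe is essentially the same inverted-index step a text search engine performs, just run over the textual annotations attached to clips. A minimal sketch in Python (the clip IDs and annotation strings are invented for illustration):

```python
from collections import defaultdict

def build_index(clips):
    """clips: {clip_id: annotation text}. Returns word -> set of clip ids.
    The same preprocessing a text engine does, applied to the textual
    annotations that ride alongside the video."""
    index = defaultdict(set)
    for clip_id, text in clips.items():
        for word in text.lower().split():
            index[word].add(clip_id)
    return index

# Hypothetical annotated clips within one video file
annotations = {
    "talk.ogg#intro": "Perl conference opening keynote",
    "talk.ogg#qanda": "audience questions about Perl 6",
}
idx = build_index(annotations)
print(sorted(idx["perl"]))  # both clips mention Perl
```

Once built, the index answers queries without touching the media itself, which is the commenter's point: the expensive read happens once, at indexing time.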
  • by jokumuu ( 831894 ) on Tuesday February 15, 2005 @05:55AM (#11675981)
    If the media has to be specially prepared for this to work, I do not see this taking off until the search engine can do the prepping quickly and simply from the original unprepped media.
    • How could a computer possibly work out whether media is sports or music videos or anime or TV shows or whatever? We need a new format like WMV (it contains XML-type things in it) with XML wrapped inside that can be extracted and read, containing proper Genre and Category and such. I think WMV has this but it's not compulsory and doesn't get used often. If these formats were to take off then I could see this happening.
      • by luvirini ( 753157 ) on Tuesday February 15, 2005 @06:08AM (#11676018)
        Yes indeed, that is the core of the problem: in order to search something, the search algorithm has to understand the content to be searched.

        Currently, trying to get a computer to understand something in pictures, let alone motion pictures, is very inaccurate and extremely processor-intensive, unless one uses a really small subset (like fingerprint recognition).

      • by Anonymous Coward on Tuesday February 15, 2005 @06:10AM (#11676026)
        How could a computer possibly work out what media is sports or music videos or anime or tv shows or whatever.
        That sounds like a doctorate in the making... I'd anticipate an 80% hit rate in genre classification (at least) within 6 months of research, just given those sorts of categories. It's just image recognition and classification, really, but with a fscking huge dataset (which is a good thing).
        • Oh, I am sure Google would hire anyone who got this recognition working with low enough processor requirements.
        • You have got to be kidding me. Do you know anything about the current state of vision recognition? A picture containing a large red car and a large woman in a red dress are nearly identical to a computer. Facial recognition is easily defeated if the target people just turn their face slightly to the side or down or up.

          Even voice recognition is still pretty bad. Say something with an accent, and you can forget about proper recognition, unless you spend huge effort tweaking the recognizer for each differe
          • That's with external input from the real world, via cameras and mics. TV shows are likely to have well recorded audio, clear well-lit closeups of faces, follow conventions of shot composition etc.

            Then there are other things that can be examined: the number of colours could distinguish anime from film, the amount of audio compression / background noise, etc.

            I imagine the best way to do it would be using a neural net approach. Get someone to sit with the computer and play human-categorised content to it al
            • I will believe it when I see it. AI is a field known to be big on promises and little on delivery. Periodically the field of AI gets hyped up, only to let everybody down. The entire field borders on being full of cranks.
              • There are a lot of cranks in AI, especially those trying to pass the Turing test. But I'm speaking specifically about the field of neural networks [wikipedia.org], which is probably the most scientific aspect of AI and machine vision research. This is where the more respectable AI folk (e.g. Minsky) are doing their research.

                So yah, I agree with you in general about AI research, but I think that the research centred around classification is the most promising because it's the least ambitious and the most mathematically rig
      • Don't worry, [csiro.au] they're working [csiro.au] on it [csiro.au]...
      • From the Vorbis website:
        Can I bundle Vorbis and another media type (like text lyrics or pictures) in the same file?

        Yes. The Ogg container format was designed to allow different media types to be multiplexed together; Theora will be mixed with Vorbis audio in an Ogg container to encode movies.

        --http://vorbis.com/faq.psp#container/ [vorbis.com]

        Does that mean Ogg too can do what you're suggesting? Probably needs some work still, though.
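For background on how Ogg multiplexes media types: each Ogg page carries a serial number identifying its logical bitstream, so a video track, an audio track, and a text/metadata track can be interleaved in one file. A rough Python sketch of that page layout (CRC checksums are left zeroed here; a real muxer must compute them, and real pages can carry multi-segment packets):

```python
import struct

def ogg_page(serial, seq, body=b""):
    # Build a minimal Ogg page: "OggS" magic, version, header-type flags,
    # granule position, bitstream serial, page sequence, CRC (zeroed in
    # this sketch), segment count, then the segment table and body.
    segs = [len(body)]  # single-segment body; assumes body < 255 bytes
    header = struct.pack("<4sBBqIIIB", b"OggS", 0, 0, 0,
                         serial, seq, 0, len(segs)) + bytes(segs)
    return header + body

def logical_streams(data):
    """Walk the pages of an Ogg byte string and collect the serial
    numbers of the logical bitstreams multiplexed into it."""
    serials, off = set(), 0
    while off + 27 <= len(data) and data[off:off + 4] == b"OggS":
        nsegs = data[off + 26]
        body_len = sum(data[off + 27: off + 27 + nsegs])
        serials.add(struct.unpack_from("<I", data, off + 14)[0])
        off += 27 + nsegs + body_len
    return serials

# Two interleaved logical streams, e.g. Theora video plus a text
# annotation track, in one container
fake = (ogg_page(1, 0, b"video") + ogg_page(2, 0, b"<metadata/>") +
        ogg_page(1, 1, b"more video"))
print(sorted(logical_streams(fake)))  # [1, 2]
```

This page-level interleaving is exactly what lets a container like Ogg carry "different media types multiplexed together," as the FAQ puts it.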
    • That's what they said about HTML. Annodex is http and html for multimedia.
  • MirrorDot (Score:3, Insightful)

    by Agret ( 752467 ) <alias.zero2097@g ... inus threevowels> on Tuesday February 15, 2005 @05:58AM (#11675993) Homepage Journal
    loser in front of a computer writes "ZDNet Australia reports that 'Australia's CSIRO research organisation has developed a Firefox plugin named Annodex [mirrordot.org] ? [google.com] that allows browsing through time-continuous media [mirrordot.org] ? [google.com] such as audio and video in the same way that HTML allows browsing through text.' I've just checked Annodex out and it's very cool. The sample video from the Perl conference is way funny too." The catch is, the media to be searched has to be prepped first [mirrordot.org] ? [google.com] ... Full Slashdot Story [mirrordot.org]
  • Read more... (Score:5, Insightful)

    by MicroBerto ( 91055 ) on Tuesday February 15, 2005 @06:01AM (#11675999)
    Unfortunately, in order to remain royalty-free, it only supports Ogg Theora. How many of those videos do you see out there? I see none.

    A cool application, nonetheless.

    • Re:Read more... (Score:4, Interesting)

      by Agret ( 752467 ) <alias.zero2097@g ... inus threevowels> on Tuesday February 15, 2005 @06:05AM (#11676010) Homepage Journal
      I got some Anime in Ogg once. It was the Rurouni Kenshin OVA. It was such a wonderful format and I could switch between English/Japanese audio and subs just by right-clicking a system tray icon.

      I really wish the Anime community saw it as a viable format rather than using XVid and DivX for everything. OGG is beautiful.
      • Re:Read more... (Score:5, Insightful)

        by phaxkolumbo ( 572192 ) <phaxkolumbo@gm a i l.com> on Tuesday February 15, 2005 @06:34AM (#11676084)

        Now, I might be wrong, but chances are that what you got instead of Ogg Theora compressed files were Ogg Media Files [faireal.net] (.ogm).

        OGM is a container format for audio/video that supports multiple subtitles (just like you mentioned) and multiple audio tracks. From what I personally know, the video is usually compressed with XviD and the audio with Ogg Vorbis.

        (see also Matroska [matroska.org] which does the above, and more)

      • Re:Read more... (Score:2, Insightful)

        by Buzzard2501 ( 834714 )
        That would be OGM + Ogg Vorbis, not Ogg Theora. OGM is a video and audio container like AVI, while Ogg Theora is a video codec (based on VP3, IIRC).
        • Re:Read more... (Score:2, Informative)

          by Agret ( 752467 )
          Ah yes my mistake. They are 250mb OGM files. Oh well it's still beautiful and should be employed as the Anime standard.
      • I really wish the Anime community saw it as a viable format rather than using XVid and DivX for everything. OGG is beautiful.

        Sure, I too would love to see an open format like Ogg hit the mainstream, but since CD-Rs can only hold 700 MB of data, I also want to use that space as efficiently as possible. That means using the codec with the highest compression/quality ratio, which unfortunately is not free (as in speech).

      • There are many containers (which is what your .ogm was, just a container) capable of holding multiple audio streams and soft subtitles.
    • Re:Read more... (Score:3, Interesting)

      by Anonymous Coward
      Well now this is just the thing to get more Ogg Theora videos out there. Annodex provides a reason that one would want to use Ogg.

      Although I guess that might present a chicken/egg situation.

      • Well, in my experience Theora's real strength is low-bitrate encoding. At rates where MPEG would just give up and encode big ugly 8x8 blocks, Theora gives you a very reasonable picture. I once tested and found I could get reasonable quality PDA-sized video at about 160k. Unfortunately, at higher bitrates it seems to really lag behind XviD in quality.
      • Surely you meant chicken/ogg situation.
    • Actually, being one of the folks who has seen this and the production tools demoed, and played with them (in the meeting room at work, no less), I have to say it's pretty cool.

      And that it supports only Ogg Theora is a misnomer. The output video is an Ogg container with XML packets and Theora video interleaved in a format suitable for streaming. The source input video can be anything your system can play back and feed into the encoding/interleaving tools.
    • Annodex is a generic encapsulation format and allows for other codecs to be annotated and indexed, too. Also, with the support of media frameworks, authoring tools can transcode from any format to Ogg Theora and Ogg Vorbis.
  • I dunno (Score:3, Insightful)

    by earthbound kid ( 859282 ) on Tuesday February 15, 2005 @06:07AM (#11676016) Homepage
    Isn't the whole point of time-continuous media to watch it through a continued period of time? Putting hyperlinks into a video just turns your web browser into an improved version of the Sega CD or 3DO. I'll admit this technology has its place, but I wonder how big that place is...
  • interactive film (Score:2, Interesting)

    by iddi ( 188484 )
    We in Russia have been doing this for video and audio for years. I will not link to a sample, as it is bandwidth-consuming.
  • Surely... (Score:4, Informative)

    by FirienFirien ( 857374 ) on Tuesday February 15, 2005 @06:18AM (#11676047) Homepage
    'Rewind' and 'fast-forward' already do this? "Time-continuous media" is odd in that it implies something like a stream, yet if the media has to be prepared first, it has to be a complete file. If I could reach the article (seems /. hosed their bandwidth?) I'd check up on this, but:

    The only implication here is that you could skip past part of a stream that exists as a preprepared complete file at the other end (as opposed to radio, which is incomplete and not browsable); but I bet the prepped file is significantly bigger, and the time saved skipping over a boring section would be replaced by the time required to download the extra data.

    Quicktime .mov files also play while still downloading, and work in more browsers than just Firefox; .mov has been around for a while, is already prepped, is easy to convert to with existing programs (free to download) and has various things like crossplatform compatibility.
    • Re:Surely... (Score:1, Informative)

      by Anonymous Coward
      Hi, the "preprepared complete file" is not significantly bigger at all. We store things in standard Ogg bitstream file, with an additional track (logical bitstream, in Ogg speak) to store the extra metadata (CMML) that we use to store the information about each clip. The CMML is absolutely tiny in comparison to the raw audio and video; for the videos we've got there as samples, the CMML consumes perhaps only a few kilobytes (out of maybe a dozen megabytes.)

      We've designed Annodex and CMML with the Interne
  • by atomic noodle ( 814905 ) on Tuesday February 15, 2005 @06:28AM (#11676074)
    Good to see this is open source and works with FireFox, but it's a shame they have to resort to marketing babble and buzzword bingo (see below) to get any media attention for their work. Basically this is YAML (Yet Another Markup Language). They're definitely not the first to do video indexing... search 'VAML', for example.

    Project leader Dr Silvia Pfeiffer, says that the applications of Annodex(TM) are many and varied.

    "Users are discouraged by the complexity of search for clips within vast online multimedia collections. They are demanding a technology that lets them actively search for content," says Dr Pfeiffer.

    "Annodex(TM) and the standards behind it allow them to do just that - it will revolutionise the way we search for time-continuous data. Annodex(TM) also allows video content to be explored using any digitally networked device - including mobile phones, handheld PDAs and digital TV."

    "Besides entertainment, Annodex(TM) has many other practical applications such as searching medical information, environmental measurements and network load statistics - on demand."

    The groundbreaking technology behind Annodex(TM) is known as Continuous Media Markup Language (CMML). CMML does for time-continuous media what HTML does for text. It allows the user to search, access, navigate and query.

  • Slashdotted...damn! (Score:2, Interesting)

    by EzInKy ( 115248 )
    So can anybody tell me is this extension for the integrated Mozilla suite or is it only for the standalone browser Firefox?
    • by Anonymous Coward
      The extension is currently only for Firefox. There's no technical reason stopping it from being used in Mozilla though (we use Mozilla internally to do a lot of testing); it's only that Mozilla and Firefox extensions must be packaged slightly differently, and we haven't put the time into writing the required install.js script for Mozilla yet. (It's open-source, remember, so feel free to contribute and send us patches!)

      - Andre (one of the developers)
  • by sonamchauhan ( 587356 ) <sonamc@NOsPam.gmail.com> on Tuesday February 15, 2005 @06:34AM (#11676087) Journal
    How is this innovative above a DVD "jump to a scene" menu? (honest question)

    I watched the video, but all it seems to be is a system of sectioning audio-visual files into smaller chunks, and a browser that gives access to a "table of contents" that lets the user jump directly to a section.

    Is the sectioning/table-of-content-generation process automated? It seems to be manual.

    I think software is already available that can partially automate the sectioning of a video. It does this by detecting scene-transitions, and then offering up the "chunks" to the user for approval and labelling. I think such software is used in DVD authoring for generating the "Jump to a Scene" DVD menu.
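The scene-transition detection this comment refers to is, at its simplest, a frame-difference threshold: flag a cut wherever consecutive frames differ too much. A toy Python sketch over synthetic grayscale "frames" (real DVD-authoring tools use far more robust statistics, e.g. histogram comparisons):

```python
def scene_cuts(frames, threshold=60.0):
    """Flag likely scene transitions: indices where the mean absolute
    pixel difference between consecutive frames exceeds a threshold.
    A toy stand-in for the shot-detection step in authoring tools."""
    cuts = []
    for i in range(1, len(frames)):
        prev, cur = frames[i - 1], frames[i]
        diff = sum(abs(a - b) for a, b in zip(prev, cur)) / len(cur)
        if diff > threshold:
            cuts.append(i)
    return cuts

# Synthetic "video": four dark frames, then a hard cut to four bright ones
dark, bright = [10] * 16, [200] * 16
frames = [dark] * 4 + [bright] * 4
print(scene_cuts(frames))  # [4] -- the cut between frames 3 and 4
```

The detected cut points would then be offered to a human for approval and labelling, as the comment describes.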

    • by Anonymous Coward
      One difference is, you can jump into a different video, on the same or a different server, not just other places in the same one.

      Or you can search and get links directly into a specific position in a video. eg. With this search engine
      http://labs.panopticsearch.com/search/search.cgi?collection=labs.cmweb

      The section/TOC generation is manual. However in theory it could be automated using scene-detection and speech to text. But you can consider that as part of the original authoring process.
      • Thanks for the reply. Jumping between different streams on different servers is not very different from jumping between different streams on one disk (as in DVDs). Also, I recall Microsoft introducing a standard called web-DVD some years ago to increase the interactivity of DVDs and link them to online content.

        Today, I can listen to streaming audio from an online radio station with Windows media player.
        These stations already section their streams into songs. Media player lets me add individual songs from
        • I got this link [blinkx.tv] from the blinkx.tv site the responder above linked to:

          On page 7:

          Indexing

          blinkx TV uses advanced indexing technology to watch, listen to, and read a video
          or audio signal in real time to build a rich index that you can use to quickly locate
          specific segments within the video content or audio clip. In turn blinkx TV stores
          the information it extracts in metadata tracks in a video index. blinkx TV
          automatically generates metadata tracks to save information generated by the
          media analysis process

    • Real innovation (Score:2, Interesting)

      For proper search in rich media, check out a service like www.blinkx.tv, where the audio is transcribed. No reliance on meta-data, and the sectioning is also automated.
      • Wow - thanks for that ... it's a good link. They must have a *lot* of CPU horsepower dedicated to voice recognition to do straight audio transcription for so many channels.

        I wonder what sort of arrangement Blinkx have with content providers in order for users to view content. I wonder if they also search the closed-captions/teletext as Google Video does. (About a year ago, I also intended doing [slashdot.org] something similar as a hobby project.)
      • That was hilarious, looking through some of their 'transcripts':
        Weather report excerpt:
        I know this is the way this OS Linux agency lot of clout through northern parts and through eastern parts and you can see how this is this just pushing its weight and keeping eastern parts of Britain but there's no such plan through central and eastern parts he is going to be bringing a lot of snow
        Another excerpt:
        the coming into force this week in August tested in the courts they sit unworkable and unenforceable and that is
    • by Anonymous Coward
      The innovation is more that we're going with the Web's model of finding information: i.e., hyperlinks. The capability for hyperlinking is sort-of present in lots of media formats, but we're really the first to advocate it as the future way of putting videos on the Web.

      Additionally, the "freeform" annotation of the video serves very well for searching: it enables marking up a video in a way that's very meaningful for people. Try out the YAPC video on the example web there to see what I mean. The annotati
      • That's great; we do similar things, but with proprietary technology...

        Actually, our implementation of interactive films (or structured video) goes beyond that and allows linking video with any additional information type.

        also we've explored back in 2001-2002 almost all the applications of this technology and made a bunch of samples of various kinds.
    • Ever tried making your own DVD and publishing it on a Web server with links to other Web content? This technology allows that sort of thing to happen, but with an open technology, not a proprietary solution.

      Also, it is not actually creating smaller chunks from audio-visual files - the files stay intact! There is a separate language to create the markup (just like HTML for text pages) and once that is created, everything else is automated - also the table-of-content-generation process.
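As far as one can tell from the project descriptions, the markup side looks roughly like HTML anchors attached to time intervals. A hypothetical CMML-style sketch (tag and attribute names are reconstructed from memory of the CMML drafts and may not match the real schema exactly):

```xml
<!-- Hypothetical sketch only; see the actual CMML specification
     for the real element names and attributes. -->
<cmml>
  <head>
    <title>YAPC conference talk</title>
  </head>
  <clip id="opensource" start="npt:83.5">
    <a href="http://example.org/open-source-intro.anx">Learn more about Open Source</a>
    <desc>Speaker discusses Open Source licensing</desc>
  </clip>
</cmml>
```

The point the parent makes holds either way: the annotation lives beside the media, so the original audio/video file is never re-chunked.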
  • A quick epistemological sidenote: what's the opposite of time-continuous media?! All media records an instance of time, whether it's 1 ms or 10 seconds.

    Ok I'm being a pedantic asshole I admit it
    • I am not certain, but my best guess is that it's a buzzword. If I had to impose meaning on it, I'd do so like this:

      Time-continuous media: television shows, movies, radio shows.

      Non-time-continuous media: Paintings, photographs, books(?)
    • You haven't taken the time to look at Kant's Critique of Pure Reason have you?

      Time is a function of our inner experience. To illustrate time in Space, the function of our outer experience, it must be represented on a line. I assume that this would change the direction of the line.

      Imagine that watching a video normally is like following a vertical line, but you can only turn your head horizontally. You can't see where you've been or where you are going.

      Turn the line on its side, so you can see all the s
    • "A quick epistomological sidenote: what's the opposite of time-continuous media?! All media records an instance of time, whether it's 1 ms or 10 seconds."

      An OpenOffice document does not record an instance of time, yet it is media. There is no specificity in time to an OO document, and neither does it change over time.

      Yes there is the instant the document was saved/created but this applies to all media, even video clips but has nothing to do with the media itself ("How longs the DVD? 24th April 2004 10:56:
  • My favorite thing about stories like this is that they create fertile ground on which to find links for warez to enhance whatever the given program of discussion is. That is the purpose of this thread. Its secondary purpose is to give me some karma, as I am in a whorish mood.

    Allow me to kick it off. The following are links for Firefox browsers only as they will install themselves automagically upon click. You've been warned. A couple of these, I forgot which, install links are for the MS Windows platforms sin

    • LinkPreview sounds interesting, but there is no documentation available about it on mozdev.org. Where did you get more information and that xpi link from? Not much here... [mozdev.org]
      • I had to google cache the thing and I eventually got it. Basically, when you load up a page, it checks in with alexa.com and collects any thumbnails the site may have of either the specific page you want to go to or at least the front index of the domain. I think it does this with links as soon as the page is loaded, but it might only do one at a time on the mouseover. To tweak its settings, like the thumbnail server and the thumbnail size, you go to Tools > Extensions > [select it and hit options].

        W

    • by Anonymous Coward
      • LinkPreview will pop up thumbnail preview images [most of the time] when you mouseover a link. Frickin awesome. Requires restart.

      Your link is to an older version (1.2)

      From the changelog:

      • 1.3 - Major bug fix (random Firefox crashes), many thanks to Mark.

      So if you want to try it, better get 1.3: http://patsis.brownhost.com/hpxpi/linkpreview13.xpi [brownhost.com]

    • "most of you suckers use Windows even though this site is about Linux leetness"

      Arse. Does that mean I have to leave now because Windows works on my PC?
    • Thank-you, kind sir - a most excellent post, and bad karma to those who down-modded it!

      Find every single Firefox extension in the world here [mozilla.org]

  • Are there any legal restrictions on the indexing of files? I can see a lot of companies becoming upset at having their media prepared in such a way..
  • How it really works (Score:5, Informative)

    by EEproms_Galore ( 755247 ) on Tuesday February 15, 2005 @07:37AM (#11676241)
    I've actually seen this in action and most of you are right off track. This isn't a streaming-only format, nor is it a DVD media replacement. It's an interactive web-based media format. Imagine you're watching a lecture and during the lecture, let's say, "Open Source" is mentioned. The author can put a pop-up link in the video stream with "Learn more about Open Source"; click on the link and you get a short video about open source, then it goes back to the main lecture. No getting stuck having to pause the video stream while you look up a term.
  • by frostman ( 302143 ) on Tuesday February 15, 2005 @08:13AM (#11676391) Homepage Journal
    This could be really useful for TV broadcasts, particularly news.

    I think anybody doing closed captioning [robson.org] already has the descriptive content they need. (Others could use a similar process to create it.)

    That info, combined with relatively easily-detectable scene transitions, would make it possible to automate the searchable video file creation to a large extent.

    So the CC or equivalent would still have to be done manually but you'd have this extremely useful, huge searchable archive of video.

    Not so easy for things that depend on the visual content as opposed to the spoken content, but for news it could be amazing.

    Then watch as politicians and captains of industry squirm [ntk.net] at the thought that their every word and twitch is available for searching...
  • https://addons.update.mozilla.org/extensions/moreinfo.php?application=firefox&version=1.0&os=Windows&id=451
  • by maggard ( 5579 ) <michael@michaelmaggard.com> on Tuesday February 15, 2005 @10:36AM (#11677547) Homepage Journal
    Everything promised is already possible using the Synchronized Multimedia Integration Language (SMIL [w3.org]) standard from W3C.

    What's more SMIL is already [w3.org] supported by Quicktime, Real, MS Media Player, & MS Internet Explorer (& Firefox with some effort).

    For platforms SMIL is available on Linux, Linux/PDA, Windows, Windows CE, MacOS, & MacOS X.

    For content creation numerous SMIL tools are out there, including most industry-standard ones.

    For those curious here's a SMIL tutorial [empirenet.com], in SMIL.

    • SMIL is a language for authoring multimedia experiences. Although SMIL 2 has metadata in it, SMIL 2 is so complicated that it's not generally used. I've not seen search engines crawling SMIL files and delivering clips as matches!

      When HTML was started, SGML was around and did all that HTML did, too - and lots more. Do you remember SGML?
      • I've seen SMIL used, indeed it's pretty much the standard for structuring on-demand web video with credits, intros, & ads before or after the feature.

        As to SMIL typically not including useful (search-engine-friendly) metadata, that's more an issue with authors not taking advantage of it than a shortcoming in the format.

        Presumably with multimedia search becoming a standard service we'll see sites getting smart about exposing their resources and attracting users.

        But, yes I recall SGML - I used to

  • I could swear I gave a talk on something very similar back in 2000 at ACM Hypertext. The OvalTine project at UNC (no, don't ask, it was a silly name) allowed a user to markup video by tracking faces/heads automatically from frame to frame, and attaching a URL to them once. (You could also attach URLs to any other object in the video stream, but the object tracker we had running best at that time was for heads.) Clicking on any object that was so tagged would pop up a browser window (or the appropriate ap
    • Every encapsulation format has its advantages and disadvantages. We preferred Ogg and its open codecs because it allows streaming without having to manipulate the file format. Ogg is flat and sequential, while QuickTime is hierarchical. With Annodex, you can embed annotations on the fly while streaming.

      Also, there are development tools available as open source code on all platforms for Ogg, while QuickTime is not available on Linux, which is where we started developing.

      BTW: With Annodex, links are als
      • Thanks, I was having a heck of a time getting through to the details due to that lovely /. effect. :)

        I'm curious though - why couldn't you annotate while streaming with MPEG-4? (There are a number of open source projects to manipulate MPEG-4, which offers the same sort of multi-track format QT does. *We* just happened to be using QT because back in '98 when OvalTine was started, that was the only thing out there that came close to filling the bill. Now, MPEG-4 does, no QT necessary.) We looked into it,
  • The catch is, the media to be searched has to be prepped first.

    Holy fuck, that's just like saying text files have to be "prepped [w3.org]" before they can be part of a global hypertext system [w3.org].

    Dear god, whatever shall we do.

  • I, for one, welcome our new Multimedia Hyperlinked World Video Web Overlords.

    I suggest we call them WiVi.
  • I've looked through the included docs, and the website seems Slashdotted. In what kind of parameters is the index produced? That is, what kinds of questions can a searcher ask of the index? Like audio tempo/BPM, or style signatures (pitches/rhythms), or some other human-recognizable values? I don't expect it can search for semantics, like "find the songs with the words 'darling' and 'chainlink' in this directory". Once indexed, in what kinds of nonexpert terms can the content be searched?
    • After digging through the painfully slow website (how do they expect to serve media files?), it looks like Annodex is kinda BS. I see no evidence of meaningful analysis of the data for searching - just ways to visualize the same old DSP of time/spectrum power data. And the Annodex format itself is just a metadata interleave format that lets you insert the XML you create by hand into the video, with a detailed timecode URI format for fragments. So what? HTTP does byte-range requests, so I can index my own co
