Podcast App Breaker Adds Support For JSON Feed, Believes the RSS Alternative Could Benefit Podcast Ecosystem (medium.com) 57

Erik Michaels-Ober, the creator of popular podcast app Breaker: The decentralized structure of podcasts creates a chicken-and-egg problem for JSON Feed to gain adoption. There's no incentive for podcasters to publish in JSON Feed as long as podcast players don't support it. And there's no incentive for podcast players to support JSON Feed as long as podcasters don't publish in that format. Breaker is hoping to break that stalemate by adding support for JSON Feed in our latest release. As far as we know, Breaker is the first podcast player to do so. Unlike other features that differentiate Breaker, we encourage our competitors to follow our lead in this area. The sooner all podcast players support JSON Feed, the better positioned the entire podcast ecosystem will be for the decades to come. JSON is more compact than XML, making it faster for computers to transfer and parse, while making it easier for humans to read and write. Updating Breaker to support JSON Feed was fun and easy. It took us less than a day from when we started working on it to when the change was submitted to the App Store.

Update: Julian Lepinski, creator of Cast (an app that offers the ability to record, edit, publish and host podcast files), announced on Tuesday: Like a lot of software, much of Cast's internal data is stored in JSON, and publishing JSON data directly would be pretty straightforward as a result. So I sank my teeth in, and in about half a day I'd added experimental JSON Feed support to podcasts published with Cast.
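For reference, a podcast-style feed in the JSON Feed v1 format (jsonfeed.org) can be built and serialized with nothing but the standard library. A minimal sketch — every title and URL below is a made-up placeholder, not a real feed:

```python
import json

# A minimal podcast feed in JSON Feed v1 form. All names and URLs
# are illustrative placeholders.
feed = {
    "version": "https://jsonfeed.org/version/1",
    "title": "Example Podcast",
    "home_page_url": "https://example.com/",
    "items": [
        {
            "id": "ep-001",
            "title": "Episode 1",
            "content_text": "Show notes for episode 1.",
            # "attachments" is the JSON Feed analogue of RSS <enclosure>.
            "attachments": [
                {
                    "url": "https://example.com/ep-001.mp3",
                    "mime_type": "audio/mpeg",
                    "duration_in_seconds": 1800,
                }
            ],
        }
    ],
}

serialized = json.dumps(feed, indent=2)
parsed = json.loads(serialized)
print(parsed["items"][0]["attachments"][0]["mime_type"])  # audio/mpeg
```

A player that already parses JSON internally can consume this with a plain `json.loads`, which is presumably why both Breaker and Cast report support taking under a day.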
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • At this point, Podcast App Breaker is following the herd; anybody who's been tracking it has seen most of the actively-developed feed readers support the new JSON feed format - and why not - it's something 'new' for the developers to do, so it's a bit more fun than the normal drudgery.

    The problem that remains, though, is that there aren't too many publishing apps that use it yet.

    • JSON is not a new format, and the fact that it's being used to rejig RSS feeds is proof that people don't understand that it's just another serialization format.

      • Podcasts should also support UUencoded streams. Because if wasting 3x the bandwidth for JSON is a good idea, then so is UUencoding.

  • I really don't think the parsing speed of RSS's XML is going to be an issue here...
    • I really don't think the parsing speed of RSS's XML is going to be an issue here...

      The problem isn't reading it. It's building the DOM that goes behind anything XML. That DOM incurs a slight overhead. Building a DOM, giving it all the abilities to move forward, backwards, n-th node, etc is what *some* people have massive issues with. Now that sounds a lot like an issue with the thing that's in charge of building the DOM and you'd be correct. Lot's of XML libraries have tons of things that they automatically do that no one needs, but really some of that can be argued for JS engines as

  • Isn't compression more effective for XML, since it would reduce the redundancy and likely eliminate the file size advantage of JSON?

    I would also argue that in many ways XML is easier for humans to write than JSON, contrary to the supposition in the summary.

    • JSON's size advantage over XML comes largely from not having to repeat an element's tag name at the end of each non-empty element. Compression eliminates some of this advantage but not all. For one thing, more efficient encoding before compression allows more source data to fit into Gzip's 32K window. For another, the compressor doesn't have to spend bits on a backward reference for each end tag, and the backward references it does emit can be shorter because they refer to more recent data.
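That trade-off is easy to measure with the standard library. The sketch below encodes an equivalent 50-item feed (invented data) in both formats and compares raw and gzipped sizes; the raw JSON comes out smaller, and compression narrows the gap without necessarily closing it:

```python
import gzip
import json
import xml.etree.ElementTree as ET

# Build an equivalent 50-item feed in both formats (fake data).
items = [{"title": f"Episode {i}", "url": f"https://example.com/{i}.mp3"}
         for i in range(50)]

json_bytes = json.dumps({"items": items}).encode()

root = ET.Element("channel")
for it in items:
    item = ET.SubElement(root, "item")
    ET.SubElement(item, "title").text = it["title"]
    ET.SubElement(item, "url").text = it["url"]
xml_bytes = ET.tostring(root)

# Print raw and gzipped byte counts for each encoding.
for name, raw in [("json", json_bytes), ("xml", xml_bytes)]:
    print(name, len(raw), len(gzip.compress(raw)))
```

The exact compressed numbers depend on the payload shape, which is why a one-line "compression fixes it" claim in either direction deserves a benchmark.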

  • This is probably hopeless, but I'm still waiting for someone to explain what this new feed format does above and beyond the XML one. So far the entire argument is "Because it's JSON!!111eleventy!"

    The whole complaint about dealing with malformed XML isn't going to be fixed with this new format. If people are malforming their XML, then they're also going to be malforming their JSON too.
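The "malformed is malformed in any format" point is easy to demonstrate with the standard library: both parsers below reject their broken input, just with different error types (the inputs are contrived):

```python
import json
import xml.etree.ElementTree as ET

# An unclosed tag and a trailing comma: each is malformed in its format.
bad_xml = "<feed><title>Oops</feed>"  # <title> is never closed
bad_json = '{"title": "Oops",}'       # trailing comma is invalid JSON

for parse, text, err in [
    (ET.fromstring, bad_xml, ET.ParseError),
    (json.loads, bad_json, json.JSONDecodeError),
]:
    try:
        parse(text)
        print("parsed unexpectedly")
    except err:
        print("rejected, as expected")  # both inputs land here
```

Switching serialization formats moves the class of typo (mismatched tags vs. stray commas) without removing the need for publishers to emit valid output.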

    • The whole complaint about dealing with malformed XML isn't going to be fixed with this new format. If people are malforming their XML, then they're also going to be malforming their JSON too.

      And smart programmers are already using a feed parsing library that's using an XML parsing library that has relaxed parsing logic, to handle malformed XML.

      Guess what - if your feed is so broken that the popular podcatchers can't handle it, nobody is going to listen to your show anyway; it's not like incentives don't exist.

      • Maybe by the end of the year we'll be reading about bored developers who claim email is unusable crap because it's not a JSON feed.

        You take that back right this second, lest someone actually read that and get an idea about "better" email!

    • by Jack9 ( 11421 )

      > I'm still waiting for someone to explain what this new feed format does above and beyond the XML one

      It provides superior readability (debugging), a simpler format (more compact in terms of raw bytes), less specification, which leads to more specific implementations (which is what most people end up doing with XML anyway), and better 3rd party support (XML parsing behavior is dependent on many, many choices, where JSON has far fewer). This is one of those less common cases where less specificity in a standard is an advantage.

      • by radish ( 98371 )

        Compare the amount of code in a Lua JSON decoder to any specific Lua XML reader

        And now every client needs both. Yay?

      • I have to disagree with most of that.

        The problem (for me) is that those are completely arbitrary judgement calls that don't actually result in more correct output or development.

        Sure, XML is verbose, sometimes ridiculously so. But I cannot believe that anyone would consider it less readable and debuggable than a sufficiently complex JSON structure, which quickly digs into quote, comma, and bracket hell. JSON is more *concise*, yes. But clearer? I absolutely beg to differ.

        • by Jack9 ( 11421 )

          > The problem (for me) is that those are completely arbitrary judgement calls that don't actually result in more correct output or development.

          Is XML over-structured? Is JSON under-structured? Is there somewhere in the middle? I'm not sure these questions have value.
          People (including programmers) are lazy, and looking through a standard (XML) to decide how to "properly" address a problem within it takes effort. Is my implementation right? What if I missed something?

  • The chicken-and-egg problem is easily solved in this case:

    "What chicken?"

  • JSON is more compact than XML, making it faster for computers to transfer and parse

    Technically correct, but seriously, you're going to use that as a main part of your argument? ZOMG, post the benchmarks. What's the difference on today's computing power: microseconds? Nanoseconds? Give me a break. I'll give you points for creativity and humor.
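For anyone who actually wants numbers, a throwaway microbenchmark is a few lines of standard-library Python. The payloads below are invented feed-shaped documents; whatever the relative speeds come out to on a given machine, the absolute per-document cost is tiny, which is the commenter's point:

```python
import json
import timeit
import xml.etree.ElementTree as ET

# Two equivalent 100-item feed-shaped payloads (invented data).
xml_doc = "<channel>" + "<item><title>Ep</title></item>" * 100 + "</channel>"
json_doc = json.dumps({"items": [{"title": "Ep"}] * 100})

# Time 1000 full parses of each document.
xml_t = timeit.timeit(lambda: ET.fromstring(xml_doc), number=1000)
json_t = timeit.timeit(lambda: json.loads(json_doc), number=1000)

# Report per-document cost in microseconds.
print(f"xml:  {xml_t / 1000 * 1e6:.1f} us/doc")
print(f"json: {json_t / 1000 * 1e6:.1f} us/doc")
```

Either way the result is on the order of microseconds per feed, dwarfed by the network fetch that precedes it.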

  • JSON is more compact than XML, making it faster for computers to transfer and parse, while making it easier for humans to read and write

    I don't know about you, but I think there's something wrong if you're manually editing the XML or JSON files by hand every time you post a new podcast. It might work for those once-in-a-while podcasts that come around monthly or less, but I can't see someone actually not using a program to maintain the file, or having it automatically generated...
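That is how it works in practice: feed files are emitted by tools, not typed by hand. A sketch of what such a generator looks like with the standard library — the function name, show title, and URL are placeholders invented for illustration:

```python
import xml.etree.ElementTree as ET

def add_episode(channel, title, mp3_url):
    """Append one <item> with an <enclosure>, RSS-style.

    Hypothetical helper for illustration; real publishing tools
    would also emit guid, pubDate, length, etc.
    """
    item = ET.SubElement(channel, "item")
    ET.SubElement(item, "title").text = title
    ET.SubElement(item, "enclosure", url=mp3_url, type="audio/mpeg")

rss = ET.Element("rss", version="2.0")
channel = ET.SubElement(rss, "channel")
ET.SubElement(channel, "title").text = "Example Podcast"
add_episode(channel, "Episode 1", "https://example.com/ep1.mp3")

print(ET.tostring(rss, encoding="unicode"))
```

Once generation is automated, the "easier for humans to write" argument mostly evaporates; the human only ever touches the tool, not the markup.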

  • by Hydrian ( 183536 ) on Tuesday May 30, 2017 @12:27PM (#54511965) Homepage

    XML didn't kill RSS Feeds. Switching to JSON isn't going to help it either. What is killing RSS Feeds? It is the big social media data silos. Facebook, Google+, maybe Twitter.

    Facebook is the 200lbs elephant in the room, so I'll point to them. Instead of letting end-users select what RSS feeds / 'subscriptions' they wanted to add to their timeline, Facebook made their own non-standardized API that content authors need to work with in order to let the end-users access the content the way they want. Google+ did the same thing. This takes time and energy from the content creators, which is a limited resource. Instead of building an RSS aggregator into their social media site, those companies decided to create custom APIs that can only be used with 'their' social media site. All of these moves are to get you consuming on their site and not how you'd want to consume it.

    • by Anonymous Coward

      Bingo - honestly, JSON vs XML vs ASN1 binary files isn't what's killing RSS feeds. (Heck, we're supposed to be using ATOM instead of RSS anyway!).

      It's the fact that the likes of Facebook and Twitter are how people get their feeds these days, and none of it interacts with RSS. Even blogs have given way to Twitter follows, to YouTube channels, to Facebook groups.

      Get your heads out of your backside and look around.

      Anyway..
      "JSON is more compact than XML, making it faster for computers to transfer and parse..."

      • What sort of an ignorant, stupid statement is that? GZIP'd XML is more compact than uncompressed XML, so which do you think is faster to parse?

        I once (early 2000s?) benchmarked the fastest XML parser I could get my hands on against an S-expression parser. The S-expression parser was several times faster. Even worse, I was just firing NOP events in the SAX parser. That made a lasting impression on me as to what a clusterfuck of a standard XML has to be.

      • ... GZIP'd XML is more compact than uncompressed XML, so which do you think is faster to parse? Yeah, the uncompressed (LARGER) XML.

        That's not strictly accurate, whatever the base format is. Whether the uncompressed version is faster to parse depends greatly on memory speed. It can take longer to pull in and parse the uncompressed version than the compressed version, due to the time wasted fetching from higher-level, slower memory.

        It's not intuitive that decompression + parsing should be faster than parsing alone, but it is far from improbable.

  • Like a lot of software, much of Cast's internal data is stored in JSON,

    Yeah, not really eh. XML, JSON, etc etc are all serialisation formats, and 'internal' data isn't normally how we refer to the data stored on disk. Podcast feeds are XML, stop trying to break them. Do you want to serve HTML up as JSON too?

  • I'm pretty zealous when it comes to the benefits of json over RSS, but the main point is that json is clearer, more compact, and simpler. Benefits that don't really apply here.

    RSS is entirely computer generated and computer decoded. It is an established schema. It is widely supported, and every platform that could possibly play a podcast has XML libraries.

    The size hardly makes a difference - I checked one of the feeds I like; it was 83 KB, and full of podcasts that were in the 8-25 MB range.
