Podcast App Breaker Adds Support For JSON Feed, Believes the RSS Alternative Could Benefit Podcast Ecosystem (medium.com) 57
Erik Michaels-Ober, the creator of popular podcast app Breaker: The decentralized structure of podcasts creates a chicken-and-egg problem for JSON Feed to gain adoption. There's no incentive for podcasters to publish in JSON Feed as long as podcast players don't support it. And there's no incentive for podcast players to support JSON Feed as long as podcasters don't publish in that format. Breaker is hoping to break that stalemate by adding support for JSON Feed in our latest release. As far as we know, Breaker is the first podcast player to do so. Unlike other features that differentiate Breaker, we encourage our competitors to follow our lead in this area. The sooner all podcast players support JSON Feed, the better positioned the entire podcast ecosystem will be for the decades to come.

JSON is more compact than XML, making it faster for computers to transfer and parse, while making it easier for humans to read and write. Updating Breaker to support JSON Feed was fun and easy. It took us less than a day from when we started working on it to when the change was submitted to the App Store.

Update: Julian Lepinski, creator of Cast (an app that offers the ability to record, edit, publish and host podcast files), announced on Tuesday: Like a lot of software, much of Cast's internal data is stored in JSON, and publishing JSON data directly would be pretty straightforward as a result. So I sunk my teeth in, and in about half a day I'd added experimental JSON Feed support to podcasts published with Cast.
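For readers who haven't seen the format: a minimal podcast feed under the jsonfeed.org version 1 spec looks roughly like the sketch below (built in Python for illustration; all URLs and titles are placeholders, not a real feed).

```python
import json

# Minimal JSON Feed for a hypothetical podcast, per the jsonfeed.org
# version 1 spec. Enclosures become "attachments" on each item.
feed = {
    "version": "https://jsonfeed.org/version/1",
    "title": "Example Podcast",
    "home_page_url": "https://example.com/",
    "feed_url": "https://example.com/feed.json",
    "items": [
        {
            "id": "https://example.com/episodes/1",
            "title": "Episode 1",
            "content_text": "Show notes for the first episode.",
            "attachments": [
                {
                    "url": "https://example.com/episodes/1.mp3",
                    "mime_type": "audio/mpeg",
                    "duration_in_seconds": 1800,
                }
            ],
        }
    ],
}

print(json.dumps(feed, indent=2))
```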
Re: (Score:2)
Are you sure?
This is nothing more than the latest "Mac vs PC" or "Windows vs Linux" argument: JSON vs RSS.
Re:Because propaganda (Score:5, Interesting)
Hey, I'm the one who submitted the story a few days back. I am neither the creator of the format, nor do I have no relation to the guys who created JSON Feed, nor do I have any relation to whoever submitted this summary today. Check my comment history. I'm just a guy who's been around Slashdot for way too long and thought it was weird that this format I was seeing reported everywhere else for an entire week hadn't yet been reported on Slashdot. While I had heard of the two guys behind JSON Feed prior to its announcement, I don't know either of them personally or professionally; I don't follow either of them via blogs, podcasts, Twitter, or anything else; and I don't have any vested interests in the format, other than liking that we finally have some movement in that space. Believe me or not. It's no skin off my nose.
As for the replies to the story, frankly, most of the replies were clearly from people, such as yourself, who hadn't bothered following the links I provided, since they latched onto the incorrect notion that it's just a reimplementation of RSS in JSON. I'll grant that I should've done a better job of making it clear that wasn't the case in my summary (also worth noting: I simply wouldn't have submitted the (non-)story if all it was was RSS in JSON), and I think it's unfortunate that the name of the format is "JSON Feed", since that seems to be driving much of the confusion, but there IS more to it than just "wannabe-rss-replacement-in-json", which you'd know if you had actually read any of the stuff I linked.
Not that I expected you or anyone else to do so, of course. After all, this is still Slashdot, and no one here reads the articles. ;)
Re: (Score:2)
Oops, typo. Instead of...
nor do I have no relation to the guys who created JSON Feed
...it should instead be...
nor do I have a relation to the guys who created JSON Feed
Obviously, the distinction is important.
One of many (Score:2)
At this point, podcast app Breaker is following the herd; anybody who's been tracking it has seen most of the actively-developed feed readers add support for the new JSON Feed format - and why not? It's something 'new' for the developers to do, so it's a bit more fun than the normal drudgery.
The problem that remains, though, is that there aren't too many publishing apps that use it yet.
Re: (Score:2)
JSON is not a new format, and the fact that it's being used to rejig RSS feeds is proof that people don't understand that it's just another format.
Re: (Score:1)
Podcasts should also support UUencoded streams. Because if wasting 3x the bandwidth for JSON is a good idea, then so is UUencoding.
Unicode safety (Score:2)
How does JSON waste three times the bandwidth compared to some bloated XML shite from 1999 or whatever? XML
Let me hazard a guess as to what HornWumpus might have meant:
XML can be encoded either in UTF-8 or in ASCII with numeric character references (such as я). JSON can be encoded either in UTF-8 or in ASCII with escape sequences (such as \u044F). But many JSON libraries, such as the one in PHP, use escape sequences by default [stackoverflow.com] to fit safely through a channel with any encoding that has ASCII as a subset. If your XML library defaults to UTF-8 but your JSON library defaults to escaping, and you are encodi
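That default is easy to demonstrate with Python's json module, which (like the PHP library mentioned above) emits ASCII escape sequences unless told otherwise - a quick sketch:

```python
import json

# Python's json module escapes non-ASCII by default, so output is
# ASCII-safe but larger; ensure_ascii=False emits raw UTF-8 instead.
text = {"title": "я" * 10}  # ten Cyrillic characters

escaped = json.dumps(text)                   # "\u044f" escape sequences
utf8 = json.dumps(text, ensure_ascii=False)  # raw UTF-8 characters
```

Both forms decode to the same data; only the wire size differs.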
Faster? (Score:2)
Re: (Score:2)
I really don't think the parsing speed of RSS's XML is going to be an issue here...
The problem isn't reading it. It's building the DOM that goes behind anything XML. That DOM incurs a slight overhead: building it and giving it all the abilities to move forward, backward, to the n-th node, etc. is what *some* people have massive issues with. Now that sounds a lot like an issue with the thing that's in charge of building the DOM, and you'd be correct. Lots of XML libraries automatically do tons of things that no one needs, but really some of that can be argued for JS engines as
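To be fair, most XML libraries also offer a streaming (SAX-style) mode that sidesteps the DOM entirely. A sketch with Python's standard library, on made-up feed data:

```python
import io
import xml.etree.ElementTree as ET

# Made-up feed with 1000 items, for illustration only.
xml_doc = "<channel>" + "".join(
    f"<item><title>Episode {i}</title></item>" for i in range(1000)
) + "</channel>"

# Streaming parse: handle each <item> as soon as it closes, then drop
# its subtree instead of keeping the whole DOM in memory.
titles = []
for event, elem in ET.iterparse(io.StringIO(xml_doc), events=("end",)):
    if elem.tag == "item":
        titles.append(elem.find("title").text)
        elem.clear()  # discard the subtree we no longer need
```

So the DOM overhead is a choice of API, not something XML itself forces on you.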
Compression (Score:2)
Isn't compression more effective for XML, as it would reduce the redundancy and likely eliminate the file-size advantage of JSON?
I would also argue that in many ways XML is easier for humans to write than JSON, contrary to the supposition in the summary.
Re: (Score:2)
Minification before compression (Score:2)
JSON's size advantage over XML comes largely from not having to repeat an element's tag name at the end of each non-empty element. Compression eliminates some of this advantage but not all. For one thing, more efficient encoding before compression allows more source data to fit into Gzip's 32K window. For another, the compressor doesn't have to spend bits on a backward reference for each end tag, and the backward references it does emit can be shorter because they refer to more recent data. It's the same re
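A rough way to check that with the standard library, using made-up but structurally comparable feeds (exact sizes will vary with real data):

```python
import gzip
import json

# Equivalent feeds: XML repeats every tag name as an end tag,
# JSON does not. The data itself is a placeholder.
items = [{"title": f"Episode {i}"} for i in range(500)]
xml_doc = ("<channel>" + "".join(
    f"<item><title>{it['title']}</title></item>" for it in items
) + "</channel>").encode()
json_doc = json.dumps({"items": items}).encode()

xml_gz = gzip.compress(xml_doc)
json_gz = gzip.compress(json_doc)
```

Comparing the four lengths shows compression shrinking both formats and narrowing, without fully erasing, the raw-size gap.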
Re: (Score:2)
I doubt someone would be able to notice the parsing speed difference even on a 16MHz Arduino.
Re: (Score:2)
What the Arduino is gonna need is more Memory*
*Recent experience: parsing JSON into a stack of arrays in order to run a simple loop driving LED strands. While the Arduino could parse the JSON and get it stacked into the arrays correctly, there was barely enough RAM left to load the libraries needed to drive the LEDs. Eventually I was able to get it running, but with gnarly, ugly code.
Re: (Score:2)
AFAIK libraries aren't loaded into RAM, they're compiled to the flash memory [wikipedia.org] for the code.
Why? (Score:2)
This is probably hopeless, but I'm still waiting for someone to explain what this new feed format does above and beyond the XML one. So far the entire argument is "Because it's JSON!!111eleventy!"
The whole complaint about dealing with malformed XML isn't going to be fixed with this new format. If people are malforming their XML, then they're also going to be malforming their JSON too.
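That's easy to check with the standard library: both parsers are strict, and each rejects its format's most common malformation (an unescaped "&" in XML, a trailing comma in JSON).

```python
import json
import xml.etree.ElementTree as ET

# A stray "&" breaks XML; a trailing comma breaks JSON.
bad_xml = "<feed><title>Tom & Jerry</title></feed>"
bad_json = '{"title": "Tom & Jerry",}'

def parses_xml(s):
    try:
        ET.fromstring(s)
        return True
    except ET.ParseError:
        return False

def parses_json(s):
    try:
        json.loads(s)
        return True
    except json.JSONDecodeError:
        return False
```

Switching formats moves the malformations around; it doesn't remove them.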
Re: (Score:2)
He should update that to now include USB-C
Re: (Score:2)
The whole complaint about dealing with malformed XML isn't going to be fixed with this new format. If people are malforming their XML, then they're also going to be malforming their JSON too.
And smart programmers are already using a feed-parsing library built on an XML parser with relaxed parsing logic, to handle malformed XML.
Guess what - if your feed is so broken that the popular podcatchers can't handle it, nobody is going to listen to your show anyway; it's not like incentives don't ex
Re: (Score:2)
Maybe by the end of the year we'll be reading about bored developers who claim email is unusable crap because it's not a JSON feed.
You take that back right this second, lest someone actually read that and get an idea about better email!
Re: (Score:2)
> I'm still waiting for someone to explain what this new feed format does above and beyond the XML one
It provides superior readability (for debugging), a simpler format (more compact in raw bytes), less specification (which leads to more consistent implementations - a minimal subset is what most people end up using with XML anyway), and better 3rd-party support (XML parsing behavior depends on many, many choices, where JSON has far fewer). This is one of those, less common, cases where less specificity in a
Re: (Score:2)
And now every client needs both. Yay?
Re: (Score:2)
I have to disagree with most of that.
The problem (for me) is that those are completely arbitrary judgement calls that don't actually result in more correct output or development.
Sure, XML is verbose, sometimes ridiculously so. But I cannot believe that anyone would consider it less readable and debuggable than a sufficiently complex JSON structure that quickly digs into quote, comma, and bracket hell. JSON is more *concise*, yes. But clearer? I absolutely beg to differ. It's like arguing "Should I put
Re: (Score:2)
> The problem (for me) is that those are completely arbitrary judgement calls that don't actually result in more correct output or development.
Is XML over-structured? Is JSON under-structured? Is there somewhere in the middle? I'm not sure these questions have value.
People (including programmers) are lazy, and looking through a standard to try to decide how to "properly" address a problem within that standard (XML) takes effort. Is my implementation right? What if I missed something? That cursory overhead (of XML standar
JSLT (Score:2)
XML can transform with XSLT clientside
So can JSON, through JSLT. To get started with JSLT, see Vanilla JS [vanilla-js.com] and DOM Intro [mozilla.org].
(Hint: JSLT is just JavaScript that builds a DOM.)
<link rel="alternate"> (Score:2)
I concede that JSON has no exact counterpart to <?xml-stylesheet ?> processing instruction. Instead, it needs an HTML file to kick off the transformation. However:
This means that your human readers go to a different url than machine readers
In the special case of a feed, both the human and machine readers go to the HTML version, be it HTML5 or XHTML. The machine reader finds the <link rel="alternate"> element whose type attribute has a supported value and adds the feed's URI from the value of its href attribute. The browser used by a human reader instead looks for a <s
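That discovery step is simple to sketch with Python's standard library, run against a hypothetical page head with placeholder URLs:

```python
from html.parser import HTMLParser

# Hypothetical page head advertising both a JSON and an RSS feed.
html_doc = """<html><head>
<link rel="alternate" type="application/json" href="https://example.com/feed.json">
<link rel="alternate" type="application/rss+xml" href="https://example.com/feed.xml">
</head><body></body></html>"""

class FeedFinder(HTMLParser):
    """Collect (type, href) pairs from <link rel="alternate"> elements."""
    def __init__(self):
        super().__init__()
        self.feeds = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "link" and a.get("rel") == "alternate":
            self.feeds.append((a.get("type"), a.get("href")))

finder = FeedFinder()
finder.feed(html_doc)
```

A feed reader then picks whichever advertised type it supports.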
Applying the XSLT or not (Score:2)
There is no link tag: the file they see IS the feed, presented as very nice HTML via XSLT*. If they are savvy enough to want to load it in a feed reader, then they just copy that URL into their feed reader.
When a web browser parses the RSS feed, it applies the XSLT referenced in the <?xml-stylesheet ?> processing instruction to transform it into XHTML. How does a feed reader know not to apply the XSLT when it parses the same feed? Or does it apply each <?xml-stylesheet ?> processing instruction in turn and accept the first one yielding a root element in an XML namespace that it understands?
Is it possible to produce more than RSS and XHTML from one feed?
and doesn't even need javascript enabled.
But it does need XSLT enabled. Wouldn't some
Accept: negotiation (Score:2)
This means that your human readers go to a different url than machine readers
Not necessarily. Human readers will send HTTP requests whose Accept: header lists text/html before application/json, machine readers vice-versa. Then the server can use media type negotiation to serve HTML to humans or a feed to machines. A server behind a public cache, such as a server using a CDN or cleartext HTTP, does need to send Vary: Accept in the response [stackoverflow.com] so that the cache gets both the human and machine versions.
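A minimal sketch of that negotiation (deliberately ignoring q-values and wildcards, which a real server must handle per the HTTP spec):

```python
def choose_representation(accept_header):
    """Naive Accept negotiation: serve the first supported media type
    the client lists. Real servers must also honor q-values, */*, and
    send Vary: Accept when a shared cache sits in front."""
    supported = {"text/html": "html", "application/json": "feed"}
    for part in accept_header.split(","):
        media_type = part.split(";")[0].strip()  # drop any ;q= parameters
        if media_type in supported:
            return supported[media_type]
    return "html"  # default for clients we don't recognize
```

A browser listing text/html first gets the page; a feed reader listing application/json first gets the feed, all from one URL.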
chicken-and-egg problem (Score:2)
The chicken-and-egg problem is easily solved in this case:
"What chicken?"
JSON vs. XML benchmarks (Score:2)
JSON is more compact than XML, making it faster for computers to transfer and parse
Technically correct, but seriously you're going to use that as a main part of your argument? ZOMG post the benchmarks. What's the difference measured in on today's computing power, microseconds? nanoseconds? Give me a break. I'll give you points for creativity and humor.
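For what it's worth, here's how one might actually run that benchmark with the Python standard library - a micro-benchmark sketch on made-up feed data, with no claim here about which side wins (absolute numbers vary by machine and parser):

```python
import json
import timeit
import xml.etree.ElementTree as ET

# Structurally equivalent made-up feeds, 100 items each.
xml_doc = "<channel>" + "".join(
    f"<item><title>Episode {i}</title></item>" for i in range(100)
) + "</channel>"
json_doc = json.dumps({"items": [{"title": f"Episode {i}"} for i in range(100)]})

# Time 200 full parses of each document.
xml_time = timeit.timeit(lambda: ET.fromstring(xml_doc), number=200)
json_time = timeit.timeit(lambda: json.loads(json_doc), number=200)

print(f"XML:  {xml_time:.4f}s   JSON: {json_time:.4f}s")
```

On feeds this small, both finish in well under a millisecond per parse - which rather supports the "give me a break" point.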
Humans to read and write? (Score:2)
I don't know about you, but I think there's something wrong if you're manually editing the XML or JSON files by hand every time you post a new podcast. It might work for those once-in-a-while podcasts that come around monthly or less, but I can't see someone actually not using a program to maintain the file, or having it automatically generated...
Re: (Score:2)
The problems isn't XML vs. JSON. It's data silos. (Score:5, Insightful)
XML didn't kill RSS Feeds. Switching to JSON isn't going to help it either. What is killing RSS Feeds? It is the big social media data silos. Facebook, Google+, maybe Twitter.
Facebook is the 200 lb elephant in the room, so I'll point to them. Instead of letting end-users select what RSS feeds / 'subscriptions' they wanted to add to their timeline, Facebook made their own non-standardized API that content authors need to work with in order to let end-users access the content the way they want. Google+ did the same thing. This takes time and energy from the content creators, which is a limited resource. Instead of building an RSS aggregator into their social media sites, those companies decided to create custom APIs that can only be used with 'their' social media site. All of these moves are to get you consuming on their site, and not how you'd want to consume it.
Re: (Score:1)
Bingo - honestly, JSON vs. XML vs. ASN.1 binary files isn't what's killing RSS feeds. (Heck, we're supposed to be using Atom instead of RSS anyway!)
It's the fact that services like Facebook and Twitter are how people get their feeds these days, and none of it interacts with RSS. Even blogs have given way to Twitter follows, to YouTube channels, to Facebook groups.
Get your heads out of your backside and look around.
Anyway...
"JSON is more compact than XML, making it faster for computers to transfer and parse..."
Re: (Score:2)
What sort of an ignorant, stupid statement is that? GZIP'd XML is more compact than uncompressed XML, so which do you think is faster to parse?
I once (early 2000s?) benchmarked the fastest XML parser I could get my hands on against an S-expression parser. The S-expression parser was several times faster. Even worse, I was just firing NOP events in the SAX parser. That made a lasting impression on me as to what a clusterfuck of a standard XML has to be.
Re: (Score:2)
... GZIP'd XML is more compact than uncompressed XML, so which do you think is faster to parse? Yeah, the uncompressed (LARGER) XML.
That's not strictly accurate, whatever the base format is. Whether the uncompressed version is faster to parse depends greatly on memory speed. It can take longer to pull in the uncompressed version and parse it than the compressed version, due to the time wasted fetching from higher-level, slower memory.
It's not intuitive that decompression + parsing should be faster than parsing alone, but it is far from improbable.
Not like alot of software (Score:2)
Like a lot of software, much of Cast's internal data is stored in JSON,
Yeah, not really, eh? XML, JSON, etc. are all serialisation formats, and 'internal' data isn't normally how we refer to the data stored on disk. Podcast feeds are XML; stop trying to break them. Do you want to serve HTML up as JSON too?
Are people editing RSS by hand? (Score:1)
RSS is entirely computer generated and computer decoded. It is an established schema. It is widely supported, and every platform that could possibly play a podcast has XML libraries.
The size hardly makes a difference - I checked one of the feeds I like; it was 83 KB, and full of podcasts in the 8-25 MB range. And most of