OASIS Approves OData 4.0 Standards For an Open, Programmable Web
First time accepted submitter Dilettant writes "The OASIS members approved Version 4 of the OData standards, which now also features the long-requested compact JSON as the primary format. OData helps 'simplify data sharing across disparate applications in enterprise, cloud, and mobile devices' by interfacing data sources via a REST-like interface."
JSON Sucks (Score:4, Insightful)
For computer-to-computer data interchange, JSON is not bad. But it's about as human-readable as the Voynich Manuscript.
Re: (Score:3, Insightful)
You actually prefer XML???????
Re: (Score:1)
Using XML is like sticking your nuts in a vice and squeezing them until they burst. Although in the end it's still more pleasant than using XML.
Re:JSON Sucks (Score:5, Insightful)
Absolutely. XML is much more mature. XML has standardized schemas, validation, querying, transformation, a binary format and APIs for interoperability from any language. All JSON really has going for it is that it's already JavaScript.
The funny thing is that, at the end of the day, JSON and XML are the same thing, only syntactically different. Yet the prevailing opinion seems to be that XML is absolute and total shit whereas JSON is some golden calf.
Re: (Score:1)
How so, specifically? I've never had an issue with it, but then I don't use bullshit scripting languages that force me to do lots of XML processing, let my tools do it for me.
So maybe you should rephrase - if you're using c. 1992 scripting languages, XML is total shit.
Re: (Score:2)
Re: (Score:1)
Re: (Score:2)
But it's easier now.
Re: (Score:2)
But the problem is -- as I mentioned in a post further up the page -- that JSON throws away some data type information. So when I use JSON, I have to reconstruct some of my data types after calling from_json. I don't have to do that with XML.
And that's definitely a problem with JSON, not Ruby.
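To make the type-loss concrete, here's a quick Python sketch (the field names are made up; Ruby's to_json/from_json behave analogously):

```python
import json
from datetime import date

record = {"id": 1, "created": date(2014, 3, 20)}

# json has no native date type; without help, dumps() raises TypeError,
# so the usual workaround is to stringify unknown types on the way out...
payload = json.dumps(record, default=str)

# ...which means the type information is gone on the way back in:
parsed = json.loads(payload)
print(type(parsed["created"]))   # <class 'str'> -- the date is now just text
print(parsed["created"])         # 2014-03-20

# The caller must know, out of band, that this field was a date:
restored = date.fromisoformat(parsed["created"])
```

An XML document with a schema can declare the field as xs:date, so the consumer doesn't have to guess.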
Re: (Score:3, Interesting)
"The funny thing is that, at the end of the day, JSON and XML are the same thing, only syntactically different."
Yes, exactly. But XML is readable by people. JSON is not. Just try to read any big dataset in JSON, especially if it's minified. Good luck. At least with XML you have a shot.
Having said that: there are lots of good tools for converting from one to the other, so it could be a lot worse.
You make a good point about standards and validation, though, too. That's why business data interchanges are generally built on XML, and not JSON. Even though JSON is generally more efficient.
Re: (Score:1)
Yes, exactly. But XML is readable by people
No it isn't. Neither of them is. Not directly. They are *both* readable by humans with a good browser/editor. Tell the editor guys to get crackin' if they haven't already. When dealing with any format that's a hierarchy, you should be able to view the top level and click a little '+' or something to open it. Visual Studio even does that with C, for cryin' out loud. Class graph browsers for C++ have been out for, like, forever. I don't work with XML or JSON.
Re: (Score:2)
"No it isn't. Neither of them is. Not directly."
Yes, it is. If you don't believe me, I have posted a link to a simple example below. Not only is the JSON harder to read, it throws away data type information unnecessarily.
"When dealing with any format that's a hierarchy, you should be able to view the top level and click a little '+' or something to open it."
This is old stuff. TextMate (just for one example), has been doing that for a long time. Here's an example [postimg.org]. Notice how the line numbers skip where the code is collapsed.
You can also open XML in Firefox, and again it does exactly the same thing: you can expand and collapse levels at will.
Re: (Score:2)
XML is much more mature. XML has standardized schemas, validation, querying, transformation, a binary format and APIs for interoperability from any language.
Which means that XML will still be around in 10 years, and can safely be used today for major projects.
Re: (Score:3)
Re: (Score:1)
IF the situation calls for HUMANS to read the data, I sure as hell do prefer XML. No contest. JSON is virtually unreadable.
Like I said: it's fine for computer data interchange, but when it comes to human intervention, give me XML any day.
I'm not claiming XML is perfect, by any stretch of the imagination. But when humans rather than computers need to deal with the data, it beats the shit out of JSON.
Re: (Score:1)
How is JSON hard to read? It's just lists of key/value pairs.
Re: (Score:1, Insightful)
Really. None of that totally unnecessary tag BS inherited from a printer definition spec (of all absurd things). And key/value pairs are a hell of a lot easier to insert into a database, in addition to being easier to read.
Re: (Score:3)
"Really. None of that totally unnecessary tag BS inherited from a printer definition spec (of all absurd things.) And key/value pairs are a hell of a lot easier to insert into a database in addition to being easier to read."
Key-value pairs are a tiny subset of all data structures. There are many structures they struggle to represent well, and when they try, the result (to the human eye) is a huge mess.
You're entitled to your opinion of course. But I think you're looking at it from a very narrow perspective. Have you ever actually had to program for the exchange of complex data sets? By that I mean something quite a bit more involved than a web store?
Re: (Score:2)
"Yes. I'm lucky LISP can parse XML since they are really only just a special case of S-Expressions. Once out of that horrid mess of printer tags it was much more straightforward to validate them and insert them in all their complexity into a nicely normalized relational database."
You are conflating XML and SGML. While technically XML is a subset of SGML, it doesn't contain "printer tags". It literally doesn't have any. XML tags are strictly data description.
Saying that XML is SGML is kind of like saying "car" is LISP. The former is a clearly-specified tool used for certain specific things. The latter is a generalized tool for many things. You wouldn't write an entire language like LISP to perform the function car performs. Nor would you write a specification like SGML (which does vastly more) just to perform the function XML performs.
Re: (Score:2)
"No it is not weird. XML is weird because it contains and is based on printer control cruft. Lots of printer control cruft. An unnecessary tag is a tag is a tag is a fucking tag."
It does nothing of the sort. XML is a data description language. Its parent language -- SGML -- had a LOT of printer specification stuff in it. But XML has NONE. Not one little bit.
Jeez, guy. Pick up a book.
"An unnecessary tag is a tag is a tag is a fucking tag."
Then show me how to do the same thing without those tags that you call "unnecessary". Where are you going to get the information necessary to validate your data?
I linked to an example further up the page. XML preserved the data type, while JSON just turns any data it doesn't understand into a string.
Re: (Score:2)
XML has structures, standards, validation and flexibility that JSON sorely lacks. As someone else wrote above, the main thing JSON has going for it is that it's already JavaScript. Big deal.
I linked to a clear example further up. XML preserved my simple data structure. JSON threw away information about my data that I would have to supply myself later, if I were to use JSON to exchange it.
Re: (Score:2)
Neither JSON nor XML is easily writable without special tools.
YAML attempts to be writable, but the grammar and parser are huge and slow.
RSON [google.com] is a superset of JSON that is eminently readable/writable, and much simpler than YAML, allowing, for example, for human-maintained configuration files.
The reference Python parser runs about as fast as the pure-Python JSON parser in the standard library.
Re: (Score:2)
"Neither JSON nor XML is easily writable without special tools. "
Sure they are. Take just about any object in Ruby and call [object].to_xml or [object].to_json.
More relevant to the discussion though, I think, is what someone else said above:
"XML has standardized schemas, validation, querying, transformation, a binary format and APIs for interoperability from any language. All JSON really has going for it is that it's already JavaScript."
I would have to say the same for RSON.
While it is true that they are syntactical versions of one another, XML is far less ambiguous. In a way, XML versus JSON is a lot like Java versus JavaScript. The former have more tightly defined specifications and less ambiguity. (I.e., Java will not let you treat a string like an integer without an explicit conversion.)
Re: (Score:2)
But I guess I suck at that myself, since we're obviously not communicating properly.
Obviously there are libraries in all sorts of languages to read/write both.
Re: (Score:3)
You actually prefer XML???????
Yes, as I deal in data interchange all the time. XML is great because it allows schema definition/sharing (XSD), and XSLT is a mature transformation language that, after many years in the woods, is now available with functional capabilities (XSLT 3.0).
The only problem we have is that often, endpoint partners/vendors don't provide the XSD, nor do they share how they plan to validate files we send them. Or they ignore our XSD. But I still can't imagine things would be better if JSON were the interchange format.
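For anyone who hasn't worked with XSD: even a minimal, hypothetical schema like the one below is something both endpoint partners can machine-validate against before processing a document -- which is exactly what goes missing when a partner won't share theirs.

```xml
<!-- customer.xsd: a made-up minimal schema for illustration only;
     real interchange schemas are much larger, but even this much
     pins down element order and data types for both parties. -->
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="customer">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="id"     type="xs:integer"/>
        <xs:element name="name"   type="xs:string"/>
        <xs:element name="joined" type="xs:date"/>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>
```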
Re: (Score:2)
Does JSON support namespaces? AFAIK it doesn't, and that would seem to make it suitable only for fairly simple data interchange and not really scalable. As far as which is best visually... XML is a bit wordy/busy, especially if it uses a lot of namespaces, but it's a pretty minor problem given that with both XML and JSON, it's a piece of piss to write a nice visual editor. The important thing for me is having a solid platform for building applications, and XML has the capability and maturity for that - even if it is a bit ugly!
Re: (Score:2)
Does JSON support namespaces? AFAIK it doesn't, and that would seem to make it suitable only for fairly simple data interchange and not really scalable. As far as which is best visually... XML is a bit wordy/busy, especially if it uses a lot of namespaces, but it's a pretty minor problem given that with both XML and JSON, it's a piece of piss to write a nice visual editor. The important thing for me is having a solid platform for building applications, and XML has the capability and maturity for that - even if it is a bit ugly!
I know it's bad form replying to my own post, but it does appear that there is some kind of namespacing going on in the OData spec [oasis-open.org]. Does anyone know if this namespacing is part of the JSON standard, or is it just a convention that OASIS are using?
:D
Either way, I still prefer XML!
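To (partly) answer the question: JSON itself defines no namespace mechanism at all. What the OData v4 JSON format does is reserve annotation names -- keys beginning with "@odata." -- as a convention layered on top of plain JSON. A sketch of separating those annotations from the data (payload abridged and hypothetical):

```python
import json

# An abridged OData-v4-style JSON response. The "@odata.*" keys are
# protocol annotations defined by the OData spec, not by JSON itself;
# everything else is ordinary data.
payload = json.loads("""
{
  "@odata.context": "http://host/service/$metadata#Customers",
  "value": [
    {"@odata.id": "Customers(1)", "ID": 1, "Name": "Alice"}
  ]
}
""")

def strip_annotations(obj):
    """Recursively drop the @odata.* annotation keys, keeping the data."""
    if isinstance(obj, dict):
        return {k: strip_annotations(v) for k, v in obj.items()
                if not k.startswith("@odata.")}
    if isinstance(obj, list):
        return [strip_annotations(v) for v in obj]
    return obj

data = strip_annotations(payload)
print(data)   # {'value': [{'ID': 1, 'Name': 'Alice'}]}
```

So it is a convention -- but a spec-defined one, which is weaker than XML namespaces (no URI-scoped vocabularies, no prefix remapping).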
Re: (Score:1)
Yay! (Score:3)
If (PlatformIndepenentProgrammingRelated == True) And (RelatedToJava == False) {
Good!!!
}
They cracked the code on good web programming standards lol.
Re: (Score:1)
If (PlatformIndepenentProgrammingRelated == True) And (RelatedToJava == False) {
Good!!!
}
They cracked the code on good web programming standards lol.
string message = "";
If (PlatformIndepenentProgrammingRelated &&
!RelatedToJava)
{
message = "Good!!";
}
Re: (Score:1)
std::string message because I hate the guy who gets stuck maintaining things
Considering it's the std:: C++ library, you should enable it correctly in stdafx.h for your whole project.
#include <string>
using namespace std;
Re: (Score:2)
Ok, you have types with leading lowercase letters, variables with both leading uppercase and lowercase letters, an "If" keyword with a capital "I" as in Microsoft BASIC, and you initialize a string unnecessarily. Please turn in your geek cred card. :-)
He's logged in with a Google+ account. He never HAD one. In fact he actually likes beta!
Re: (Score:1)
Ok, you have types with leading lowercase letters, variables with both leading uppercase and lowercase letters... with a capital "I"
Copy and paste for the win.
and you initialize a string unnecessarily.
Necessarily. I initialize the string correctly and give it a NULL value before I go ahead and play with it.
I'd love to see the values of your variables. Wonder how many of them are uninitialized and causing havoc in your code.
Re: (Score:2)
O'Data (Score:5, Funny)
An Irish android? How appropriate!
I'm not clear.... (Score:2)
I'm not clear here; isn't that the purpose of TCP/IP?
Re: (Score:2)
TCP is for reliable, in-order transmission/reception of octets.
Re: (Score:2)
TCP is for reliable, in-order transmission/reception of octets.
...and standardizes nothing about the content of those octets, so, as you suggest, TCP, by itself, is insufficient to "[simplify] data sharing across disparate applications in enterprise, cloud, and mobile devices".
Oh the irony (Score:4, Interesting)
At the link for the specifications OData JSON Format Version 4.0 [oasis-open.org]
The documents that are tagged as Authoritative are .doc, not even .docx
Re: (Score:2, Interesting)
Oh the irony
History, not irony.
Microsoft took over OASIS in 2006 as part of their campaign to scuttle open document formats. They're still running the show there.
http://www.zdnet.com/microsoft... [zdnet.com]
Reinvention of RDF + SPARQL (Score:2)
Re: (Score:3)
SPARQL appears to be read-only, and restricted to data expressed as key/value pairs or RDF triples (3-tuples).
OData supports mutable entities, change and request batching, and HTTP GET semantics for data access. It would appear to map much better onto real-world databases and business use-cases.
Re: (Score:3)
Re: (Score:2)
You could be right.
OData predates SPARQL 1.1, however, and supported all CRUD operations from its inception.
Re: (Score:1)
What is OData? Why should you care? (Score:5, Informative)
OData is (now) a standard for how applications can exchange structured data, oriented towards HTTP and statelessness.
OData consumers and producers are language and platform neutral.
In contrast to a typical REST service, for which clients must be specifically authored and discovery is done by humans reading an API doc, OData specifies a URI convention and a $metadata format, meaning OData resources are accessed in a uniform way and OData endpoints can have their shape/semantics discovered programmatically.
So for instance, if you have an entity named Customer hosted on http://foo.com/myOdataFeed [foo.com], I can issue an HTTP call like this:
GET http://foo.com/myODataFeed/Cus... [foo.com]
and get your customers.
Furthermore, the metadata document describing your customer type will live at
foo.com/myODataFeed/$metadata ... which means I can attach to it with a tool and generate proxy code, if I like. It makes it easy to build a generic OData explorer type tool, or for programs like Excel and BI tools to understand what your data exposes.
Suppose that your Customers have an integer primary key (which I discovered from reading $metadata) and a 1:N association to an Orders entity. I can therefore write this query:
GET http://foo.com/myODataFeed/Cus... [foo.com]
... and get back the Orders for just the customer with ID 1.
I can add additional operators to the query string, like $filter or $orderby, and data-shaping operators like $expand or $select.
OData allows an arbitrary web service to mimic many of the semantics of a real database, in a technology neutral way, and critically, in a way that is uniform for anonymous callers and programmatically rigorous/discoverable.
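The URI conventions above can be sketched by composing the system query options into a URL. The host, entity set, and field names below are illustrative, not from the spec:

```python
from urllib.parse import quote

BASE = "http://foo.com/myODataFeed"   # hypothetical service root

def odata_url(entity_set, key=None, **options):
    """Compose an OData-style request URL from system query options
    ($filter, $orderby, $select, $expand, ...).  A sketch only: a
    real client would also handle string keys, $count, paging, etc."""
    url = f"{BASE}/{entity_set}"
    if key is not None:
        url += f"({key})"              # key access, e.g. Customers(1)
    if options:
        url += "?" + "&".join(f"${name}={quote(value)}"
                              for name, value in options.items())
    return url

print(odata_url("Customers"))
# http://foo.com/myODataFeed/Customers
print(odata_url("Customers", key=1, expand="Orders"))
# http://foo.com/myODataFeed/Customers(1)?$expand=Orders
print(odata_url("Customers", filter="Name eq 'Alice'", select="ID"))
```

Because the conventions are uniform, a generic client can build these URLs for any OData endpoint after reading its $metadata.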
Examples of OData v3 content are available here:
http://services.odata.org/V3/N... [odata.org]
OData V4 is a breaking protocol change from V3 and prior versions, but it has been accepted as a standard.
And, shameless plug: If you want to consume and build OData V1/V2/V3 services easily, check out Visual Studio LightSwitch :)
Re: (Score:2)
Sounds neat but doesn't solve my JSON problems.
One project might use "customer" another "client" or "businessname". Each of these may have a "description", "overview", "synopsis" and a "type"/"kind"/"businesstype" field.
So code discovery of data doesn't work unless we have agreed to standardized field names in advance, but now there's always exceptions to look out for and name conflicts.
Now even if we know the names of every field, how do we know exactly what sort of data will be returned? A name alone is nothing unless we can ensure its type, and remove all assumptions about what it can contain.
Re: (Score:2)
I suggest you look at the $metadata document for the service I linked to.
The property names, conceptual storage types, relationship info, etc, is all in there.
I'm not sure what problem you're trying to solve, exactly.
Then use XML (Score:2)
One project might use "customer" another "client" or "businessname". Each of these may have a "description", "overview", "synopsis" and a "type"/"kind"/"businesstype" field.
So code discovery of data doesn't work unless we have agreed to standardized field names in advance
Why doesn't it work? Have a look at $metadata. You get schemas for your data. OData has full discovery. The only "standardized field name" you need to know in advance is $metadata.
... but now there's always exceptions to look out for and name conflicts.
Now even if we know the names of every field, how do we know exactly what sort of data will be returned? A name alone is nothing unless we can ensure its type, and remove all assumptions about what it can contain.
OData was originally designed for XML. JSON was added later. With XML you can (and should) use namespaces to disambiguate field names between different entities/domains.
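A sketch of that disambiguation with the stdlib ElementTree parser, using two made-up namespace URIs so that both partners' "name" fields can coexist in one document:

```python
import xml.etree.ElementTree as ET

# Two hypothetical vocabularies that both define a "name" element;
# the namespace URIs keep them distinct inside a single document.
doc = """
<order xmlns:crm="urn:example:crm" xmlns:billing="urn:example:billing">
  <crm:name>Alice</crm:name>
  <billing:name>Alice Ltd (invoicing)</billing:name>
</order>
"""

root = ET.fromstring(doc)
ns = {"crm": "urn:example:crm", "billing": "urn:example:billing"}

# The same local name, unambiguously addressed via its namespace:
print(root.find("crm:name", ns).text)        # Alice
print(root.find("billing:name", ns).text)    # Alice Ltd (invoicing)
```

Plain JSON has no equivalent; the best you can do is ad-hoc key prefixes, which is exactly the naming-collision problem the parent describes.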
JSOAP? (Score:1)
Microsoft. (Score:2)
Know who leads the OData brigade? Microsoft. Get your crying ready, neckbeards.
On a more serious note, OData is awesome. If you've ever tried to provide a good data query API (supporting arbitrary queries with boolean syntax) via a web service, you know it's not easy. OData does it very well.
Sure, you'll get some whining from people who don't understand it, claiming that it forces you to expose your data model to the outside world, but it does absolutely no such thing. You can, should you choose, expose a complete abstraction.
Re: (Score:2)
Sold!!! (Score:1)
REST buzzword (Score:2)
Representational state transfer (REST) is an architectural style consisting of a coordinated set of architectural constraints applied to components, connectors, and data elements, within a distributed hypermedia system. REST ignores the details of component implementation and protocol syntax in order to focus on the roles of components, the constraints upon their interaction with other components, and their interpretation of significant data elements
- from wikipedia
It's a framework?