Open Source Businesses

OASIS Approves OData 4.0 Standards For an Open, Programmable Web 68

First-time accepted submitter Dilettant writes "The OASIS members approved Version 4 of the OData standards, which now also feature the long-requested compact JSON as the primary format. OData helps "simplify data sharing across disparate applications in enterprise, cloud, and mobile devices" by exposing data sources through a REST-like interface."
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • JSON Sucks (Score:4, Insightful)

    by Jane Q. Public ( 1010737 ) on Monday March 17, 2014 @03:33PM (#46509425)
    Look... I have to live with it in my work, okay? But it's anything but fun to work with.

    For computer-to-computer data interchange, JSON is not bad. But it's about as human-readable as the Voynich Manuscript.
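The readability complaint is easy to reproduce, and largely to fix, with one pretty-printing pass; a minimal Python sketch (payload invented for illustration):

```python
import json

# A minified JSON payload, as it typically arrives over the wire.
minified = '{"id":42,"name":"Ada","tags":["a","b"],"address":{"city":"London"}}'

# Round-tripping through the parser with indentation makes it readable.
pretty = json.dumps(json.loads(minified), indent=2, sort_keys=True)
print(pretty)
```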
    • Re: (Score:3, Insightful)

      by pigiron ( 104729 )

      You actually prefer XML???????

      • by Anonymous Coward

        Using XML is like sticking your nuts in a vice and squeezing them until they burst. Although in the end it's still more pleasant than using XML.

      • Re:JSON Sucks (Score:5, Insightful)

        by Anonymous Coward on Monday March 17, 2014 @03:52PM (#46509629)

        Absolutely. XML is much more mature. XML has standardized schemas, validation, querying, transformation, a binary format and APIs for interoperability from any language. All JSON really has going for it is that it's already JavaScript.

        The funny thing is that, at the end of the day, JSON and XML are the same thing, only syntactically different. Yet the prevailing opinion seems to be that XML is absolute and total shit whereas JSON is some golden calf.

        • Re: (Score:3, Interesting)

          "The funny thing is that, at the end of the day, JSON and XML are the same thing, only syntactically different."

          Yes, exactly. But XML is readable by people. JSON is not. Just try to read any big dataset in JSON, especially if it's minified. Good luck. At least with XML you have a shot.

          Having said that: there are lots of good tools for converting from one to the other, so it could be a lot worse.

          You make a good point about standards and validation, though. That's why business data interchanges are generally built on XML rather than JSON, even though JSON is generally more efficient.
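The conversion tools mentioned here are straightforward to sketch; this naive Python version (function name hypothetical) maps parsed JSON onto XML elements, and incidentally shows the lossiness discussed in this thread, since the integer id comes out as plain text:

```python
import json
import xml.etree.ElementTree as ET

def json_to_xml(tag, value):
    """Naively map a parsed JSON value onto an XML element.

    Illustrative only: real converters must also handle attributes,
    namespaces, mixed content, and invalid element names."""
    elem = ET.Element(tag)
    if isinstance(value, dict):
        for key, child in value.items():
            elem.append(json_to_xml(key, child))
    elif isinstance(value, list):
        for item in value:
            elem.append(json_to_xml("item", item))
    else:
        elem.text = str(value)
    return elem

doc = json.loads('{"customer": {"id": 1, "name": "Ada"}}')
xml = ET.tostring(json_to_xml("root", doc), encoding="unicode")
print(xml)  # <root><customer><id>1</id><name>Ada</name></customer></root>
```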

          • by Anonymous Coward

            Yes, exactly. But XML is readable by people

            No it isn't. Neither of them is. Not directly. They are *both* readable by humans with a good browser/editor. Tell the editor guys to get crackin' if they haven't already. When dealing with any format that's a hierarchy, you should be able to view the top level and click a little '+' or something to open it. Visual Studio even does that with C for cryin' out loud. Class graph browsers for C++ have been out for like... forever. I don't work with XML or JSO

            • "No it isn't. Neither of them is. Not directly."

              Yes, it is. If you don't believe me, I have posted a link to a simple example below. Not only is the JSON harder to read, it throws away data type information unnecessarily.

              "When dealing with any format that's a hierarchy, you should be able to view the top level and click a little '+' or something to open it."

              This is old stuff. TextMate (just for one example), has been doing that for a long time. Here's an example [postimg.org]. Notice how the line numbers skip where the code is collapsed.

              You can also open XML in Firefox, and again it does exactly the same thing: you can expand and collapse levels at will.
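The point about discarded type information is easy to demonstrate in a few lines of Python; the `default=str` fallback encoder here is an assumption chosen for illustration:

```python
import json
from datetime import date

record = {"name": "invoice-7", "issued": date(2014, 3, 17)}

# json.dumps rejects non-JSON types unless we supply a fallback encoder...
encoded = json.dumps(record, default=str)

# ...and on the way back the date is just a string: the type is gone.
decoded = json.loads(encoded)
print(repr(decoded["issued"]))  # '2014-03-17'
```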

        • XML is much more mature. XML has standardized schemas, validation, querying, transformation, a binary format and APIs for interoperability from any language.

          Which means that XML will still be around in 10 years, and can safely be used today for major projects.

        • I couldn't agree with you more. I love XML far more than JSON. I don't get why so many seem to want to use JSON these days. Quite often, it's easy to glance at some XML and get an idea what to do with it, but you can't quite do that with JSON. Is there an XSLT equivalent for JSON? I haven't heard of one.
      • I didn't say that. But since you asked:

        IF the situation calls for HUMANS to read the data, I sure as hell do prefer XML. No contest. JSON is virtually unreadable.

        Like I said: it's fine for computer data interchange, but when it comes to human intervention, give me XML any day.

        I'm not claiming XML is perfect, by any stretch of the imagination. But when humans rather than computers need to deal with the data, it beats the shit out of JSON.
        • by Desler ( 1608317 )

          How is JSON hard to read? It's just lists of key/value pairs.

          • Re: (Score:1, Insightful)

            by pigiron ( 104729 )

            Really. None of that totally unnecessary tag BS inherited from a printer-definition spec (of all absurd things). And key/value pairs are a hell of a lot easier to insert into a database, in addition to being easier to read.

              "Really. None of that totally unnecessary tag BS inherited from a printer-definition spec (of all absurd things). And key/value pairs are a hell of a lot easier to insert into a database, in addition to being easier to read."

              Key-value pairs are a tiny subset of all data structures. There are many data types that key-value pairs struggle to represent well, and when they try, the result (to the human eye) is a huge mess.

              You're entitled to your opinion of course. But I think you're looking at it from a very narrow perspective. Have you ever actually had to program for the exchange of complex data sets? By that I mean something quite a bit more involved than a web store?

        • by pem ( 1013437 )
          If the metric is readability without special tools, why stop there?

          Neither JSON nor XML is easily writable without special tools.

          YAML attempts to be writable, but the grammar and parser are huge and slow.

          RSON [google.com] is a superset of JSON that is eminently readable/writable, and much simpler than YAML, allowing, for example, for human-maintained configuration files.

          The reference Python parser operates about as fast as the unaccelerated Python library pure JSON parser.

          • "Neither JSON nor XML is easily writable without special tools. "

            Sure they are. Take just about any object in Ruby and call [object].to_xml or [object].to_json.

            More relevant to the discussion though, I think, is what someone else said above:

            "XML has standardized schemas, validation, querying, transformation, a binary format and APIs for interoperability from any language. All JSON really has going for it is that it's already JavaScript."

            I would have to say the same for RSON.

            While it is true that they are syntactical versions of one another, XML is far less ambiguous. In a way, XML versus JSON is a lot like Java versus JavaScript. The former have more tightly defined specifications, and less ambiguity. (I.e., Java will not let you treat a string like an integer

            • by pem ( 1013437 )
              I thought it was clear from the context that readability/writability meant BY A HUMAN, not BY A PROGRAM.

              But I guess I suck at that myself, since we're obviously not communicating properly.

              Obviously there are libraries in all sorts of languages to read/write both.

      • by rsborg ( 111459 )

        You actually prefer XML???????

        Yes. As I deal in data interchange all the time, XML is great because it allows schema definition/sharing (XSD), and XSLT is a mature transformation language that, after many years in the woods, is now available with functional capabilities (XSLT 3.0).

        The only problem we have is that often, endpoint partners/vendors don't provide the XSD, nor do they share how they plan to validate files we send them. Or they ignore our XSD. But I still can't imagine things would be better if JSON were the interchange format.

      • Does JSON support namespaces? AFAIK it doesn't, and that would seem to make it suitable only for fairly simple data interchange and not really scalable.

        As far as which is best visually... XML is a bit wordy/busy, especially if it uses a lot of namespaces, but it's a pretty minor problem given that with both XML and JSON, it's a piece of piss to write a nice visual editor.

        The important thing for me is having a solid platform for building applications, and XML has the capability and maturity for that - even
          Does JSON support namespaces? AFAIK it doesn't, and that would seem to make it suitable only for fairly simple data interchange and not really scalable. As far as which is best visually... XML is a bit wordy/busy, especially if it uses a lot of namespaces, but it's a pretty minor problem given that with both XML and JSON, it's a piece of piss to write a nice visual editor. The important thing for me is having a solid platform for building applications, and XML has the capability and maturity for that - even if it is a bit ugly!

          I know it's bad form to reply to my own post, but there does appear to be some kind of namespacing going on in the OData spec [oasis-open.org]. Does anyone know if this namespacing is part of the JSON standard, or is it just a convention that OASIS are using?

          Either way, I still prefer XML! :D
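For comparison, what XML namespaces buy, and what plain JSON lacks natively, can be shown in a few lines of Python; the URNs here are made up:

```python
import xml.etree.ElementTree as ET

# Two vocabularies can each define a "name" element without colliding.
doc = """<order xmlns:cust="urn:example:customer"
               xmlns:prod="urn:example:product">
  <cust:name>Ada Lovelace</cust:name>
  <prod:name>Analytical Engine</prod:name>
</order>"""

root = ET.fromstring(doc)
# ElementTree reports namespaced tags in Clark notation: {uri}localname
print(root[0].tag)  # {urn:example:customer}name
print(root[1].tag)  # {urn:example:product}name
```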

  • by slashmydots ( 2189826 ) on Monday March 17, 2014 @03:35PM (#46509445)
    If (PlatformIndepenentProgrammingRelated == True) And (RelatedToJava == False) {
    Good!!!
    }

    They cracked the code on good web programming standards lol.
    • If (PlatformIndepenentProgrammingRelated == True) And (RelatedToJava == False) {

      Good!!!

      }
      They cracked the code on good web programming standards lol.

      string message = "";
      If (PlatformIndepenentProgrammingRelated &&
              !RelatedToJava)
      {
            message = "Good!!";
      }

      • You're missing something... the fact that I mixed two languages together to purposely make it not a real language.
  • O'Data (Score:5, Funny)

    by Megahard ( 1053072 ) on Monday March 17, 2014 @03:47PM (#46509575)

    An Irish android? How appropriate!

  • I'm not clear here, isn't that the purpose of TCP/IP?

    • TCP is for reliable, in-order transmission/reception of octets.

        TCP is for reliable, in-order transmission/reception of octets.

        ...and standardizes nothing about the content of those octets, so, as you suggest, TCP, by itself, is insufficient to "[simplify] data sharing across disparate applications in enterprise, cloud, and mobile devices".

  • Oh the irony (Score:4, Interesting)

    by OzPeter ( 195038 ) on Monday March 17, 2014 @03:52PM (#46509635)

    At the link for the specifications OData JSON Format Version 4.0 [oasis-open.org]

    The documents that are tagged as Authoritative are .doc, not even .docx

    • Re: (Score:2, Interesting)

      by Anonymous Coward

      Oh the irony

      History, not irony.

      Microsoft took over OASIS in 2006 as part of their campaign to scuttle open document formats. They're still running the show there.

      http://www.zdnet.com/microsoft... [zdnet.com]

  • Glancing over the specification, it looks like a reincarnation of RDF plus SPARQL for updates. Perhaps a product of Not Invented Here syndrome? I am sure it will end up like most OASIS standards: developed in a bubble by company insiders, introduced as selling points in the next versions of said companies' products, rejected by the marketplace due to complexity and lack of adoption, and then ultimately discarded in favor of the next technology fad that purportedly better solves the problem space.
    • by bmajik ( 96670 )

      SPARQL appears to be read only, and to be restricted to data in kvp or 3-tuples.

      OData supports mutable entities, change and request batching, and http GET semantics for data access. It would appear to map much better to real-world databases and business use-cases.

      • SPARQL 1.1 supports updates (insert/delete) and the SPARQL CONSTRUCT operator can be used to build query results in a nested graph format. Additionally SPARQL protocol defines a standard HTTP binding protocol [w3.org] that can generate output in CSV and JSON formats in addition to XML. To me it appears OData is a reimagining of W3C's Semantic Web efforts.
        • by bmajik ( 96670 )

          You could be right.

          OData predates SPARQL 1.1, however, and supported all CRUD operations from its inception.

    • OData has been around for ages, it's not new. It's a standard for passing rich, structured queries over HTTP, that's it.
  • by bmajik ( 96670 ) <matt@mattevans.org> on Monday March 17, 2014 @04:11PM (#46509881) Homepage Journal

    OData is (now) a standard for how applications can exchange structured data, oriented towards HTTP and statelessness.

    OData consumers and producers are language and platform neutral.

    In contrast to an ad-hoc REST service, for which clients must be specifically authored and discovery is done by humans reading an API doc, OData specifies a URI convention and a $metadata format, which means OData resources are accessed in a uniform way and OData endpoints can have their shape/semantics programmatically discovered.

    So for instance, if you have an entity named Customer hosted on http://foo.com/myOdataFeed [foo.com], I can issue an HTTP call like this:

    GET http://foo.com/myODataFeed/Cus... [foo.com]

    and get your customers.

    furthermore, the metadata document describing your customer type will live at

    foo.com/myODataFeed/$metadata

    ... which means I can attach to it with a tool and generate proxy code if I like. That makes it easy to build a generic OData-explorer-type tool, or for programs like Excel and BI tools to understand what your data exposes.
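What a tool does with $metadata can be sketched roughly in Python; this assumes a heavily simplified, hypothetical metadata fragment rather than the real EDMX format:

```python
import xml.etree.ElementTree as ET

# A heavily simplified, hypothetical $metadata fragment; real OData
# metadata is an EDMX document with its own XML namespaces.
metadata = """<Schema>
  <EntityType Name="Customer">
    <Key><PropertyRef Name="ID"/></Key>
    <Property Name="ID" Type="Edm.Int32"/>
    <Property Name="Name" Type="Edm.String"/>
  </EntityType>
</Schema>"""

# Collect each entity's property names and conceptual storage types.
schema = ET.fromstring(metadata)
entities = {}
for entity in schema.findall("EntityType"):
    entities[entity.get("Name")] = {
        p.get("Name"): p.get("Type") for p in entity.findall("Property")
    }
print(entities)  # {'Customer': {'ID': 'Edm.Int32', 'Name': 'Edm.String'}}
```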

    Suppose that your Customers have an integer primary key (which I discovered from reading $metadata) and a 1:N association to an Orders entity. I can therefore write this query:

    GET http://foo.com/myODataFeed/Cus... [foo.com]

    ... and get back the Orders for just customer ID 1.

    I can add additional operators to the query string, like $filter or $orderby, and data-shaping operators like $expand or $select.
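Composing those operators into a request URL can be sketched like this; the endpoint and field names are hypothetical, echoing the example above:

```python
from urllib.parse import urlencode

# Hypothetical OData service root.
service = "http://foo.com/myODataFeed"

# Compose OData system query options into a query string; urlencode
# percent-encodes the "$" prefixes and the spaces for us.
options = {
    "$filter": "Country eq 'DE'",
    "$orderby": "Name",
    "$select": "ID,Name",
    "$top": "10",
}
url = service + "/Customers?" + urlencode(options)
print(url)
```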

    OData allows an arbitrary web service to mimic many of the semantics of a real database, in a technology neutral way, and critically, in a way that is uniform for anonymous callers and programmatically rigorous/discoverable.

    Examples of OData v3 content are available here:

    http://services.odata.org/V3/N... [odata.org]

    OData V4 is a breaking protocol change from V3 and prior versions, but it has been accepted as a standard.

    And, shameless plug: If you want to consume and build OData V1/V2/V3 services easily, check out Visual Studio LightSwitch :)

    • by CODiNE ( 27417 )

      Sounds neat but doesn't solve my JSON problems.

      One project might use "customer", another "client" or "businessname". Each of these may have a "description", "overview", "synopsis" and a "type"/"kind"/"businesstype" field.

      So code discovery of data doesn't work unless we have agreed to standardized field names in advance, but now there's always exceptions to look out for and name conflicts.

      Now even if we know the names of every field, how do we know exactly what sort of data will be returned? A name alone is

      • by bmajik ( 96670 )

        I suggest you look at the $metadata document for the service I linked to.

        The property names, conceptual storage types, relationship info, etc., are all in there.

        I'm not sure what problem you're trying to solve, exactly.

      • One project might use "customer" another "client" or "businessname". Each of these may have a "description", "overview", "synopsis" and a "type"/"kind"/"businesstype" field.

        So code discovery of data doesn't work unless we have agreed to standardized field names in advance

        Why doesn't it work? Have a look at $metadata. You get schemas for your data. OData has full discovery. The only "standardized field name" you need to know in advance is $metadata.

        ... but now there's always exceptions to look out for and name conflicts.

        Now even if we know the names of every field, how do we know exactly what sort of data will be returned? A name alone is nothing unless we can ensure its type, and remove all assumptions about what it can contain.

        OData was originally designed for XML. JSON was added later. With XML you can (and should) use namespaces to disambiguate field names between different entities/domains.

    • This sounds eerily familiar...
  • Know who leads the OData brigade? Microsoft. Get your crying ready, neckbeards.

    On a more serious note, OData is awesome. If you've ever tried to provide a good data query API (supporting arbitrary boolean-syntax queries) via a web service, you know it's not easy. OData does it very well.

    Sure, you'll get some whining from people who don't understand it, claiming that it forces you to expose your data model to the outside world, but it does absolutely no such thing. You can, should you choose, expose a complete abstraction

  • I saw that it's embraced by Blackberry... Nice name-drop! I'm in!
  • Representational state transfer (REST) is an architectural style consisting of a coordinated set of architectural constraints applied to components, connectors, and data elements, within a distributed hypermedia system. REST ignores the details of component implementation and protocol syntax in order to focus on the roles of components, the constraints upon their interaction with other components, and their interpretation of significant data elements

    - from wikipedia

    it's a framework?
