From XML to JSON to CBOR

4 days ago (cborbook.com)

Feels like a CBOR ad to me. I agree that most techs are familiar with XML and JSON, but calling CBOR a "pivotal data format" is a stretch compared to Protobuf, Parquet, Avro, Cap'n Proto, and many others: https://en.m.wikipedia.org/wiki/Comparison_of_data-serializa...

  • The fact that the long article fails to make the historical/continuation link to MessagePack is by itself a red flag signalling a CBOR ad.

    Edit: OK, actually there is a separate page for alternatives: https://cborbook.com/introduction/cbor_vs_the_other_guys.htm...

    • Notably missing is a comparison to Cap'n Proto, which to me feels like the best set of tradeoffs for most binary interchange needs.

      I honestly wonder sometimes if it's held back by the name— I love the campiness of it, but I feel like it could be a barrier to being taken seriously in some environments.

  • Have to agree. I've heard of every format you mentioned, but never heard of CBOR.

    • I first heard of it while developing a QR code travel passport during the Covid era... the technical specification included CBOR as part of the implementation requirements. Past that, I have not crossed paths with it again...

  • CBOR is just a standard data format. Why would it need an ad? What are they selling here?

    • A lot of people (myself included) are working on tools and protocols that interoperate via CBOR. Nobody is selling CBOR itself, but I for one have a vested interest in promoting CBOR adoption (which makes it sound nefarious but in reality I just think it's a neat format, when you add canonicalization).

      CBOR isn't special here, similar incentives could apply to just about any format - but JSON for example is already so ubiquitous that nobody needs to promote it.

    • If I adopt a technology, I probably don't want it to die out. Widespread support is generally good for all that use it.

  • I would agree their claim is a bit early, but I think a key difference between those you mentioned and CBOR is the stability expectation. Protobuf/Parquet/etc are usually single-source libraries/frameworks, which can be changed quite quickly, while CBOR seems to be going for a spec-first approach.

Love or hate JSON, its beauty and utility stem from the fact that you have only the fundamental datatypes as a requirement, and that's it.

Structured data that, by nesting, pleases the human eye, reduced to the max in a key-value fashion, pure minimalism.

And while you have to write type converters all the time for datetimes, BLOBs, etc., these converters are the real reason why JSON is so useful: every OS or framework provides the heavy lifting for them.

So any elaborate new silver bullet would have to solve the converter/mapper problem, which it can't.

And with JSON you can complain or explain: "Comments not a feature?! WTF!" - Just add a field with the key "comment".

Some smart guys went the extra mile and nevertheless demanded more, because wouldn't it be nice to have some sort of "strict JSON"? JSON Schema was born.

And here you can visibly experience the inner conflict of "on the one hand" vs "on the other hand". Applying schemas to JSON is a good cause and reasonable, but guess what happens to the JSON? It starts to look like unreadable bloat, which is to say, XML.

Extensibility is fine when the basic operations appeal to both camps, simple and sophisticated, and don't impose the sophistication on you just for a simple 3-field exchange about dog food preferences.

  • My complaint about JSON is that it’s not minimal enough. The receiver always has to validate anyway, so what has syntax typing done for us? Different implementations of JSON disagree about what constitutes a valid value. For instance, is

        {"x": NaN}
    

    valid JSON? How about 9007199254740993? Or -.053? If so, will that text round trip through your JSON library without loss of precision? Is that desirable if it does?

    Basically I think formats with syntax typed primitives always run into this problem: even if the encoder and decoder are consistent with each other about what the values are, the receiver still has to decide whether it can use the result. This after all is the main benefit of a library like Pydantic. But if we’re doing all this work to make sure the object is correct, we know what the value types are supposed to be on the receiving end, so why are we making a needlessly complex decoder guess for us?
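
    This disagreement is easy to demonstrate. A quick sketch with Python's standard json module, which (notably) accepts NaN by default even though strict JSON forbids it:

        import json

        # Python's json module accepts the NaN literal unless told otherwise,
        # even though RFC 8259 JSON has no such token.
        print(json.loads('{"x": NaN}'))        # {'x': nan}

        # 9007199254740993 round-trips exactly here, since Python ints are
        # arbitrary-precision...
        print(json.loads('9007199254740993'))  # 9007199254740993
        # ...but a decoder that maps numbers to IEEE 754 doubles (e.g.
        # JavaScript's JSON.parse) silently returns 9007199254740992.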

    • NaN is not a valid value in JSON. Neither are 0123 or .123 (there must always be at least one digit before the decimal marker, but extraneous leading zeroes are disallowed).

      JSON was originally parsed in JavaScript with eval(), which let many things through that aren't JSON, but that doesn't make JSON itself more complex.

  • > you have only the fundamental datatypes as a requirement

    Not really; the set of datatypes has problems. It supports Unicode text, but not binary data and not non-Unicode text. Numbers are usually interpreted as floating-point rather than as integers, which can also be a problem. Keys can only be strings. And there are other problems. So, the data types are not very good.

    And, since it is a text format, it means that escaping is required.

    > And while you have to write type converters all the time for datetime, BLOBs etc.

    Not having a proper data type for binary means that you need to encode it using a different type, which negates the benefit of JSON anyway. So, I think JSON is not as helpful.

    I think DER is better (you do not have to use all of the types; only the types you actually use need to be implemented, because the format of DER makes it possible to skip anything you do not care about), and I made up TER, a text-based format that can be converted to DER (so even though binary data is represented as text, it still represents the binary data type, rather than having to use the wrong data type like JSON does).

    > And you can complain or explain with JSON: "Comments not a feature?! WTF!" - Add a field with the key "comment"

    But then it is a part of the data, which you might not want.

  • CBOR (and MsgPack) still embraces that simplicity. It provides the same key-value, list, and basic value types.

    However, the types are more precise, allowing you to differentiate between int32s and int64s, or between strings and bytes.

    Essentially you can replace JSON with it and gain performance and less ambiguity while keeping the same flexibility. You do need a step to print CBOR in human-readable form, but it has a standardized human-readable form similar to a typed JSON.
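
    For instance (a sketch with the third-party cbor2 package, an assumption; any CBOR library behaves the same), text and binary strings encode under different major types:

        import cbor2  # assumed library: pip install cbor2

        print(cbor2.dumps("hi").hex())   # 626869 - major type 3 (text), length 2
        print(cbor2.dumps(b"hi").hex())  # 426869 - major type 2 (bytes), length 2
        # JSON has no such distinction: bytes must be smuggled in as base64 text.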

Odd that the XML and JSON sections show examples of the format, but CBOR doesn’t. I’m left with no idea what it looks like, other than “building on JSON’s key/value format”.
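
For the curious, here is a sketch of what it looks like, using Python with the third-party cbor2 package (an assumed stand-in; any CBOR encoder produces the same bytes):

    import cbor2  # assumed library: pip install cbor2

    print(cbor2.dumps({"name": "dog food", "qty": 3}).hex())
    # a2 64 6e616d65 68 646f6720666f6f64 63 717479 03
    # a2          -> map of 2 pairs (major type 5)
    # 64 6e616d65 -> 4-byte text string "name"
    # 68 ...      -> 8-byte text string "dog food"
    # 63 717479   -> 3-byte text string "qty"
    # 03          -> unsigned integer 3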

ASN.1, while complex, really seems to be a step up from those (even if older) in terms of terseness (as a binary encoding) and generality.

  • Would you rather write a parser for this:

        SEQUENCE {
          SEQUENCE {
            OBJECT IDENTIFIER '1 2 840 113549 1 1 1'
            NULL
            }
          BIT STRING 0 unused bits, encapsulates {
              SEQUENCE {
                INTEGER
                  00 EB 11 E7 B4 46 2E 09 BB 3F 90 7E 25 98 BA 2F
                  C4 F5 41 92 5D AB BF D8 FF 0B 8E 74 C3 F1 5E 14
                  9E 7F B6 14 06 55 18 4D E4 2F 6D DB CD EA 14 2D
                  8B F8 3D E9 5E 07 78 1F 98 98 83 24 E2 94 DC DB
                  39 2F 82 89 01 45 07 8C 5C 03 79 BB 74 34 FF AC
                  04 AD 15 29 E4 C0 4C BD 98 AF F4 B7 6D 3F F1 87
                  2F B5 C6 D8 F8 46 47 55 ED F5 71 4E 7E 7A 2D BE
                  2E 75 49 F0 BB 12 B8 57 96 F9 3D D3 8A 8F FF 97
                  73
                INTEGER 65537
                }
              }
          }
    

    or this:

        (public-key
          (rsa
            (e 65537)
            (n
             165071726774300746220448927123206364028774814791758998398858897954156302007761692873754545479643969345816518330759318956949640997453881810518810470402537189804357876129675511237354284731082047260695951082386841026898616038200651610616199959087780217655249147161066729973643243611871694748249209548180369151859)))
    

    I know that I’d prefer the latter. Yes, we could debate whether the big integer should be a Base64-encoded binary integer or not, but regardless writing a parser for the former is significantly more work.

    And let’s not even get started with DER/BER/PEM and all that insanity. Just give me text!

    • The ASN.1 notation wasn't meant for parsing. Then people started writing parser generators for it, so it adapted. However, you're abusing a text format meant for human reading and pretending it's a serialization format.

      BER and PER are binary formats, great where binary formats are needed. You also have XER (XML) and JER (JSON) if you want text. You can create an s-expr encoding if you want.

      Separate ASN.1 the data model, from ASN.1 the abstract syntax notation (what you wrote), from ASN.1's encoding formats.

      [1] https://www.itu.int/en/ITU-T/asn1/Pages/asn1_project.aspx

    • That is a text format, whereas DER is a binary format that encodes the data represented there as text. I think they should not have used a bit string (or octet string) to encapsulate other ASN.1 data, and it would be better to put it in directly, but nevertheless it can work. The actual data to be parsed will be binary, not a text format like that.

      DER is a more restricted variant of BER, and I think DER is better than BER. PEM is also the DER format, but encoded as base64, with a header to indicate what type of data is being stored.

  • Yes, but it comes from the telecom world. Hence, thanks to NIH, that wheel must be reinvented.

  • The FOSS tooling for it sucks balls. That's why.

    • Then, work to make a better one. (I had written a C library to read/write DER format, although it does not deal with the schema.)

Fun fact: CBOR is used within the WebAuthn (Passkey) protocol.

To do Passkey-verification server-side, I had to implement a pure-SQL/PLpgSQL CBOR parser, out of fear that a C-implementation could crash the PostgreSQL server: https://github.com/truthly/pg-cbor

CBOR is for when you need the option of very small code size. If you can always use compression, CBOR provides no significant data-size improvement over JSON.

On small code size it also beats BSON, EBML, and others.

  • Or compute. Compression isn't free, especially on power-constrained devices. At scale, power and compute also have real cost implications. Most data centers have long been using binary encoding formats such as protobuf to save on compute and network bandwidth. CBOR is nice because it's self-describing, so you can still understand it without a schema, a property people also like about JSON.

This is a link to just one section of a larger book. The next section compares CBOR with a number of other binary storage formats, such as protobuf.

I admit I got nerd-sniped here, but the table for floats[1] suggests that 10000.0 be represented as a float32. However, isn't it exactly representable as 0x70e2 in float16[2]? There are only 10 significant bits to the mantissa (including the implicit 1), while float16 has 11 so there's even an extra bit to spare.

1: https://cborbook.com/part_1/practical_introduction_to_cbor.h...

2: i.e. 1.220703125×2¹³
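
(This is easy to check with Python's struct module, which has supported half-precision floats via the "e" format character since 3.6:)

    import struct

    # 10000.0 needs only 10 significant bits; float16's mantissa gives 11.
    packed = struct.pack(">e", 10000.0)
    print(packed.hex())                    # 70e2
    print(struct.unpack(">e", packed)[0])  # 10000.0 - round-trips exactly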

  • Looks like it's a typo; they state:

    > 0x47c35000 encodes 10000.0

    But by my math that encodes 100000.0 (note the extra zero).

I prefer DER, which is also a binary format, so it has the advantages of binary formats, too. (There is also BER, but in my opinion, DER is better.) I use DER in some programs, where the structured data format is useful. (Also, since a text format is sometimes useful too, I made up TER, which is intended to be converted to DER. The DER file can be made in other ways as well; it is not required to use TER.)

(Also, standard ASN.1 does not have a key/value list type (which JSON and CBOR do have), but I made up some nonstandard extensions to ASN.1 (called ASN.1X), including a few additional types, one of which is the key/value list type. Due to this, ASN.1X can represent a superset of the data that JSON can (the only new type needed for this is the key/value list type; the other JSON types are already standard ASN.1 types).)

I wish browsers would support CBOR natively so I could just return CBOR instead of JSON(++speed --size ==win) and not have to be concerned with decoding it or not being able to debug requests in dev console.

  • JSON + compression (++speed --size ==win)

    your server can do this natively for live data. your browser can decompress natively. and it's ++human-readable. if you're one of those that doesn't want the user to read the data, then maybe CBOR is attractive??? but why would you send data down the wire that you don't want the user to see? isn't the point of sending the data to the client so the client can display it?

    • That is true. Basic content encoding works very well with JSON, but that still means there is a compression step, which would not be necessary with CBOR as it is already a binary payload. It would allow faster response and delivery times natively. Of course, we are talking a few ms, but I say why leave those ms on the floor?

      I guess i'm just shouting at the clouds :D

The only mention I can see in this document of compression is

> Significantly smaller than JSON without complex compression

Although compression of JSON could be considered complex, it's also extremely simple in that it's widely supported and usually performed in a distinct step - often transparently to the user. Gzip, and increasingly zstd, are the usual choices.

I'd be interested to see a comparison between compressed JSON and CBOR, I'm quite surprised that this hasn't been included.
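
Such a comparison is easy to sketch, at least for small payloads (cbor2 assumed as the encoder; the numbers below are approximate and shift a lot with payload shape and size):

    import gzip, json
    import cbor2  # assumed library: pip install cbor2

    doc = {"id": 12345, "name": "dog food",
           "tags": ["kibble", "wet"], "price": 19.99}
    as_json = json.dumps(doc, separators=(",", ":")).encode()

    print(len(as_json))                 # ~68 bytes
    print(len(cbor2.dumps(doc)))        # ~53 bytes: a modest win
    print(len(gzip.compress(as_json)))  # often *larger* than the raw JSON here:
                                        # gzip's fixed header/trailer overhead
                                        # swamps tiny messages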

  • > I'm quite surprised that this hasn't been included.

    Why? That goes against the narrative of promoting one over the other. Nissan doesn't advertise that a Toyota has something they don't. They just pretend it doesn't exist.

Erlang / Elixir has amazing support for ASN.1! I love it.

https://www.erlang.org/doc/apps/asn1/asn1_getting_started.ht...

https://www2.erlang.org/documentation/doc-14/lib/asn1-5.1/do...

I am using ASN.1 to communicate between a client (Java / Kotlin) and server (Erlang / Elixir), but unfortunately Java / Kotlin has somewhat shitty support for ASN.1 in comparison to Erlang.

Oh good, another CBOR thread. Disclaimer: I wrote and maintain a MessagePack implementation. I've also bird dogged this for a while, HN search me.

Mostly, I just want to offer a gentle critique of this book's comparison with MessagePack [0].

> Encoding Details: CBOR supports indefinite-length arrays and maps (beneficial for streaming when total size is unknown), while MessagePack typically requires fixed collection counts.

This refers to CBOR's indefinite length types, but awkwardly, streaming is a protocol level feature, not a data format level feature. As a result, there's many better options, ranging from "use HTTP" to "simply send more than 1 message". Crucially, CBOR provides no facility for re-syncing a stream in the event of an error, whether that's network or simply a bad encoding. "More features" is not necessarily better.
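
(For reference, the indefinite-length encoding itself is tiny: an open-ended head byte, the items, then a break byte. A sketch with cbor2 as the assumed decoder:)

    import cbor2  # assumed library: pip install cbor2

    # 0x9f opens an indefinite-length array; 0xff is the "break" terminator.
    print(cbor2.loads(bytes([0x9f, 0x01, 0x02, 0xff])))  # [1, 2]
    # The definite-length spelling of the same value: 0x82 = array of 2.
    print(cbor2.loads(bytes([0x82, 0x01, 0x02])))        # [1, 2]
    # If the stream dies before the break arrives, nothing in CBOR itself
    # lets you re-sync; that's the criticism above.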

> Standardization: CBOR is a formal IETF standard (RFC 8949) developed through consensus, whereas MessagePack uses a community-maintained specification. Many view CBOR as a more rigorous standard inspired by MessagePack.

Well, CBOR is MessagePack. Carsten Bormann forked MessagePack, changed some of the tag values, wrote a standard around it, and submitted it to the IETF against the wishes of MessagePack's creators.

> Extensibility: CBOR employs a standardized semantic tag system with an IANA registry for extended types (dates, URIs, bignums). MessagePack uses a simpler but less structured ext type where applications define tag meanings.

Warning: I have a big rant about the tag registry.

The facilities are the same (well, the tag is 8 bytes instead of 1 byte, but w/e); it's TLV all the way down (Bormann ripped this also). Bormann's contribution is the registry, which is bonkers [1]. There's... dozens of extensions there? Hundreds? No CBOR implementation supports anywhere near all this stuff. "Universal Geographical Area Description (GAD) description of velocity"? "ur:request, Transaction Request identifier"?

The registry isn't useful. Here are the possible scenarios:

If something is in high demand and has good support across platforms, then it's a no-brainer to reserve a tag. MP does this with timestamps.

If something is in high demand, but doesn't have good support across platforms, then you're putting extra burden on those platforms. Ex: it's not great if my tiny microcontroller now has to support bignums or 128-bit UUIDs. Maybe you do that, or you make them optional, but that leads us to...

If something isn't in high demand or can't easily be supported across platforms, but you want support for it anyway, there's no need to tell anyone else you're using that thing. You can just use it. That's MP's ext types.

CBOR seems to imagine that there's a hypothetical general-purpose decoder out there that you can point at any CBOR API, but there isn't and there never will be. Nothing will support both "Used to mark pointers in PSA Crypto API IPC implementation" and "PlatformV_HAS_PROPERTY" (I just cannot get over this stuff). There is no world where you tell the IETF about your tags, define an API with them, and someone completely independently builds a decoder for them. It will always be a person who cares about your specific tags, in which case, why not just agree on the ext types ahead of time? A COSE decoder doesn't also need to decode a "RAINS Message".
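
(Mechanically, using a tag requires no registry at all; a sketch with cbor2 as an assumed implementation, and tag 40000 picked arbitrarily:)

    import cbor2  # assumed library: pip install cbor2

    # Tag 40000 is ours by fiat; nothing needs to be registered for it to
    # round-trip through any compliant codec.
    blob = cbor2.dumps(cbor2.CBORTag(40000, "dog food preferences"))
    decoded = cbor2.loads(blob)
    print(decoded.tag, decoded.value)  # 40000 dog food preferences
    # Only a peer that already knows what tag 40000 means can do anything
    # useful with the value - exactly the situation with MP's ext types.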

> Performance and Size: Comparisons vary by implementation and data. CBOR prioritizes small codec size (for constrained devices) alongside message compactness, while MessagePack focuses primarily on message size and speed.

I can't say I fully understand what this means, but CBOR and MP are equivalent here, because CBOR is MP.

> Conceptual Simplicity: MessagePack's shorter specification appears simpler, but CBOR's unification of types under its major type/additional info system and tag mechanism offers conceptual clarity.

Even if there's some subjectivity around "conceptual simplicity/clarity", again CBOR and MP are equivalent here because they're functionally the same format.
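
(A sketch of why: both formats pack the type and small lengths into a single head byte, just with different bit layouts. Nothing below assumes more than the published specs:)

    # CBOR head byte: 3-bit major type, 5-bit additional info.
    def cbor_head(b: int) -> tuple[int, int]:
        return b >> 5, b & 0x1F

    print(cbor_head(0x83))  # (4, 3): major type 4 = array, length 3

    # MessagePack encodes the same fact differently: 0x90-0x9f is "fixarray".
    b = 0x93
    assert 0x90 <= b <= 0x9F
    print(b & 0x0F)         # 3: array of length 3
    # Same information, different tag values - hence "functionally the same".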

---

I have some notes about the blurb above too:

> MessagePack delivers greater efficiency than JSON

I think it's probably true that the fastest JSON encoders/decoders are faster than the fastest MP encoders/decoders. Not that JSON performance has a higher ceiling, but it's got gazillions of engineering hours poured into it, and rightly so. JSON is also usually compressed, so space benefits only matter at the perimeters. I'm not saying there's no case for MP/CBOR/etc., just that the efficiency/etc. gap is a lot smaller than one would predict.

> However, MessagePack sacrifices human-readability

This, of course, applies to CBOR as well.

> ext mechanism provides less structure than CBOR's IANA-registered tags

Again the mechanism is the same, only the registry is different.

[0]: https://cborbook.com/introduction/cbor_vs_the_other_guys.htm...

[1]: https://www.iana.org/assignments/cbor-tags/cbor-tags.xhtml

  • > ...awkwardly, streaming is a protocol level feature, not a data format level feature.

    Indeed. I recall that tnetstrings were intentionally made non-streamable to discourage people from trying to do so: "If you need to send 1000 DVDs, don't try to encode them in 1 tnetstring payload, instead send them as a sequence of tnetstrings as payload chunks with checks and headers like most other protocols"

    > Warning: I have a big rant about the tag registry.

    > ...

    I completely agree with your rant w.r.t. automated decoding. However, a global tag registry can still potentially be useful in that, given CBOR encoded data with a tag that my decoder doesn't support, it may be easier for a human to infer the intended meaning. Some types may be very obvious, others less so.

    e.g. Standardized MIME types are useful even if no application supports every one of them.

    • > However, a global tag registry can still potentially be useful in that, given CBOR encoded data with a tag that my decoder doesn't support, it may be easier for a human to infer the intended meaning.

      Yeah if MP is conservative and CBOR is progressive, I'm slightly less conservative than MP: I'd support UUIDs and bignums. But again, they'd have to be very optional, like in the "we're only reserving these tags, not in any way mandating support" sense.

  • > This refers to CBOR's indefinite length types, but awkwardly, streaming is a protocol level feature, not a data format level feature.

    BER also has indefinite length as well as definite length, but the way it does it is not very good (DER only uses definite length). I think it is more helpful to use a different format when streaming with indefinite length is required, so I made up DSER (and SDSER), which works as follows:

    - The type, which is encoded same as DER.

    - If it is constructed, all items it contains come next (the length is omitted).

    - If it is primitive, zero or more segments, each of which starts with one byte in range 0x01 to 0xFF telling how many bytes of data are in that segment. (The value is then just the concatenation of all segments together.)

    - For both primitive and constructed, one byte with value 0x00 is the termination code.

    > Bormann's contribution is the registry, which is bonkers [1]. There's... dozens of extensions there? Hundreds? No CBOR implementation supports anywhere near all this stuff.

    It should not need to support all of that stuff; you will only use the ones that are relevant for your program. (There is a similar kind of complaint about ASN.1, and I have made a similar response.)

    > If something is in high demand, but doesn't have good support across platforms, then you're putting extra burden on those platforms. Ex: it's not great if my tiny microcontroller now has to support bignums or 128-bit UUIDs.

    Although it is a valid concern, you would use data whose numbers are no bigger than you need, so such a problem can be avoided. You can treat UUIDs like octet strings, although if you only need small numbers then you should use the small number types instead, anyway.

    > If something isn't in high demand or can't easily be supported across platforms, but you want support for it anyway, there's no need to tell anyone else you're using that thing.

    Sometimes it is useful to tell someone else that you are using that thing, although often it is unnecessary, like you said.

  • > Well, CBOR is MessagePack. Carsten Bormann forked MessagePack

    Sure, that’s sort of true but missing context. Bormann (and others) wanted to add things such as separate string and byte sequence types. The MessagePack creator refused for years. Fair enough, it’s his format. But it frustrated the community dealing with string-vs-bytes issues. It also highlights a core philosophical difference of a mostly closed spec vs an extensible-first one.

    > changed some of the tag values, wrote a standard around it, and submitted it to the IETF against the wishes of MessagePack's creators.

    That’s just incorrect and a childish way to view it in my opinion.

    The core philosophy and mental models are different in key aspects.

    MessagePack is designed as a small, self-contained, mostly closed format. It uses a simple TLV format with a couple hundred possible user extensions and some clever optimizations. The MP “spec” focuses on this.

    CBOR re-envisioned the core idea of MessagePack from the ground up as an extensible major/minor tag system. It’s debatable how much CBOR is a fork of MPack vs a new format with similarities.

    The resulting binary output is pretty similar, with similar benefits, but the core theoretical models are pretty different. The IETF standard bears little to no resemblance to the MessagePack specification.

    > The facilities are the same (well, the tag is 8 bytes instead of 1 byte, but w/e); it's TLV all the way down (Bormann ripped this also).

    The whole point of CBOR is that the tags go from 1-8 bytes. The parser designs end up fairly different due to the different tag formats. I’ve written and ported parsers for both.

    It’s not like the MessagePack creator invented TLV formats either. He just created an efficient and elegant one that’s pretty general. No one says he ripped off “TLV”.

    You can’t just take a message pack parser and turn it into a CBOR one by changing some values. I’ve tried and it turns out poorly and doesn't support much of CBOR.

    > This refers to CBOR's indefinite length types, but awkwardly, streaming is a protocol level feature, not a data format level feature.

    The indefinite length format is very useful for embedded space. I’ve hit limits with MessagePack before on embedded projects because you need to know the length of an array upfront. I wished I’d had CBOR instead.

    This can also be useful for data processing applications. For example streaming the conversion of a large XML file into a more concise CBOR format would be much more memory efficient. For large scale that’s pretty handy.

    > > However, MessagePack sacrifices human-readability

    > This, of course, applies to CBOR as well.

    For the binary format, yes. However, the CBOR specification defines an official human-readable text format for debugging and documentation purposes. It also defines a schema system like JSON Schema, but for CBOR.

    Turns out “just some specs” can actually be pretty valuable.

    • I am really glad you replied.

      > Sure, that’s sort of true but missing context. Bormann (and others) wanted to add things such as separate string and byte sequence types. The MessagePack creator refused for years. Fair enough it’s his format. But it frustrated the community dealing with string vs bytes issues.

      msgpack-ruby added string support less than a month after cbor-ruby's first commit [0] [1]. The spec was updated over two months before [2]. Awful lot of work if this were really just about strings.

      > It also highlights a core philosophical difference of a mostly closed spec vs an extensible first one.

      MP has been always been extensible, via ext types.

      > That’s just incorrect

      I am entirely correct [3].

      > MessagePack is designed as a small self mostly closed format.

      Isn't it a lot of effort to get an IETF standard changed? Isn't that the benefit of a standard? You keep saying "mostly closed" like it's bad. Data format standards in particular really shouldn't change: who knows how many zettagottabytes there are stored in previous versions?

      > It’s debatable how much CBOR is a fork of MPack vs a new format with similarities.

      cbor-ruby is literally a fork of msgpack-ruby. The initial commit [0] contains headers like:

          /*
           * CBOR for Ruby
           *
           * Copyright (C) 2013 Carsten Bormann
           *
           *    Licensed under the Apache License, Version 2.0 (the "License").
           *
           * Based on:
           *****/
          /*
           * MessagePack for Ruby
           *
           * Copyright (C) 2008-2013 Sadayuki Furuhashi
      

      > The resulting binary output is pretty similar with similar benefits

      This is the whole game isn't it? The binary output is pretty similar? These are binary output formats!

      > but the core theoretical models are pretty different.

      I think you're giving a little too much credence to the "theoretical model". It's not more elegant to do what cbor-ruby does [4] vs. what MP does [5] (this is my lib). I literally just use the tag value, or for fixed values I OR them together. The format is designed for you to do this. What's more elegant than a simple, predefined value?

      > The whole point of CBOR is that the tags go from 1-8 bytes.

      The tags themselves are only 1 byte, until you get to extension types.

      > The parser designs end up fairly different due to the different tag formats.

      The creator of CBOR disagrees: cbor-ruby was a fork of msgpack-ruby with the tag values changed.

      > No one says he ripped off “TLV”.

      Don't conflate the general approach with literally forking an existing project.

      > You can’t just take a message pack parser and turn it into a CBOR one by changing some values.

      This is a strawman. My claim has been about the origins of CBOR, not how one can transmute an MP codec to a CBOR codec.

      > I’ve hit limits with MessagePack before on embedded projects because you need to know the length of an array upfront.

      When everything's fine, sure this works. If there are any problems whatsoever, you're totally screwed. Any protocol that supports streaming handles this kind of thing. CBOR doesn't. That's bad!

      > For example streaming the conversion of a large XML file into a more concise CBOR format would be much more memory efficient.

      It's probably faster to feed it through zstd. Also I think you underestimate how involved it'd be to round-trip a rich XML document to/from CBOR/MP.

      > However the CBOR specification defines an official human readable text format for debugging and documentation purposes.

      Where? Are you talking about Diagnostic Notation [6]? Hmm:

      "Note that this truly is a diagnostic format; it is not meant to be parsed. Therefore, no formal definition (as in ABNF) is given in this document. (Implementers looking for a text-based format for representing CBOR data items in configuration files may also want to consider YAML [YAML].)"

      YAML!? Anyway, it literally doesn't define it.

      [0]: https://github.com/msgpack/msgpack-ruby/commit/60e846aaaa638...

      [1]: https://github.com/cabo/cbor-ruby/commit/5aebd764c3a92d40592...

      [2]: https://github.com/msgpack/msgpack/commit/5dde8c4fd0010e1435...

      [3]: https://github.com/msgpack/msgpack/issues/129#issuecomment-1...

      [4]: https://github.com/cabo/cbor-ruby/blob/5aebd764c3a92d4059236...

      [5]: https://github.com/camgunz/cmp/blob/master/cmp.c#L30

      [6]: https://www.rfc-editor.org/rfc/rfc8949.html#name-diagnostic-...

people are just straight up afraid to write their own binary formats, aren't they.

it's not hard, it's exactly like creating your own text format but you write binary data instead of text, and you can't read it with your eyes right away (but you can after you've looked at enough of it). there is nothing to fear or to even worry about; just try it. look up how things like TLV work on wikipedia (there's a sketch below). you can do just about anything you would ever need with plain binary TLV and it's gonna perform like you wouldn't believe.

https://en.wikipedia.org/wiki/Type%E2%80%93length%E2%80%93va...

binary formats are always going to be 1-2 orders of magnitude faster than plain text formats, no matter which plain text format you're using. writing a viewer so you can easily read the data isn't zero-effort like it is for JSON or XML where any existing text editor will do, but it's not exactly hard, either. your binary format reading code is the core of what that viewer would be.

once you write and use your own binary format, existing binary formats you come across become a lot less opaque, and it starts to feel like you're developing a mild superpower.
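
A minimal sketch of the TLV idea described above (Python for brevity; the 1-byte type and 4-byte length are arbitrary choices, not a standard):

    import struct

    # Toy TLV: 1-byte type tag, 4-byte big-endian length, then the value bytes.
    T_STR, T_U32 = 0x01, 0x02

    def write_tlv(out: bytearray, t: int, value: bytes) -> None:
        out += struct.pack(">BI", t, len(value)) + value

    def read_tlvs(buf: bytes):
        i = 0
        while i < len(buf):
            t, n = struct.unpack_from(">BI", buf, i)
            i += 5  # size of the type + length header
            yield t, buf[i:i + n]
            i += n

    out = bytearray()
    write_tlv(out, T_STR, "dog food".encode())
    write_tlv(out, T_U32, struct.pack(">I", 42))
    for t, v in read_tlvs(bytes(out)):
        print(t, v)  # 1 b'dog food' / 2 b'\x00\x00\x00*'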

  • CBOR has some stuff that is nice but would be annoying to reimplement. Like using more bytes to store large numbers than small ones. If you need a quick multipurpose binary format, CBOR is pretty good. The only alternative I’d make manually is just memcpy the bytes of a C struct directly to disk and hope that I won’t encounter a system with different endianness.
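
    (E.g., with cbor2 as an assumed encoder, integers take only as many bytes as they need:)

        import cbor2  # assumed library: pip install cbor2

        print(cbor2.dumps(5).hex())       # 05         - fits in the head byte
        print(cbor2.dumps(500).hex())     # 1901f4     - head + 2 bytes
        print(cbor2.dumps(500000).hex())  # 1a0007a120 - head + 4 bytes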

    • These days you don't have to worry about endianness much (unless you're dealing with raw network packets). However, you do need to worry about byte padding. Different compilers/systems will place padding between the items in your struct differently (depending on the contents and ordering of the items), and if you are not careful the in-memory or on-disk placement of struct data elements can be misaligned across systems. Most systems align to an 8-byte boundary, but that isn't guaranteed.

  • I assume you mean as an exercise? Not for actual use in any production system?

    If you did mean for production use, I assume you also implement your own encryption, encoding schemes and everything else?

    • i write my own binary formats because they're fast and small. yes, in production. partly because it's just as easy as anything else for me now, partly because it doesn't require any dependencies at all, and partly to show others just how easy it is, because i think people are unnecessarily afraid of this.

      no i don't write my own encoding or encryption.

      why the hell would anyone use json for everything, and why would someone who doesn't do that earn your derision?

The article reads like semi-slop, with its numerous lists and overly long explanations of obvious things, such as how XML came to be.

How different is CBOR compared to BSON? Both seem to be binary json-like representations.

Edit: BSON seems to contain more data types than JSON, and as such it is more complex, whereas CBOR doesn't add to JSON's existing structure.