Create a schema object to encode/decode your JSON into a compact byte buffer with no overhead.

Overview

schemapack

The fastest and smallest JavaScript object serialization library. Efficiently encode your objects into compact byte buffers and then decode them back into objects on the receiver. Integrates very well with WebSockets.

Example

// On both the client and server:
var sp = require('./schemapack');

var playerSchema = sp.build({
    health: "varuint",
    jumping: "boolean",
    position: [ "int16" ],
    attributes: { str: 'uint8', agi: 'uint8', int: 'uint8' }
});

// On the client:
var player = {
    health: 4000,
    jumping: false,
    position: [ -540, 343, 1201 ],
    attributes: { str: 87, agi: 42, int: 22 }
};

var buffer = playerSchema.encode(player);
socket.emit('player-message', buffer); // Use some JavaScript WebSocket library to get this socket variable.

// On the server:
socket.on('player-message', function(buffer) {
    var player = playerSchema.decode(buffer);
});

In this example, the size of the payload is only 13 bytes. Using JSON.stringify instead produces a 100-byte payload.
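
To check those numbers yourself, a quick sketch (assuming the player object and buffer from the example above):

console.log(buffer.length);                 // 13
console.log(JSON.stringify(player).length); // 100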

If you can't emit message strings and can only send array buffers by themselves, add something like __message: "uint8" to the start of all your schemas/objects. On the receiver you can just read the first byte of the buffer to determine what message it is.
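
For example, here is a hypothetical sketch of that pattern (the numeric codes and the chat schema are made up for illustration; only the leading __message field is the convention described above):

var MESSAGES = { CHAT: 0, PLAYER: 1 };

var chatSchema = sp.build({
    __message: "uint8",
    contents: "string"
});

// Sender: tag each object with its message type.
var chatBuffer = chatSchema.encode({ __message: MESSAGES.CHAT, contents: "hi" });

// Receiver: peek at the first byte to pick the right schema.
switch (chatBuffer.readUInt8(0)) {
    case MESSAGES.CHAT:
        var message = chatSchema.decode(chatBuffer);
        break;
}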

Motivation

I was working on an app that used WebSockets to talk between client and server. Usually when doing this the client and server just send JSON back and forth. However, when receiving a message the receiver already knows what the format of the message is going to be. Example:

// Client:
var message = { 'sender': 'John', 'contents': 'hi' };
socket.emit('chat', message);

// Server
socket.on('chat', function(message) {
    // We know message is going to be an object with 'sender' and 'contents' keys
});

The problems I had with sending JSON back and forth between client and server:

  • It's a complete waste of bandwidth to send all those keys and delimiters when the object format is known.
  • Even though JSON.stringify and JSON.parse are optimized native functions, they're still slower than reading and writing binary data directly from buffers.
  • There's no implicit central message repository where I can look at the format of all my different packets.
  • There's no validation so there's potential to have silent errors when accidentally sending the wrong message.

Why I didn't just use an existing schema packing library:

  • Too complicated: I didn't want to have to learn a schema language and format a schema for every object.
  • Too slow: I benchmarked a couple of other popular libraries and they were often 10x slower than using the native JSON.stringify and JSON.parse. This library is faster than even those native methods.
  • Too large: I didn't want to use a behemoth library with tens of thousands of lines of code and many dependencies for something so simple. This library is 400 lines of code with no dependencies.
  • Too much overhead: Some of the other libraries that allow you to specify a schema still waste a lot of bytes on padding/keys/etc. I designed this library to not waste a single byte on anything that isn't your data.

Why not just use gzip compression?

  • Bandwidth usage: If you gzip the player example at the top, the payload will actually increase in size. Thus, many engines don't gzip small packets. Compression works best with large payloads with repetition.
  • Memory usage: It is common for compression to use an additional 300 kilobytes per connection.
  • CPU usage: permessage-deflate can increase encoding times by 5-10x with small payloads (~2x with large).
  • You still can: Using gzip and SchemaPack is not mutually exclusive. You can still use gzip on the resulting buffers, as the sketch below shows.
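
As a rough sketch of combining the two in Node (the 1 KB threshold is an arbitrary example value, and the receiver is assumed to know whether a given payload was compressed):

var zlib = require('zlib');

var encoded = playerSchema.encode(player);

// Only gzip payloads large enough for compression to pay off.
var payload = encoded.length > 1024 ? zlib.gzipSync(encoded) : encoded;

// Receiver side, for a payload known to be gzipped:
var decoded = playerSchema.decode(zlib.gunzipSync(payload));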

Benchmarks

These were performed by encoding/decoding the player object at the start of this page many times on an i7 3770K running Windows 7.

[Benchmark charts: payload size and encode/decode speed comparisons, plus a screencap of the benchmark.js console output.]

In addition, compared to the competition, SchemaPack really shines with large objects that have a lot of nesting and long arrays. I encourage you to run the benchmarks with your own objects to see what works best for you.

Library Size

2.67 KB after minify and gzip without buffer shim.

8.83 KB after minify and gzip with buffer shim.

Installation

On the server, you can just copy schemapack.js into your project folder and require it. (Remove the ./ if installed through npm.)

var sp = require('./schemapack');

On the client, use webpack/browserify to automatically include the prerequisite buffer shim if you're not using it already.

For example, if you had a file index.js with the following:

var sp = require('./schemapack');
// More code here using schemapack

You can add the Buffer shim by typing browserify index.js > bundle.js and then including that file in your HTML.

<script type="text/javascript" src="bundle.js"></script>

Alternatively, just grab the built minified file from the build folder in the GitHub repository. Then add the following to your HTML page:

<script type="text/javascript" src="schemapack.min.js"></script>

This will attach it to the window object. In your JavaScript files, the variable will be available as schemapack. This built file only needs to be used on the client, as the Node server already includes the prerequisite Buffer. The server should use the unbundled version.
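
For example, once the script above has loaded, usage through the global looks like this (a minimal sketch):

// The built file attaches schemapack to the window object.
var personSchema = window.schemapack.build({ name: "string", age: "uint8" });
var buffer = personSchema.encode({ name: "Ada", age: 36 });
console.log(personSchema.decode(buffer).name); // Ada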

API

Build your schema:

var personSchema = sp.build({
    name: 'string',
    age: 'uint8',
    weight: 'float32'
}); // This parses, sorts, validates, flattens, and then saves the resulting schema.

Encode your objects:

var john = {
    name: 'John Smith',
    age: 32,
    weight: 188.5
};
var buffer = personSchema.encode(john);
console.log(buffer); // <Buffer 20 0a 4a 6f 68 6e 20 53 6d 69 74 68 43 3c 80 00>

Decode your buffers back to objects:

var object = personSchema.decode(buffer);
console.log(object.name); // John Smith
console.log(object.age); // 32
console.log(object.weight); // 188.5

Important array information:

The last item in an array is both optional and repeatable. For example, with this schema:

var schema = sp.build({
    "numbers": [ "string", "uint8" ]
});

All of the following objects are valid for it:

var obj1 = { "numbers": [ "asdf" ] };
var obj2 = { "numbers": [ "asdf", 10 ] };
var obj3 = { "numbers": [ "asdf", 14, 7 ] };
var obj4 = { "numbers": [ "asdf", 0, 5, 7 ] };

The last item can also be an array or object, with any amount of nesting. Here's an example schema:

var schema = sp.build([
    { "name": "string", "numbers": [ "varint" ], "age": "uint8" }
]);

And here's an object that conforms to it:

var obj = [
    { "name": "joe", "numbers": [ -3, 2, 5 ], "age": 42 },
    { "name": "john smith iv", "numbers": [], "age": 27 },
    { "name": "bobby", "numbers": [ -22, 1 ], "age": 6 },
];

Set the encoding used for strings:

'utf8' is the default. If you only need to support English, changing the string encoding to 'ascii' can increase speed. Choose between 'ascii', 'utf8', 'utf16le', 'ucs2', 'base64', 'binary', and 'hex'.

sp.setStringEncoding('ascii');

Add type aliases:

sp.addTypeAlias('int', 'varuint');
var builtSchema = sp.build([ 'string', 'int' ]);
var buffer = builtSchema.encode([ 'dave', 1, 2, 3 ]);
var object = builtSchema.decode(buffer);
console.log(object); // [ 'dave', 1, 2, 3 ]

Validation

By default, validation is enabled. This means that the encode function will include checks to ensure passed objects match the schema.

The build function takes an optional parameter for validation. If set to false, the aforementioned checks will be excluded. Example:

var builtSchema = sp.build({ "sample": "string" }, false); // Validation checks won't be added to the encode function

To avoid having to pass this flag to each call of build, you can instead call setValidateByDefault to set the default validation strategy. Example:

sp.setValidateByDefault(false);

Setting the parameter to false will disable validation by default, while true will enable validation by default.

Make single item schemas:

var builtSchema = sp.build("varint");
var buffer = builtSchema.encode(-350);
var item = builtSchema.decode(buffer);
console.log(item); // -350

Here is a table of the available data types for use in your schemas:

Type Name | Aliases | Bytes | Range of Values
--- | --- | --- | ---
bool | boolean | 1 | True or false
int8 | | 1 | -128 to 127
uint8 | | 1 | 0 to 255
int16 | | 2 | -32,768 to 32,767
uint16 | | 2 | 0 to 65,535
int32 | | 4 | -2,147,483,648 to 2,147,483,647
uint32 | | 4 | 0 to 4,294,967,295
float32 | | 4 | 3.4E +/- 38 (7 digits)
float64 | | 8 | 1.7E +/- 308 (15 digits)
string | | varuint length prefix followed by the bytes of each character | Any string
varuint | | 1 byte when 0 to 127; 2 bytes when 128 to 16,383; 3 bytes when 16,384 to 2,097,151; 4 bytes when 2,097,152 to 268,435,455; etc. | 0 to 2,147,483,647
varint | | 1 byte when -64 to 63; 2 bytes when -8,192 to 8,191; 3 bytes when -1,048,576 to 1,048,575; 4 bytes when -134,217,728 to 134,217,727; etc. | -1,073,741,824 to 1,073,741,823
buffer | | varuint length prefix followed by the bytes of the buffer | Any buffer
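
To illustrate where those varuint byte counts come from, here is a rough sketch of the usual base-128 encoding (illustrative only, not necessarily the library's internals): each byte stores 7 bits of the value, and the high bit marks whether more bytes follow.

// Illustrative sketch: encode a non-negative integer as a base-128 varuint.
function encodeVarUint(value) {
    var bytes = [];
    do {
        var chunk = value & 0x7f;        // take the low 7 bits
        value = Math.floor(value / 128); // drop those 7 bits
        bytes.push(value > 0 ? chunk | 0x80 : chunk); // high bit set = more bytes follow
    } while (value > 0);
    return Buffer.from(bytes); // use new Buffer(bytes) on very old Node versions
}

console.log(encodeVarUint(127).length);   // 1 byte
console.log(encodeVarUint(128).length);   // 2 bytes
console.log(encodeVarUint(16384).length); // 3 bytes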

Tests

Just clone the repository and run npm install in the directory to get the testing framework (it also grabs the other libraries used in the benchmarks).

Then run npm test.

Compatibility

This library uses Buffer when in the Node.js environment (always included) and the buffer shim when in the browser (included with browserify/webpack).

The Travis tests pass with Node versions ranging from 0.11.15 to the latest (6.3.1 at the time of writing).

License

MIT

Comments
  • Socket.IO Integration

    Heya,

    Did you try it with Socket.IO already? I'm in the process of adding this to my Socket.IO project.

    On the client side I do the following:

    var player = {
        health: 4000,
        jumping: false,
        position: [ -540, 343, 1201 ],
        attributes: { str: 87, agi: 42, int: 22 }
    }
    socket.emit(300, playerSchema.encode(player))
    

    Server side:

    socket.on(300, function(message) {
        if(message.type !== 'Buffer') {
            return
        }
    
        var player = playerSchema.decode(new Buffer(message.data))
        console.log(player)
    })
    

    How the message gets transmitted: 42[300, {type: "Buffer", data: [42, 22, 87, 160, 31, 0, 3, 253, 228, 1, 87, 4, 177]}]

    I didn't find a setting yet to switch to binary completely. (I assume Socket.IO doesn't support it due to the fallbacks to other transports. I may switch to a websocket solution only later.)

    Let's move to this line:

    var player = playerSchema.decode(new Buffer(message.data))
    

    Wouldn't it be better if SchemaPack transformed the array into a buffer?

    opened by Buffele 10
  • Is there any way to use schemapack without browserify?

    Hello!

    We really want to use this in our current project, but using browserify is not an option. Is there a way to use schemapack on the client without having to build a bundle.js? Can we include some dependencies in script tags directly to make it work?

    The server is no problem, just the client.

    Thanks!

    opened by Jared-Sprague 9
  • How to include an op code in the first byte always?

    Hey @phretaddin!

    Thanks for all your help so far! I have another question. We want to convert all of our WebSocket messages that are currently using JSON to schemapack, but we have a problem: how do we know which message is which?

    We are using the 'ws' package on the server and the standard native WebSocket on the client.

    Currently this is easy because when we send a JSON message, one of the properties is 'op', e.g. from the server: ws.send(JSON.stringify({op: 'welcome', message: 'welcome to our game'})); On the client:

    ...
    switch (message.op) {
        case 'welcome':
        handle_msg_welcome(message);
        break;
    ...
    

    This is easy when using JSON, but when we convert to schemapack and everything is a buffer, and we're not using Socket.io which can include a string key with every message, we lose the ability to easily tell messages apart.

    Maybe you've already solved this problem in your own projects? One of the ways I was considering solving this was to make the first byte of every buffer a numeric op code in the schema e.g:

    var welcomeSchema = sp.build({
        op: "uint8",
        message: "string",
    });
    var playerSchema = sp.build({
        op: "uint8",
        health: "varuint",
        jumping: "boolean",
        position: [ "int16" ],
        attributes: { str: 'uint8', agi: 'uint8', int: 'uint8' }
    });
    

    Then read the first byte directly before deciding which schema to use to decode:

    var op = readFirstByte(buffer);
    switch (op) {
        case ops.WELCOME:
            handle_welcome( welcomeSchema.decode(buffer) );
            break;
        case ops.PLAYER:
            handle_player( playerSchema.decode(buffer) );
            break;
    ...
    

    What do you think of that idea? If that's a good solution, how do I guarantee that 'op' will always be in the first byte of the buffer? I know you mentioned that the object is sorted, maybe naming it with double underscore __op: "uint8" or something will force it to be the first in sort order?

    opened by Jared-Sprague 6
  • Support empty arrays and null values

    These tests fail for the explained reasons:

    var tests = require('./tests');
    tests.testValues({x: "varint"}, {x: null}) // converts value to 0 after decoding
    tests.testValues({arr: ["varint"]}, {arr: []}) // converts value to [0] after decoding
    tests.testValues({str: "string"}, {str: null}) // throws exception on encoding
    tests.testValues({arr: ["string"]}, {arr: []}) // throws exception on encoding
    tests.testValues({arr: ["string"]}, {arr: null}) // throws exception on encoding
    tests.testValues({obj: { y: "varint"}}, {obj: null}) // throws exception on encoding
    

    And there are probably other examples with other data types, but you get the idea.

    opened by Malkiz 5
  • disable sorting

    I want to use this library to read already-written binary files, porting in C/C++ structs -- but it's sorting the schema, and that is totally screwing things up.

    Please consider an option to disable the schema sorting!!

    opened by ajoslin103 4
  • NodeJS 6.4.0 Empty String Range Error

    Hello,

    Only affects latest NodeJS (6.4.0) Version. 6.3.1 worked fine.

    var sp = require('../schemapack')
    
    var stringSchema = sp.build('string')
    
    var data = stringSchema.encode('')
    var decoded = stringSchema.decode(data)
    
    console.log(decoded)
    

    Output:

            return this.utf8Write(string, offset, length);
                        ^
    
    RangeError: Offset is out of bounds
        at RangeError (native)
        at Buffer.write (buffer.js:761:21)
        at Object.writeString (D:\Projects\schemapack\schemapack.js:67:29)
        at eval (eval at getCompiledSchema (D:\Projects\schemapack\schemapack.js:374:24), <anonymous>:3:209)
        at Object.encode (D:\Projects\schemapack\schemapack.js:389:14)
        at Object.<anonymous> (D:\Projects\schempack_test\test.js:5:25)
        at Module._compile (module.js:556:32)
        at Object.Module._extensions..js (module.js:565:10)
        at Module.load (module.js:473:32)
        at tryModuleLoad (module.js:432:12)
    
    opened by Buffele 3
  • Do array of object schemas have to be formatted in JSON?

    Hey @phretaddin!

    I tried making a schema like this:

    var playersSchema = sp.build([{
        health: "varuint",
        jumping: "boolean",
        position: [ "int16" ],
        attributes: { str: 'uint8', agi: 'uint8', int: 'uint8' }
    }]);
    
    var players = [
        {
            health: 4000,
            jumping: true,
            position: [ -540, 343, 1201 ],
            attributes: { str: 87, agi: 42, int: 22 }
        },
        {
            health: 5000,
            jumping: false,
            position: [ -540, 343, 1201 ],
            attributes: { str: 87, agi: 42, int: 22 }
        }
    ];
    
    var buffer = playersSchema.encode(players);
    

    But I was getting an error on the client (screenshot attached in the issue).

    Then I noticed in person-array.js test this:

    [ { "name": "string", "numbers": [ "varint" ], "age": "uint8" } ]
    

    And that works. So it looks like if you want an array-of-objects schema, the object in the array schema has to be written in JSON?

    Thanks!

    opened by Jared-Sprague 3
  • Data Type Buffer

    What do you think about adding a new data type buffer where you're able to write a buffer to it? For example, for uploading a picture via binary data. Of course I could use a binary websocket message instead, but I'd like to add more information to the message, like fileName (String), fileSize (UINT32), fileData (Buffer).

    opened by Buffele 3
  • More realistic benchmarks are "needed"

    If performance claims are made, I think it's important to provide strong proof of those statements.

    • Testing methodology: no info is available about...
      • The specs of the machine used.
      • The number of loop iterations.
      • The number of "outer loop" iterations (to measure variance). In this case we know it is 1, because the variance is not measured.
      • The initial "warm-up" process.
      • Which MessagePack implementation was used? (There are multiple implementations.)
    • Cases 'matrix':
      • Object with only scalar properties:
        • how does the number of properties affect performance?
        • how do the scalar types affect performance?
      • Object with array properties (how does array length affect performance?)
      • Object with nested object properties (how does depth affect performance?)

    I know that the "benchmarks" are inside the repository, but this is far from being OK:

    • Benchmarks have been named as "tests", but tests are not the same as benchmarks. If the benchmarks are being used as tests, that should change, starting by decoupling them.
    • Benchmarks need to be modified by hand (removing comment marks over some requires) before being executed for comparative purposes.
    • Some requires have to be done "by hand" before executing the benchmarks. It's possible to declare dev dependencies in the package.json file, which is better than the current approach.
    • Tests/benchmarks files are at the same directory level as the "production" code, and a file like "index" shouldn't be used as a way to execute the benchmarks/tests.
    opened by castarco 3
  • Security tests over "fuzzed" inputs?

    It's very important to run security tests over "fuzzed" inputs. What happens if the serialized input is corrupted?

    I think it's probable that a big proportion of the performance gains over other serialization libraries comes from the lack of safety checks.

    opened by castarco 2
  • No mention of (not object-wrapped) array serialization

    SchemaPack is directly compared with some serialization formats like JSON and MessagePack. Both of them can directly serialize arrays without wrapping them in "objects", yet the SchemaPack documentation has no info about how to directly serialize arrays without having to wrap them in objects.

    Is this possible? If not, it would be helpful to explain that, so other programmers can more easily see the limitations of this serialization format.

    opened by castarco 2
  • '_placeholder' message when using schemapack with socket.io

    I use schemapack with socket.io to reduce message size. The message size reduces well, but socket.io sends some weird messages before it sends my binary message.

    opened by DrMinh 0
  • Serialization isn't faster than JSON.stringify with complex schemas

    Adding many string properties and child objects increases the encode/decode time.

    You can reproduce with this

    Schema:

    {
        _id: "string",
        index: "uint8",
        guid: "string",
        isActive: "boolean",
        balance: "float32",
        picture: "string",
        age: "uint8",
        eyeColor: "string",
        name: {
            first: "string",
            last: "string"
        },
        range: ["uint8"]
    }
    

    Packet sample:

    {
      _id: "5d93b9d70cbdf21c0c6f56bb",
      index: 0,
      guid: "4c63d6bb-0680-4b2d-9343-919c9892d837",
      isActive: false,
      balance: 3312.84,
      picture: "http://placehold.it/32x32",
      age: 36,
      eyeColor: "brown",
      name: {
        first: "Clements",
        last: "Alford"
      },
      range: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
    }
    

    My results: schemapack: 232.398ms json: 162.347ms

    opened by ferco0 0
  • Shared schemas between client and server

    I'm trying to have a single file called shared.js to include on the server and client rather than duplicating it for both. But I can't seem to do this, as I just get 'schemapack is undefined' when running with node... even though it's declared and defined above it. Help would be appreciated... Thank you!

    Eg.

    shared.js


    const textMsg = schemapack.build({ __message: 'uint8', msg: 'string' });

    try { module.exports = { textMsg }; } catch (err) { /* We are not running as a Node.js module */ }


    Server:


    const schemapack = require('schemapack');
    require('./public/shared');


    Client:



    opened by Abul22 2
  • Non-lowercase type aliases don't work

    Non-lowercase type aliases don't work

    const sp = require('schemapack')
    sp.addTypeAlias('edgeId', 'uint32')
    sp.build('edgeId')
    

    The code above will throw this error:

    TypeError: Invalid data type for schema: edgeId -> edgeid
        at getDataType (schemapack.js:22:51)
        at compileSchema (schemapack.js:349:24)
        at getCompiledSchema (schemapack.js:369:3)
        at Object.build (schemapack.js:382:21)
        ...
    

    This happens because aliases get lowercased before lookup, but not when they are created. In the docs I could find no mention of type names having to be lowercase.

    IMHO, the .toLowerCase() I linked above has to be removed.

    opened by raphinesse 0