gron

Make JSON greppable!

gron transforms JSON into discrete assignments to make it easier to grep for what you want and see the absolute 'path' to it. It eases the exploration of APIs that return large blobs of JSON but have terrible documentation.

gron "https://api.github.com/repos/tomnomnom/gron/commits?per_page=1" | fgrep "commit.author"
json[0].commit.author = {};
json[0].commit.author.date = "2016-07-02T10:51:21Z";
json[0].commit.author.email = "[email protected]";
json[0].commit.author.name = "Tom Hudson";

gron can work backwards too, enabling you to turn your filtered data back into JSON:

▶ gron "https://api.github.com/repos/tomnomnom/gron/commits?per_page=1" | fgrep "commit.author" | gron --ungron
[
  {
    "commit": {
      "author": {
        "date": "2016-07-02T10:51:21Z",
        "email": "[email protected]",
        "name": "Tom Hudson"
      }
    }
  }
]

Disclaimer: the GitHub API actually has fantastic documentation; it just happens to make a good example.

Installation

gron has no runtime dependencies. You can just download a binary for Linux, Mac, Windows or FreeBSD and run it. Put the binary in your $PATH (e.g. in /usr/local/bin) to make it easy to use:

▶ tar xzf gron-linux-amd64-0.1.5.tgz
▶ sudo mv gron /usr/local/bin/

If you're a Mac user you can also install gron via brew:

▶ brew install gron

Or if you're a Go user you can install it with go get (requires Go 1.7 or newer):

▶ go get -u github.com/tomnomnom/gron

It's recommended that you alias ungron or norg (or both!) to gron --ungron. Put something like this in your shell profile (e.g. in ~/.bashrc):

alias norg="gron --ungron"
alias ungron="gron --ungron"

Or you could create a shell script named ungron or norg somewhere in your $PATH (e.g. /usr/local/bin) so it's available to all users:

#!/bin/sh
gron --ungron "$@"

Usage

Get JSON from a file:

▶ gron testdata/two.json 
json = {};
json.contact = {};
json.contact.email = "[email protected]";
json.contact.twitter = "@TomNomNom";
json.github = "https://github.com/tomnomnom/";
json.likes = [];
json.likes[0] = "code";
json.likes[1] = "cheese";
json.likes[2] = "meat";
json.name = "Tom";

From a URL:

▶ gron http://headers.jsontest.com/
json = {};
json.Host = "headers.jsontest.com";
json["User-Agent"] = "gron/0.1";
json["X-Cloud-Trace-Context"] = "6917a823919477919dbc1523584ba25d/11970839830843610056";

Or from stdin:

▶ curl -s http://headers.jsontest.com/ | gron
json = {};
json.Accept = "*/*";
json.Host = "headers.jsontest.com";
json["User-Agent"] = "curl/7.43.0";
json["X-Cloud-Trace-Context"] = "c70f7bf26661c67d0b9f2cde6f295319/13941186890243645147";

Grep for something and easily see the path to it:

▶ gron testdata/two.json | grep twitter
json.contact.twitter = "@TomNomNom";

gron makes diffing JSON easy too:

▶ diff <(gron two.json) <(gron two-b.json)
3c3
< json.contact.email = "[email protected]";
---
> json.contact.email = "[email protected]";

The output of gron is valid JavaScript:

▶ gron testdata/two.json > tmp.js
▶ echo "console.log(json);" >> tmp.js
▶ nodejs tmp.js
{ contact: { email: '[email protected]', twitter: '@TomNomNom' },
  github: 'https://github.com/tomnomnom/',
  likes: [ 'code', 'cheese', 'meat' ],
  name: 'Tom' }

It's also possible to obtain gron's output as a JSON stream via the --json switch:

▶ curl -s http://headers.jsontest.com/ | gron --json
[[],{}]
[["Accept"],"*/*"]
[["Host"],"headers.jsontest.com"]
[["User-Agent"],"curl/7.43.0"]
[["X-Cloud-Trace-Context"],"c70f7bf26661c67d0b9f2cde6f295319/13941186890243645147"]

ungronning

gron can also turn its output back into JSON:

▶ gron testdata/two.json | gron -u
{
  "contact": {
    "email": "[email protected]",
    "twitter": "@TomNomNom"
  },
  "github": "https://github.com/tomnomnom/",
  "likes": [
    "code",
    "cheese",
    "meat"
  ],
  "name": "Tom"
}

This means you can use gron with grep and other tools to modify JSON:

▶ gron testdata/two.json | grep likes | gron --ungron
{
  "likes": [
    "code",
    "cheese",
    "meat"
  ]
}

or

▶ gron --json testdata/two.json | grep likes | gron --json --ungron
{
  "likes": [
    "code",
    "cheese",
    "meat"
  ]
}
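
The same approach works for editing values, not just filtering them, since gron's output is plain text that tools like sed can rewrite before it's turned back into JSON. A rough, untested sketch (the substitution is only an illustration), which should produce something like:

▶ gron testdata/two.json | grep name | sed 's/"Tom"/"Thomas"/' | gron --ungron
{
  "name": "Thomas"
}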

To preserve array keys, arrays are padded with null when values are missing:

▶ gron testdata/two.json | grep likes | grep -v cheese
json.likes = [];
json.likes[0] = "code";
json.likes[2] = "meat";
▶ gron testdata/two.json | grep likes | grep -v cheese | gron --ungron
{
  "likes": [
    "code",
    null,
    "meat"
  ]
}

If you get creative you can do some pretty neat tricks with gron, and then ungron the output back into JSON.

Get Help

▶ gron --help
Transform JSON (from a file, URL, or stdin) into discrete assignments to make it greppable

Usage:
  gron [OPTIONS] [FILE|URL|-]

Options:
  -u, --ungron     Reverse the operation (turn assignments back into JSON)
  -c, --colorize   Colorize output (default on tty)
  -m, --monochrome Monochrome (don't colorize output)
  -s, --stream     Treat each line of input as a separate JSON object
  -k, --insecure   Disable certificate validation
  -j, --json       Represent gron data as JSON stream
      --no-sort    Don't sort output (faster)
      --version    Print version information

Exit Codes:
  0	OK
  1	Failed to open file
  2	Failed to read input
  3	Failed to form statements
  4	Failed to fetch URL
  5	Failed to parse statements
  6	Failed to encode JSON

Examples:
  gron /tmp/apiresponse.json
  gron http://jsonplaceholder.typicode.com/users/1 
  curl -s http://jsonplaceholder.typicode.com/users/1 | gron
  gron http://jsonplaceholder.typicode.com/users/1 | grep company | gron --ungron
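
The --stream flag (not demonstrated above) is handy for newline-delimited JSON, such as log output, where each line of input is a separate value. A rough sketch with made-up input; the exact formatting may differ slightly, but each input value should get its own index under json:

▶ printf '{"level":"info"}\n{"level":"error"}\n' | gron --stream
json[0] = {};
json[0].level = "info";
json[1] = {};
json[1].level = "error";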

FAQ

Wasn't this written in PHP before?

Yes it was! The original version is preserved here for posterity.

Why the change to Go?

Mostly to remove PHP as a dependency. There are a lot of people who work with JSON but don't have PHP installed.

Why shouldn't I just use jq?

jq is awesome, and a lot more powerful than gron, but with that power comes complexity. gron aims to make it easier to use the tools you already know, like grep and sed.

gron's primary purpose is to make it easy to find the path to a value in a deeply nested JSON blob when you don't already know the structure; much of jq's power is unlocked only once you know that structure.