# winston-bigquery

BigQuery transport for the [winston](https://github.com/winstonjs/winston) logger.

## Usage
```typescript
import {WinstonBigQuery} from 'winston-bigquery';
import winston, {format} from 'winston';

const logger = winston.createLogger({
  level: 'debug',
  transports: [
    // ...
    new WinstonBigQuery({
      dataset: 'logs',
      table: 'winston_logs',
    })
    // ...
  ]
});

logger.info('Hello World', {
  meta1: 1,
  meta2: 'string',
  meta3: {deepObj: 1}
});
```
## Setting Credentials

In order to access BigQuery we need service account credentials. There are three ways to set them:

- pass `applicationCredentials`, containing a path to your key file, in the options (see the sketch below)
- set the `GOOGLE_APPLICATION_CREDENTIALS` environment variable
- set the `SERVICE_ACCOUNT` environment variable (recommended)

The latter was added since setting `GOOGLE_APPLICATION_CREDENTIALS` is reported to sometimes break other Google SDKs (such as Firebase).
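For illustration, here is a minimal sketch of all three options; the key file path is hypothetical, so point it at your own key file:

```typescript
import {WinstonBigQuery} from 'winston-bigquery';
import winston from 'winston';

// Option 1: pass the key file path directly in the transport options.
const logger = winston.createLogger({
  transports: [
    new WinstonBigQuery({
      dataset: 'logs',
      table: 'winston_logs',
      applicationCredentials: '/path/to/service-account.json' // hypothetical path
    })
  ]
});

// Option 2: set the environment variable before starting the process:
//   export GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account.json
// Option 3 (recommended): set SERVICE_ACCOUNT instead:
//   export SERVICE_ACCOUNT=/path/to/service-account.json
```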
## Why BigQuery?

Google has its own log solution, Stackdriver Logging. Unfortunately, I find it messy, inconvenient, and hard to query. BigQuery, on the other hand, has an excellent UI and easy SQL-like querying capabilities, and it is also optimized to search through huge amounts of data.
## TypeScript

winston-bigquery comes with its own type definitions, so you won't have to use DefinitelyTyped.
## Schema

BigQuery needs a schema for its table. This can be achieved in two ways:

- create your schema manually
- use the `dropCreate: true` and `schema: {...}` options in the constructor (see the sketch after this list)

Please refer to the create-table example.
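A minimal sketch of the second option, assuming the `schema` object follows the BigQuery Node.js client's table-schema shape (`{fields: [...]}`); that shape and the extra field below are assumptions, so check the create-table example for the authoritative form:

```typescript
import {WinstonBigQuery} from 'winston-bigquery';

// dropCreate: true asks the transport to (re)create the table
// with the schema given below, so no manual table setup is needed.
const transport = new WinstonBigQuery({
  dataset: 'logs',
  table: 'winston_logs',
  dropCreate: true,
  schema: {
    // assumed BigQuery table-schema format; the field is purely illustrative
    fields: [
      {name: 'character_name', type: 'STRING'}
    ]
  }
});
```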
The following fields will always be auto-created for you:

```json
[
  {
    "name": "timestamp",
    "type": "TIMESTAMP"
  },
  {
    "name": "level",
    "type": "STRING"
  },
  {
    "name": "message",
    "type": "STRING"
  },
  {
    "name": "meta",
    "type": "STRING"
  }
]
```
Everything outside the schema will automatically be flattened, converted to a string, and pushed into the `meta` field, as illustrated below.
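For instance, the usage snippet above would produce a row along these lines; the exact serialized form of `meta` is an assumption, shown only to illustrate the idea:

```typescript
logger.info('Hello World', {
  meta1: 1,
  meta2: 'string',
  meta3: {deepObj: 1}
});

// Sketch of the resulting row (serialization details are assumed):
// timestamp: <insert time>
// level:     'info'
// message:   'Hello World'
// meta:      '{"meta1":1,"meta2":"string","meta3":{"deepObj":1}}'
```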
Later on, you can query the JSON data with the built-in BigQuery functions, for example:

```sql
SELECT t.*, JSON_EXTRACT(t.meta, "$.character_name")
FROM `project.schema.table` t
LIMIT 1000
```
## Installation

```sh
npm i winston-bigquery
```
## Running Tests

```sh
npm test
```