diff --git a/CHANGELOG.md b/CHANGELOG.md index 181db35..81764d5 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -1,3 +1,10 @@ +## [2.0.4](https://github.com/armand1m/papercut/compare/v2.0.3...v2.0.4) (2021-11-15) + + +### Bug Fixes + +* make jsdom and pino peer dependencies ([5aabad2](https://github.com/armand1m/papercut/commit/5aabad246c45127f9a3f5b23f18e1aa407410704)) + ## [2.0.3](https://github.com/armand1m/papercut/compare/v2.0.2...v2.0.3) (2021-11-15) diff --git a/docs/index.html b/docs/index.html index d98a6f4..cc5485f 100644 --- a/docs/index.html +++ b/docs/index.html @@ -2,8 +2,14 @@

Papercut

+

NPM JavaScript Style Guide +codecov +bundlephobia +bundlephobia

+

Papercut is a scraping/crawling library for Node.js, written in TypeScript.

-

It provides a type-safe and small foundation that makes it fairly easy to scrape webpages with confidence.

+
+

Papercut provides a small, type-safe, and tested foundation that makes it easy to scrape webpages with confidence.

Features

@@ -63,8 +69,8 @@

Quick example

Create an empty project with yarn:

mkdir papercut-demo
cd papercut-demo
yarn init -y
-

Add papercut:

-
yarn add @armand1m/papercut
+

Add papercut and the needed peer dependencies:

+
yarn add @armand1m/papercut jsdom pino
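For context, v2.0.4 moves jsdom and pino from regular dependencies to peer dependencies, which in papercut's package.json looks roughly like the fragment below (the actual version ranges shipped in the release may differ):

```json
{
  "peerDependencies": {
    "jsdom": "*",
    "pino": "*"
  }
}
```

This is why consumers now install jsdom and pino themselves alongside @armand1m/papercut.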
 
diff --git a/docs/interfaces/CreateRunnerProps.html b/docs/interfaces/CreateRunnerProps.html index 8d5aa4d..90ce9bf 100644 --- a/docs/interfaces/CreateRunnerProps.html +++ b/docs/interfaces/CreateRunnerProps.html @@ -1,6 +1,6 @@ -CreateRunnerProps | @armand1m/papercut

Interface CreateRunnerProps

Hierarchy

  • CreateRunnerProps

Index

Properties

Properties

logger

logger: Logger
+CreateRunnerProps | @armand1m/papercut
Options
All
  • Public
  • Public/Protected
  • All
Menu

Interface CreateRunnerProps

Hierarchy

  • CreateRunnerProps

Index

Properties

Properties

logger

logger: Logger

A pino.Logger instance.

-

options

+

options

The scraper options. Use this to tweak log, cache and concurrency settings.

Legend

  • Property

Settings

Theme

Generated using TypeDoc

\ No newline at end of file diff --git a/docs/interfaces/GeosearchResult.html b/docs/interfaces/GeosearchResult.html index efa8a11..0a25aba 100644 --- a/docs/interfaces/GeosearchResult.html +++ b/docs/interfaces/GeosearchResult.html @@ -1 +1 @@ -GeosearchResult | @armand1m/papercut
Options
All
  • Public
  • Public/Protected
  • All
Menu

Interface GeosearchResult

Hierarchy

  • GeosearchResult

Index

Properties

latitude

latitude: number

longitude

longitude: number

Legend

  • Property

Settings

Theme

Generated using TypeDoc

\ No newline at end of file +GeosearchResult | @armand1m/papercut
Options
All
  • Public
  • Public/Protected
  • All
Menu

Interface GeosearchResult

Hierarchy

  • GeosearchResult

Index

Properties

latitude

latitude: number

longitude

longitude: number

Legend

  • Property

Settings

Theme

Generated using TypeDoc

\ No newline at end of file diff --git a/docs/interfaces/RunProps.html b/docs/interfaces/RunProps.html index 6ceab20..9b7ce32 100644 --- a/docs/interfaces/RunProps.html +++ b/docs/interfaces/RunProps.html @@ -1,7 +1,7 @@ -RunProps | @armand1m/papercut
Options
All
  • Public
  • Public/Protected
  • All
Menu

Interface RunProps<T, B>

Type parameters

Hierarchy

  • RunProps

Index

Properties

baseUrl

baseUrl: string
+RunProps | @armand1m/papercut
Options
All
  • Public
  • Public/Protected
  • All
Menu

Interface RunProps<T, B>

Type parameters

Hierarchy

  • RunProps

Index

Properties

baseUrl

baseUrl: string

The base URL to start scraping from.

This page will be fetched, parsed and mounted in a virtual JSDOM instance.

-

Optional pagination

pagination?: PaginationOptions
+

Optional pagination

pagination?: PaginationOptions

Optional pagination feature.

If enabled and configured, this will make papercut fetch, parse, mount and scrape multiple pages based @@ -9,14 +9,14 @@

As long as you have a way to fetch the last page number from the page you're scraping and use it as a query param in the page URL, you should be fine.
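The pagination idea described above can be sketched as a small helper that derives one URL per page by setting the page number as a query param. This is a hypothetical illustration, not papercut's API; the real shape of PaginationOptions may differ:

```typescript
// Hypothetical helper: given a base URL and the last page number,
// produce one URL per page with the page number as a query param.
const buildPageUrls = (
  baseUrl: string,
  lastPage: number,
  param = "page"
): string[] =>
  Array.from({ length: lastPage }, (_, i) => {
    const url = new URL(baseUrl);
    url.searchParams.set(param, String(i + 1));
    return url.toString();
  });

console.log(buildPageUrls("https://example.com/venues", 3));
```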

-

selectors

selectors: T
+

selectors

selectors: T

The selectors to be used during the scraping process.

The result object will match the schema of the selectors.

-

strict

strict: B
+

strict

strict: B

If enabled, Papercut scrapes the page in strict mode. This means that if a selector function fails, the entire scraping run is halted with an error.

When enabled, the result types will not expect undefined values.
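The effect of the strict flag on result types can be seen in the published ScrapeResultType signature. The self-contained sketch below models that conditional type locally (it does not import papercut; the selector values are made up for illustration):

```typescript
// Local model of papercut's ScrapeResultType: when B is true (strict mode),
// every selector key is required; otherwise every key is optional.
type SelectorMap = Record<string, (...args: any[]) => any>;

type ScrapeResultType<T extends SelectorMap, B extends boolean> = B extends true
  ? { [Prop in keyof T]: Awaited<ReturnType<T[Prop]>> }
  : { [Prop in keyof T]?: Awaited<ReturnType<T[Prop]>> };

const selectors = {
  name: () => "Club Paradiso",
  rating: async () => 4.5,
};

// Strict mode: every key must be present and non-undefined.
const strictResult: ScrapeResultType<typeof selectors, true> = {
  name: "Club Paradiso",
  rating: 4.5,
};

// Non-strict mode: keys may be missing when a selector fails.
const lenientResult: ScrapeResultType<typeof selectors, false> = {
  name: "Club Paradiso",
};

console.log(strictResult.rating, lenientResult.rating);
```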

-

target

target: string
+

target

target: string

The DOM selector for the target nodes to be scraped.

Legend

  • Property

Settings

Theme

Generated using TypeDoc

\ No newline at end of file diff --git a/docs/interfaces/ScrapeProps.html b/docs/interfaces/ScrapeProps.html index 1c14a20..c54067b 100644 --- a/docs/interfaces/ScrapeProps.html +++ b/docs/interfaces/ScrapeProps.html @@ -1 +1 @@ -ScrapeProps | @armand1m/papercut
Options
All
  • Public
  • Public/Protected
  • All
Menu

Interface ScrapeProps<T, B>

Type parameters

Hierarchy

  • ScrapeProps

Index

Properties

document

document: Document

logger

logger: Logger

options

selectors

selectors: T

strict

strict: B

target

target: string

Legend

  • Property

Settings

Theme

Generated using TypeDoc

\ No newline at end of file +ScrapeProps | @armand1m/papercut
Options
All
  • Public
  • Public/Protected
  • All
Menu

Interface ScrapeProps<T, B>

Type parameters

Hierarchy

  • ScrapeProps

Index

Properties

document

document: Document

logger

logger: Logger

options

selectors

selectors: T

strict

strict: B

target

target: string

Legend

  • Property

Settings

Theme

Generated using TypeDoc

\ No newline at end of file diff --git a/docs/interfaces/ScraperOptions.html b/docs/interfaces/ScraperOptions.html index 25cdc09..70b46a9 100644 --- a/docs/interfaces/ScraperOptions.html +++ b/docs/interfaces/ScraperOptions.html @@ -1,9 +1,9 @@ -ScraperOptions | @armand1m/papercut
Options
All
  • Public
  • Public/Protected
  • All
Menu

Interface ScraperOptions

Hierarchy

  • ScraperOptions

Index

Properties

cache

cache: boolean
+ScraperOptions | @armand1m/papercut
Options
All
  • Public
  • Public/Protected
  • All
Menu

Interface ScraperOptions

Hierarchy

  • ScraperOptions

Index

Properties

cache

cache: boolean

Enables HTML payload caching on disk. Keep in mind that papercut will not clear the cache for you; when enabling this, it is your responsibility to handle cache invalidation.

default

false

-

concurrency

concurrency: { node: number; page: number; selector: number }
+

concurrency

concurrency: { node: number; page: number; selector: number }

Concurrency settings.

Type declaration

  • node: number

    Number of concurrent promises for node scraping.

    @@ -14,7 +14,7 @@
  • selector: number

    Amount of concurrent promises for selector scraping.

    default

    2
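The node/page/selector settings above each bound a pool of concurrent promises. A minimal, self-contained sketch of such a concurrency-limited pool is shown below; this is a hypothetical illustration of the mechanism, not papercut's internal implementation:

```typescript
// Minimal promise pool: run `worker` over `items` with at most
// `concurrency` invocations in flight at any time, preserving order.
async function promisePool<T, R>(
  items: T[],
  worker: (item: T) => Promise<R>,
  concurrency: number
): Promise<R[]> {
  const results: R[] = new Array(items.length);
  let next = 0;
  // Spawn up to `concurrency` runners that each pull the next index
  // until the work list is exhausted.
  const runners = Array.from(
    { length: Math.min(concurrency, items.length) },
    async () => {
      while (next < items.length) {
        const i = next++;
        results[i] = await worker(items[i]);
      }
    }
  );
  await Promise.all(runners);
  return results;
}

promisePool([1, 2, 3, 4], async (n) => n * 2, 2).then((out) =>
  console.log(out)
);
```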

    -

log

log: boolean
+

log

log: boolean

Enables writing pino logs to stdout.

default

process.env.DEBUG === "true"

Legend

  • Property

Settings

Theme

Generated using TypeDoc

\ No newline at end of file diff --git a/docs/interfaces/ScraperProps.html b/docs/interfaces/ScraperProps.html index 4f410aa..5e34bb7 100644 --- a/docs/interfaces/ScraperProps.html +++ b/docs/interfaces/ScraperProps.html @@ -1,7 +1,7 @@ -ScraperProps | @armand1m/papercut
Options
All
  • Public
  • Public/Protected
  • All
Menu

Interface ScraperProps

Hierarchy

  • ScraperProps

Index

Properties

Properties

name

name: string
+ScraperProps | @armand1m/papercut
Options
All
  • Public
  • Public/Protected
  • All
Menu

Interface ScraperProps

Hierarchy

  • ScraperProps

Index

Properties

Properties

name

name: string

The scraper name. This will be used only for logging purposes.

-

Optional options

options?: Partial<ScraperOptions>
+

Optional options

options?: Partial<ScraperOptions>

The scraper options. Use this to tweak log, cache and concurrency settings.

Legend

  • Property

Settings

Theme

Generated using TypeDoc

\ No newline at end of file diff --git a/docs/modules.html b/docs/modules.html index 524163e..6d9ebcf 100644 --- a/docs/modules.html +++ b/docs/modules.html @@ -1,12 +1,12 @@ -@armand1m/papercut
Options
All
  • Public
  • Public/Protected
  • All
Menu

@armand1m/papercut

Index

Type aliases

ScrapeResultType

ScrapeResultType<T, B>: B extends true ? { [ Prop in keyof T]: Awaited<ReturnType<T[Prop]>> } : { [ Prop in keyof T]?: Awaited<ReturnType<T[Prop]>> }

Type parameters

Scraper

Scraper: ReturnType<typeof createScraper>

SelectorFunction

SelectorFunction: (utils: SelectorUtilities, self: SelectorMap) => any

Type declaration

SelectorMap

SelectorMap: Record<string, SelectorFunction>

Map of selector functions.

This type is meant to be checked with an extended type, as users are going to implement a derived version of this for custom scrapers.

-

SelectorUtilities

SelectorUtilities: ReturnType<typeof createSelectorUtilities>

Functions

Const createRunner

SelectorUtilities

SelectorUtilities: ReturnType<typeof createSelectorUtilities>

Functions

Const createRunner

  • Creates a runner instance.

    This method is called by the createScraper function, but can also be used externally if needed to use an @@ -33,7 +33,7 @@

Parameters

  • props: RunProps<T, B>

    The scraping runner properties and selectors.

Returns Promise<ScrapeResultType<T, B>[]>

result Type-safe scraping results based on the given selectors and strict mode.

-

Const createScraper

Const createScraper

  • Creates a new scraper runner.

    This method is papercut's entrypoint. It will create a Scraper struct containing a runner that you can tweak @@ -63,7 +63,7 @@

Parameters

  • props: RunProps<T, B>

    The scraping runner properties and selectors.

Returns Promise<ScrapeResultType<T, B>[]>

result Type-safe scraping results based on the given selectors and strict mode.

-

Const createSelectorUtilities

  • createSelectorUtilities(element: Element): { all: (selector: string) => { asArray: Element[]; nodes: NodeListOf<Element> }; attr: (selector: string, attribute: string) => string; className: (selector: string) => string; createWindow: (htmlContent: string) => { close: () => void; document: Document; window: DOMWindow }; element: Element; fetchPage: (url: string) => Promise<string>; geosearch: (q: string, limit?: number) => Promise<GeosearchResult>; href: (selector: string) => string; mapNodeListToArray: (nodeList: NodeList) => Element[]; src: (selector: string) => string; text: (selector: string) => string }

Const createSelectorUtilities

  • createSelectorUtilities(element: Element): { all: (selector: string) => { asArray: Element[]; nodes: NodeListOf<Element> }; attr: (selector: string, attribute: string) => string; className: (selector: string) => string; createWindow: (htmlContent: string) => { close: () => void; document: Document; window: DOMWindow }; element: Element; fetchPage: (url: string) => Promise<string>; geosearch: (q: string, limit?: number) => Promise<GeosearchResult>; href: (selector: string) => string; mapNodeListToArray: (nodeList: NodeList) => Element[]; src: (selector: string) => string; text: (selector: string) => string }
  • This method creates the selector utilities provided to every selector function given to the scrape method.

    These utilities are meant to make the experience of @@ -74,7 +74,7 @@ fallback of an empty string, in case it fails to find the element or a specific property.

    At the same time, you also have direct access to the element from selector functions if needed for more complex tasks.

    -

    Parameters

    • element: Element

    Returns { all: (selector: string) => { asArray: Element[]; nodes: NodeListOf<Element> }; attr: (selector: string, attribute: string) => string; className: (selector: string) => string; createWindow: (htmlContent: string) => { close: () => void; document: Document; window: DOMWindow }; element: Element; fetchPage: (url: string) => Promise<string>; geosearch: (q: string, limit?: number) => Promise<GeosearchResult>; href: (selector: string) => string; mapNodeListToArray: (nodeList: NodeList) => Element[]; src: (selector: string) => string; text: (selector: string) => string }

    • all: (selector: string) => { asArray: Element[]; nodes: NodeListOf<Element> }
        • (selector: string): { asArray: Element[]; nodes: NodeListOf<Element> }
        • Parameters

          • selector: string

          Returns { asArray: Element[]; nodes: NodeListOf<Element> }

          • asArray: Element[]
          • nodes: NodeListOf<Element>
    • attr: (selector: string, attribute: string) => string
        • (selector: string, attribute: string): string
        • Parameters

          • selector: string
          • attribute: string

          Returns string

    • className: (selector: string) => string
        • (selector: string): string
        • Parameters

          • selector: string

          Returns string

    • createWindow: (htmlContent: string) => { close: () => void; document: Document; window: DOMWindow }
        • (htmlContent: string): { close: () => void; document: Document; window: DOMWindow }
        • Parameters

          • htmlContent: string

          Returns { close: () => void; document: Document; window: DOMWindow }

          • close: () => void
              • (): void
              • Returns void

          • document: Document
          • window: DOMWindow
    • element: Element
    • fetchPage: (url: string) => Promise<string>
        • (url: string): Promise<string>
        • Parameters

          • url: string

          Returns Promise<string>

    • geosearch: (q: string, limit?: number) => Promise<GeosearchResult>
    • href: (selector: string) => string
        • (selector: string): string
        • Parameters

          • selector: string

          Returns string

    • mapNodeListToArray: (nodeList: NodeList) => Element[]
        • (nodeList: NodeList): Element[]
        • Parameters

          • nodeList: NodeList

          Returns Element[]

    • src: (selector: string) => string
        • (selector: string): string
        • Parameters

          • selector: string

          Returns string

    • text: (selector: string) => string
        • (selector: string): string
        • Parameters

          • selector: string

          Returns string

Const geosearch

scrape

  • +

    Parameters

    • element: Element

    Returns { all: (selector: string) => { asArray: Element[]; nodes: NodeListOf<Element> }; attr: (selector: string, attribute: string) => string; className: (selector: string) => string; createWindow: (htmlContent: string) => { close: () => void; document: Document; window: DOMWindow }; element: Element; fetchPage: (url: string) => Promise<string>; geosearch: (q: string, limit?: number) => Promise<GeosearchResult>; href: (selector: string) => string; mapNodeListToArray: (nodeList: NodeList) => Element[]; src: (selector: string) => string; text: (selector: string) => string }

    • all: (selector: string) => { asArray: Element[]; nodes: NodeListOf<Element> }
        • (selector: string): { asArray: Element[]; nodes: NodeListOf<Element> }
        • Parameters

          • selector: string

          Returns { asArray: Element[]; nodes: NodeListOf<Element> }

          • asArray: Element[]
          • nodes: NodeListOf<Element>
    • attr: (selector: string, attribute: string) => string
        • (selector: string, attribute: string): string
        • Parameters

          • selector: string
          • attribute: string

          Returns string

    • className: (selector: string) => string
        • (selector: string): string
        • Parameters

          • selector: string

          Returns string

    • createWindow: (htmlContent: string) => { close: () => void; document: Document; window: DOMWindow }
        • (htmlContent: string): { close: () => void; document: Document; window: DOMWindow }
        • Parameters

          • htmlContent: string

          Returns { close: () => void; document: Document; window: DOMWindow }

          • close: () => void
              • (): void
              • Returns void

          • document: Document
          • window: DOMWindow
    • element: Element
    • fetchPage: (url: string) => Promise<string>
        • (url: string): Promise<string>
        • Parameters

          • url: string

          Returns Promise<string>

    • geosearch: (q: string, limit?: number) => Promise<GeosearchResult>
    • href: (selector: string) => string
        • (selector: string): string
        • Parameters

          • selector: string

          Returns string

    • mapNodeListToArray: (nodeList: NodeList) => Element[]
        • (nodeList: NodeList): Element[]
        • Parameters

          • nodeList: NodeList

          Returns Element[]

    • src: (selector: string) => string
        • (selector: string): string
        • Parameters

          • selector: string

          Returns string

    • text: (selector: string) => string
        • (selector: string): string
        • Parameters

          • selector: string

          Returns string

Const geosearch

scrape

  • the scrape function

    this function will select all target nodes from the given document and spawn promise pools for diff --git a/package.json b/package.json index cabe29f..77549f6 100644 --- a/package.json +++ b/package.json @@ -1,5 +1,5 @@ { - "version": "2.0.3", + "version": "2.0.4", "license": "MIT", "main": "dist/index.js", "types": "dist/index.d.ts",