diff --git a/doc/features.md b/doc/features.md index f941232..e0c9728 100644 --- a/doc/features.md +++ b/doc/features.md @@ -1,182 +1,231 @@ # Features -The **CROWler** is a comprehensive web crawling and scraping tool designed to perform various tasks related to web content discovery and data collection. Below is a detailed list of its features, along with their descriptions and benefits: +The **CROWler** is a comprehensive web crawling and scraping tool designed to perform various tasks related to web content discovery and data collection. Below is a detailed list of its features, along with their descriptions and benefits. -- **(Feature 1) Web Crawling** +## Table of Contents - - **Recursive Crawling**: Supports deep crawling of websites, following links recursively to discover new content. - - *Benefits*: Enables thorough exploration of websites to uncover hidden pages and data. +- [Web Crawling](#features-group-1-web-crawling) +- [API-Based Search Engine](#features-group-2-powerful-api-based-search-engine) +- [Web Scraping](#features-group-3-web-scraping) +- [Action Execution](#features-group-4-action-execution) +- [Technology Detection](#features-group-5-technology-detection) +- [Network Information Collection](#features-group-6-network-information-collection) +- [Image and File Collection](#features-group-7-image-and-file-collection) +- [API Integration](#features-group-8-api-integration) +- [Comprehensive Ruleset System](#features-group-9-comprehensive-ruleset-system) +- [Plugin Support](#features-group-10-plugin-support) +- [Data Storage and Management](#features-group-11-data-storage-and-management) +- [Configuration and Scalability](#features-group-12-configuration-and-scalability) +- [Security and Privacy](#features-group-13-security-and-privacy) +- [Error Handling and Logging](#features-group-14-error-handling-and-logging) +- [User Interface and Console](#features-group-15-user-interface-and-console) +- [Cybersecurity Features](#features-group-16-cybersecurity-features) +- [Containerization](#features-group-17-containerization) - - **Human Browsing Mode**: Simulates human-like browsing behavior to access content that might be blocked by automated bots. This is part of the Human Behavior Simulation (HBS) architecture. - - *Benefits*: Helps bypass basic bot detection mechanisms to access dynamic content. +## (Features Group 1) Web Crawling - - **Fuzzing Mode**: Automatically tests web pages with various inputs to discover hidden functionalities and vulnerabilities. - - *Benefits*: Aids in security testing by discovering potential weaknesses in web applications. +- **Recursive Crawling**: Supports deep crawling of websites, following links recursively to discover new content. + - *Benefits*: Enables thorough exploration of websites to uncover hidden pages and data. - - **Customizable Browsing Speed**: Allows users to configure the speed of crawling to avoid overloading servers, being detected, or triggering anti-bot mechanisms. Speed is also configurable at runtime and per source, allowing for more human-like behavior. - - *Benefits*: Prevents excessive traffic to target websites, ensuring minimal impact on their performance and stability while reducing the risk of being blocked. +- **Human Browsing Mode**: Simulates human-like browsing behavior to access content that might be blocked by automated bots. This is part of the Human Behavior Simulation (HBS) architecture. + - *Benefits*: Helps bypass basic bot detection mechanisms to access dynamic content. 
- - **Human Behavior Simulation (HBS)**: A system architecture designed to mimic human-like browsing patterns to avoid detection by anti-bot systems. - - *Benefits*: Enhances low-noise operations and reduces the risk of being blocked by websites and proxy services. +- **Fuzzing Mode**: Automatically tests web pages with various inputs to discover hidden functionalities and vulnerabilities. + - *Benefits*: Aids in security testing by discovering potential weaknesses in web applications. - - **Dynamic Content Handling**: Supports the execution of JavaScript to access dynamically generated content. - - *Benefits*: Allows access to content that is rendered dynamically by client-side scripts. +- **Customizable Browsing Speed**: Allows users to configure the speed of crawling to avoid overloading servers, being detected, or triggering anti-bot mechanisms. Speed is also configurable at runtime and per source, allowing for more human-like behavior. + - *Benefits*: Prevents excessive traffic to target websites, ensuring minimal impact on their performance and stability while reducing the risk of being blocked. - - **Keyword Extraction**: Extracts keywords from web pages to identify relevant topics and themes. - - *Benefits*: Helps categorize and organize content for analysis and indexing. Keywords can also be used in security searches and events to identify sources of interest. +- **Human Behavior Simulation (HBS)**: A system architecture designed to mimic human-like browsing patterns to avoid detection by anti-bot systems. + - *Benefits*: Enhances low-noise operations and reduces the risk of being blocked by websites and proxy services. - - **Site Language Detection**: Detects the language of a website to support multilingual crawling and content analysis. Even in the absence of language tags, CROWler can detect the language of a page. - - *Benefits*: Facilitates language-specific processing and analysis of web content. +- **Dynamic Content Handling**: Supports the execution of JavaScript to access dynamically generated content. + - *Benefits*: Allows access to content that is rendered dynamically by client-side scripts. - - **Content Analysis**: Analyzes the content of web pages to extract metadata, entities, and other structured information. - - *Benefits*: Provides insights into the content of web pages for categorization, indexing, and analysis. +- **Keyword Extraction**: Extracts keywords from web pages to identify relevant topics and themes. + - *Benefits*: Helps categorize and organize content for analysis and indexing. Keywords can also be used in security searches and events to identify sources of interest. - - **Source Categorization**: Allows users to define categories for sources (URLs), which can be used to filter and prioritize crawling operations, as well as for marketing and security operations. - - *Benefits*: Enables category-based correlation and analysis of data. +- **Site Language Detection**: Detects the language of a website to support multilingual crawling and content analysis. Even in the absence of language tags, CROWler can detect the language of a page. + - *Benefits*: Facilitates language-specific processing and analysis of web content. -- **(Feature 2) Powerful API-Based Search Engine** +- **Content Analysis**: Analyzes the content of web pages to extract metadata, entities, and other structured information. + - *Benefits*: Provides insights into the content of web pages for categorization, indexing, and analysis. 
- - **Advanced Search Queries**: Supports complex search queries using operators like AND (&&) and OR (||), and "" for precise search results. - - *Benefits*: Facilitates targeted searches to retrieve specific information from web pages. +- **Source Categorization**: Allows users to define categories for sources (URLs), which can be used to filter and prioritize crawling operations, as well as for marketing and security operations. + - *Benefits*: Enables category-based correlation and analysis of data. - - **Search Result Analysis**: Analyzes search results to extract relevant information such as titles, snippets, and URLs. - - *Benefits*: Helps identify relevant content quickly and efficiently. +## (Features Group 2) Powerful API-Based Search Engine - - **Search Result Export**: Allows exporting search results in various formats like CSV and JSON. - - *Benefits*: Facilitates further processing and analysis of search results. +- **Advanced Search Queries**: Supports complex search queries using operators like AND (&&) and OR (||), and "" for precise search results. + - *Benefits*: Facilitates targeted searches to retrieve specific information from web pages. - - **Dorking Techniques**: Supports advanced search techniques like Google Dorking to discover sensitive information and vulnerabilities. - - *Benefits*: Useful for security assessments and reconnaissance. +- **Search Result Analysis**: Analyzes search results to extract relevant information such as titles, snippets, and URLs. + - *Benefits*: Helps identify relevant content quickly and efficiently. - - **Entity Correlation**: Correlates entities extracted from search results to identify relationships and patterns. - - *Benefits*: Provides insights into the connections between entities across different sources. +- **Search Result Export**: Allows exporting search results in various formats like CSV and JSON. + - *Benefits*: Facilitates further processing and analysis of search results. -- **(Feature 3) Web Scraping** +- **Dorking Techniques**: Supports advanced search techniques like Google Dorking to discover sensitive information and vulnerabilities. + - *Benefits*: Useful for security assessments and reconnaissance. - - **Customizable Scraping Rules**: Users can define specific rules for data extraction using CSS selectors, XPath, and other methods. - - *Benefits*: Provides flexibility to extract specific data points from web pages as per user requirements. +- **Entity Correlation**: Correlates entities extracted from search results to identify relationships and patterns. + - *Benefits*: Provides insights into the connections between entities across different sources. - - **Post-Processing of Scraped Data**: Includes steps to transform, clean, and validate data after extraction, as well as to enrich it with additional information, metadata, and annotations using plugins and AI models. - - *Benefits*: Ensures the quality and usability of the collected data. +- **High Performance API**: Provides a high-performance API for querying and retrieving search results. + - *Benefits*: Ensures fast and efficient access to search data. -- **(Feature 4) Action Execution** +## (Features Group 3) Web Scraping - - **Automated Interactions**: Can perform actions like clicking, filling out forms, and navigating websites programmatically. Actions are executed at the SYSTEM level, making CROWler undetectable by most anti-bot systems. This is part of the Human Behavior Simulation (HBS) architecture. 
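To make the operator syntax above concrete, the query below combines an exact phrase (double quotes) with AND (`&&`) and OR (`||`); the search terms are purely illustrative:

```
"content management system" && (wordpress || joomla)
```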
- - *Benefits*: Enables the automation of repetitive tasks, improving efficiency in data collection. +- **Customizable Scraping Rules**: Users can define specific rules for data extraction using CSS selectors, XPath, and other methods. + - *Benefits*: Provides flexibility to extract specific data points from web pages as per user requirements. - - **Advanced Interactions**: Supports complex interactions like drag-and-drop, mouse hover, and keyboard inputs. - - *Benefits*: Allows handling sophisticated user interface elements that require advanced manipulation. +- **Post-Processing of Scraped Data**: Includes steps to transform, clean, and validate data after extraction, as well as to enrich it with additional information, metadata, and annotations using plugins and AI models. + - *Benefits*: Ensures the quality and usability of the collected data. -- **(Feature 5) Technology Detection** +- **Data Transformation**: Supports data transformation operations like normalization, aggregation, and filtering. + - *Benefits*: Helps prepare data for analysis and integration with other systems. - - **Framework and Technology Identification**: Uses detection rules to identify: - - Technologies (e.g., servers, programming languages, plugins) - - Frameworks (e.g., server-side CMS, client-side JavaScript libraries) - - Libraries - - Vulnerabilities (e.g., outdated software versions, known security issues, XSS, SQL injection, and more) - - *Benefits*: Provides insights into the tech stack of a site, which can be useful for competitive analysis or vulnerability assessment. +- **Data Enrichment**: Enhances scraped data with additional information from external sources or AI models. + - *Benefits*: Improves the quality and relevance of the collected data. -- **(Feature 6) Network Information Collection** +- **3rd party Integration**: Integrates with third-party services and APIs to enrich scraped data with external information. + - *Benefits*: Provides access to a wide range of external data sources for data enrichment. - - **DNS and WHOIS Lookup**: Performs DNS resolution and WHOIS queries to gather domain information. - - *Benefits*: Facilitates understanding of domain ownership and network infrastructure. +## (Features Group 4) Action Execution - - **Service Scout**: Detects services running on a host using various scanning techniques. Service Scout can be extended via Nmap plugins. - - *Benefits*: Useful in security assessments for identifying: - - Open ports and services - - Vulnerabilities - - Test protocols and services +- **Automated Interactions**: Can perform actions like clicking, filling out forms, and navigating websites programmatically. Actions are executed at the SYSTEM level, making CROWler undetectable by most anti-bot systems. This is part of the Human Behavior Simulation (HBS) architecture. + - *Benefits*: Enables the automation of repetitive tasks, improving efficiency in data collection. -- **(Feature 7) Image and File Collection** +- **Advanced Interactions**: Supports complex interactions like drag-and-drop, mouse hover, and keyboard inputs. + - *Benefits*: Allows handling sophisticated user interface elements that require advanced manipulation. - - **Automated Collection**: Collects images and files from websites during the crawling process. - - *Benefits*: Enables gathering of rich media content alongside textual data. 
+## (Features Group 5) Technology Detection - - **Full Web Page Screenshots**: Captures full-page screenshots (including websites with "infinite scrolling") of web pages for visual analysis and archiving. - - *Benefits*: Provides a visual representation of web pages for reference and analysis. +- **Framework and Technology Identification**: Uses detection rules to identify: + - Technologies (e.g., servers, programming languages, plugins) + - Frameworks (e.g., server-side CMS, client-side JavaScript libraries) + - Libraries + - Vulnerabilities (e.g., outdated software versions, known security issues, XSS, SQL injection, and more) + - *Benefits*: Provides insights into the tech stack of a site, which can be useful for competitive analysis or vulnerability assessment. -- **(Feature 8) API Integration** +- **Fingerprinting Techniques**: Employs fingerprinting techniques like HTTP headers, cookies, and JavaScript objects to identify technologies. - - **REST API**: Provides an API for integrating with other systems and managing CROWler's operations programmatically. - - *Benefits*: Facilitates automation and integration with existing data processing pipelines. +- **Vulnerability Detection**: Detects known vulnerabilities in web applications and services. + - *Benefits*: Helps identify security weaknesses that need to be addressed. -- **(Feature 9) Comprehensive Ruleset System** +- **Security Headers Analysis**: Analyzes security headers like Content Security Policy (CSP), HTTP Strict Transport Security (HSTS), and others to assess the security posture of a website. + - *Benefits*: Provides insights into the security measures implemented by a website. - - **Ruleset Architecture**: Supports a comprehensive ruleset system for defining custom crawling and scraping rules. Specifically, four types of rules: - - **Crawling Rules**: Define how to navigate a website. - - **Scrape Rules**: Define what to extract from a page. - - **Action Rules**: Define what to do on a page. - - **Detection Rules**: Define what (and how) to detect technologies and vulnerabilities. - - Ruleset architecture is declarative (can be expressed in both YAML and JSON) and can be shared across instances and updated dynamically. - - Ruleset architecture can be extended with JavaScript plugins. - - *Benefits*: Allows users to define complex logic for data extraction and processing, site navigation, and technology detection. +- **SSL/TLS Analysis**: Analyzes SSL/TLS certificates and configurations to identify security risks and compliance issues. + - *Benefits*: Helps ensure secure communication between clients and servers. - - **Ruleset Management**: Provides tools for managing and sharing rulesets across different instances. - - *Benefits*: Enhances reusability and collaboration among users. +- **3rd party Integration**: Integrates with third-party services like Shodan, VirusTotal, and others to gather additional information about web assets. + - *Benefits*: Provides access to external threat intelligence and security data. -- **(Feature 10) Plugin Support** +## (Features Group 6) Network Information Collection - - **JavaScript Plugins**: Supports custom JavaScript plugins for extending functionality. - - *Benefits*: Allows customization and enhancement of CROWler's capabilities to meet specific needs. +- **DNS and WHOIS Lookup**: Performs DNS resolution and WHOIS queries to gather domain information. + - *Benefits*: Facilitates understanding of domain ownership and network infrastructure. 
-- **(Feature 11) Data Storage and Management** +- **Service Scout**: Detects services running on a host using various scanning techniques. Service Scout can be extended via Nmap plugins. + - *Benefits*: Useful in security assessments for identifying: + - Open ports and services + - Vulnerabilities + - Test protocols and services - - **Database Integration**: Stores collected data in a structured format in databases like PostgreSQL. - - *Benefits*: Ensures organized and easily retrievable data for analysis. +## (Features Group 7) Image and File Collection - - **File Storage Options**: Configurable storage for images and other media files. - - *Benefits*: Enables efficient handling of large volumes of media content. +- **Automated Collection**: Collects images and files from websites during the crawling process. + - *Benefits*: Enables gathering of rich media content alongside textual data. -- **(Feature 12) Configuration and Scalability** +- **Full Web Page Screenshots**: Captures full-page screenshots (including websites with "infinite scrolling") of web pages for visual analysis and archiving. + - *Benefits*: Provides a visual representation of web pages for reference and analysis. - - **Configurable Environment**: Supports detailed configuration options for customizing crawling and scraping behavior. - - *Benefits*: Provides flexibility to adapt to different use cases and environments. +## (Features Group 8) API Integration - - **Scalability**: Supports multiple workers and Selenium drivers to handle large-scale operations. - - *Benefits*: Ensures the tool can handle high workloads and scale as needed. +- **REST API**: Provides an API for integrating with other systems and managing CROWler's operations programmatically. + - *Benefits*: Facilitates automation and integration with existing data processing pipelines. -- **(Feature 13) Security and Privacy** +- **Bulk Upload Tools**: Supports bulk uploading of URLs and data for processing. + - *Benefits*: Streamlines the process of adding multiple sources for crawling and scraping. - - **Service Scout**: Provides features equivalent to Nmap for security auditing. - - *Benefits*: Helps identify security vulnerabilities and ensures compliance with security standards. +## (Features Group 9) Comprehensive Ruleset System - - **Data Anonymization**: Supports techniques for anonymizing collected data to ensure privacy compliance. - - *Benefits*: Protects sensitive information and complies with data protection regulations. +- **Ruleset Architecture**: Supports a comprehensive ruleset system for defining custom crawling and scraping rules. Specifically, four types of rules: + - **Crawling Rules**: Define how to navigate a website. + - **Scrape Rules**: Define what to extract from a page. + - **Action Rules**: Define what to do on a page. + - **Detection Rules**: Define what (and how) to detect technologies and vulnerabilities. + - Ruleset architecture is declarative (can be expressed in both YAML and JSON) and can be shared across instances and updated dynamically. + - Ruleset architecture can be extended with JavaScript plugins. + - *Benefits*: Allows users to define complex logic for data extraction and processing, site navigation, and technology detection. -- **(Feature 14) Error Handling and Logging** +- **Ruleset Management**: Provides tools for managing and sharing rulesets across different instances. + - *Benefits*: Enhances reusability and collaboration among users. 
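The rule types above are written declaratively in YAML or JSON. Purely as a sketch of what the selector portion of an action rule can look like, the fragment below reuses the two selector entries visible in the `rules/AcceptCookies-ruleset.json` hunk later in this diff; the `xpath` value on the first entry and the bare-array framing are assumptions, since the surrounding fields are not shown in that hunk:

```json
[
  {
    "selector_type": "xpath",
    "selector": "//button[contains(text(), 'Accept') or contains(text(), 'Akzeptieren') or contains(text(), 'Aceptar') or contains(text(), 'Accepter') or contains(text(), 'Accetta')]"
  },
  {
    "selector_type": "class_name",
    "selector": "disclaimerOK"
  }
]
```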
- - **Robust Error Handling**: Provides mechanisms to handle errors and retry operations automatically. - - *Benefits*: Improves reliability by ensuring that transient issues do not disrupt the crawling process. +## (Features Group 10) Plugin Support - - **Detailed Logging**: Configurable logging options to capture detailed operational logs for troubleshooting. - - *Benefits*: Aids in diagnosing issues and optimizing performance. +- **JavaScript Plugins**: Supports custom JavaScript plugins for extending functionality. + - *Benefits*: Allows customization and enhancement of CROWler's capabilities to meet specific needs. -- **(Feature 15) User Interface and Console** +## (Features Group 11) Data Storage and Management - - **Admin Console**: Offers an admin interface for monitoring and managing CROWler operations. - - *Benefits*: Provides an intuitive interface for users to oversee and control crawling activities. +- **Database Integration**: Stores collected data in a structured format in databases like PostgreSQL. + - *Benefits*: Ensures organized and easily retrievable data for analysis. -- **(Feature 16) Cybersecurity Features** +- **File Storage Options**: Configurable storage for images and other media files. + - *Benefits*: Enables efficient handling of large volumes of media content. - - **Security Testing**: Supports fuzzing and scanning capabilities for identifying vulnerabilities in web applications. - - *Benefits*: Helps improve the security posture of web assets. +## (Features Group 12) Configuration and Scalability - - **Compliance Checks**: Includes features for checking compliance with security standards and best practices. **Note**: This feature requires additional configuration and purchase of specific rulesets. - - *Benefits*: Ensures adherence to security guidelines and regulations. +- **Configurable Environment**: Supports detailed configuration options for customizing crawling and scraping behavior. + - *Benefits*: Provides flexibility to adapt to different use cases and environments. - - **Security Automation**: Enables automation of security testing and monitoring tasks. - - *Benefits*: Enhances efficiency and accuracy in security assessments. +- **Scalability**: Supports multiple workers and Selenium drivers to handle large-scale operations. + - *Benefits*: Ensures the tool can handle high workloads and scale as needed. - - **Native Support for Third-Party Security Services**: Integration with security services like Shodan, VirusTotal, and others. - - *Benefits*: Provides access to external security intelligence and threat data. +## (Features Group 13) Security and Privacy - - **Full Suite of TLS Fingerprinting**: Provides comprehensive TLS fingerprinting capabilities, including JA3, JA4, and others. - - *Benefits*: Helps identify the underlying technologies and configurations of web servers. +- **Service Scout**: Provides features equivalent to Nmap for security auditing. + - *Benefits*: Helps identify security vulnerabilities and ensures compliance with security standards. -- **(Feature 17) Containerization** +- **Data Anonymization**: Supports techniques for anonymizing collected data to ensure privacy compliance. + - *Benefits*: Protects sensitive information and complies with data protection regulations. - - **Docker Support**: Can be easily containerized and deployed in containerized environments. - - *Benefits*: Simplifies deployment and management in container orchestration platforms. 
+## (Features Group 14) Error Handling and Logging + +- **Robust Error Handling**: Provides mechanisms to handle errors and retry operations automatically. + - *Benefits*: Improves reliability by ensuring that transient issues do not disrupt the crawling process. + +- **Detailed Logging**: Configurable logging options to capture detailed operational logs for troubleshooting. + - *Benefits*: Aids in diagnosing issues and optimizing performance. + +## (Features Group 15) User Interface and Console + +- **Admin Console**: Offers an admin interface for monitoring and managing CROWler operations. + - *Benefits*: Provides an intuitive interface for users to oversee and control crawling activities. + +## (Features Group 16) Cybersecurity Features + +- **Security Testing**: Supports fuzzing and scanning capabilities for identifying vulnerabilities in web applications. + - *Benefits*: Helps improve the security posture of web assets. + +- **Compliance Checks**: Includes features for checking compliance with security standards and best practices. **Note**: This feature requires additional configuration and purchase of specific rulesets. + - *Benefits*: Ensures adherence to security guidelines and regulations. + +- **Security Automation**: Enables automation of security testing and monitoring tasks. + - *Benefits*: Enhances efficiency and accuracy in security assessments. + +- **Native Support for Third-Party Security Services**: Integration with security services like Shodan, VirusTotal, and others. + - *Benefits*: Provides access to external security intelligence and threat data. + +- **Full Suite of TLS Fingerprinting**: Provides comprehensive TLS fingerprinting capabilities, including JA3, JA4, and others. + - *Benefits*: Helps identify the underlying technologies and configurations of web servers. + +## (Features Group 17) Containerization + +- **Docker Support**: Can be easily containerized and deployed in containerized environments. + - *Benefits*: Simplifies deployment and management in container orchestration platforms. --- diff --git a/rules/AcceptCookies-ruleset.json b/rules/AcceptCookies-ruleset.json index c1008c8..7d47a4f 100644 --- a/rules/AcceptCookies-ruleset.json +++ b/rules/AcceptCookies-ruleset.json @@ -22,7 +22,7 @@ "selector": "//button[contains(text(), 'Accept') or contains(text(), 'Akzeptieren') or contains(text(), 'Aceptar') or contains(text(), 'Accepter') or contains(text(), 'Accetta')]" }, { - "selector_type": "class", + "selector_type": "class_name", "selector": "disclaimerOK" }, { diff --git a/schemas/crowler-config-schema.json b/schemas/crowler-config-schema.json index a258e3f..924b6a7 100644 --- a/schemas/crowler-config-schema.json +++ b/schemas/crowler-config-schema.json @@ -1,810 +1,949 @@ { - "$schema": "http://json-schema.org/draft-07/schema#", - "type": "object", - "properties": { - "remote": { - "type": "object", - "properties": { - "host": { - "type": "string", - "description": "This is the host that the CROWler will use to fetch its configuration." - }, - "path": { - "type": "string", - "description": "This is the path that the CROWler will use to fetch its configuration." - }, - "port": { - "type": "integer", - "description": "This is the port that the CROWler will use to fetch its configuration." - }, - "region": { - "type": "string", - "description": "This is the region that the CROWler will use to fetch its configuration. For example in case the distribution server is on an AWS S3 bucket, you can specify the region here." 
- }, - "token": { - "type": "string", - "description": "This is the token that the CROWler will use to connect to the distribution server to fetch its configuration." - }, - "secret": { - "type": "string", - "description": "This is the secret that the CROWler will use to connect to the distribution server to fetch its configuration." - }, - "timeout": { - "type": "integer", - "description": "This is the timeout for the CROWler to fetch its configuration." - }, - "type": { - "type": "string", - "description": "This is the type of the distribution server that the CROWler will use to fetch its configuration. For example, s3 or http." - }, - "sslmode": { - "type": "string", - "description": "This is the sslmode that the CROWler will use to connect to the distribution server to fetch its configuration." - } - }, - "description": "This is the configuration section to tell the CROWler its actual configuration has to be fetched remotely from a distribution server. If you use this section, then do not populate the other configuration sections as they will be ignored. The CROWler will fetch its configuration from the remote server and use it to start the engine.", - "required": [ - "host", - "path", - "port", - "type" - ] + "$schema": "http://json-schema.org/draft-07/schema#", + "type": "object", + "properties": { + "remote": { + "type": "object", + "properties": { + "host": { + "type": "string", + "pattern": "^(((([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5]))|((([a-zA-Z0-9]|[a-zA-Z0-9][a-zA-Z0-9\\-]*[a-zA-Z0-9])\\.)+([A-Za-z]{2,}))|(\\[([0-9a-fA-F]{1,4}\\:{1,2}){7}[0-9a-fA-F]{1,4}\\]))$", + "description": "This is the host that the CROWler will use to fetch its configuration." }, - "database": { - "type": "object", - "properties": { - "type": { - "type": "string", - "description": "This is the type of the database that the CROWler will use to store data. For example, postgres." - }, - "host": { - "type": "string", - "description": "This is the host that the CROWler will use to connect to the database." - }, - "port": { - "type": "integer", - "description": "This is the port that the CROWler will use to connect to the database." - }, - "user": { - "type": "string", - "description": "This is the user that the CROWler will use to connect to the database." - }, - "password": { - "type": "string" - }, - "dbname": { - "type": "string", - "description": "This is the name of the database that the CROWler will use to store data." - }, - "retry_time": { - "type": "integer", - "description": "This is the time in seconds that the CROWler will wait before retrying a database connection." - }, - "ping_time": { - "type": "integer", - "description": "This is the time in seconds that the CROWler will wait before pinging the database to check if it is still alive." - }, - "sslmode": { - "type": "string", - "description": "This is the sslmode that the CROWler will use to connect to the database. Use 'enable' to enable the ssl mode connection to the DB. (default is 'disable')." - }, - "optimize_for": { - "type": "string", - "description": "This option allows the user to optimize the database for a specific use case. For example, if the user is doing more write operations than query, then use the value 'write'. If the user is doing more query operations than write, then use the value 'query'. If unsure leave it empty." 
- }, - "max_conns": { - "type": "integer", - "description": "This is the maximum number of connections that the CROWler will use to connect to the database." - }, - "max_idle_conns": { - "type": "integer", - "description": "This is the maximum number of idle connections that the CROWler will use to connect to the database." - } + "path": { + "type": "string", + "description": "This is the path that the CROWler will use to fetch its configuration." + }, + "port": { + "type": "integer", + "minimum": 1, + "maximum": 65535, + "description": "This is the port that the CROWler will use to fetch its configuration.", + "examples": [ + 80 + ] + }, + "region": { + "type": "string", + "description": "This is the region that the CROWler will use to fetch its configuration. For example in case the distribution server is on an AWS S3 bucket, you can specify the region here." + }, + "token": { + "type": "string", + "description": "This is the token that the CROWler will use to connect to the distribution server to fetch its configuration." + }, + "secret": { + "type": "string", + "description": "This is the secret that the CROWler will use to connect to the distribution server to fetch its configuration." + }, + "timeout": { + "type": "integer", + "description": "This is the timeout for the CROWler to fetch its configuration." + }, + "type": { + "type": "string", + "description": "This is the type of the distribution server that the CROWler will use to fetch its configuration. For example, s3 or http." + }, + "sslmode": { + "type": "string", + "description": "This is the sslmode that the CROWler will use to connect to the distribution server to fetch its configuration.", + "pattern": "^(enable|disable|)$", + "examples": [ + "enable", + "disable" + ] + } + }, + "description": "This is the configuration section to tell the CROWler its actual configuration has to be fetched remotely from a distribution server. If you use this section, then do not populate the other configuration sections as they will be ignored. The CROWler will fetch its configuration from the remote server and use it to start the engine.", + "required": [ + "host", + "path", + "type" + ] + }, + "database": { + "type": "object", + "properties": { + "type": { + "type": "string", + "description": "This is the type of the database that the CROWler will use to store data. For example, postgres." + }, + "host": { + "type": "string", + "pattern": "^(((([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5]))|(([a-zA-Z0-9]|[a-zA-Z0-9][a-zA-Z0-9\\-]*[a-zA-Z0-9])(\\.([a-zA-Z0-9\\-]+))*)|(\\[([0-9a-fA-F]{1,4}\\:{1,2}){7}[0-9a-fA-F]{1,4}\\])|(\\${[A-Za-z_][A-Za-z0-9_]*}))$", + "description": "This is the host that the CROWler will use to connect to the database." + }, + "port": { + "type": "integer", + "minimum": 1, + "maximum": 65535, + "description": "This is the port that the CROWler will use to connect to the database.", + "examples": [ + 5432 + ] + }, + "user": { + "type": "string", + "description": "This is the user that the CROWler will use to connect to the database." + }, + "password": { + "type": "string" + }, + "dbname": { + "type": "string", + "description": "This is the name of the database that the CROWler will use to store data." + }, + "retry_time": { + "type": "integer", + "description": "This is the time in seconds that the CROWler will wait before retrying a database connection." 
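Putting the `remote` section defined above together, a minimal JSON instance that satisfies its required fields (`host`, `path`, `type`) could look like the following; the host, path, port and timeout values are placeholders:

```json
{
  "remote": {
    "host": "config.example.com",
    "path": "/crowler/config",
    "port": 443,
    "type": "http",
    "timeout": 30,
    "sslmode": "enable"
  }
}
```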
+ }, + "ping_time": { + "type": "integer", + "description": "This is the time in seconds that the CROWler will wait before pinging the database to check if it is still alive." + }, + "sslmode": { + "type": "string", + "description": "This is the sslmode that the CROWler will use to connect to the database. Use 'enable' to enable the ssl mode connection to the DB. (default is 'disable').", + "pattern": "^(enable|disable|)$", + "examples": [ + "enable", + "disable" + ] + }, + "optimize_for": { + "type": "string", + "description": "This option allows the user to optimize the database for a specific use case. For example, if the user is doing more write operations than query, then use the value 'write'. If the user is doing more query operations than write, then use the value 'query'. If unsure leave it empty.", + "pattern": "^(write|query|)$", + "examples": [ + "write", + "query" + ] + }, + "max_conns": { + "type": "integer", + "minimum": 25, + "description": "This is the maximum number of connections that the CROWler will use to connect to the database." + }, + "max_idle_conns": { + "type": "integer", + "minimum": 25, + "description": "This is the maximum number of idle connections that the CROWler will use to connect to the database." + } + }, + "description": "This is the configuration for the database that the CROWler will use to store data.", + "additionalProperties": false, + "required": [ + "type", + "host", + "user", + "password", + "dbname" + ] + }, + "crawler": { + "type": "object", + "properties": { + "workers": { + "type": "integer", + "description": "This is the number of workers that the CROWler will use to crawl websites. Minimum number is 3 per each Source if you have network discovery enabled or 1 per each source if you are doing crawling only. Increase the number of workers to scale up the CROWler engine vertically." + }, + "interval": { + "type": "string", + "description": "This is the interval at which the CROWler will crawl websites. It is the interval at which the CROWler will crawl websites, values are in seconds, e.g. '3' means 3 seconds. For the interval you can also use the CROWler exprterpreter to generate delay values at runtime, e.g., 'random(1, 3)' or 'random(random(1,3), random(5,8))'." + }, + "timeout": { + "type": "integer", + "description": "This is the timeout for the CROWler. It is the maximum amount of time that the CROWler will wait for a website to respond." + }, + "maintenance": { + "type": "integer", + "description": "This is the maintenance interval for the CROWler. It is the interval at which the CROWler will perform automatic maintenance tasks." + }, + "source_screenshot": { + "type": "boolean", + "description": "This is a flag that tells the CROWler to take a screenshot of the source website. This is useful for debugging purposes." + }, + "full_site_screenshot": { + "type": "boolean", + "description": "This is a flag that tells the CROWler to take a screenshot of the full website. This is useful for debugging purposes." + }, + "max_depth": { + "type": "integer", + "description": "This is the maximum depth that the CROWler will crawl websites." + }, + "max_sources": { + "type": "integer", + "description": "This is the maximum number of sources that a single instance of the CROWler's engine will fetch atomically to enqueue and crawl." + }, + "delay": { + "type": "string", + "description": "This is the delay between requests that the CROWler will use to crawl websites. It is the delay between requests that the CROWler will use to crawl websites. 
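A matching sketch of the `database` section above, covering its required fields (`type`, `host`, `user`, `password`, `dbname`) plus a few of the optional tuning knobs; credentials and names are placeholders:

```json
{
  "database": {
    "type": "postgres",
    "host": "localhost",
    "port": 5432,
    "user": "crowler",
    "password": "change-me",
    "dbname": "crowler_db",
    "sslmode": "disable",
    "optimize_for": "write",
    "max_conns": 100,
    "max_idle_conns": 25
  }
}
```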
For delay you can also use the CROWler exprterpreter to generate delay values at runtime, e.g., 'random(1, 3)' or 'random(random(1,3), random(5,8))'." + }, + "browsing_mode": { + "type": "string", + "description": "This is the browsing mode that the CROWler will use to crawl websites. For example, recursive, human, or fuzzing." + }, + "max_retries": { + "type": "integer", + "description": "This is the maximum number of times that the CROWler will retry a request to a website. If the CROWler is unable to fetch a website after this number of retries, it will move on to the next website." + }, + "max_requests": { + "type": "integer", + "description": "This is the maximum number of requests that the CROWler will send to a website. If the CROWler sends this number of requests to a website and is unable to fetch the website, it will move on to the next website." + }, + "collect_html": { + "type": "boolean", + "description": "This is a flag that tells the CROWler to collect the HTML of a website. This is also useful for debugging purposes. This collection is automatic and for each page of a Source." + }, + "collect_images": { + "type": "boolean", + "description": "This is a flag that tells the CROWler to collect images from a website. This is also useful for debugging purposes. This collection is automatic and for each page of a Source" + }, + "collect_files": { + "type": "boolean", + "description": "This is a flag that tells the CROWler to collect files from a website. This is also useful for debugging purposes. This collection is automatic and for each page of a Source" + }, + "collect_content": { + "type": "boolean", + "description": "This is a flag that tells the CROWler to collect the text content of a website. This is also useful for AI datasets creation and knowledge bases. This collection is automatic and for each page of a Source" + }, + "collect_keywords": { + "type": "boolean", + "description": "This is a flag that tells the CROWler to collect the keywords of a website. This is also useful for AI datasets creation and knowledge bases. This collection is automatic and for each page of a Source. Keywords and metadata are used in searches, so we recommend enabling this option." + }, + "collect_metatags": { + "type": "boolean", + "description": "This is a flag that tells the CROWler to collect the metatags of a website. This is useful for AI datasets creation and knowledge bases. This collection is automatic and for each page of a Source. Keywords and metadata are used in searches, so we recommend enabling this option." + }, + "control": { + "type": "object", + "properties": { + "host": { + "type": "string", + "pattern": "^(((([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5]))|(([a-zA-Z0-9]|[a-zA-Z0-9][a-zA-Z0-9\\-]*[a-zA-Z0-9])(\\.([a-zA-Z0-9\\-]+))*)|(\\[([0-9a-fA-F]{1,4}\\:{1,2}){7}[0-9a-fA-F]{1,4}\\])|(\\${[A-Za-z_][A-Za-z0-9_]*}))$", + "description": "This is the host that the CROWler will use to allow connections to the control API." + }, + "port": { + "type": "integer", + "minimum": 1, + "maximum": 65535, + "description": "This is the port that the CROWler will use to allow connections to the control API.", + "examples": [ + 8081 + ] + }, + "timeout": { + "type": "integer", + "description": "This is the timeout for the control API. It is the maximum amount of time that the CROWler will wait for the control API to respond." + }, + "sslmode": { + "type": "string", + "description": "This is the sslmode switch for the control API. 
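The `crawler` options above, including the ExprTerpreter-style `interval` and `delay` expressions quoted in their descriptions, might be combined as in this sketch; the numeric limits are placeholder values chosen for illustration:

```json
{
  "crawler": {
    "workers": 3,
    "interval": "random(1, 3)",
    "delay": "random(random(1,3), random(5,8))",
    "timeout": 60,
    "maintenance": 3600,
    "browsing_mode": "human",
    "max_depth": 2,
    "max_sources": 10,
    "max_retries": 3,
    "source_screenshot": true,
    "full_site_screenshot": false,
    "collect_html": true,
    "collect_content": true,
    "collect_keywords": true,
    "collect_metatags": true
  }
}
```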
Use 'enable' to make the control API use HTTPS.", + "pattern": "^(enable|disable|)$", + "examples": [ + "enable", + "disable" + ] + }, + "cert_file": { + "type": "string", + "description": "This is the certificate file for the control API HTTPS protocol." + }, + "key_file": { + "type": "string", + "description": "This is the key file for the control API HTTPS certificates." + }, + "rate_limit": { + "type": "string", + "description": "This is the rate limit for the control API. It is the maximum number of requests that the CROWler will accept per second. You can use the ExprTerpreter language to set the rate limit. The format of this parameter is 'query_per_second, total_query' (for example: '100,100')." + }, + "readheader_timeout": { + "type": "integer", + "description": "This is the readheader timeout for the control API. It is the maximum amount of time that the CROWler will wait for the control API to respond." }, - "description": "This is the configuration for the database that the CROWler will use to store data.", - "extraProperties": false, - "required": [ - "type", - "host", - "port", - "user", - "password", - "dbname" + "write_timeout": { + "type": "integer", + "description": "This is the write timeout for the control API. It is the maximum amount of time that the CROWler will wait for the control API to respond." + } + }, + "additionalProperties": false, + "description": "This is the crawler's Control API configuration, it's an internal API used by tools like healthCheck etc to control the status of the CROWler. It has nothing to do with the General API `api` which is used to interact with the CROWler, make requests and post Sources. This section is the configuration for the control API that the CROWler Engine will use to allow connections to the control API. The COntrol API is part of the CROWler Engine." + } + }, + "additionalProperties": false, + "description": "This is the configuration for the CROWler engine. It is the configuration for the CROWler engine that the CROWler will use to crawl websites." + }, + "api": { + "type": "object", + "properties": { + "host": { + "type": "string", + "pattern": "^(((([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5]))|(([a-zA-Z0-9]|[a-zA-Z0-9][a-zA-Z0-9\\-]*[a-zA-Z0-9])(\\.([a-zA-Z0-9\\-]+))*)|(\\[([0-9a-fA-F]{1,4}\\:{1,2}){7}[0-9a-fA-F]{1,4}\\])|(\\${[A-Za-z_][A-Za-z0-9_]*}))$", + "description": "This is the host that the API will use to communicate with the outside world. Use 0.0.0.0 to make the API accessible from any IP address." + }, + "port": { + "type": "integer", + "minimum": 1, + "maximum": 65535, + "description": "This is the port that the API will use to communicate with the outside world.", + "examples": [ + 8080 + ] + }, + "timeout": { + "type": "integer", + "description": "This is the timeout for the API. It is the maximum amount of time that the CROWler will wait for the API to respond." + }, + "content_search": { + "type": "boolean", + "description": "This is a flag that tells the CROWler to search also in the content field of a web object in the search results. This is useful for searching for every possible details of a web object, however will reduce performance quite a bit." + }, + "return_content": { + "type": "boolean", + "description": "This is a flag that tells the CROWler to return the web object content of a page in the search results. To improve performance, you can disable this option." 
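For the engine's Control API block described above, a small instance might look like this; the `rate_limit` string follows the 'query_per_second, total_query' format given in its description, while the host and timeout values are placeholders:

```json
{
  "crawler": {
    "control": {
      "host": "0.0.0.0",
      "port": 8081,
      "timeout": 30,
      "sslmode": "disable",
      "rate_limit": "100,100",
      "readheader_timeout": 15,
      "write_timeout": 30
    }
  }
}
```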
+ }, + "sslmode": { + "type": "string", + "description": "This is the sslmode switch for the API. Use 'enable' to make the API use HTTPS.", + "pattern": "^(enable|disable|)$", + "examples": [ + "enable", + "disable" + ] + }, + "cert_file": { + "type": "string", + "description": "This is the certificate file for the API HTTPS protocol." + }, + "key_file": { + "type": "string", + "description": "This is the key file for the API HTTPS certificates." + }, + "rate_limit": { + "type": "string", + "description": "This is the rate limit for the API. It is the maximum number of requests that the CROWler will accept per second. You can use the ExprTerpreter language to set the rate limit." + }, + "enable_console": { + "type": "boolean", + "description": "This is a flag that tells the CROWler to enable the admin console via the API. In other words, you'll get more endpoints to manage the CROWler via the Search API instead of local commands." + }, + "return_404": { + "type": "boolean", + "description": "This is a flag that tells the CROWler to return 404 status code if a query has no results." + } + }, + "additionalProperties": false, + "description": "This is the configuration for the General API (it has no effect on the CROWler's engine, nor it has anything to do with the Engine's Control API). It is the configuration for the API that the CROWler will use to communicate with the outside world, to allow users to make queries, post Sources, check the status of the crawling activities etc.", + "required": [ + "host", + "timeout" + ] + }, + "selenium": { + "type": "array", + "items": { + "type": "object", + "properties": { + "name": { + "type": "string", + "description": "This is the name of the VDI image. This is not a network name, so you can pick whatever makes sense for your business logic. This name can be used in a Source Configuration, to ensure the CROWler will use that specific VDI image to crawl the website." + }, + "location": { + "type": "string", + "description": "This is the location of the VDI image." + }, + "path": { + "type": "string", + "description": "This is the path to the selenium driver (IF LOCAL). It is the path to the selenium driver that the CROWler will use to crawl websites. (deprecated)" + }, + "driver_path": { + "type": "string", + "description": "This is the path to the selenium driver (IF REMOTE). It is the path to the selenium driver that the CROWler will use to crawl websites. (deprecated)" + }, + "type": { + "type": "string", + "description": "This is the type of selenium driver that the CROWler will use to crawl websites. For example, chrome or firefox." + }, + "port": { + "type": "integer", + "minimum": 1, + "maximum": 65535, + "description": "This is the port that the selenium driver will use to connect to the CROWler. It is the port that the selenium driver will use to connect to the CROWler." + }, + "host": { + "type": "string", + "pattern": "^(((([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5]))|(([a-zA-Z0-9]|[a-zA-Z0-9][a-zA-Z0-9\\-]*[a-zA-Z0-9])(\\.([a-zA-Z0-9\\-]+))*)|(\\[([0-9a-fA-F]{1,4}\\:{1,2}){7}[0-9a-fA-F]{1,4}\\])|(\\${[A-Za-z_][A-Za-z0-9_]*}))$", + "description": "This is the VDI host name or IP that the CROWler will use to connect to the VDI. It is the host that will be used to fetch web pages and that runs Selenium, RBee etc. For example, localhost. This is also the recommended way to use and connect to a VDI (in other words, don't try to run selenium, Rbee etc. 
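Drawing on the General API section above (required fields `host` and `timeout`), an illustrative instance; the timeout and boolean flags are example values for options documented in the schema:

```json
{
  "api": {
    "host": "0.0.0.0",
    "port": 8080,
    "timeout": 60,
    "sslmode": "disable",
    "content_search": false,
    "return_content": false,
    "enable_console": true,
    "return_404": false
  }
}
```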
locally, use a container for the VDI)." + }, + "headless": { + "type": "boolean", + "description": "This is a flag that tells the selenium driver to run in headless mode. This is useful for running the selenium driver in a headless environment. It's generally NOT recommended to enable headless mode for the selenium driver. (don't use headless unless you know what you're doing, headless browsing is mostly blocked these days!)" + }, + "use_service": { + "type": "boolean", + "description": "This is a flag that tells the CROWler to access Selenium as service. (deprecated)" + }, + "sslmode": { + "type": "string", + "description": "This is the sslmode that the selenium driver will use to connect to the CROWler. It is the sslmode that the selenium driver will use to connect to the CROWler.", + "pattern": "^(enable|disable|)$", + "examples": [ + "enable", + "disable" ] + }, + "download_path": { + "type": "string", + "description": "This is the temporary download path for the VDI. It is the local path where the VDI will download files. This is useful for downloading files from websites (like pdf or zip etc.). The CROWler will use this path to temporarily store the downloaded files (before moving them to the storage files area)." + } }, - "crawler": { - "type": "object", - "properties": { - "workers": { - "type": "integer", - "description": "This is the number of workers that the CROWler will use to crawl websites. Minimum number is 3 per each Source if you have network discovery enabled or 1 per each source if you are doing crawling only. Increase the number of workers to scale up the CROWler engine vertically." - }, - "interval": { - "type": "string", - "description": "This is the interval at which the CROWler will crawl websites. It is the interval at which the CROWler will crawl websites, values are in seconds, e.g. '3' means 3 seconds. For the interval you can also use the CROWler exprterpreter to generate delay values at runtime, e.g., 'random(1, 3)' or 'random(random(1,3), random(5,8))'." - }, - "timeout": { - "type": "integer", - "description": "This is the timeout for the CROWler. It is the maximum amount of time that the CROWler will wait for a website to respond." - }, - "maintenance": { - "type": "integer", - "description": "This is the maintenance interval for the CROWler. It is the interval at which the CROWler will perform automatic maintenance tasks." - }, - "source_screenshot": { - "type": "boolean", - "description": "This is a flag that tells the CROWler to take a screenshot of the source website. This is useful for debugging purposes." - }, - "full_site_screenshot": { - "type": "boolean", - "description": "This is a flag that tells the CROWler to take a screenshot of the full website. This is useful for debugging purposes." - }, - "max_depth": { - "type": "integer", - "description": "This is the maximum depth that the CROWler will crawl websites." - }, - "max_sources": { - "type": "integer", - "description": "This is the maximum number of sources that a single instance of the CROWler's engine will fetch atomically to enqueue and crawl." - }, - "delay": { - "type": "string", - "description": "This is the delay between requests that the CROWler will use to crawl websites. It is the delay between requests that the CROWler will use to crawl websites. For delay you can also use the CROWler exprterpreter to generate delay values at runtime, e.g., 'random(1, 3)' or 'random(random(1,3), random(5,8))'." 
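A single entry of the `selenium` (VDI) array described above, with its required fields (`type`, `host`) plus a few common options; the name, port and download path are placeholders (4444 is simply the conventional Selenium port, not a value taken from this schema):

```json
{
  "selenium": [
    {
      "name": "vdi-1",
      "type": "chrome",
      "host": "localhost",
      "port": 4444,
      "headless": false,
      "sslmode": "disable",
      "download_path": "/tmp/crowler-downloads"
    }
  ]
}
```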
- }, - "browsing_mode": { - "type": "string", - "description": "This is the browsing mode that the CROWler will use to crawl websites. For example, recursive, human, or fuzzing." - }, - "max_retries": { - "type": "integer", - "description": "This is the maximum number of times that the CROWler will retry a request to a website. If the CROWler is unable to fetch a website after this number of retries, it will move on to the next website." - }, - "max_requests": { - "type": "integer", - "description": "This is the maximum number of requests that the CROWler will send to a website. If the CROWler sends this number of requests to a website and is unable to fetch the website, it will move on to the next website." - }, - "collect_html": { - "type": "boolean", - "description": "This is a flag that tells the CROWler to collect the HTML of a website. This is useful for debugging purposes." - }, - "collect_images": { - "type": "boolean", - "description": "This is a flag that tells the CROWler to collect images from a website. This is useful for debugging purposes." - }, - "collect_files": { - "type": "boolean", - "description": "This is a flag that tells the CROWler to collect files from a website. This is useful for debugging purposes." - }, - "collect_content": { - "type": "boolean", - "description": "This is a flag that tells the CROWler to collect the text content of a website. This is useful for AI datasets creation and knowledge bases." - }, - "collect_keywords": { - "type": "boolean", - "description": "This is a flag that tells the CROWler to collect the keywords of a website. This is useful for AI datasets creation and knowledge bases." - }, - "collect_metatags": { - "type": "boolean", - "description": "This is a flag that tells the CROWler to collect the metatags of a website. This is useful for AI datasets creation and knowledge bases." - }, - "control": { - "type": "object", - "properties": { - "host": { - "type": "string", - "description": "This is the host that the CROWler will use to allow connections to the control API." - }, - "port": { - "type": "integer", - "description": "This is the port that the CROWler will use to allow connections to the control API." - }, - "timeout": { - "type": "integer", - "description": "This is the timeout for the control API. It is the maximum amount of time that the CROWler will wait for the control API to respond." - }, - "sslmode": { - "type": "string", - "description": "This is the sslmode switch for the control API. Use 'enable' to make the control API use HTTPS." - }, - "cert_file": { - "type": "string", - "description": "This is the certificate file for the control API HTTPS protocol." - }, - "key_file": { - "type": "string", - "description": "This is the key file for the control API HTTPS certificates." - }, - "rate_limit": { - "type": "string", - "description": "This is the rate limit for the control API. It is the maximum number of requests that the CROWler will accept per second. You can use the ExprTerpreter language to set the rate limit. The format of this parameter is 'query_per_second, total_query' (for example: '100,100')." - }, - "readheader_timeout": { - "type": "integer", - "description": "This is the readheader timeout for the control API. It is the maximum amount of time that the CROWler will wait for the control API to respond." - }, - "write_timeout": { - "type": "integer", - "description": "This is the write timeout for the control API. It is the maximum amount of time that the CROWler will wait for the control API to respond." 
- } - }, - "description": "This is the configuration for the control API. It is the configuration for the control API that the CROWler will use to allow connections to the control API." - } + "required": [ + "type", + "host" + ], + "additionalProperties": false, + "description": "This is the configuration for the selenium driver. It is the configuration for the selenium driver that the CROWler will use to crawl websites. To scale the CROWler web crawling capabilities, you can add multiple selenium drivers in the array." + } + }, + "image_storage": { + "type": "object", + "properties": { + "host": { + "type": "string", + "pattern": "^(((([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5]))|(([a-zA-Z0-9]|[a-zA-Z0-9][a-zA-Z0-9\\-]*[a-zA-Z0-9])(\\.([a-zA-Z0-9\\-]+))*)|(\\[([0-9a-fA-F]{1,4}\\:{1,2}){7}[0-9a-fA-F]{1,4}\\])|(\\${[A-Za-z_][A-Za-z0-9_]*}))$", + "description": "This is the remote host for the image storage request." + }, + "path": { + "type": "string", + "description": "This is the path to the image storage. It is the path to the storage that the CROWler will use to store images. if the image storage is local, this is the path to the local directory where the images will be stored. If the image storage is remote, this is the path to the remote storage where the images will be stored." + }, + "port": { + "type": "integer", + "minimum": 1, + "maximum": 65535, + "description": "This is the remote port for the image storage request." + }, + "region": { + "type": "string", + "description": "This is the region for the image storage request (for example for AWS s3 buckets)." + }, + "token": { + "type": "string", + "description": "This is the token for the image storage request for remote storage." + }, + "secret": { + "type": "string", + "description": "This is the secret for the image storage request for remote storage." + }, + "timeout": { + "type": "integer", + "description": "This is the remote request timeout in seconds." + }, + "type": { + "type": "string", + "description": "This is the type of storage that the CROWler will use to store images. For example, s3, http or local (local is the default type)." + }, + "sslmode": { + "type": "string", + "description": "This is the ssl mode for the image storage request for remote storage. Use enable to force https over http.", + "pattern": "^(enable|disable|)$", + "examples": [ + "enable", + "disable" + ] + } + }, + "additionalProperties": false, + "description": "This is the configuration for the image storage. It is the configuration for the storage that the CROWler will use to store images.", + "required": [ + "path", + "type" + ] + }, + "file_storage": { + "type": "object", + "properties": { + "host": { + "type": "string", + "pattern": "^(((([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5]))|(([a-zA-Z0-9]|[a-zA-Z0-9][a-zA-Z0-9\\-]*[a-zA-Z0-9])(\\.([a-zA-Z0-9\\-]+))*)|(\\[([0-9a-fA-F]{1,4}\\:{1,2}){7}[0-9a-fA-F]{1,4}\\])|(\\${[A-Za-z_][A-Za-z0-9_]*}))$", + "description": "This is the remote host for the file storage request." + }, + "path": { + "type": "string", + "description": "This is the path to the file storage. It is the path to the storage that the CROWler will use to store files. if the file storage is local, this is the path to the local directory where the files will be stored. If the file storage is remote, this is the path to the remote storage where the files will be stored." 
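For the `image_storage` block above, a local-storage instance covering the required `path` and `type` fields; the directory is a placeholder and `local` is the documented default type:

```json
{
  "image_storage": {
    "type": "local",
    "path": "/app/data/images"
  }
}
```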
+ }, + "port": { + "type": "integer", + "minimum": 1, + "maximum": 65535, + "description": "This is the remote port for the file storage request." + }, + "region": { + "type": "string", + "description": "This is the region for the file storage request (for example for AWS s3 buckets)." + }, + "token": { + "type": "string", + "description": "This is the token for the file storage request for remote storage." + }, + "secret": { + "type": "string", + "description": "This is the secret for the file storage request for remote storage." + }, + "timeout": { + "type": "integer", + "description": "This is the remote request timeout in seconds." + }, + "type": { + "type": "string", + "description": "This is the type of storage that the CROWler will use to store files. For example, s3, http or local (local is the default type)." + }, + "sslmode": { + "type": "string", + "description": "This is the ssl mode for the file storage request for remote storage. Use enable to force https over http.", + "pattern": "^(enable|disable|)$", + "examples": [ + "enable", + "disable" + ] + } + }, + "additionalProperties": false, + "description": "This is the configuration for the file storage. File storage will be used for web object content storage.", + "required": [ + "path", + "type" + ] + }, + "network_info": { + "type": "object", + "properties": { + "dns": { + "type": "object", + "properties": { + "enabled": { + "type": "boolean", + "description": "This is a flag that tells the CROWler to use DNS techniques. This is useful for detecting the IP address of a domain." + }, + "timeout": { + "type": "integer", + "description": "This is the timeout for the DNS database. It is the maximum amount of time that the CROWler will wait for the DNS database to respond." + }, + "rate_limit": { + "type": "string", + "description": "This is the rate limit for the DNS database. It is the maximum number of requests that the CROWler will send to the DNS database per second. You can use the ExprTerpreter language to set the rate limit." } + }, + "additionalProperties": false, + "description": "This is the configuration for the DNS data collection. It is the configuration for the DNS data collection that the CROWler will use to detect the IP address of a domain, subdomains etc.", + "required": [ + "enabled" + ] }, - "api": { - "type": "object", - "properties": { - "host": { - "type": "string", - "description": "This is the host that the API will use to communicate with the outside world. Use 0.0.0.0 to make the API accessible from any IP address." - }, - "port": { - "type": "integer", - "description": "This is the port that the API will use to communicate with the outside world." - }, - "timeout": { - "type": "integer", - "description": "This is the timeout for the API. It is the maximum amount of time that the CROWler will wait for the API to respond." - }, - "content_search": { - "type": "boolean", - "description": "This is a flag that tells the CROWler to search also in the content field of a web object in the search results. This is useful for searching for every possible details of a web object, however will reduce performance quite a bit." - }, - "return_content": { - "type": "boolean", - "description": "This is a flag that tells the CROWler to return the web object content of a page in the search results. To improve performance, you can disable this option." - }, - "sslmode": { - "type": "string", - "description": "This is the sslmode switch for the API. Use 'enable' to make the API use HTTPS." 
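# Illustrative sketch only: one local and one remote (S3-style) storage block,
# following the image_storage/file_storage schema above. The bucket, region and
# credential values are placeholders, not recommendations.
image_storage:
  type: "local"                     # local is the default type
  path: "/data/crowler/images"      # local directory for collected images
file_storage:
  type: "s3"
  host: "s3.amazonaws.com"
  region: "us-east-1"
  path: "my-crowler-bucket/files"   # hypothetical bucket/prefix
  token: "${S3_TOKEN}"              # placeholders; keep real credentials out of the file
  secret: "${S3_SECRET}"
  timeout: 30
  sslmode: "enable"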
- }, - "cert_file": { - "type": "string", - "description": "This is the certificate file for the API HTTPS protocol." - }, - "key_file": { - "type": "string", - "description": "This is the key file for the API HTTPS certificates." - }, - "rate_limit": { - "type": "string", - "description": "This is the rate limit for the API. It is the maximum number of requests that the CROWler will accept per second. You can use the ExprTerpreter language to set the rate limit." - }, - "enable_console": { - "type": "boolean", - "description": "This is a flag that tells the CROWler to enable the admin console via the API. In other words, you'll get more endpoints to manage the CROWler via the Search API instead of local commands." - }, - "return_404": { - "type": "boolean", - "description": "This is a flag that tells the CROWler to return 404 status code if a query has no results." - } + "whois": { + "type": "object", + "properties": { + "enabled": { + "type": "boolean", + "description": "This is a flag that tells the CROWler to use whois techniques. This is useful for detecting the owner of a domain." }, - "description": "This is the configuration for the API (has no effect on the engine). It is the configuration for the API that the CROWler will use to communicate with the outside world.", - "required": [ - "host", - "port", - "timeout" - ] + "timeout": { + "type": "integer", + "description": "This is the timeout for the whois database. It is the maximum amount of time that the CROWler will wait for the whois database to respond." + }, + "rate_limit": { + "type": "string", + "description": "This is the rate limit for the whois database. It is the maximum number of requests that the CROWler will send to the whois database per second. You can use the ExprTerpreter language to set the rate limit." + } + }, + "description": "This is the configuration for the whois data collection. It is the configuration for the whois data collection that the CROWler will use to detect the owner of a domain.", + "additionalProperties": false, + "required": [ + "enabled" + ] }, - "selenium": { - "type": "array", - "items": { - "type": "object", - "properties": { - "name": { - "type": "string", - "description": "This is the name of the VDI image. This is not a network name, so you can pick whatever makes sense for your business logic. This name can be used in a Source Configuration, to ensure the CROWler will use that specific VDI image to crawl the website." - }, - "location": { - "type": "string", - "description": "This is the location of the VDI image." - }, - "path": { - "type": "string", - "description": "This is the path to the selenium driver (IF LOCAL). It is the path to the selenium driver that the CROWler will use to crawl websites. (deprecated)" - }, - "driver_path": { - "type": "string", - "description": "This is the path to the selenium driver (IF REMOTE). It is the path to the selenium driver that the CROWler will use to crawl websites. (deprecated)" - }, - "type": { - "type": "string", - "description": "This is the type of selenium driver that the CROWler will use to crawl websites. For example, chrome or firefox." - }, - "port": { - "type": "integer", - "description": "This is the port that the selenium driver will use to connect to the CROWler. It is the port that the selenium driver will use to connect to the CROWler." - }, - "host": { - "type": "string", - "description": "This is the VDI host name or IP that the CROWler will use to connect to the VDI. 
It is the host that will be used to fetch web pages and that runs Selenium, RBee etc. For example, localhost. This is also the recommended way to use and connect to a VDI (in other words, don't try to run selenium, Rbee etc. locally, use a container for the VDI)." - }, - "headless": { - "type": "boolean", - "description": "This is a flag that tells the selenium driver to run in headless mode. This is useful for running the selenium driver in a headless environment. It's generally NOT recommended to enable headless mode for the selenium driver. (don't use headless unless you know what you're doing, headless browsing is mostly blocked these days!)" - }, - "use_service": { - "type": "boolean", - "description": "This is a flag that tells the CROWler to access Selenium as service. (deprecated)" - }, - "sslmode": { - "type": "string", - "description": "This is the sslmode that the selenium driver will use to connect to the CROWler. It is the sslmode that the selenium driver will use to connect to the CROWler." - }, - "download_path": { - "type": "string", - "description": "This is the temporary download path for the VDI. It is the local path where the VDI will download files. This is useful for downloading files from websites (like pdf or zip etc.). The CROWler will use this path to temporarily store the downloaded files (before moving them to the storage files area)." - } - }, - "required": [ - "type", - "path", - "port", - "host" - ], - "additionalProperties": false, - "description": "This is the configuration for the selenium driver. It is the configuration for the selenium driver that the CROWler will use to crawl websites. To scale the CROWler web crawling capabilities, you can add multiple selenium drivers in the array." + "netlookup": { + "type": "object", + "properties": { + "enabled": { + "type": "boolean", + "description": "This is a flag that tells the CROWler to use netlookup techniques. This is useful for detecting the network information of a host." + }, + "timeout": { + "type": "integer", + "description": "This is the timeout for the netlookup database. It is the maximum amount of time that the CROWler will wait for the netlookup database to respond." + }, + "rate_limit": { + "type": "string", + "description": "This is the rate limit for the netlookup database. It is the maximum number of requests that the CROWler will send to the netlookup database per second. You can use the ExprTerpreter language to set the rate limit." } + }, + "additionalProperties": false, + "description": "This is the configuration for the netlookup data collection. It is the configuration for the netlookup data collection that the CROWler will use to detect the network information of a host.", + "required": [ + "enabled" + ] }, - "image_storage": { - "type": "object", - "properties": { - "host": { - "type": "string", - "description": "This is the remote host for the image storage request." - }, - "path": { - "type": "string", - "description": "This is the path to the image storage. It is the path to the storage that the CROWler will use to store images. if the image storage is local, this is the path to the local directory where the images will be stored. If the image storage is remote, this is the path to the remote storage where the images will be stored." - }, - "port": { - "type": "integer", - "description": "This is the remote port for the image storage request." - }, - "region": { - "type": "string", - "description": "This is the region for the image storage request (for example for AWS s3 buckets)." 
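# Illustrative sketch only: enabling the DNS, whois and netlookup collectors
# described above. Timeouts and rate limits are placeholder values.
network_info:
  dns:
    enabled: true
    timeout: 10                     # seconds
    rate_limit: "1"                 # ExprTerpreter expressions are also accepted
  whois:
    enabled: true
    timeout: 15
  netlookup:
    enabled: false                  # only `enabled` is required; the rest is optional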
- }, - "token": { - "type": "string", - "description": "This is the token for the image storage request for remote storage." - }, - "secret": { - "type": "string", - "description": "This is the secret for the image storage request for remote storage." - }, - "timeout": { - "type": "integer", - "description": "This is the remote request timeout in seconds." - }, - "type": { - "type": "string", - "description": "This is the type of storage that the CROWler will use to store images. For example, s3, http or local (local is the default type)." - }, - "sslmode": { - "type": "string", - "description": "This is the ssl mode for the image storage request for remote storage. Use enable to force https over http." - } + "geo_localization": { + "type": "object", + "properties": { + "enabled": { + "type": "boolean", + "description": "This is a flag that tells the CROWler to use geolocation techniques. This is useful for detecting the location of a host." }, - "description": "This is the configuration for the image storage. It is the configuration for the storage that the CROWler will use to store images.", - "required": [ - "path", - "type" - ] + "path": { + "type": "string", + "description": "This is the path to the geolocation database. It is the path to the database that the CROWler will use to determine the location of a host." + }, + "type": { + "type": "string", + "description": "This is the type of geolocation database that the CROWler will use. It is the type of database that the CROWler will use to determine the location of a host. For example maxmind or ip2location" + }, + "timeout": { + "type": "integer", + "description": "This is the timeout for the geolocation database. It is the maximum amount of time that the CROWler will wait for the geolocation database to respond." + }, + "api_key": { + "type": "string", + "description": "This is the API key for the geolocation database. It is the API key that the CROWler will use to connect to the geolocation database." + }, + "sslmode": { + "type": "string", + "description": "This is the sslmode that the CROWler will use to connect to the geolocation database.", + "pattern": "^(enable|disable|)$", + "examples": [ + "enable", + "disable" + ] + } + }, + "additionalProperties": false, + "description": "This is the configuration for the geolocation data collection. It is the configuration for the geolocation data collection that the CROWler will use to detect the location of a host.", + "required": [ + "enabled", + "path" + ] }, - "file_storage": { - "type": "object", - "properties": { + "service_scout": { + "type": "object", + "properties": { + "enabled": { + "type": "boolean", + "description": "This is a flag that tells the CROWler to use service scanning techniques. This is useful for detecting services that are running on a host." + }, + "timeout": { + "type": "integer", + "description": "This is the timeout for the scan. It is the maximum amount of time that the CROWler will wait for a host to respond to a scan." + }, + "idle_scan": { + "type": "object", + "description": "This is the configuration for the idle scan.", + "properties": { "host": { - "type": "string", - "description": "This is the remote host for the file storage request." - }, - "path": { - "type": "string", - "description": "This is the path to the file storage. It is the path to the storage that the CROWler will use to store files. if the file storage is local, this is the path to the local directory where the files will be stored. 
If the file storage is remote, this is the path to the remote storage where the files will be stored." + "type": "string", + "pattern": "^(((([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5]))|(([a-zA-Z0-9]|[a-zA-Z0-9][a-zA-Z0-9\\-]*[a-zA-Z0-9])(\\.([a-zA-Z0-9\\-]+))*)|(\\[([0-9a-fA-F]{1,4}\\:{1,2}){7}[0-9a-fA-F]{1,4}\\])|(\\${[A-Za-z_][A-Za-z0-9_]*}))$", + "description": "Host FQDN or IP address." }, "port": { - "type": "integer", - "description": "This is the remote port for the file storage request." - }, - "region": { - "type": "string", - "description": "This is the region for the file storage request (for example for AWS s3 buckets)." - }, - "token": { - "type": "string", - "description": "This is the token for the file storage request for remote storage." - }, - "secret": { - "type": "string", - "description": "This is the secret for the file storage request for remote storage." - }, - "timeout": { - "type": "integer", - "description": "This is the remote request timeout in seconds." - }, - "type": { - "type": "string", - "description": "This is the type of storage that the CROWler will use to store files. For example, s3, http or local (local is the default type)." - }, - "sslmode": { - "type": "string", - "description": "This is the ssl mode for the file storage request for remote storage. Use enable to force https over http." + "type": "integer", + "minimum": 1, + "maximum": 65535, + "description": "Port number." } + }, + "additionalProperties": false }, - "description": "This is the configuration for the file storage. File storage will be used for web object content storage.", - "required": [ - "path", - "type" - ] - }, - "network_info": { - "type": "object", - "properties": { - "dns": { - "type": "object", - "properties": { - "enabled": { - "type": "boolean", - "description": "This is a flag that tells the CROWler to use DNS techniques. This is useful for detecting the IP address of a domain." - }, - "timeout": { - "type": "integer", - "description": "This is the timeout for the DNS database. It is the maximum amount of time that the CROWler will wait for the DNS database to respond." - }, - "rate_limit": { - "type": "string", - "description": "This is the rate limit for the DNS database. It is the maximum number of requests that the CROWler will send to the DNS database per second. You can use the ExprTerpreter language to set the rate limit." - } - }, - "required": [ - "enabled" - ] - }, - "whois": { - "type": "object", - "properties": { - "enabled": { - "type": "boolean", - "description": "This is a flag that tells the CROWler to use whois techniques. This is useful for detecting the owner of a domain." - }, - "timeout": { - "type": "integer", - "description": "This is the timeout for the whois database. It is the maximum amount of time that the CROWler will wait for the whois database to respond." - }, - "rate_limit": { - "type": "string", - "description": "This is the rate limit for the whois database. It is the maximum number of requests that the CROWler will send to the whois database per second. You can use the ExprTerpreter language to set the rate limit." - } - }, - "required": [ - "enabled" - ] - }, - "netlookup": { - "type": "object", - "properties": { - "enabled": { - "type": "boolean", - "description": "This is a flag that tells the CROWler to use netlookup techniques. This is useful for detecting the network information of a host." 
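# Illustrative sketch only: a geo_localization block (nested under network_info)
# using a local MaxMind database, as allowed by the schema above. The database
# path is a placeholder.
network_info:
  geo_localization:
    enabled: true
    type: "maxmind"                 # e.g. maxmind or ip2location
    path: "/data/geo/GeoLite2-City.mmdb"   # hypothetical database file
    timeout: 10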
- }, - "timeout": { - "type": "integer", - "description": "This is the timeout for the netlookup database. It is the maximum amount of time that the CROWler will wait for the netlookup database to respond." - }, - "rate_limit": { - "type": "string", - "description": "This is the rate limit for the netlookup database. It is the maximum number of requests that the CROWler will send to the netlookup database per second. You can use the ExprTerpreter language to set the rate limit." - } - }, - "required": [ - "enabled" - ] - }, - "geo_localization": { - "type": "object", - "properties": { - "enabled": { - "type": "boolean", - "description": "This is a flag that tells the CROWler to use geolocation techniques. This is useful for detecting the location of a host." - }, - "path": { - "type": "string", - "description": "This is the path to the geolocation database. It is the path to the database that the CROWler will use to determine the location of a host." - }, - "type": { - "type": "string", - "description": "This is the type of geolocation database that the CROWler will use. It is the type of database that the CROWler will use to determine the location of a host. For example maxmind or ip2location" - }, - "timeout": { - "type": "integer", - "description": "This is the timeout for the geolocation database. It is the maximum amount of time that the CROWler will wait for the geolocation database to respond." - }, - "api_key": { - "type": "string", - "description": "This is the API key for the geolocation database. It is the API key that the CROWler will use to connect to the geolocation database." - }, - "sslmode": { - "type": "string", - "description": "This is the sslmode for the geolocation database. It is the sslmode that the CROWler will use to connect to the geolocation database." - } - }, - "required": [ - "enabled", - "path" - ] - }, - "service_scout": { - "type": "object", - "properties": { - "enabled": { - "type": "boolean", - "description": "This is a flag that tells the CROWler to use service scanning techniques. This is useful for detecting services that are running on a host." - }, - "timeout": { - "type": "integer", - "description": "This is the timeout for the scan. It is the maximum amount of time that the CROWler will wait for a host to respond to a scan." - }, - "idle_scan": { - "type": "object", - "description": "This is the configuration for the idle scan.", - "properties": { - "host": { - "type": "string", - "description": "Host FQDN or IP address." - }, - "port": { - "type": "integer", - "description": "Port number." - } - } - }, - "ping_scan": { - "type": "boolean", - "description": "This is a flag that tells the CROWler to use ping scanning techniques. This is useful for detecting hosts that are alive." - }, - "connect_scan": { - "type": "boolean", - "description": "This is a flag that tells the CROWler to use connect scanning techniques. This is useful for detecting services that are running on a host." - }, - "syn_scan": { - "type": "boolean", - "description": "This is a flag that tells the CROWler to use SYN scanning techniques. This is useful for detecting services that are running on a host." - }, - "udp_scan": { - "type": "boolean", - "description": "This is a flag that tells the CROWler to use UDP scanning techniques. This is useful for detecting services that are running on a host." - }, - "no_dns_resolution": { - "type": "boolean", - "description": "This is a flag that tells the CROWler not to resolve hostnames to IP addresses. 
This is useful for avoiding detection by intrusion detection systems." - }, - "service_detection": { - "type": "boolean", - "description": "This is a flag that tells the CROWler to use service detection techniques. This is useful for detecting services that are running on a host." - }, - "service_db": { - "type": "string", - "description": "This is the service detection database." - }, - "os_finger_print": { - "type": "boolean", - "description": "This is a flag that tells the CROWler to use OS fingerprinting techniques. This is useful for detecting the operating system that is running on a host." - }, - "aggressive_scan": { - "type": "boolean", - "description": "This is a flag that tells the CROWler to use aggressive scanning techniques. This is useful for detecting services that are running on a host." - }, - "script_scan": { - "type": "array", - "items": { - "type": "string" - }, - "description": "This is a list of nmap scripts to run. This is particularly important when a user wants to do vulnerability scanning." - }, - "excluded_hosts": { - "type": "array", - "items": { - "type": "string" - }, - "description": "This is a list of hosts to exclude from the scan. The CROWler may encounter such hosts during its crawling activities, so this field makes it easy to define a list of hosts that it should always avoid scanning." - }, - "timing_template": { - "type": "string", - "description": "This allows the user to set the timing template for the scan. The timing template is a string that is passed to nmap to set the timing of the scan. DO not specify values using Tx, where x is a number. Instead, use just the number, e.g., '3'." - }, - "host_timeout": { - "type": "string", - "description": "This is the timeout for the scan. It is the maximum amount of time that the CROWler will wait for a host to respond to a scan." - }, - "min_rate": { - "type": "string", - "description": "This is the minimum rate at which the CROWler will scan hosts. It is the minimum number of packets that the CROWler will send to a host per second." - }, - "max_retries": { - "type": "integer", - "description": "This is the maximum number of times that the CROWler will retry a scan on a host. If the CROWler is unable to scan a host after this number of retries, it will move on to the next host." - }, - "source_port": { - "type": "integer", - "description": "This is the source port that the CROWler will use for scanning. It is the port that the CROWler will use to send packets to hosts." - }, - "interface": { - "type": "string", - "description": "This is the interface that the CROWler will use for scanning. It is the network interface that the CROWler will use to send packets to hosts. Use this option with a port that is behind a VPN or a proxy for better results." - }, - "spoof_ip": { - "type": "string", - "description": "This is the IP address that the CROWler will use to spoof its identity. It is the IP address that the CROWler will use to send packets to hosts. Use this option with a port that is behind a VPN or a proxy for better results." - }, - "randomize_hosts": { - "type": "boolean", - "description": "This is a flag that tells the CROWler to randomize the order in which it scans hosts. This is useful for avoiding detection by intrusion detection systems." - }, - "data_length": { - "type": "integer", - "description": "This is the length of the data that the CROWler will send to hosts. It is the length of the data that the CROWler will use to send packets to hosts. 
Use this option with a port that is behind a VPN or a proxy for better results." - }, - "delay": { - "type": "string", - "description": "This is the delay between packets that the CROWler will use for scanning. It is the delay between packets that the CROWler will use to send packets to hosts. Use this option with a port that is behind a VPN or a proxy for better results. For the delay you can also use the CROWler exprterpreter to generate delay values at runtime, e.g., 'random(1, 3)' or 'random(random(1,3), random(5,8))'." - }, - "mtu_discovery": { - "type": "boolean", - "description": "This is a flag that tells the CROWler to use MTU discovery when scanning hosts. This is useful for avoiding detection by intrusion detection systems." - }, - "scan_flags": { - "type": "string", - "description": "This is the flags that the CROWler will use for scanning. It is the flags that the CROWler will use to send packets to hosts. Use this option with a port that is behind a VPN or a proxy for better results." - }, - "ip_fragment": { - "type": "boolean", - "description": "This is a flag that tells the CROWler to fragment IP packets. This is useful for avoiding detection by intrusion detection systems." - }, - "max_port_number": { - "type": "integer", - "description": "This is the maximum port number to scan (default is 9000)." - }, - "max_parallelism": { - "type": "integer", - "description": "This is the maximum number of parallelism." - }, - "dns_servers": { - "type": "array", - "items": { - "type": "string" - }, - "description": "This is a list of custom DNS servers." - }, - "proxies": { - "type": "array", - "items": { - "type": "string" - }, - "description": "Proxies for the database connection." - } - }, - "required": [ - "enabled" - ] - } + "ping_scan": { + "type": "boolean", + "description": "This is a flag that tells the CROWler to use ping scanning techniques. This is useful for detecting hosts that are alive." }, - "description": "This is the configuration for the network information collection." - }, - "rulesets": { - "type": "array", - "items": { - "type": "object", - "properties": { - "path": { - "type": "array", - "items": { - "type": "string" - }, - "description": "This is the path that the CROWler will use to fetch the ruleset. You can use wildcard to fetch multiple rulesets. for example './rules/*.yaml'." - }, - "host": { - "type": "string", - "description": "This is the host that the CROWler will use to fetch the ruleset." - }, - "port": { - "type": "string", - "description": "This is the port that the CROWler will use to fetch the ruleset." - }, - "region": { - "type": "string", - "description": "This is the region that the CROWler will use to fetch the ruleset (for example, if you're hosting your ruleset on an AWS S3 bucket)." - }, - "token": { - "type": "string", - "description": "This is the token that the CROWler will use to connect to the distribution server to download the ruleset." - }, - "secret": { - "type": "string", - "description": "This is the secret that the CROWler will use to connect to the distribution server to download the ruleset." - }, - "timeout": { - "type": "integer", - "description": "This is the timeout in seconds for the CROWler to fetch the ruleset." - }, - "type": { - "type": "string", - "description": "This is the type of the distribution server that the CROWler will use to fetch the ruleset. For example, s3, http or local. 
(local is default)" - }, - "sslmode": { - "type": "string", - "description": "This is the sslmode that the CROWler will use to connect to the distribution server to fetch the ruleset. Use 'enable' to force https over http." - }, - "refresh": { - "type": "integer", - "description": "This is the refresh interval in seconds for the CROWler to fetch the ruleset (refresh it)." - } - }, - "required": [ - "path", - "type" + "connect_scan": { + "type": "boolean", + "description": "This is a flag that tells the CROWler to use connect scanning techniques. This is useful for detecting services that are running on a host." + }, + "syn_scan": { + "type": "boolean", + "description": "This is a flag that tells the CROWler to use SYN scanning techniques. This is useful for detecting services that are running on a host." + }, + "udp_scan": { + "type": "boolean", + "description": "This is a flag that tells the CROWler to use UDP scanning techniques. This is useful for detecting services that are running on a host." + }, + "no_dns_resolution": { + "type": "boolean", + "description": "This is a flag that tells the CROWler not to resolve hostnames to IP addresses. This is useful for avoiding detection by intrusion detection systems." + }, + "service_detection": { + "type": "boolean", + "description": "This is a flag that tells the CROWler to use service detection techniques. This is useful for detecting services that are running on a host." + }, + "service_db": { + "type": "string", + "description": "This is the service detection database." + }, + "os_finger_print": { + "type": "boolean", + "description": "This is a flag that tells the CROWler to use OS fingerprinting techniques. This is useful for detecting the operating system that is running on a host." + }, + "aggressive_scan": { + "type": "boolean", + "description": "This is a flag that tells the CROWler to use aggressive scanning techniques. This is useful for detecting services that are running on a host." + }, + "script_scan": { + "type": "array", + "items": { + "type": "string", + "description": "This is a list of nmap and crowler-scanner scripts to run. This is particularly important when a user wants to do vulnerability scanning.", + "examples": [ + "http-enum", + "http-headers", + "default", + "vuln" ] + }, + "description": "This is a list of nmap scripts to run. This is particularly important when a user wants to do vulnerability scanning." }, - "description": "This is the configuration for the rulesets that the CROWler will use to crawl, interact, scrape info and detect stuff on the provided Sources to crawl." - }, - "external_detection": { - "description": "This is the configuration for the external detection services like VirusTotal, URLHaus, PhishTank, GoogleSafeBrowsing, AbuseIPDB, OpenPhish, Cuckoo, HybridAnalysis, CiscoUmbrella, AlienVault, IPVoid, Shodan, Censys, SSLLabs.", - "type": "array", - "items": { - "type": "object", - "properties": { - "name": { - "type": "string", - "description": "This is the name of the external detection service. For example, VirusTotal, URLHaus, PhishTank, GoogleSafeBrowsing, AbuseIPDB, OpenPhish, Cuckoo, HybridAnalysis, CiscoUmbrella, AlienVault, IPVoid, Shodan, Censys, SSLLabs." - }, - "enabled": { - "type": "boolean", - "description": "This is a flag that tells the CROWler to use the external detection service." - }, - "api_key": { - "type": "string", - "description": "This is the API key for the external detection service." 
- }, - "timeout": { - "type": "integer", - "description": "This is the timeout in seconds for the external detection service." - }, - "delay": { - "type": "string", - "description": "This is the delay time (in seconds) between requests for the external detection service." - }, - "full_site": { - "type": "boolean", - "description": "This is a flag that tells the CROWler to use the external detection service only for the Source URL only or for every URLs being crawler for the Source. (default is false, to reduce traffic over 3rd party services)." - } - }, - "required": [ - "name", - "api_key" + "excluded_hosts": { + "type": "array", + "items": { + "type": "string", + "oneOf": [ + { "format": "ipv4" }, + { "format": "ipv6" }, + { "format": "hostname" } + ], + "examples": [ + "example.com", + "192.168.0.1", + "localhost", + "2001:0db8:85a3:0000:0000:8a2e:0370:7334" ] + }, + "additionalProperties": false, + "description": "This is a list of hosts to exclude from the scan. The CROWler may encounter such hosts during its crawling activities, so this field makes it easy to define a list of hosts that it should always avoid scanning." + }, + "timing_template": { + "type": "string", + "description": "This allows the user to set the timing template for the scan. The timing template is a string that is passed to nmap to set the timing of the scan. DO not specify values using Tx, where x is a number. Instead, use just the number, e.g., '3'.", + "examples": [ + "3" + ] + }, + "host_timeout": { + "type": "string", + "description": "This is the timeout for the scan. It is the maximum amount of time that the CROWler will wait for a host to respond to a scan." + }, + "min_rate": { + "type": "string", + "description": "This is the minimum rate at which the CROWler will scan hosts. It is the minimum number of packets that the CROWler will send to a host per second." + }, + "max_retries": { + "type": "integer", + "description": "This is the maximum number of times that the CROWler will retry a scan on a host. If the CROWler is unable to scan a host after this number of retries, it will move on to the next host." + }, + "source_port": { + "type": "integer", + "description": "This is the source port that the CROWler will use for scanning. It is the port that the CROWler will use to send packets to hosts." + }, + "interface": { + "type": "string", + "description": "This is the interface that the CROWler will use for scanning. It is the network interface that the CROWler will use to send packets to hosts. Use this option with a port that is behind a VPN or a proxy for better results." + }, + "spoof_ip": { + "type": "string", + "description": "This is the IP address that the CROWler will use to spoof its identity. It is the IP address that the CROWler will use to send packets to hosts. Use this option with a port that is behind a VPN or a proxy for better results." + }, + "randomize_hosts": { + "type": "boolean", + "description": "This is a flag that tells the CROWler to randomize the order in which it scans hosts. This is useful for avoiding detection by intrusion detection systems." + }, + "data_length": { + "type": "integer", + "description": "This is the length of the data that the CROWler will send to hosts. It is the length of the data that the CROWler will use to send packets to hosts. Use this option with a port that is behind a VPN or a proxy for better results." + }, + "delay": { + "type": "string", + "description": "This is the delay between packets that the CROWler will use for scanning. 
It is the delay between packets that the CROWler will use to send packets to hosts. Use this option with a port that is behind a VPN or a proxy for better results. For the delay you can also use the CROWler exprterpreter to generate delay values at runtime, e.g., 'random(1, 3)' or 'random(random(1,3), random(5,8))'." + }, + "mtu_discovery": { + "type": "boolean", + "description": "This is a flag that tells the CROWler to use MTU discovery when scanning hosts. This is useful for avoiding detection by intrusion detection systems." + }, + "scan_flags": { + "type": "string", + "description": "This is the flags that the CROWler will use for scanning. It is the flags that the CROWler will use to send packets to hosts. Use this option with a port that is behind a VPN or a proxy for better results." + }, + "ip_fragment": { + "type": "boolean", + "description": "This is a flag that tells the CROWler to fragment IP packets. This is useful for avoiding detection by intrusion detection systems." + }, + "max_port_number": { + "type": "integer", + "description": "This is the maximum port number to scan (default is 9000)." + }, + "max_parallelism": { + "type": "integer", + "description": "This is the maximum number of parallelism." + }, + "dns_servers": { + "type": "array", + "items": { + "type": "string" + }, + "additionalProperties": false, + "description": "This is a list of custom DNS servers." + }, + "proxies": { + "type": "array", + "items": { + "type": "string" + }, + "additionalProperties": false, + "description": "Proxies for the database connection." } - }, - "os": { - "type": "string" - }, - "debug_level": { - "type": "integer" + }, + "additionalProperties": false, + "description": "This is the configuration for the service scout data collection. It is the configuration for the service scout data collection that the CROWler will use to detect services that are running on a host, network vulnerabilities, network software versions etc.", + "required": [ + "enabled" + ] } + }, + "additionalProperties": false, + "description": "This is the configuration for the network information collection." }, - "anyOf": [ - { - "required": [ - "remote" + "rulesets": { + "type": "array", + "items": { + "type": "object", + "properties": { + "path": { + "type": "array", + "items": { + "type": "string" + }, + "description": "This is the path that the CROWler will use to fetch the ruleset. You can use wildcard to fetch multiple rulesets. for example './rules/*.yaml'.", + "examples": [ + [ + "./rules/*.yaml" + ] ] - }, - { - "required": [ - "database", - "crawler", - "api", - "selenium", - "network_info" + }, + "host": { + "type": "string", + "pattern": "^(((([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5]))|(([a-zA-Z0-9]|[a-zA-Z0-9][a-zA-Z0-9\\-]*[a-zA-Z0-9])(\\.([a-zA-Z0-9\\-]+))*)|(\\[([0-9a-fA-F]{1,4}\\:{1,2}){7}[0-9a-fA-F]{1,4}\\])|(\\${[A-Za-z_][A-Za-z0-9_]*}))$", + "description": "This is the host that the CROWler will use to fetch the ruleset." + }, + "port": { + "type": "string", + "minimum": 1, + "maximum": 65535, + "description": "This is the port that the CROWler will use to fetch the ruleset.", + "examples": [ + "443" ] - } - ] + }, + "region": { + "type": "string", + "description": "This is the region that the CROWler will use to fetch the ruleset (for example, if you're hosting your ruleset on an AWS S3 bucket)." 
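# Illustrative sketch only: a conservative service_scout block (nested under
# network_info) drawn from the options described above. Hosts, timings and script
# names are examples taken from the schema's own `examples`, not recommendations.
network_info:
  service_scout:
    enabled: true                   # the only required field
    timeout: 600                    # seconds
    ping_scan: true
    syn_scan: true
    service_detection: true
    script_scan:
      - "http-headers"
      - "vuln"
    excluded_hosts:
      - "localhost"
      - "192.168.0.1"
    timing_template: "3"            # pass the bare number, not "T3"
    delay: "random(1, 3)"           # ExprTerpreter expression evaluated at runtime
    max_port_number: 9000           # schema default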
+ }, + "token": { + "type": "string", + "description": "This is the token that the CROWler will use to connect to the distribution server to download the ruleset." + }, + "secret": { + "type": "string", + "description": "This is the secret that the CROWler will use to connect to the distribution server to download the ruleset." + }, + "timeout": { + "type": "integer", + "description": "This is the timeout in seconds for the CROWler to fetch the ruleset." + }, + "type": { + "type": "string", + "description": "This is the type of the distribution server that the CROWler will use to fetch the ruleset. For example, s3, http or local. (local is default)" + }, + "sslmode": { + "type": "string", + "description": "This is the sslmode that the CROWler will use to connect to the distribution server to fetch the ruleset. Use 'enable' to force https over http.", + "pattern": "^(enable|disable|)$", + "examples": [ + "enable", + "disable" + ] + }, + "refresh": { + "type": "integer", + "description": "This is the refresh interval in seconds for the CROWler to fetch the ruleset (refresh it)." + } + }, + "additionalProperties": false, + "required": [ + "path", + "type" + ] + }, + "description": "This is the configuration for the rulesets that the CROWler will use to crawl, interact, scrape info and detect stuff on the provided Sources to crawl." + }, + "external_detection": { + "description": "This is the configuration for the external detection services like VirusTotal, URLHaus, PhishTank, GoogleSafeBrowsing, AbuseIPDB, OpenPhish, Cuckoo, HybridAnalysis, CiscoUmbrella, AlienVault, IPVoid, Shodan, Censys, SSLLabs.", + "type": "array", + "items": { + "type": "object", + "properties": { + "name": { + "type": "string", + "description": "This is the name of the external detection service. For example, VirusTotal, URLHaus, PhishTank, GoogleSafeBrowsing, AbuseIPDB, OpenPhish, Cuckoo, HybridAnalysis, CiscoUmbrella, AlienVault, IPVoid, Shodan, Censys, SSLLabs." + }, + "enabled": { + "type": "boolean", + "description": "This is a flag that tells the CROWler to use the external detection service." + }, + "api_key": { + "type": "string", + "description": "This is the API key for the external detection service." + }, + "timeout": { + "type": "integer", + "description": "This is the timeout in seconds for the external detection service." + }, + "delay": { + "type": "string", + "description": "This is the delay time (in seconds) between requests for the external detection service." + } + }, + "additionalProperties": false, + "required": [ + "name", + "api_key" + ] + } + }, + "os": { + "type": "string", + "description": "This is the operating system that the CROWler will use to run. For example, linux, windows or macos. This field is set automatically by the CROWler itself, so no need to set it manually." + }, + "debug_level": { + "type": "integer", + "description": "This is the debug level for the CROWler. It is the level of debugging that the CROWler will use to log messages. 
The higher the level, the more messages will be logged.", + "examples": [ + 1 + ] + } + }, + "anyOf": [ + { + "required": [ + "remote" + ] + }, + { + "required": [ + "database", + "crawler", + "api", + "selenium", + "network_info" + ] + } + ] } diff --git a/schemas/crowler-config-schema.yaml b/schemas/crowler-config-schema.yaml index fc16676..0e0b9e1 100644 --- a/schemas/crowler-config-schema.yaml +++ b/schemas/crowler-config-schema.yaml @@ -9,13 +9,18 @@ properties: properties: host: type: "string" + pattern: "^(((([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5]))|((([a-zA-Z0-9]|[a-zA-Z0-9][a-zA-Z0-9\\-]*[a-zA-Z0-9])\\.)+([A-Za-z]{2,}))|(\\[([0-9a-fA-F]{1,4}\\:{1,2}){7}[0-9a-fA-F]{1,4}\\]))$" description: "This is the host that the CROWler will use to fetch its configuration." path: type: "string" description: "This is the path that the CROWler will use to fetch its configuration." port: type: "integer" + minimum: "1" + maximum: "65535" description: "This is the port that the CROWler will use to fetch its configuration." + examples: + - "80" region: type: "string" description: "This is the region that the CROWler will use to fetch its configuration. For example in case the distribution server is on an AWS S3 bucket, you can specify the region here." @@ -34,12 +39,15 @@ properties: sslmode: type: "string" description: "This is the sslmode that the CROWler will use to connect to the distribution server to fetch its configuration." + pattern: "^(enable|disable|)$" + examples: + - "enable" + - "disable" description: "This is the configuration section to tell the CROWler its actual configuration has to be fetched remotely from a distribution server. If you use this section, then do not populate the other configuration sections as they will be ignored. The CROWler will fetch its configuration from the remote server and use it to start the engine." required: - - "host" - - "path" - - "port" - - "type" + - "host" + - "path" + - "type" database: type: "object" properties: @@ -48,10 +56,15 @@ properties: description: "This is the type of the database that the CROWler will use to store data. For example, postgres." host: type: "string" + pattern: "^(((([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5]))|((([a-zA-Z0-9]|[a-zA-Z0-9][a-zA-Z0-9\\-]*[a-zA-Z0-9])\\.)+([A-Za-z]{2,}))|(\\[([0-9a-fA-F]{1,4}\\:{1,2}){7}[0-9a-fA-F]{1,4}\\]))$" description: "This is the host that the CROWler will use to connect to the database." port: type: "integer" + minimum: "1" + maximum: "65535" description: "This is the port that the CROWler will use to connect to the database." + examples: + - "5432" user: type: "string" description: "This is the user that the CROWler will use to connect to the database." @@ -69,24 +82,33 @@ properties: sslmode: type: "string" description: "This is the sslmode that the CROWler will use to connect to the database. Use 'enable' to enable the ssl mode connection to the DB. (default is 'disable')." + pattern: "^(enable|disable|)$" + examples: + - "enable" + - "disable" optimize_for: type: "string" description: "This option allows the user to optimize the database for a specific use case. For example, if the user is doing more write operations than query, then use the value 'write'. If the user is doing more query operations than write, then use the value 'query'. If unsure leave it empty." 
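# Illustrative sketches only: the two top-level shapes accepted by the schema's
# anyOf above. Hostnames, credentials and values are placeholders.
#
# Shape 1: fetch the whole configuration from a distribution server.
remote:
  type: "http"                      # assumed distribution type
  host: "config.example.com"
  path: "/crowler/config.yaml"
  sslmode: "enable"
---
# Shape 2: full local configuration; database, crawler, api, selenium and
# network_info are all required in this shape.
database:
  type: "postgres"
  host: "10.0.0.5"                  # placeholder host or IP
  user: "crowler"
  password: "${POSTGRES_PASSWORD}"  # placeholder; keep secrets out of the file
  dbname: "crowler_db"
  optimize_for: "write"             # or "query", or leave empty
crawler:
  collect_content: true
api:
  host: "0.0.0.0"
  timeout: 60
selenium:
  - type: "chrome"
    host: "10.0.0.12"
network_info:
  dns:
    enabled: true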
+ pattern: "^(write|query|)$" + examples: + - "write" + - "query" max_conns: type: "integer" + minimum: "25" description: "This is the maximum number of connections that the CROWler will use to connect to the database." max_idle_conns: type: "integer" + minimum: "25" description: "This is the maximum number of idle connections that the CROWler will use to connect to the database." description: "This is the configuration for the database that the CROWler will use to store data." - extraProperties: "false" + additionalProperties: "false" required: - - "type" - - "host" - - "port" - - "user" - - "password" - - "dbname" + - "type" + - "host" + - "user" + - "password" + - "dbname" crawler: type: "object" properties: @@ -128,62 +150,79 @@ properties: description: "This is the maximum number of requests that the CROWler will send to a website. If the CROWler sends this number of requests to a website and is unable to fetch the website, it will move on to the next website." collect_html: type: "boolean" - description: "This is a flag that tells the CROWler to collect the HTML of a website. This is useful for debugging purposes." + description: "This is a flag that tells the CROWler to collect the HTML of a website. This is also useful for debugging purposes. This collection is automatic and for each page of a Source." collect_images: type: "boolean" - description: "This is a flag that tells the CROWler to collect images from a website. This is useful for debugging purposes." + description: "This is a flag that tells the CROWler to collect images from a website. This is also useful for debugging purposes. This collection is automatic and for each page of a Source" collect_files: type: "boolean" - description: "This is a flag that tells the CROWler to collect files from a website. This is useful for debugging purposes." + description: "This is a flag that tells the CROWler to collect files from a website. This is also useful for debugging purposes. This collection is automatic and for each page of a Source" collect_content: type: "boolean" - description: "This is a flag that tells the CROWler to collect the text content of a website. This is useful for AI datasets creation and knowledge bases." + description: "This is a flag that tells the CROWler to collect the text content of a website. This is also useful for AI datasets creation and knowledge bases. This collection is automatic and for each page of a Source" collect_keywords: type: "boolean" - description: "This is a flag that tells the CROWler to collect the keywords of a website. This is useful for AI datasets creation and knowledge bases." + description: "This is a flag that tells the CROWler to collect the keywords of a website. This is also useful for AI datasets creation and knowledge bases. This collection is automatic and for each page of a Source. Keywords and metadata are used in searches, so we recommend enabling this option." collect_metatags: type: "boolean" - description: "This is a flag that tells the CROWler to collect the metatags of a website. This is useful for AI datasets creation and knowledge bases." + description: "This is a flag that tells the CROWler to collect the metatags of a website. This is useful for AI datasets creation and knowledge bases. This collection is automatic and for each page of a Source. Keywords and metadata are used in searches, so we recommend enabling this option." control: type: "object" properties: host: type: "string" - description: "This is the host for the control API." 
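# Illustrative sketch only: the collection flags and the engine's internal Control
# API block described above. The port is the schema's example value; the rate limit
# string follows the 'query_per_second, total_query' format from the description.
crawler:
  collect_html: true
  collect_content: true
  collect_keywords: true            # keywords and metatags feed the search features
  collect_metatags: true
  collect_images: false
  collect_files: false
  control:
    host: "0.0.0.0"
    port: 8081
    timeout: 30
    rate_limit: "100,100"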
+ pattern: "^(((([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5]))|((([a-zA-Z0-9]|[a-zA-Z0-9][a-zA-Z0-9\\-]*[a-zA-Z0-9])\\.)+([A-Za-z]{2,}))|(\\[([0-9a-fA-F]{1,4}\\:{1,2}){7}[0-9a-fA-F]{1,4}\\]))$" + description: "This is the host that the CROWler will use to allow connections to the control API." port: type: "integer" - description: "This is the port for the control API." + minimum: "1" + maximum: "65535" + description: "This is the port that the CROWler will use to allow connections to the control API." + examples: + - "8081" + timeout: + type: "integer" + description: "This is the timeout for the control API. It is the maximum amount of time that the CROWler will wait for the control API to respond." sslmode: type: "string" - description: "This is the sslmode for the control API." + description: "This is the sslmode switch for the control API. Use 'enable' to make the control API use HTTPS." + pattern: "^(enable|disable|)$" + examples: + - "enable" + - "disable" cert_file: type: "string" - description: "This is the certificate file for the control API." + description: "This is the certificate file for the control API HTTPS protocol." key_file: type: "string" - description: "This is the key file for the control API." - timeout: - type: "integer" - description: "This is the timeout for the control API." + description: "This is the key file for the control API HTTPS certificates." rate_limit: type: "string" - description: "This is the rate limit for the control API. It is the maximum number of requests that the CROWler will allow to the control API per second. The format of this parameter is 'query_per_second, total_query' (for example: '100,100')." + description: "This is the rate limit for the control API. It is the maximum number of requests that the CROWler will accept per second. You can use the ExprTerpreter language to set the rate limit. The format of this parameter is 'query_per_second, total_query' (for example: '100,100')." readheader_timeout: type: "integer" - description: "This is the readheader timeout for the control API. It is the maximum amount of time that the CROWler will wait for the control API to respond with the header." - writeheader_timeout: + description: "This is the readheader timeout for the control API. It is the maximum amount of time that the CROWler will wait for the control API to respond." + write_timeout: type: "integer" - description: "This is the writeheader timeout for the control API. It is the maximum amount of time that the CROWler will wait for the control API to respond with the header." - description: "This is the configuration for the control API. This API is internal and it's used to do some control operations for the CROWler engine." + description: "This is the write timeout for the control API. It is the maximum amount of time that the CROWler will wait for the control API to respond." + additionalProperties: "false" + description: "The Control API is an internal API used by tools like healthCheck etc to control the status of the CROWler. It has nothing to do with the General API `api` which is used to interact with the CROWler, make requests and post Sources. This section is the configuration for the control API that the CROWler Engine will use to allow connections to the control API. The COntrol API is part of the CROWler Engine." + additionalProperties: "false" + description: "This is the configuration for the CROWler engine. 
It is the configuration for the CROWler engine that the CROWler will use to crawl websites."
   api:
     type: "object"
     properties:
       host:
         type: "string"
+        pattern: "^(((([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5]))|((([a-zA-Z0-9]|[a-zA-Z0-9][a-zA-Z0-9\\-]*[a-zA-Z0-9])\\.)+([A-Za-z]{2,}))|(\\[([0-9a-fA-F]{1,4}\\:{1,2}){7}[0-9a-fA-F]{1,4}\\]))$"
         description: "This is the host that the API will use to communicate with the outside world. Use 0.0.0.0 to make the API accessible from any IP address."
       port:
         type: "integer"
+        minimum: "1"
+        maximum: "65535"
         description: "This is the port that the API will use to communicate with the outside world."
+        examples:
+          - "8080"
       timeout:
         type: "integer"
         description: "This is the timeout for the API. It is the maximum amount of time that the CROWler will wait for the API to respond."
@@ -196,6 +235,10 @@ properties:
       sslmode:
         type: "string"
         description: "This is the sslmode switch for the API. Use 'enable' to make the API use HTTPS."
+        pattern: "^(enable|disable|)$"
+        examples:
+          - "enable"
+          - "disable"
       cert_file:
         type: "string"
         description: "This is the certificate file for the API HTTPS protocol."
@@ -211,11 +254,11 @@ properties:
       return_404:
         type: "boolean"
         description: "This is a flag that tells the CROWler to return 404 status code if a query has no results."
-    description: "This is the configuration for the API (has no effect on the engine). It is the configuration for the API that the CROWler will use to communicate with the outside world."
+    additionalProperties: "false"
+    description: "This is the configuration for the General API (it has no effect on the CROWler's engine, nor does it have anything to do with the Engine's Control API). It is the configuration for the API that the CROWler will use to communicate with the outside world, to allow users to make queries, post Sources, check the status of the crawling activities etc."
     required:
-      - "host"
-      - "port"
-      - "timeout"
+      - "host"
+      - "timeout"
   selenium:
     type: "array"
     items:
@@ -238,9 +281,12 @@ properties:
           description: "This is the type of selenium driver that the CROWler will use to crawl websites. For example, chrome or firefox."
         port:
           type: "integer"
+          minimum: "1"
+          maximum: "65535"
          description: "This is the port that the selenium driver will use to connect to the CROWler."
         host:
           type: "string"
+          pattern: "^(((([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5]))|((([a-zA-Z0-9]|[a-zA-Z0-9][a-zA-Z0-9\\-]*[a-zA-Z0-9])\\.)+([A-Za-z]{2,}))|(\\[([0-9a-fA-F]{1,4}\\:{1,2}){7}[0-9a-fA-F]{1,4}\\]))$"
           description: "This is the VDI host name or IP that the CROWler will use to connect to the VDI. It is the host that will be used to fetch web pages and that runs Selenium, RBee etc. For example, localhost. This is also the recommended way to use and connect to a VDI (in other words, don't try to run selenium, Rbee etc. locally, use a container for the VDI)."
         headless:
           type: "boolean"
@@ -251,14 +297,16 @@ properties:
         sslmode:
           type: "string"
           description: "This is the sslmode that the selenium driver will use to connect to the CROWler."
+          pattern: "^(enable|disable|)$"
+          examples:
+            - "enable"
+            - "disable"
         download_path:
           type: "string"
           description: "This is the temporary download path for the VDI. It is the local path where the VDI will download files. This is useful for downloading files from websites (like pdf or zip etc.). The CROWler will use this path to temporarily store the downloaded files (before moving them to the storage files area)."
       required:
-        - "type"
-        - "path"
-        - "port"
-        - "host"
+        - "type"
+        - "host"
       additionalProperties: "false"
     description: "This is the configuration for the selenium driver. It is the configuration for the selenium driver that the CROWler will use to crawl websites. To scale the CROWler web crawling capabilities, you can add multiple selenium drivers in the array."
   image_storage:
@@ -266,12 +314,15 @@ properties:
       host:
         type: "string"
+        pattern: "^(((([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5]))|((([a-zA-Z0-9]|[a-zA-Z0-9][a-zA-Z0-9\\-]*[a-zA-Z0-9])\\.)+([A-Za-z]{2,}))|(\\[([0-9a-fA-F]{1,4}\\:{1,2}){7}[0-9a-fA-F]{1,4}\\]))$"
         description: "This is the remote host for the image storage request."
       path:
         type: "string"
         description: "This is the path to the image storage. It is the path to the storage that the CROWler will use to store images. If the image storage is local, this is the path to the local directory where the images will be stored. If the image storage is remote, this is the path to the remote storage where the images will be stored."
       port:
         type: "integer"
+        minimum: "1"
+        maximum: "65535"
         description: "This is the remote port for the image storage request."
       region:
         type: "string"
@@ -291,21 +342,29 @@ properties:
       sslmode:
         type: "string"
         description: "This is the ssl mode for the image storage request for remote storage. Use enable to force https over http."
+        pattern: "^(enable|disable|)$"
+        examples:
+          - "enable"
+          - "disable"
+    additionalProperties: "false"
     description: "This is the configuration for the image storage. It is the configuration for the storage that the CROWler will use to store images."
     required:
-      - "path"
-      - "type"
+      - "path"
+      - "type"
   file_storage:
     type: "object"
     properties:
       host:
         type: "string"
+        pattern: "^(((([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5]))|((([a-zA-Z0-9]|[a-zA-Z0-9][a-zA-Z0-9\\-]*[a-zA-Z0-9])\\.)+([A-Za-z]{2,}))|(\\[([0-9a-fA-F]{1,4}\\:{1,2}){7}[0-9a-fA-F]{1,4}\\]))$"
         description: "This is the remote host for the file storage request."
       path:
         type: "string"
         description: "This is the path to the file storage. It is the path to the storage that the CROWler will use to store files. If the file storage is local, this is the path to the local directory where the files will be stored. If the file storage is remote, this is the path to the remote storage where the files will be stored."
       port:
         type: "integer"
+        minimum: "1"
+        maximum: "65535"
         description: "This is the remote port for the file storage request."
       region:
         type: "string"
@@ -325,10 +384,15 @@ properties:
       sslmode:
         type: "string"
         description: "This is the ssl mode for the file storage request for remote storage. Use enable to force https over http."
+        pattern: "^(enable|disable|)$"
+        examples:
+          - "enable"
+          - "disable"
+    additionalProperties: "false"
     description: "This is the configuration for the file storage. File storage will be used for web object content storage."
     required:
-      - "path"
-      - "type"
+      - "path"
+      - "type"
   network_info:
     type: "object"
     properties:
@@ -344,8 +408,10 @@ properties:
           rate_limit:
             type: "string"
            description: "This is the rate limit for the DNS database. It is the maximum number of requests that the CROWler will send to the DNS database per second. You can use the ExprTerpreter language to set the rate limit."
+        additionalProperties: "false"
+        description: "This is the configuration for the DNS data collection. It is the configuration for the DNS data collection that the CROWler will use to detect the IP address of a domain, subdomains etc."
         required:
-          - "enabled"
+          - "enabled"
       whois:
         type: "object"
        properties:
@@ -358,8 +424,10 @@ properties:
          rate_limit:
            type: "string"
            description: "This is the rate limit for the whois database. It is the maximum number of requests that the CROWler will send to the whois database per second. You can use the ExprTerpreter language to set the rate limit."
+        description: "This is the configuration for the whois data collection. It is the configuration for the whois data collection that the CROWler will use to detect the owner of a domain."
+        additionalProperties: "false"
         required:
-          - "enabled"
+          - "enabled"
       netlookup:
         type: "object"
        properties:
@@ -372,8 +440,10 @@ properties:
          rate_limit:
            type: "string"
            description: "This is the rate limit for the netlookup database. It is the maximum number of requests that the CROWler will send to the netlookup database per second. You can use the ExprTerpreter language to set the rate limit."
+        additionalProperties: "false"
+        description: "This is the configuration for the netlookup data collection. It is the configuration for the netlookup data collection that the CROWler will use to detect the network information of a host."
         required:
-          - "enabled"
+          - "enabled"
       geo_localization:
         type: "object"
        properties:
@@ -394,10 +464,16 @@ properties:
            description: "This is the API key for the geolocation database. It is the API key that the CROWler will use to connect to the geolocation database."
          sslmode:
            type: "string"
-            description: "This is the sslmode for the geolocation database. It is the sslmode that the CROWler will use to connect to the geolocation database."
+            description: "This is the sslmode that the CROWler will use to connect to the geolocation database."
+            pattern: "^(enable|disable|)$"
+            examples:
+              - "enable"
+              - "disable"
+        additionalProperties: "false"
+        description: "This is the configuration for the geolocation data collection. It is the configuration for the geolocation data collection that the CROWler will use to detect the location of a host."
         required:
-          - "enabled"
-          - "path"
+          - "enabled"
+          - "path"
       service_scout:
         type: "object"
         properties:
@@ -413,10 +489,14 @@ properties:
            properties:
              host:
                type: "string"
+                pattern: "^(((([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5]))|((([a-zA-Z0-9]|[a-zA-Z0-9][a-zA-Z0-9\\-]*[a-zA-Z0-9])\\.)+([A-Za-z]{2,}))|(\\[([0-9a-fA-F]{1,4}\\:{1,2}){7}[0-9a-fA-F]{1,4}\\]))$"
                description: "Host FQDN or IP address."
              port:
                type: "integer"
+                minimum: "1"
+                maximum: "65535"
                description: "Port number."
+            additionalProperties: "false"
          ping_scan:
            type: "boolean"
            description: "This is a flag that tells the CROWler to use ping scanning techniques. This is useful for detecting hosts that are alive."
@@ -448,15 +528,33 @@ properties:
            type: "array"
            items:
              type: "string"
+            description: "This is a list of nmap and crowler-scanner scripts to run. This is particularly important when a user wants to do vulnerability scanning."
+            examples:
+              - "http-enum"
+              - "http-headers"
+              - "default"
+              - "vuln"
            description: "This is a list of nmap scripts to run. This is particularly important when a user wants to do vulnerability scanning."
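To make the constraints added in the config-schema hunks above concrete, the fragment below is a minimal sketch of an `api`/`selenium` configuration that would satisfy them (host pattern, 1-65535 port range, `enable`/`disable` sslmode, and the slimmed-down `required` lists). All values are illustrative, not defaults taken from the schema, and the VDI address is hypothetical.

```yaml
# Illustrative config fragment (values are examples, not defaults)
api:
  host: "0.0.0.0"        # must match the IP/hostname/IPv6 pattern above
  port: 8080             # integer between 1 and 65535
  timeout: 60            # host and timeout are now the only required api fields
  sslmode: "disable"     # "enable", "disable" or empty
selenium:
  - type: "chrome"       # e.g. chrome or firefox
    host: "192.168.1.10" # hypothetical VDI address; type and host are the required fields
    port: 4444
    headless: true
    sslmode: "disable"
```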
          excluded_hosts:
            type: "array"
            items:
              type: "string"
+              oneOf:
+                - format: "ipv4"
+                - format: "ipv6"
+                - format: "hostname"
+              examples:
+                - "example.com"
+                - "192.168.0.1"
+                - "localhost"
+                - "2001:0db8:85a3:0000:0000:8a2e:0370:7334"
+            additionalProperties: "false"
            description: "This is a list of hosts to exclude from the scan. The CROWler may encounter such hosts during its crawling activities, so this field makes it easy to define a list of hosts that it should always avoid scanning."
          timing_template:
            type: "string"
            description: "This allows the user to set the timing template for the scan. The timing template is a string that is passed to nmap to set the timing of the scan. Do not specify values using Tx, where x is a number. Instead, use just the number, e.g., '3'."
+            examples:
+              - "3"
          host_timeout:
            type: "string"
            description: "This is the timeout for the scan. It is the maximum amount of time that the CROWler will wait for a host to respond to a scan."
@@ -503,14 +601,19 @@ properties:
            type: "array"
            items:
              type: "string"
+            additionalProperties: "false"
            description: "This is a list of custom DNS servers."
          proxies:
            type: "array"
            items:
              type: "string"
+            additionalProperties: "false"
            description: "Proxies for the database connection."
+        additionalProperties: "false"
+        description: "This is the configuration for the service scout data collection. It is the configuration for the service scout data collection that the CROWler will use to detect services that are running on a host, network vulnerabilities, network software versions etc."
         required:
-          - "enabled"
+          - "enabled"
+    additionalProperties: "false"
     description: "This is the configuration for the network information collection."
   rulesets:
     type: "array"
     items:
@@ -522,12 +625,20 @@ properties:
          items:
            type: "string"
          description: "This is the path that the CROWler will use to fetch the ruleset. You can use wildcards to fetch multiple rulesets. For example, './rules/*.yaml'."
+          examples:
+            - "./rules/*.yaml"
        host:
          type: "string"
+          pattern: "^(((([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5]))|((([a-zA-Z0-9]|[a-zA-Z0-9][a-zA-Z0-9\\-]*[a-zA-Z0-9])\\.)+([A-Za-z]{2,}))|(\\[([0-9a-fA-F]{1,4}\\:{1,2}){7}[0-9a-fA-F]{1,4}\\]))$"
          description: "This is the host that the CROWler will use to fetch the ruleset."
        port:
          type: "string"
+          minimum: "1"
+          maximum: "65535"
          description: "This is the port that the CROWler will use to fetch the ruleset."
+          examples:
+            - "443"
        region:
          type: "string"
          description: "This is the region that the CROWler will use to fetch the ruleset (for example, if you're hosting your ruleset on an AWS S3 bucket)."
@@ -546,12 +657,17 @@ properties:
        sslmode:
          type: "string"
          description: "This is the sslmode that the CROWler will use to connect to the distribution server to fetch the ruleset. Use 'enable' to force https over http."
+          pattern: "^(enable|disable|)$"
+          examples:
+            - "enable"
+            - "disable"
        refresh:
          type: "integer"
          description: "This is the refresh interval in seconds for the CROWler to fetch the ruleset (refresh it)."
+      additionalProperties: "false"
      required:
-        - "path"
-        - "type"
+        - "path"
+        - "type"
     description: "This is the configuration for the rulesets that the CROWler will use to crawl, interact, scrape info and detect stuff on the provided Sources to crawl."
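Along the same lines, here is a hedged sketch of how the `service_scout` exclusions and a `rulesets` entry from the hunks above might be filled in. The `type` value for the ruleset source is hypothetical (the accepted values are defined by the schema's `type` field, which is not shown in these hunks); the other values are illustrative.

```yaml
# Illustrative fragment for network_info.service_scout and rulesets
network_info:
  service_scout:
    enabled: true
    timing_template: "3"      # just the number, not "T3"
    excluded_hosts:
      - "192.168.0.1"
      - "example.com"
rulesets:
  - type: "local"             # hypothetical value; see the schema's type field
    path:
      - "./rules/*.yaml"      # wildcards pull in multiple rulesets
    refresh: 3600             # re-fetch interval in seconds
```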
   external_detection:
     description: "This is the configuration for the external detection services like VirusTotal, URLHaus, PhishTank, GoogleSafeBrowsing, AbuseIPDB, OpenPhish, Cuckoo, HybridAnalysis, CiscoUmbrella, AlienVault, IPVoid, Shodan, Censys, SSLLabs."
@@ -574,22 +690,26 @@ properties:
        delay:
          type: "string"
          description: "This is the delay time (in seconds) between requests for the external detection service."
-        full_site:
-          type: "boolean"
-          description: "This is a flag that tells the CROWler to use the external detection service only for the Source URL only or for every URLs being crawler for the Source. (default is false, to reduce traffic over 3rd party services)."
+      additionalProperties: "false"
      required:
-        - "name"
-        - "api_key"
+        - "name"
+        - "api_key"
   os:
     type: "string"
+    description: "This is the operating system that the CROWler will use to run. For example, linux, windows or macos. This field is set automatically by the CROWler itself, so no need to set it manually."
   debug_level:
     type: "integer"
+    description: "This is the debug level for the CROWler. It is the level of debugging that the CROWler will use to log messages. The higher the level, the more messages will be logged."
+    examples:
+      - "1"
 anyOf:
-  - required:
-      - "remote"
-  - required:
-      - "database"
-      - "crawler"
-      - "api"
-      - "selenium"
-      - "network_info"
+-
+  required:
+    - "remote"
+-
+  required:
+    - "database"
+    - "crawler"
+    - "api"
+    - "selenium"
+    - "network_info"
diff --git a/schemas/ruleset-schema.json b/schemas/ruleset-schema.json
index cbb3c88..189e1c2 100644
--- a/schemas/ruleset-schema.json
+++ b/schemas/ruleset-schema.json
@@ -96,8 +96,10 @@
           "xpath",
           "id",
           "class_name",
+          "class",
           "name",
           "tag_name",
+          "element",
           "link_text",
           "partial_link_text",
           "regex",
@@ -243,6 +245,55 @@
         "type": "string",
         "description": "A unique name identifying the action rule."
       },
+      "url": {
+        "type": "string",
+        "format": "uri",
+        "description": "Optional. The specific URL to which this action applies or the URL to navigate to, applicable for navigate action. Do not use this field for 'navigate_to_url' action type, use instead the value field to specify the url to go to, url field is only to match the rule."
+      },
+      "wait_conditions": {
+        "type": "array",
+        "items": {
+          "type": "object",
+          "properties": {
+            "condition_type": {
+              "type": "string",
+              "enum": [
+                "element_presence",
+                "element_visible",
+                "plugin_call",
+                "delay"
+              ]
+            },
+            "value": {
+              "type": "string",
+              "description": "a generic value to use with the condition, e.g., a delay in seconds, applicable for delay condition type. For delay type you can also use the CROWler exprterpreter to generate delay values at runtime, e.g., 'random(1, 3)' or 'random(random(1,3), random(5,8))'."
+            },
+            "selector": {
+              "type": "string",
+              "description": "The CSS selector for the element, applicable for element_presence and element_visible conditions. If you're using plugin_call, then this field is used for the plugin name."
+            }
+          }
+        },
+        "description": "Conditions to wait for, that must be met before the action is executed. These conditions are designed to ensure that the page or elements are ready (e.g., waiting for an element to appear, or a delay). Do not use this field to wait after an action is performed, as it only applies before the action is executed."
+      },
+      "conditions": {
+        "type": "object",
+        "properties": {
+          "type": {
+            "type": "string",
+            "enum": [
+              "element",
+              "language",
+              "plugin_call"
+            ]
+          },
+          "selector": {
+            "type": "string",
+            "description": "The CSS selector to check if a given element exists, applicable for 'element'. The language id to check if a page is in a certain language, applicable for 'language'. The plugin's name if you're using plugin_call."
+          }
+        },
+        "description": "Conditions that must be met for the action to be executed. For example, you can check if a certain element exists on the page before performing an action. See this as something to do after we waited for the wait_conditions and we verify that the page is ready to perform the action."
+      },
       "action_type": {
         "type": "string",
         "enum": [
@@ -287,8 +338,10 @@
           "xpath",
           "id",
           "class_name",
+          "class",
           "name",
           "tag_name",
+          "element",
           "link_text",
           "partial_link_text",
           "plugin_call"
@@ -329,55 +382,6 @@
         "type": "string",
         "description": "The value to use with the action, e.g., text to input, applicable for input_text."
       },
-      "url": {
-        "type": "string",
-        "format": "uri",
-        "description": "Optional. The specific URL to which this action applies or the URL to navigate to, applicable for navigate action. Do not use this field for 'navigate_to_url' action type, use instead the value field to specify the url to go to, url field is only to match the rule."
-      },
-      "wait_conditions": {
-        "type": "array",
-        "items": {
-          "type": "object",
-          "properties": {
-            "condition_type": {
-              "type": "string",
-              "enum": [
-                "element_presence",
-                "element_visible",
-                "plugin_call",
-                "delay"
-              ]
-            },
-            "value": {
-              "type": "string",
-              "description": "a generic value to use with the condition, e.g., a delay in seconds, applicable for delay condition type. For delay type you can also use the CROWler exprterpreter to generate delay values at runtime, e.g., 'random(1, 3)' or 'random(random(1,3), random(5,8))'."
-            },
-            "selector": {
-              "type": "string",
-              "description": "The CSS selector for the element, applicable for element_presence and element_visible conditions. If you're using plugin_call, then this field is used for the plugin name."
-            }
-          }
-        },
-        "description": "Conditions to wait before being able to perform the action. This to ensure page readiness."
-      },
-      "conditions": {
-        "type": "object",
-        "properties": {
-          "type": {
-            "type": "string",
-            "enum": [
-              "element",
-              "language",
-              "plugin_call"
-            ]
-          },
-          "selector": {
-            "type": "string",
-            "description": "The CSS selector to check if a given element exists, applicable for 'element'. The language id to check if a page is in a certain language, applicable for 'language'. The plugin's name if you're using plugin_call."
-          }
-        },
-        "description": "Conditions that must be met for the action to be executed."
-      },
       "error_handling": {
         "type": "object",
         "properties": {
diff --git a/schemas/ruleset-schema.yaml b/schemas/ruleset-schema.yaml
index 3ffecbb..c870871 100644
--- a/schemas/ruleset-schema.yaml
+++ b/schemas/ruleset-schema.yaml
@@ -82,8 +82,10 @@ items:
           - "xpath"
           - "id"
           - "class_name"
+          - "class"
           - "name"
           - "tag_name"
+          - "element"
           - "link_text"
           - "partial_link_text"
           - "regex"
@@ -186,6 +188,42 @@ items:
       rule_name:
         type: "string"
         description: "A unique name identifying the action rule."
+      url:
+        type: "string"
+        format: "uri"
+        description: "Optional. The specific URL to which this action applies or the URL to navigate to, applicable for navigate action. Do not use this field for 'navigate_to_url' action type, use instead the value field to specify the url to go to, url field is only to match the rule."
+      wait_conditions:
+        type: "array"
+        items:
+          type: "object"
+          properties:
+            condition_type:
+              type: "string"
+              enum:
+                - "element_presence"
+                - "element_visible"
+                - "plugin_call"
+                - "delay"
+            value:
+              type: "string"
+              description: "a generic value to use with the condition, e.g., a delay in seconds, applicable for delay condition type. For delay type you can also use the CROWler exprterpreter to generate delay values at runtime, e.g., 'random(1, 3)' or 'random(random(1,3), random(5,8))'."
+            selector:
+              type: "string"
+              description: "The CSS selector for the element, applicable for element_presence and element_visible conditions. If you're using plugin_call, then this field is used for the plugin name."
+        description: "Conditions to wait for, that must be met before the action is executed. These conditions are designed to ensure that the page or elements are ready (e.g., waiting for an element to appear, or a delay). Do not use this field to wait after an action is performed, as it only applies before the action is executed."
+      conditions:
+        type: "object"
+        properties:
+          type:
+            type: "string"
+            enum:
+              - "element"
+              - "language"
+              - "plugin_call"
+          selector:
+            type: "string"
+            description: "The CSS selector to check if a given element exists, applicable for 'element'. The language id to check if a page is in a certain language, applicable for 'language'. The plugin's name if you're using plugin_call."
+        description: "Conditions that must be met for the action to be executed. For example, you can check if a certain element exists on the page before performing an action. See this as something to do after we waited for the wait_conditions and we verify that the page is ready to perform the action."
       action_type:
         type: "string"
         enum:
@@ -228,8 +266,10 @@ items:
           - "xpath"
           - "id"
           - "class_name"
+          - "class"
           - "name"
           - "tag_name"
+          - "element"
           - "link_text"
           - "partial_link_text"
           - "plugin_call"
@@ -257,42 +297,6 @@ items:
       value:
         type: "string"
         description: "The value to use with the action, e.g., text to input, applicable for input_text."
-      url:
-        type: "string"
-        format: "uri"
-        description: "Optional. The specific URL to which this action applies or the URL to navigate to, applicable for navigate action. Do not use this field for 'navigate_to_url' action type, use instead the value field to specify the url to go to, url field is only to match the rule."
-      wait_conditions:
-        type: "array"
-        items:
-          type: "object"
-          properties:
-            condition_type:
-              type: "string"
-              enum:
-                - "element_presence"
-                - "element_visible"
-                - "plugin_call"
-                - "delay"
-            value:
-              type: "string"
-              description: "a generic value to use with the condition, e.g., a delay in seconds, applicable for delay condition type. For delay type you can also use the CROWler exprterpreter to generate delay values at runtime, e.g., 'random(1, 3)' or 'random(random(1,3), random(5,8))'."
-            selector:
-              type: "string"
-              description: "The CSS selector for the element, applicable for element_presence and element_visible conditions. If you're using plugin_call, then this field is used for the plugin name."
-        description: "Conditions to wait before being able to perform the action. This to ensure page readiness."
-      conditions:
-        type: "object"
-        properties:
-          type:
-            type: "string"
-            enum:
-              - "element"
-              - "language"
-              - "plugin_call"
-          selector:
-            type: "string"
-            description: "The CSS selector to check if a given element exists, applicable for 'element'. The language id to check if a page is in a certain language, applicable for 'language'. The plugin's name if you're using plugin_call."
-        description: "Conditions that must be met for the action to be executed."
       error_handling:
         type: "object"
         properties:
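To show how the relocated `url`, `wait_conditions`, and `conditions` fields fit together in practice, here is a hypothetical action rule written against the YAML ruleset schema above. The rule name, URLs, and selectors are invented for illustration; only the field names and enum values come from the schema.

```yaml
# Hypothetical action rule; field names follow the schema, values are made up
rule_name: "open_login_page"
url: "https://example.com/"            # only used to match the rule
action_type: "navigate_to_url"
value: "https://example.com/login"     # for navigate_to_url the target goes in value, not url
wait_conditions:
  - condition_type: "element_visible"
    selector: "#login-form"            # CSS selector to wait for
  - condition_type: "delay"
    value: "random(1, 3)"              # exprterpreter-generated delay
conditions:
  type: "language"
  selector: "en"                       # language id, per the conditions description
```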