From 6623f2b3e27506f941f73b5604d5430edf298794 Mon Sep 17 00:00:00 2001
From: Mohamed Nadjib MAMI
Date: Mon, 14 May 2018 12:02:44 +0200
Subject: [PATCH] clarification about spark separate installation

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 578979c..0f9a726 100644
--- a/README.md
+++ b/README.md
@@ -13,7 +13,7 @@
 cd target/scala-xyz # xyz is the version of Scala installed
 ```
 ...you find a *sparkall_01.jar* file.
-- Sparkall uses Spark as query engine. Therefore Spark has to be installed beforehand. Download Spark from the [official website](https://spark.apache.org/downloads.html). Then configure a standalone cluster using the [official documentation page](https://spark.apache.org/docs/2.2.0/spark-standalone.html).
+- Sparkall uses Spark as its query engine, so Spark has to be installed beforehand. Download Spark from the [official website](https://spark.apache.org/downloads.html). For Sparkall to run on a cluster, you need to configure a standalone cluster using the [official documentation page](https://spark.apache.org/docs/2.2.0/spark-standalone.html). Spark may come bundled with Sparkall in the future, but for now it has to be installed separately.
 - Now you can run Sparkall using `spark-submit`, passing as arguments three files built using [Sparkall-GUI](https://github.com/EIS-Bonn/sparkall-gui) (see below). The command line looks like: