Spark Connector

The Spark Connector module connects the Apache Spark big data processing engine with Hyperledger Fabric for data analysis. It can currently query a channel's ledger and insert data into the world state. Both operations are performed through the FabricSpark object provided by the connector.
Data can be queried using the load method of the FabricSpark object.
```scala
import com.impetus.blkch.spark.connector.rdd.ReadConf
import org.apache.spark.sql.fabric.FabricSpark
import FabricSpark.implicits._

// Register the Fabric JDBC driver
Class.forName("com.impetus.fabric.jdbc.FabricDriver")

// ReadConf provides the configuration for reading data from Fabric
val readConf = ReadConf(Some(4), None, "Select * from transaction_action where block_no <= 10")

// `spark` is an existing SparkSession
val dataframe = FabricSpark.load(spark, readConf, Map(
  "spark.fabric.configpath" -> "<Config Path>",
  "spark.fabric.username"   -> "<Username>",
  "spark.fabric.password"   -> "<Secret>"))
```
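The result is an ordinary Spark DataFrame, so standard Spark SQL operations apply to it. As a minimal sketch (the block_no column comes from the transaction_action query above):

```scala
// Inspect the schema returned by the connector
dataframe.printSchema()

// Run further analysis with standard Spark SQL
dataframe.createOrReplaceTempView("transaction_action")
spark.sql("SELECT block_no, COUNT(*) AS tx_count FROM transaction_action GROUP BY block_no").show()
```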
NOTE: The spark.fabric.* properties in the snippet above can also be supplied as --conf parameters to a spark-submit job by using the overloaded load method.
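A sketch of what such a submission might look like (the driver class and jar names are placeholders, not part of the connector):

```
spark-submit \
  --class com.example.FabricAnalysisJob \
  --conf spark.fabric.configpath=<Config Path> \
  --conf spark.fabric.username=<Username> \
  --conf spark.fabric.password=<Secret> \
  fabric-analysis-job.jar
```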
The contents of a DataFrame can be inserted directly into the world state using the save method of the FabricSpark object.
```scala
import com.impetus.fabric.spark.connector.rdd.WriteConf
import org.apache.spark.sql.fabric.FabricSpark
import FabricSpark.implicits._

// Register the Fabric JDBC driver
Class.forName("com.impetus.fabric.jdbc.FabricDriver")

// WriteConf provides the chaincode details used to insert the data
val writeConf = WriteConf("assetchain", "iPostUsers")

FabricSpark.save(dataframe, writeConf, Map(
  "spark.fabric.configpath" -> "<Config Path>",
  "spark.fabric.username"   -> "<Username>",
  "spark.fabric.password"   -> "<Secret>"))
```
NOTE: The spark.fabric.* properties in the snippet above can also be supplied as --conf parameters to a spark-submit job by using the overloaded save method.
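Putting the two halves together, a hedged end-to-end sketch (reusing readConf, writeConf, and the spark.fabric.* options from the snippets above; the filter condition is illustrative):

```scala
val options = Map(
  "spark.fabric.configpath" -> "<Config Path>",
  "spark.fabric.username"   -> "<Username>",
  "spark.fabric.password"   -> "<Secret>")

// Query the channel's ledger, transform with standard DataFrame operations,
// then write the result back to the world state via the chaincode in writeConf
val txns     = FabricSpark.load(spark, readConf, options)
val filtered = txns.filter("block_no > 5")
FabricSpark.save(filtered, writeConf, options)
```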