This topic uses Spark 2.x as an example to describe how to read from and write to Iceberg tables in streaming mode with Spark Structured Streaming.

Applies to E-MapReduce (EMR) versions that ship Spark 2.x.

Prerequisites: an EMR cluster has been created and the Iceberg component is installed on it. To develop the example locally, add the following Maven dependencies to your project:
```xml
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-sql_2.11</artifactId>
    <version>2.4.8</version>
    <scope>compile</scope>
</dependency>
<!-- Iceberg runtime for Spark 2.4 (Scala 2.11). The Spark 3.x runtime
     (iceberg-spark-runtime-3.2_2.12) is not compatible with Spark 2.x. -->
<dependency>
    <groupId>org.apache.iceberg</groupId>
    <artifactId>iceberg-spark-runtime</artifactId>
    <version>0.14.0</version>
    <scope>compile</scope>
</dependency>
```
Spark Structured Streaming writes data to an Iceberg table through the DataStreamWriter interface, as shown below.
```scala
val name = TableIdentifier.of("default", "spark2_streaming_demo")
val tableIdentifier = name.toString
val checkpointPath: String = "/tmp/iceberg_checkpointPath"

data.writeStream
  .format("iceberg")
  .outputMode("append")
  .trigger(Trigger.ProcessingTime(1, TimeUnit.MINUTES))
  .option("path", tableIdentifier)
  .option("checkpointLocation", checkpointPath)
  .start()
```
Note
In the code above, tableIdentifier is the metadata table name or the table path, and checkpointPath is the checkpoint location used by the Spark streaming job. Streaming writes support the following two output modes:
append: appends each micro-batch's data to the Iceberg table, equivalent to insert into.
complete: overwrites the Iceberg table with the latest batch's data, equivalent to insert overwrite.
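As a sketch of the complete mode (the input DataFrame, column name, and target table below are illustrative, not from this example), note that Spark only allows outputMode("complete") for aggregating queries, so it is typically paired with a streaming aggregation:

```scala
// Hypothetical sketch: `events` is a streaming DataFrame with a `word` column.
// Spark permits outputMode("complete") only for aggregated streaming queries.
val counts = events
  .groupBy($"word")
  .count()

counts.writeStream
  .format("iceberg")
  .outputMode("complete")                 // each trigger overwrites the table
  .trigger(Trigger.ProcessingTime(1, TimeUnit.MINUTES))
  .option("path", "default.word_counts")  // illustrative target table
  .option("checkpointLocation", "/tmp/word_counts_checkpoint")
  .start()
```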
Spark Structured Streaming reads an Iceberg table incrementally through the DataStreamReader interface:

```scala
val df = spark.readStream
  .format("iceberg")
  // Consume snapshots committed at or after this timestamp (in milliseconds).
  .option("stream-from-timestamp", streamStartTimestamp.toString)
  .load("database.table_name")
```
This example uses the Linux netcat command to send data; Spark receives the data and writes it to an Iceberg table. The Scala example code is as follows.
```scala
import java.util

import org.apache.iceberg.Schema
import org.apache.iceberg.catalog.TableIdentifier
import org.apache.iceberg.hive.HiveCatalog
import org.apache.iceberg.types.Types
import org.apache.spark.SparkConf
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.streaming.Trigger

object IcebergSpark2StreamingScalaExample {
  def main(args: Array[String]): Unit = {
    // Spark configuration; add data lake metastore settings here if needed.
    val sparkConf = new SparkConf()
    val spark = SparkSession
      .builder()
      .config(sparkConf)
      .appName("IcebergSparkStreamingScalaExample")
      .getOrCreate()
    import spark.implicits._

    val name = TableIdentifier.of("default", "spark2_streaming_demo")
    val tableName = name.toString

    // Initialize a Hive catalog pointing at the cluster's metastore.
    val catalog = new HiveCatalog()
    catalog.setConf(spark.sparkContext.hadoopConfiguration)
    val properties = new util.HashMap[String, String]
    properties.put("warehouse", "/user/hive/warehouse/iceberg/hive")
    properties.put("uri", "thrift://emr-master-1:9083")
    catalog.initialize("hive", properties)

    val schema = new Schema(
      Types.NestedField.optional(1, "value", Types.StringType.get()))
    try {
      // Create the Iceberg table; ignore the error if it already exists.
      catalog.createTable(name, schema)
    } catch {
      case _: org.apache.iceberg.exceptions.AlreadyExistsException =>
    }

    // Create a DataFrame representing the stream of input lines from
    // the connection to localhost:9999.
    val lines = spark.readStream
      .format("socket")
      .option("host", "localhost")
      .option("port", 9999)
      .load()

    // Split the lines into words.
    val words = lines.as[String].flatMap(_.split(" "))

    val checkpointPath = "/tmp/iceberg_checkpointPath"

    // Stream the words into the Iceberg table.
    val query = words.toDF().writeStream
      .format("iceberg")
      .outputMode("append")
      .trigger(Trigger.ProcessingTime("2 seconds"))
      .option("checkpointLocation", checkpointPath)
      .option("path", tableName)
      .start()

    query.awaitTermination()
  }
}
```
Package the program and deploy it to the EMR cluster.
```xml
<build>
    <plugins>
        <!-- The scala-maven-plugin compiles Scala source files. -->
        <plugin>
            <groupId>net.alchim31.maven</groupId>
            <artifactId>scala-maven-plugin</artifactId>
            <version>3.2.2</version>
            <executions>
                <execution>
                    <goals>
                        <goal>compile</goal>
                        <goal>testCompile</goal>
                    </goals>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>
```
```shell
mvn clean package
```
Use the Linux netcat command to prepare some input data:
```shell
nc -lk 9999
```
Then type some strings.
Run the Spark job with the spark-submit command:
```shell
spark-submit --class IcebergSpark2StreamingScalaExample iceberg-spark2-example-1.0.jar
```
Note

The class name and JAR file must be adjusted to match your own project. The iceberg-spark2-example-1.0.jar above is the JAR built from the example project.
Use spark-shell to view the data in the Iceberg table:

```shell
spark-shell --master yarn
```
Execute the following code in the spark-shell console:
```scala
import org.apache.iceberg.catalog.TableIdentifier

val name = TableIdentifier.of("default", "spark2_streaming_demo")
val tableName = name.toString
spark.read.format("iceberg").load(tableName).show()
```
This prints the data entered in step 4.
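If the installed Iceberg version also supports streaming reads, the DataStreamReader API from the earlier section can be exercised against this demo table. The following is a sketch only; the console sink and the choice of starting timestamp are illustrative:

```scala
// Sketch: incrementally read the demo table and print new rows to the console.
// Starting from the current time (an illustrative choice) means only rows
// committed after this query starts will be shown.
import org.apache.spark.sql.streaming.Trigger

val streamStartTimestamp = System.currentTimeMillis()
val query = spark.readStream
  .format("iceberg")
  .option("stream-from-timestamp", streamStartTimestamp.toString)
  .load("default.spark2_streaming_demo")
  .writeStream
  .format("console")
  .option("checkpointLocation", "/tmp/iceberg_read_checkpoint")
  .start()
```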