I am using StandardScaler to normalize my features. Here is my code:
val Array(trainingData, testData) = dataset.randomSplit(Array(0.7, 0.3))

val vectorAssembler = new VectorAssembler()
  .setInputCols(inputCols)
  .setOutputCol("features")
  .transform(trainingData)

val stdscaler = new StandardScaler()
  .setInputCol("features")
  .setOutputCol("scaledFeatures")
  .setWithStd(true)
  .setWithMean(false)
  .fit(vectorAssembler)
An exception is thrown as soon as I try to use StandardScaler:
[Stage 151:==> (9 + 2) / 200]16/12/28 20:13:57 WARN scheduler.TaskSetManager: Lost task 31.0 in stage 151.0 (TID 8922, slave1.hadoop.ml): org.apache.spark.SparkException: Values to assemble cannot be null.
at org.apache.spark.ml.feature.VectorAssembler$$anonfun$assemble$1.apply(VectorAssembler.scala:159)
at org.apache.spark.ml.feature.VectorAssembler$$anonfun$assemble$1.apply(VectorAssembler.scala:142)
at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:35)
at org.apache.spark.ml.feature.VectorAssembler$.assemble(VectorAssembler.scala:142)
at org.apache.spark.ml.feature.VectorAssembler$$anonfun$3.apply(VectorAssembler.scala:98)
at org.apache.spark.ml.feature.VectorAssembler$$anonfun$3.apply(VectorAssembler.scala:97)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8$$anon$1.hasNext(WholeStageCodegenExec.scala:370)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
at scala.collection.Iterator$class.foreach(Iterator.scala:893)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
at scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:157)
at scala.collection.AbstractIterator.foldLeft(Iterator.scala:1336)
at scala.collection.TraversableOnce$class.aggregate(TraversableOnce.scala:214)
at scala.collection.AbstractIterator.aggregate(Iterator.scala:1336)
at org.apache.spark.rdd.RDD$$anonfun$treeAggregate$1$$anonfun$24.apply(RDD.scala:1093)
at org.apache.spark.rdd.RDD$$anonfun$treeAggregate$1$$anonfun$24.apply(RDD.scala:1093)
at org.apache.spark.rdd.RDD$$anonfun$treeAggregate$1$$anonfun$25.apply(RDD.scala:1094)
at org.apache.spark.rdd.RDD$$anonfun$treeAggregate$1$$anonfun$25.apply(RDD.scala:1094)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:766)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:766)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:79)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:47)
at org.apache.spark.scheduler.Task.run(Task.scala:85)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Is there a problem with VectorAssembler? I checked a few rows of its output and they looked fine:

vectorAssembler.take(5)
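Note that take(5) only samples a handful of rows, so it can easily miss nulls further down the data. A more reliable check is to count nulls per input column. A minimal sketch, reusing dataset and inputCols from the code above (the isNaN check assumes the columns are doubles):

import org.apache.spark.sql.functions.{col, count, when}

// Count null/NaN entries per input column; any non-zero count explains
// the "Values to assemble cannot be null" failure above.
dataset.select(
  inputCols.map(c => count(when(col(c).isNull || col(c).isNaN, c)).alias(c)): _*
).show()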
Spark >= 2.4
Since Spark 2.4, VectorAssembler extends HasHandleInvalid, which means you can skip rows containing invalid values:
assembler.setHandleInvalid("skip").transform(df).show
+---+---+---------+
| x1| x2| features|
+---+---+---------+
|3.0|4.0|[3.0,4.0]|
+---+---+---------+
or keep them (note that ML algorithms are unlikely to handle this correctly):
assembler.setHandleInvalid("keep").transform(df).show
+----+----+---------+
| x1| x2| features|
+----+----+---------+
| 1.0|null|[1.0,NaN]|
|null| 2.0|[NaN,2.0]|
| 3.0| 4.0|[3.0,4.0]|
+----+----+---------+
or use the default, error.
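Applied to the code from the question, it is enough to set this parameter on the assembler before transforming. A sketch, assuming the null rows can simply be discarded (the names assembled and scalerModel are illustrative):

val assembled = new VectorAssembler()
  .setInputCols(inputCols)
  .setOutputCol("features")
  .setHandleInvalid("skip") // drop any row with a null/NaN input
  .transform(trainingData)

val scalerModel = new StandardScaler()
  .setInputCol("features")
  .setOutputCol("scaledFeatures")
  .setWithStd(true)
  .setWithMean(false)
  .fit(assembled)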
Spark < 2.4
There is nothing wrong with VectorAssembler. A Spark Vector simply cannot contain null values.
import org.apache.spark.ml.feature.VectorAssembler

val df = Seq(
  (Some(1.0), None), (None, Some(2.0)), (Some(3.0), Some(4.0))
).toDF("x1", "x2")

val assembler = new VectorAssembler()
  .setInputCols(df.columns)
  .setOutputCol("features")

assembler.transform(df).show(3)
org.apache.spark.SparkException: Failed to execute user defined function($anonfun$3: (struct<x1:double,x2:double>) => vector)
...
Caused by: org.apache.spark.SparkException: Values to assemble cannot be null.
Null has no meaning for ML algorithms, and it cannot be represented by scala.Double anyway.
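To see why, recall that scala.Double is an unboxed primitive, so a nullable numeric column has to fall back to the boxed Java type (a small illustration, not part of the original answer):

// val d: Double = null            // does not compile: Double is a primitive
val boxed: java.lang.Double = null // compiles: nullable columns are boxed like this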
You have to either drop the nulls:
assembler.transform(df.na.drop).show(2)
+---+---+---------+
| x1| x2| features|
+---+---+---------+
|3.0|4.0|[3.0,4.0]|
+---+---+---------+
or fill / impute them (see Replace missing values with mean - Spark Dataframe):
// For example with averages
val replacements: Map[String,Any] = Map("x1" -> 2.0, "x2" -> 3.0)
assembler.transform(df.na.fill(replacements)).show(3)
+---+---+---------+
| x1| x2| features|
+---+---+---------+
|1.0|3.0|[1.0,3.0]|
|2.0|2.0|[2.0,2.0]|
|3.0|4.0|[3.0,4.0]|
+---+---+---------+
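Instead of hard-coding the replacement values, Spark >= 2.2 also ships an Imputer estimator that learns per-column statistics itself. A sketch reusing the same df (the output column names x1_imp and x2_imp are illustrative):

import org.apache.spark.ml.feature.Imputer

// Learns the per-column mean and fills nulls/NaNs with it.
val imputer = new Imputer()
  .setInputCols(Array("x1", "x2"))
  .setOutputCols(Array("x1_imp", "x2_imp"))
  .setStrategy("mean") // "median" is also supported

val imputed = imputer.fit(df).transform(df)

new VectorAssembler()
  .setInputCols(Array("x1_imp", "x2_imp"))
  .setOutputCol("features")
  .transform(imputed)
  .show(3)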