I am having trouble replicating the Spark code from the available PySpark manual here. For example, when I try the following code related to Grouped Map:
import numpy as np
import pandas as pd
from pyspark.sql.functions import pandas_udf, PandasUDFType
from pyspark.sql import SparkSession

spark.stop()
spark = SparkSession.builder.appName("New_App_grouped_map").getOrCreate()
spark.conf.set("spark.sql.execution.arrow.enabled", "true")

df = spark.createDataFrame(
    [(1, 1.0), (1, 2.0), (2, 3.0), (2, 5.0), (2, 10.0)],
    ("id", "v"))

@pandas_udf("id long, v double", PandasUDFType.GROUPED_MAP)
def subtract_mean(pdf):
    v = pdf.v
    return pdf.assign(v=v - v.mean())

df.groupby("id").apply(subtract_mean).show()
I get the following error log.

Main error:

ERROR ArrowPythonRunner: Python worker exited unexpectedly (crashed)
Caused by: java.lang.UnsupportedOperationException: sun.misc.Unsafe or java.nio.DirectByteBuffer.<init>(long, int) not available
I am using the following versions of the related packages; there may be some compatibility issue:
pyarrow==0.17.1
pandas==1.0.4
numpy==1.18.4
I downloaded Spark into a separate C:\spark\ folder, so I am not sure whether I need to move the globally installed pyarrow package into the Spark folder. Could that be the problem?
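As a sanity check, it may help to confirm which interpreter and which pyarrow the driver Python actually picks up. This is only a minimal sketch using standard module attributes, not something from the original post:

import sys
import numpy, pandas, pyarrow

# Where the driver's Python lives and where pyarrow is imported from.
print(sys.executable)
print(pyarrow.__file__)
# The versions that are actually loaded at runtime.
print("pyarrow", pyarrow.__version__, "| pandas", pandas.__version__, "| numpy", numpy.__version__)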
Full error log:
>>> df.groupby("id").apply(subtract_mean).show()
[Stage 16:======================================================>(99 + 1) / 100]20/05/30 16:57:17 ERROR ArrowPythonRunner: Python worker exited unexpectedly (crashed)
org.apache.spark.api.python.PythonException: Traceback (most recent call last):
  File "C:\spark\python\lib\pyspark.zip\pyspark\worker.py", line 577, in main
  File "C:\spark\python\lib\pyspark.zip\pyspark\serializers.py", line 837, in read_int
    raise EOFError
EOFError

        at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.handlePythonException(PythonRunner.scala:484)
        at org.apache.spark.sql.execution.python.PythonArrowOutput$$anon$1.read(PythonArrowOutput.scala:99)
        at org.apache.spark.sql.execution.python.PythonArrowOutput$$anon$1.read(PythonArrowOutput.scala:49)
        at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.hasNext(PythonRunner.scala:437)
        at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
        at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:489)
        at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458)
        at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage3.processNext(Unknown Source)
        at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
        at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:726)
        at org.apache.spark.sql.execution.SparkPlan.$anonfun$getByteArrayRdd$1(SparkPlan.scala:321)
        at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2(RDD.scala:872)
        at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2$adapted(RDD.scala:872)
        at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:349)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:313)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
        at org.apache.spark.scheduler.Task.run(Task.scala:127)
        at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:441)
        at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:444)
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630)
        at java.base/java.lang.Thread.run(Thread.java:832)
Caused by: java.lang.UnsupportedOperationException: sun.misc.Unsafe or java.nio.DirectByteBuffer.<init>(long, int) not available
        at io.netty.util.internal.PlatformDependent.directBuffer(PlatformDependent.java:473)
        at io.netty.buffer.NettyArrowBuf.getDirectBuffer(NettyArrowBuf.java:243)
        at io.netty.buffer.NettyArrowBuf.nioBuffer(NettyArrowBuf.java:233)
        at io.netty.buffer.ArrowBuf.nioBuffer(ArrowBuf.java:245)
        at org.apache.arrow.vector.ipc.message.ArrowRecordBatch.computeBodyLength(ArrowRecordBatch.java:222)
        at org.apache.arrow.vector.ipc.message.MessageSerializer.serialize(MessageSerializer.java:240)
        at org.apache.arrow.vector.ipc.ArrowWriter.writeRecordBatch(ArrowWriter.java:132)
        at org.apache.arrow.vector.ipc.ArrowWriter.writeBatch(ArrowWriter.java:120)
        at org.apache.spark.sql.execution.python.ArrowPythonRunner$$anon$1.$anonfun$writeIteratorToStream$1(ArrowPythonRunner.scala:94)
        at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
        at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
        at org.apache.spark.sql.execution.python.ArrowPythonRunner$$anon$1.writeIteratorToStream(ArrowPythonRunner.scala:101)
        at org.apache.spark.api.python.BasePythonRunner$WriterThread.$anonfun$run$1(PythonRunner.scala:373)
        at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1932)
        at org.apache.spark.api.python.BasePythonRunner$WriterThread.run(PythonRunner.scala:213)
20/05/30 16:57:17 ERROR ArrowPythonRunner: This may have been caused by a prior exception:
java.lang.UnsupportedOperationException: sun.misc.Unsafe or java.nio.DirectByteBuffer.<init>(long, int) not available
        at io.netty.util.internal.PlatformDependent.directBuffer(PlatformDependent.java:473)
        at io.netty.buffer.NettyArrowBuf.getDirectBuffer(NettyArrowBuf.java:243)
        at io.netty.buffer.NettyArrowBuf.nioBuffer(NettyArrowBuf.java:233)
        at io.netty.buffer.ArrowBuf.nioBuffer(ArrowBuf.java:245)
        at org.apache.arrow.vector.ipc.message.ArrowRecordBatch.computeBodyLength(ArrowRecordBatch.java:222)
        at org.apache.arrow.vector.ipc.message.MessageSerializer.serialize(MessageSerializer.java:240)
        at org.apache.arrow.vector.ipc.ArrowWriter.writeRecordBatch(ArrowWriter.java:132)
        at org.apache.arrow.vector.ipc.ArrowWriter.writeBatch(ArrowWriter.java:120)
        at org.apache.spark.sql.execution.python.ArrowPythonRunner$$anon$1.$anonfun$writeIteratorToStream$1(ArrowPythonRunner.scala:94)
        at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
        at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
        at org.apache.spark.sql.execution.python.ArrowPythonRunner$$anon$1.writeIteratorToStream(ArrowPythonRunner.scala:101)
        at org.apache.spark.api.python.BasePythonRunner$WriterThread.$anonfun$run$1(PythonRunner.scala:373)
        at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1932)
        at org.apache.spark.api.python.BasePythonRunner$WriterThread.run(PythonRunner.scala:213)
20/05/30 16:57:17 ERROR Executor: Exception in task 44.0 in stage 16.0 (TID 159)
java.lang.UnsupportedOperationException: sun.misc.Unsafe or java.nio.DirectByteBuffer.<init>(long, int) not available
        at io.netty.util.internal.PlatformDependent.directBuffer(PlatformDependent.java:473)
        at io.netty.buffer.NettyArrowBuf.getDirectBuffer(NettyArrowBuf.java:243)
        at io.netty.buffer.NettyArrowBuf.nioBuffer(NettyArrowBuf.java:233)
        at io.netty.buffer.ArrowBuf.nioBuffer(ArrowBuf.java:245)
        at org.apache.arrow.vector.ipc.message.ArrowRecordBatch.computeBodyLength(ArrowRecordBatch.java:222)
        at org.apache.arrow.vector.ipc.message.MessageSerializer.serialize(MessageSerializer.java:240)
        at org.apache.arrow.vector.ipc.ArrowWriter.writeRecordBatch(ArrowWriter.java:132)
        at org.apache.arrow.vector.ipc.ArrowWriter.writeBatch(ArrowWriter.java:120)
        at org.apache.spark.sql.execution.python.ArrowPythonRunner$$anon$1.$anonfun$writeIteratorToStream$1(ArrowPythonRunner.scala:94)
        at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
        at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
        at org.apache.spark.sql.execution.python.ArrowPythonRunner$$anon$1.writeIteratorToStream(ArrowPythonRunner.scala:101)
        at org.apache.spark.api.python.BasePythonRunner$WriterThread.$anonfun$run$1(PythonRunner.scala:373)
        at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1932)
        at org.apache.spark.api.python.BasePythonRunner$WriterThread.run(PythonRunner.scala:213)
20/05/30 16:57:17 ERROR TaskSetManager: Task 44 in stage 16.0 failed 1 times; aborting job
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\spark\python\pyspark\sql\dataframe.py", line 407, in show
    print(self._jdf.showString(n, 20, vertical))
  File "C:\spark\python\lib\py4j-0.10.8.1-src.zip\py4j\java_gateway.py", line 1286, in __call__
  File "C:\spark\python\pyspark\sql\utils.py", line 98, in deco
    return f(*a, **kw)
  File "C:\spark\python\lib\py4j-0.10.8.1-src.zip\py4j\protocol.py", line 328, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o170.showString.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 44 in stage 16.0 failed 1 times, most recent failure: Lost task 44.0 in stage 16.0 (TID 159, DESKTOP-ASG768U, executor driver): java.lang.UnsupportedOperationException: sun.misc.Unsafe or java.nio.DirectByteBuffer.<init>(long, int) not available
        at io.netty.util.internal.PlatformDependent.directBuffer(PlatformDependent.java:473)
        at io.netty.buffer.NettyArrowBuf.getDirectBuffer(NettyArrowBuf.java:243)
        at io.netty.buffer.NettyArrowBuf.nioBuffer(NettyArrowBuf.java:233)
        at io.netty.buffer.ArrowBuf.nioBuffer(ArrowBuf.java:245)
        at org.apache.arrow.vector.ipc.message.ArrowRecordBatch.computeBodyLength(ArrowRecordBatch.java:222)
        at org.apache.arrow.vector.ipc.message.MessageSerializer.serialize(MessageSerializer.java:240)
        at org.apache.arrow.vector.ipc.ArrowWriter.writeRecordBatch(ArrowWriter.java:132)
        at org.apache.arrow.vector.ipc.ArrowWriter.writeBatch(ArrowWriter.java:120)
        at org.apache.spark.sql.execution.python.ArrowPythonRunner$$anon$1.$anonfun$writeIteratorToStream$1(ArrowPythonRunner.scala:94)
        at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
        at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
        at org.apache.spark.sql.execution.python.ArrowPythonRunner$$anon$1.writeIteratorToStream(ArrowPythonRunner.scala:101)
        at org.apache.spark.api.python.BasePythonRunner$WriterThread.$anonfun$run$1(PythonRunner.scala:373)
        at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1932)
        at org.apache.spark.api.python.BasePythonRunner$WriterThread.run(PythonRunner.scala:213)
Driver stacktrace:
        at org.apache.spark.scheduler.DAGScheduler.failJobAndIndependentStages(DAGScheduler.scala:1989)
        at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2(DAGScheduler.scala:1977)
        at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2$adapted(DAGScheduler.scala:1976)
        at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
        at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
        at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
        at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1976)
        at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1(DAGScheduler.scala:956)
        at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1$adapted(DAGScheduler.scala:956)
        at scala.Option.foreach(Option.scala:407)
        at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:956)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2206)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2155)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2144)
        at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
        at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:758)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:2116)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:2137)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:2156)
        at org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:431)
        at org.apache.spark.sql.execution.CollectLimitExec.executeCollect(limit.scala:47)
        at org.apache.spark.sql.Dataset.collectFromPlan(Dataset.scala:3482)
        at org.apache.spark.sql.Dataset.$anonfun$head$1(Dataset.scala:2581)
        at org.apache.spark.sql.Dataset.$anonfun$withAction$1(Dataset.scala:3472)
        at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$4(SQLExecution.scala:100)
        at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:160)
        at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:87)
        at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3468)
        at org.apache.spark.sql.Dataset.head(Dataset.scala:2581)
        at org.apache.spark.sql.Dataset.take(Dataset.scala:2788)
        at org.apache.spark.sql.Dataset.getRows(Dataset.scala:297)
        at org.apache.spark.sql.Dataset.showString(Dataset.scala:334)
        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.base/java.lang.reflect.Method.invoke(Method.java:564)
        at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
        at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
        at py4j.Gateway.invoke(Gateway.java:282)
        at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
        at py4j.commands.CallCommand.execute(CallCommand.java:79)
        at py4j.GatewayConnection.run(GatewayConnection.java:238)
        at java.base/java.lang.Thread.run(Thread.java:832)
Caused by: java.lang.UnsupportedOperationException: sun.misc.Unsafe or java.nio.DirectByteBuffer.<init>(long, int) not available
        at io.netty.util.internal.PlatformDependent.directBuffer(PlatformDependent.java:473)
        at io.netty.buffer.NettyArrowBuf.getDirectBuffer(NettyArrowBuf.java:243)
        at io.netty.buffer.NettyArrowBuf.nioBuffer(NettyArrowBuf.java:233)
        at io.netty.buffer.ArrowBuf.nioBuffer(ArrowBuf.java:245)
        at org.apache.arrow.vector.ipc.message.ArrowRecordBatch.computeBodyLength(ArrowRecordBatch.java:222)
        at org.apache.arrow.vector.ipc.message.MessageSerializer.serialize(MessageSerializer.java:240)
        at org.apache.arrow.vector.ipc.ArrowWriter.writeRecordBatch(ArrowWriter.java:132)
        at org.apache.arrow.vector.ipc.ArrowWriter.writeBatch(ArrowWriter.java:120)
        at org.apache.spark.sql.execution.python.ArrowPythonRunner$$anon$1.$anonfun$writeIteratorToStream$1(ArrowPythonRunner.scala:94)
        at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
        at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
        at org.apache.spark.sql.execution.python.ArrowPythonRunner$$anon$1.writeIteratorToStream(ArrowPythonRunner.scala:101)
        at org.apache.spark.api.python.BasePythonRunner$WriterThread.$anonfun$run$1(PythonRunner.scala:373)
        at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1932)
        at org.apache.spark.api.python.BasePythonRunner$WriterThread.run(PythonRunner.scala:213)
This just happened to me as well. I was able to fix it by setting JAVA_HOME to a Java 8 JDK that I had installed. For me, on a GCE VM, that is:

export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-amd64/

I am not sure whether this works because it adds a JDK or because it moves to Java 8; the VM had a Java 11 JRE but no JDK.
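Since the question runs on Windows, the same fix can also be sketched from Python before the first SparkSession is created, because Spark's launch scripts use JAVA_HOME to pick the JVM. The JDK path below is hypothetical; substitute your actual Java 8 installation:

import os

# Hypothetical Java 8 JDK location; adjust to your installation.
os.environ["JAVA_HOME"] = r"C:\Program Files\Java\jdk1.8.0_251"

# This must run before SparkSession.builder...getOrCreate() launches the gateway JVM.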
Adding to JonesBerg's answer, this parameter can be set in the PySpark call itself rather than in an external configuration file, like this:

from pyspark import SparkConf
from pyspark.sql import SparkSession

# builder.config(conf=...) expects a SparkConf object, not a plain dict.
conf = SparkConf().setAll([
    ("spark.driver.extraJavaOptions", "-Dio.netty.tryReflectionSetAccessible=true"),
    ("spark.executor.extraJavaOptions", "-Dio.netty.tryReflectionSetAccessible=true"),
])
spark = SparkSession.builder.config(conf=conf).getOrCreate()
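Alternatively, each key can be passed individually with builder.config(key, value). A minimal sketch assuming a fresh session (these JVM flags are read at launch, so no SparkSession may already be running):

from pyspark.sql import SparkSession

# Each .config(key, value) call registers one option before the JVM starts.
spark = (SparkSession.builder
         .config("spark.driver.extraJavaOptions", "-Dio.netty.tryReflectionSetAccessible=true")
         .config("spark.executor.extraJavaOptions", "-Dio.netty.tryReflectionSetAccessible=true")
         .getOrCreate())

# Confirm the option was registered.
print(spark.sparkContext.getConf().get("spark.driver.extraJavaOptions"))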