I found that AWS Glue sets up its executor instances with a memory limit of 5 GB (--conf spark.executor.memory=5g), and sometimes, on big datasets, the job fails with java.lang.OutOfMemoryError. The same applies to the driver instance (--conf spark.driver.memory=5g).
Is there any option to increase these values?