I have a question about PySpark.
After aggregating, my data is heavily skewed (a few partitions are massive).
If I repartition, it takes ages, as the dataset is huge.
Is repartitioning the best way to even out these partition sizes?
And if I then join this to another dataset, should I repartition before doing the join?
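For context, here is a conceptual sketch in plain Python of the "key salting" idea that is often suggested as an alternative to a full repartition for skewed joins: one hot key is spread over several sub-keys so no single partition receives all of its rows. The key names and salt count below are made up for illustration, not taken from my actual data:

```python
import random
from collections import Counter

random.seed(0)  # fixed seed so the example is deterministic

def salt_key(key, n_salts=4):
    """Append a random suffix so one hot key spreads over n_salts buckets."""
    return f"{key}_{random.randrange(n_salts)}"

# Hypothetical skewed data: one hot key dominates the partitioning column.
records = ["hot"] * 1000 + ["cold"] * 10

bucket_sizes = Counter(salt_key(k) for k in records)
# "hot" is now split across hot_0..hot_3, so no single bucket holds all
# 1000 rows. For a real join, the other (smaller) side would be exploded
# with every salt value 0..n_salts-1 before joining on the salted key.
print(bucket_sizes)
```

In PySpark itself, if you are on Spark 3.x, Adaptive Query Execution can split skewed join partitions automatically (see `spark.sql.adaptive.skewJoin.enabled`), which may make manual salting or repartitioning unnecessary.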
Any thoughts on best practice would be really appreciated!
Thanks a lot
question from: https://stackoverflow.com/questions/65869200/repartitioning-skewed-dataframes-in-spark