How can I implement a for-loop in Spark (Scala) that overwrites the original DataFrame on each iteration? Something like this:
val columns = Seq("a", "b")
val data = Seq(
  (1, 102),
  (2, 103),
  (3, 104)
)

// toDF and the $ column syntax need the session's implicits in scope
import spark.implicits._
import org.apache.spark.sql.functions.lit

var df = data.toDF(columns: _*)  // var, so it can be reassigned below

for (iteration <- 1 to 3) {
  val temp = df.filter($"b" >= 100).withColumn("b", exampleUDF(lit(iteration), $"b"))
  //
  // other computation stuff
  //
  df = temp
}
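A common way to express this iterate-and-overwrite pattern without a reassignable variable is foldLeft, which threads the accumulator (here, the DataFrame) through each iteration. Below is a minimal sketch of the same shape using a plain Seq of (a, b) pairs in place of the DataFrame, with a hypothetical exampleTransform standing in for exampleUDF; the real version would fold over the DataFrame instead:

```scala
// The same data as in the question, as plain (a, b) pairs.
val start = Seq((1, 102), (2, 103), (3, 104))

// Hypothetical stand-in for exampleUDF: combines the iteration
// number with the current value of column b.
def exampleTransform(iteration: Int, b: Int): Int = b + iteration

// foldLeft threads the result of each iteration into the next:
// filter rows with b >= 100, then rewrite b, no var needed.
val result = (1 to 3).foldLeft(start) { (acc, iteration) =>
  acc.filter { case (_, b) => b >= 100 }
     .map { case (a, b) => (a, exampleTransform(iteration, b)) }
}
// result: Seq((1, 108), (2, 109), (3, 110))
```

With Spark, the accumulator would be the DataFrame itself, e.g. `(1 to 3).foldLeft(df) { (acc, iteration) => acc.filter(...).withColumn(...) }`. Note that each iteration only extends the lazy query plan; nothing executes until an action is called, so very long chains may warrant checkpointing.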