How does Spark determine the number of partitions to create in an RDD during its initial data load?

I was using the PySpark interactive shell with the following versions:

Python: 3.12.4
Spark: 3.5.1