PySpark – avoiding multiple aliases of the same table in a join condition
This piece of code produces the correct output, but its performance is very poor.
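The snippet itself was not included in the post, so purely as an illustration of the pattern being described, it presumably looks something like the following, with a hypothetical fact table (`fact_sales`) and hypothetical column names (`cat_cd`, `sub_cat_cd`, `sub_cat_desc`):

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

fact = spark.table("fact_sales")  # hypothetical fact table
sub_cat_map = spark.table("modelled_nbd_item_sub_cat_map")

# The same mapping table is joined twice under two aliases, so Spark
# scans (and shuffles) modelled_nbd_item_sub_cat_map once per alias.
result = (
    fact
    .join(sub_cat_map.alias("m1"),
          fact["cat_cd"] == F.col("m1.sub_cat_cd"), "left")
    .join(sub_cat_map.alias("m2"),
          fact["sub_cat_cd"] == F.col("m2.sub_cat_cd"), "left")
    .select(fact["*"],
            F.col("m1.sub_cat_desc").alias("cat_desc"),
            F.col("m2.sub_cat_desc").alias("sub_cat_desc"))
)
```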
Is there a way to use the `modelled_nbd_item_sub_cat_map` table only once and still get the same result?
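One way such lookups are commonly restructured, continuing the sketch above and assuming the mapping table is small enough to collect to the driver, is to replace the joins with a literal map expression, so the table is scanned exactly once:

```python
from itertools import chain
from pyspark.sql import functions as F

# Read the mapping table exactly once and pull the (assumed small)
# code -> description pairs onto the driver.
pairs = (spark.table("modelled_nbd_item_sub_cat_map")
              .select("sub_cat_cd", "sub_cat_desc")
              .collect())

# Turn the pairs into a literal MAP column expression.
lookup = F.create_map(*chain.from_iterable(
    (F.lit(r["sub_cat_cd"]), F.lit(r["sub_cat_desc"])) for r in pairs
))

# Both lookups become plain column expressions, so no join
# (and no second scan of the mapping table) is needed.
result = (fact
          .withColumn("cat_desc", lookup[F.col("cat_cd")])
          .withColumn("sub_cat_desc", lookup[F.col("sub_cat_cd")]))
```

If the mapping table is too large to collect to the driver, an alternative is to keep the two joins but wrap the mapping table in `F.broadcast(...)`; the table is still referenced twice, but the fact table is no longer shuffled for each join.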