Spark Auto-tuning vs. Spark Dynamic Allocation
In our CDH cluster we have enabled Spark Dynamic Allocation. I see that you offer an interesting feature called Spark Auto-tuning that seems to behave similarly to Dynamic Allocation. Do the two work hand in hand, or do I need one more than the other?
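For context, we enabled Dynamic Allocation through the standard Spark properties; the values below are illustrative, not our exact cluster settings:

```properties
# spark-defaults.conf -- enabling Spark Dynamic Allocation (example values)
spark.dynamicAllocation.enabled              true
spark.shuffle.service.enabled                true
spark.dynamicAllocation.minExecutors         1
spark.dynamicAllocation.maxExecutors         20
spark.dynamicAllocation.executorIdleTimeout  60s
```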
Could you highlight the differences between the two, and what the best practice would be for Alpine Chorus?