Question 224:
Kelly is working as a Data Engineer at Whizlabs Inc. She is running a Databricks Spark job using DataFrames but encounters the following error message: “Serialized task is too large.” Which of the following Spark configuration properties needs to be amended?
Answer options:
A. Call parallelize with a large list or convert a large R DataFrame to a Spark DataFrame.
B. Set the value in the notebook with spark.conf.set().
C. Set the property spark.databricks.delta.preview.enabled to true.
D. Use a job cluster instead of an interactive cluster.
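For context on what option B refers to: the “Serialized task is too large” error is typically tied to the `spark.rpc.message.maxSize` property, which caps the serialized size of a task (default 128 MiB). A minimal sketch, assuming a Databricks notebook where a `SparkSession` named `spark` already exists; this is a configuration fragment, not a definitive fix, since RPC-level settings like this one are often only honored when set in the cluster's Spark config at creation time rather than at runtime:

```python
# Sketch: raise the maximum allowed serialized task size to 256 MiB.
# Assumes an active SparkSession `spark`, as provided in Databricks notebooks.
spark.conf.set("spark.rpc.message.maxSize", "256")

# Often the better remedy is to stop shipping large local data inside the
# task closure and broadcast it instead, e.g.:
#   big_lookup = spark.sparkContext.broadcast(local_dict)
```

In practice, hitting this error usually signals that driver-side data (a large local list passed to `parallelize`, or a large object captured in a closure) is being serialized into each task, so restructuring the job is generally preferable to simply raising the limit.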