Question 239:
Ronald is an Azure Data Engineer at Fabrikum Inc., where he is assigned to optimize Azure Databricks Spark jobs. In the Azure Databricks cluster, the spark-submit jobs are failing with the error message "Failed to parse byte string: -1" and the following output in the console: <pre>java.util.concurrent.ExecutionException: java.lang.NumberFormatException: Size must be specified as bytes (b), kilobytes (k), megabytes (m), gigabytes (g), terabytes (t), or petabytes (p). E.g. 100b, 200k, or 350mb. Failed to parse byte string: -1 …….</pre> What resolution can he apply to mitigate this issue?
Answer options:
A. He can assign a negative value to the "spark.driver.maxResultSize" application property.
B. He can assign the maximum size value to the "spark.driver.maxResultSize" property.
C. He can assign a null value to the "spark.driver.maxResultSize" property of the spark-submit jobs.
D. He can apply a positive value to the "spark.driver.maxResultSize" property to define a specific size for the spark-submit jobs.
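For context, the property in question accepts Spark byte-string values such as "2g" or "4096m"; a bare "-1" is not parseable, which produces the NumberFormatException above. A minimal sketch of applying option D on the spark-submit command line (the jar name, class, and "4g" size are illustrative assumptions, not values from the question):

```shell
# Illustrative only: set spark.driver.maxResultSize to a positive,
# unit-suffixed byte string instead of -1. The application jar and
# main class below are hypothetical placeholders.
spark-submit \
  --conf spark.driver.maxResultSize=4g \
  --class com.example.MainApp \
  app.jar
```

The same positive byte-string value can alternatively be set in the cluster's Spark configuration in Azure Databricks rather than on the command line.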