Answer – A
An example of this use case is given in the AWS documentation:
########
What if you have a simple use case, in which you want to run a few Spark jobs in a specific order, but you don’t want to spend time orchestrating those jobs or maintaining a separate application? You can do that today in a serverless fashion using AWS Step Functions. You can create the entire workflow in AWS Step Functions and interact with Spark on Amazon EMR through Apache Livy.
########
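For illustration, here is a minimal sketch of how a workflow step (for example, a Lambda task invoked by a Step Functions state machine) might submit and poll a Spark job on EMR through Apache Livy's REST batches API. It is written in Python with the requests library; the EMR master hostname, S3 paths, and class names are hypothetical placeholders, not values taken from the referenced blog post.

```python
import json
import requests

# Hypothetical EMR master node running Apache Livy (default port 8998).
LIVY_ENDPOINT = "http://emr-master-node:8998"


def submit_spark_batch(jar_path: str, main_class: str, args: list) -> int:
    """Submit a Spark job to EMR via Livy's POST /batches endpoint.

    Returns the Livy batch id, which a Step Functions poller could use
    to check job status before starting the next job in the workflow.
    """
    payload = {
        "file": jar_path,        # application JAR, e.g. stored in S3
        "className": main_class,
        "args": args,
    }
    response = requests.post(
        f"{LIVY_ENDPOINT}/batches",
        data=json.dumps(payload),
        headers={"Content-Type": "application/json"},
    )
    response.raise_for_status()
    return response.json()["id"]


def get_batch_state(batch_id: int) -> str:
    """Poll the batch state (e.g. 'running', 'success', 'dead')."""
    response = requests.get(f"{LIVY_ENDPOINT}/batches/{batch_id}")
    response.raise_for_status()
    return response.json()["state"]


if __name__ == "__main__":
    # Hypothetical S3 paths and main class, for illustration only.
    batch_id = submit_spark_batch(
        jar_path="s3://my-bucket/jars/spark-etl-job.jar",
        main_class="com.example.SparkEtlJob",
        args=["--input", "s3://my-bucket/input/", "--output", "s3://my-bucket/output/"],
    )
    print("Submitted Livy batch:", batch_id, "state:", get_batch_state(batch_id))
```

In the actual pattern from the blog post, the submit and poll calls would live in separate Step Functions states, with a Wait state between status checks, so the workflow runs each Spark job in order without a separate orchestration application.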
Option B is invalid since this is a queue-based messaging service
Option C is invalid since this is an open-source data warehouse and analytics package that runs on top of a Hadoop cluster
Option D is invalid since this is an open-source Apache library that runs on top of Hadoop, providing a scripting language that you can use to transform large data sets without having to write complex code in a lower-level language such as Java.
For more information on this use case, please visit the URL below:
https://aws.amazon.com/blogs/big-data/orchestrate-apache-spark-applications-using-aws-step-functions-and-apache-livy/