Question 169:
A development team has been assigned to maintain and process data in several DynamoDB tables and S3 buckets. They need to perform sophisticated queries and operations across DynamoDB and S3. For example, they would export rarely used data from DynamoDB to Amazon S3 to reduce storage costs while preserving the low-latency access required for high-velocity data. The team has strong Hadoop and SQL expertise. What is the best way to accomplish the assignment?
Answer options:
A. Use an Elastic Beanstalk environment to set up a compute-optimized EC2 instance so that the instance has better performance for SQL commands to query tables or export/import data.
B. Use an EMR cluster, since EMR includes Hadoop Hive, a SQL-based engine. Create external tables in Hive for the DynamoDB tables and S3 buckets, then use SQL commands to export/import data.
C. Use a CloudFormation template to deploy a Lambda function that runs SQL commands. Make sure the Lambda has enough memory allocated, as SQL commands consume significant memory.
D. Set up several Data Pipeline pipelines to automatically move data from DynamoDB to S3 when the data meets user-defined conditions, such as items being older than a year. Modify the pipeline configurations when needed.
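To illustrate the approach described in option B, here is a minimal HiveQL sketch of mapping a DynamoDB table and an S3 location to Hive external tables and moving data between them with plain SQL. It assumes an EMR cluster with the built-in DynamoDB connector; the table, column, bucket names, and the date cutoff are hypothetical:

```sql
-- Map an existing DynamoDB table (names hypothetical) to a Hive external table.
CREATE EXTERNAL TABLE ddb_orders (order_id string, created_at string, total double)
STORED BY 'org.apache.hadoop.hive.dynamodb.DynamoDBStorageHandler'
TBLPROPERTIES (
  "dynamodb.table.name"     = "Orders",
  "dynamodb.column.mapping" = "order_id:OrderId,created_at:CreatedAt,total:Total"
);

-- Map an S3 prefix (bucket name hypothetical) to a second external table.
CREATE EXTERNAL TABLE s3_orders_archive (order_id string, created_at string, total double)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
LOCATION 's3://example-archive-bucket/orders/';

-- Export rarely used items (e.g., older than a cutoff date) from DynamoDB to S3.
INSERT OVERWRITE TABLE s3_orders_archive
SELECT order_id, created_at, total
FROM ddb_orders
WHERE created_at < '2020-01-01';
```

Because both sources appear as ordinary Hive tables, the team can also join, filter, and import data back into DynamoDB with the same SQL-based commands, which fits their existing Hadoop and SQL skills.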