Question 127:
You work as a machine learning specialist for a book publishing company. Your company has several publishing data stores housed in relational databases across its infrastructure. Your company recently purchased another publishing company and is in the process of merging the two companies' systems infrastructure. Part of this merger activity is joining the two publishers' book databases. Your team has been assigned to build a data lake sourced from the two companies' relational data stores. How would you construct an ETL pipeline to achieve this goal? (Select FOUR)
Answer options:
A. Use AWS DataSync to ingest the relational data from your book data stores and store it in S3.
B. Use an AWS Glue crawler to build your AWS Glue Data Catalog.
C. Have a Lambda function triggered by an S3 event start your AWS Glue crawler.
D. Use an AWS SageMaker trigger to start your AWS Glue ETL job that processes/transforms your data and places it into your S3 data lake.
E. Use a Lambda function triggered by a CloudWatch Events rule to start your AWS Glue ETL job that processes/transforms your data and places it into your S3 data lake.
F. Use AWS Database Migration Service to ingest the relational data from your book data stores and store it in S3.
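To make the Lambda-based glue between pipeline stages concrete, here is a minimal sketch of the two handlers the options describe: one triggered by an S3 event that starts a Glue crawler, and one triggered by a CloudWatch Events rule that starts a Glue ETL job. The crawler name (`book-data-crawler`) and job name (`book-data-etl-job`) are hypothetical placeholders, and the `glue_client` parameter is added purely so the handlers can be exercised locally without AWS credentials; `start_crawler` and `start_job_run` are real boto3 Glue client operations.

```python
import json


def start_crawler_handler(event, context, glue_client=None):
    """S3-triggered Lambda: start the Glue crawler that catalogs newly
    ingested data. `glue_client` is injectable for local testing; inside
    Lambda it defaults to a boto3 Glue client."""
    if glue_client is None:
        import boto3  # available in the Lambda runtime
        glue_client = boto3.client("glue")

    # Log each S3 object that triggered this invocation.
    for record in event.get("Records", []):
        print("New object:", record["s3"]["object"]["key"])

    # "book-data-crawler" is a hypothetical crawler name.
    glue_client.start_crawler(Name="book-data-crawler")
    return {"statusCode": 200}


def start_etl_handler(event, context, glue_client=None):
    """CloudWatch-Events-triggered Lambda: start the Glue ETL job that
    transforms the ingested data and writes it to the S3 data lake."""
    if glue_client is None:
        import boto3
        glue_client = boto3.client("glue")

    # "book-data-etl-job" is a hypothetical Glue job name.
    run = glue_client.start_job_run(JobName="book-data-etl-job")
    return {"statusCode": 200, "body": json.dumps(run)}
```

In a real deployment the first function would be wired to an S3 `ObjectCreated` notification on the DMS target bucket, and the second to a CloudWatch Events (EventBridge) rule, e.g. one that fires when the crawler finishes.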