AWS Certified Big Data Specialty (Expired on July 1, 2020) Exam Questions


Question 308:

A company currently has an Amazon Redshift cluster as well as an AWS RDS PostgreSQL database. A table in PostgreSQL stores data based on a timestamp. The requirement is to ensure that data from this PostgreSQL table is loaded into the Redshift database, and a staging table has been set up in Redshift for this purpose. The data lag between the staging table and the PostgreSQL table must not exceed 4 hours. Which of the following is the most efficient implementation for this requirement?

Answer options:

A. Create a trigger on the PostgreSQL table to send new data to a Kinesis stream. Ensure the data is transferred from the Kinesis stream to the staging table in Redshift.
B. Create a SQL query that runs every hour to check for new data. Use the query results to send the new data to the staging table.
C. Use the dblink extension available in PostgreSQL to move the data to Redshift (sketched below).
D. Create a trigger on the PostgreSQL table to send new data to a Kinesis Firehose delivery stream. Ensure the data is transferred from Kinesis Firehose to the staging table in Redshift.
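Option C refers to PostgreSQL's dblink extension, which AWS documents as a way to link an RDS PostgreSQL instance with a Redshift cluster without standing up extra infrastructure. The following is a minimal sketch of what a periodic dblink-based sync might look like, assuming hypothetical table and column names (`events` with `id`, `payload`, `event_ts` on PostgreSQL; `staging_events` on Redshift) and placeholder connection strings; scheduling the job at an interval under the 4-hour lag bound (e.g. via cron) is left to the environment.

```python
# A sketch of option C, not a definitive implementation: push rows newer
# than the last sync timestamp from RDS PostgreSQL into the Redshift
# staging table over dblink. All names and credentials are placeholders.
import psycopg2

# Connection string for the RDS PostgreSQL instance (placeholder values).
RDS_DSN = "host=my-rds-host dbname=appdb user=app password=secret"

# dblink connection string pointing at the Redshift cluster (placeholder values).
REDSHIFT_CONN = (
    "host=my-redshift-host port=5439 dbname=dw user=dw_user password=secret"
)


def sync_new_rows(last_sync_ts: str) -> None:
    """Copy rows newer than last_sync_ts into the Redshift staging table."""
    with psycopg2.connect(RDS_DSN) as conn, conn.cursor() as cur:
        # Enable dblink on the RDS PostgreSQL instance (no-op if already present).
        cur.execute("CREATE EXTENSION IF NOT EXISTS dblink;")

        # Read the new rows locally.
        cur.execute(
            "SELECT id, payload, event_ts FROM events WHERE event_ts > %s;",
            (last_sync_ts,),
        )
        rows = cur.fetchall()

        # Execute a remote INSERT against the Redshift staging table for
        # each row via dblink_exec.
        for row_id, payload, event_ts in rows:
            remote_sql = cur.mogrify(
                "INSERT INTO staging_events (id, payload, event_ts) "
                "VALUES (%s, %s, %s);",
                (row_id, payload, event_ts),
            ).decode()
            cur.execute(
                "SELECT dblink_exec(%s, %s);", (REDSHIFT_CONN, remote_sql)
            )
```

Single-row inserts are shown only for clarity; in practice one would batch rows into multi-row INSERT statements (or stage through S3 and COPY) since per-row inserts are inefficient on Redshift.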