Question 349:
A company wants to stream log files from its EC2 instances. You are using Kinesis Data Streams and Kinesis Data Firehose for this process. The data will be parsed with AWS Lambda, and the resulting data will be stored in Amazon Redshift. After the process completes, the amount of data in S3 has grown, and you have had to delete it manually. Since this process runs on a continual basis, you need to ensure the right step is taken to delete the intermediate data in S3 automatically. How can you accomplish this?
Answer options:
A. Create a Lifecycle policy for the S3 bucket
B. Use Redshift triggers to delete the data after the data has finished loading
C. Use S3 events to delete the data after it has finished loading
D. Disable S3 logging, since this is causing the increase in data
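An S3 lifecycle policy (option A) is expressed as a rule document and can be applied with the AWS SDK. The sketch below shows what such a rule might look like; the bucket name `log-staging-bucket` and the `staging/` prefix are hypothetical, and the actual API call is shown commented out since it requires AWS credentials.

```python
# Minimal sketch of an S3 lifecycle rule that expires intermediate staging
# objects automatically instead of requiring manual deletion.
# Assumptions: bucket "log-staging-bucket" and prefix "staging/" are hypothetical.
lifecycle_config = {
    "Rules": [
        {
            "ID": "expire-firehose-staging-data",
            "Filter": {"Prefix": "staging/"},   # only the intermediate objects
            "Status": "Enabled",
            "Expiration": {"Days": 1},          # delete objects one day after creation
        }
    ]
}

print(lifecycle_config["Rules"][0]["ID"])

# Applying the rule would use boto3, e.g.:
# import boto3
# s3 = boto3.client("s3")
# s3.put_bucket_lifecycle_configuration(
#     Bucket="log-staging-bucket",
#     LifecycleConfiguration=lifecycle_config,
# )
```

Because the lifecycle rule runs inside S3 itself, it keeps working no matter how often the streaming pipeline writes new data, which is why it fits a continual process better than one-off deletion.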