Question 142:
A team has just received a task to build an application that needs to recognize faces in streaming video. The source videos come from a third party and are delivered in the MKV container format.
Answer options:
A. S3 buckets to store the source MKV videos for Amazon Rekognition to process. S3 should be used in this case because it provides unlimited, highly available, and durable storage. Make sure that the third party has write access to the S3 buckets.
B. A Kinesis video stream for sending streaming video to Amazon Rekognition Video. This can be done with the Kinesis Video Streams "PutMedia" API in the Java SDK. The PutMedia operation writes video data fragments into a Kinesis video stream that Amazon Rekognition Video consumes.
C. An Amazon Rekognition Video stream processor to manage the analysis of the streaming video. It can be used to start, stop, and manage stream processors as needed.
D. Use EC2 or Lambda to call the Rekognition "DetectFaces" API with the source videos saved in the S3 bucket. For each face detected, the operation returns face details, including a bounding box of the face, a confidence value, and a fixed set of attributes such as facial landmarks.
E. After the app has used the Rekognition API to fetch the recognized faces from live videos, use S3 or an RDS database to store the output from Rekognition. Another Lambda function can be used to post-process the results and present them to the UI.
F. A Kinesis data stream consumer to read the analysis results that Amazon Rekognition Video sends to the Kinesis data stream. The consumer can be autoscaled by running it on multiple EC2 instances in an Auto Scaling group.
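Options B and C together describe Rekognition Video's streaming pipeline: a Kinesis video stream as input, a stream processor for analysis, and a Kinesis data stream as output. The sketch below (Python/boto3 rather than the Java SDK the option mentions) assembles a `CreateStreamProcessor` request; every name, ARN, and collection ID is a hypothetical placeholder, and the actual client calls are left as comments because they require live AWS resources and credentials.

```python
def build_stream_processor_params(name, video_stream_arn, data_stream_arn,
                                  role_arn, collection_id):
    """Assemble the request body for Rekognition's CreateStreamProcessor:
    frames are read from a Kinesis video stream, searched against a face
    collection, and the results are written to a Kinesis data stream."""
    return {
        "Name": name,
        "Input": {"KinesisVideoStream": {"Arn": video_stream_arn}},
        "Output": {"KinesisDataStream": {"Arn": data_stream_arn}},
        "RoleArn": role_arn,
        "Settings": {
            "FaceSearch": {
                "CollectionId": collection_id,
                # Minimum similarity for a face match; tune per use case.
                "FaceMatchThreshold": 85.0,
            }
        },
    }


# All resource names and ARNs below are made-up placeholders.
params = build_stream_processor_params(
    name="face-search-processor",
    video_stream_arn="arn:aws:kinesisvideo:us-east-1:111122223333:"
                     "stream/source-mkv-stream/1234567890",
    data_stream_arn="arn:aws:kinesis:us-east-1:111122223333:"
                    "stream/face-analysis-results",
    role_arn="arn:aws:iam::111122223333:role/RekognitionStreamRole",
    collection_id="known-faces",
)

# With live credentials the processor would be created and started;
# option C's lifecycle management maps to start/stop/delete calls:
#   import boto3
#   rekognition = boto3.client("rekognition")
#   rekognition.create_stream_processor(**params)
#   rekognition.start_stream_processor(Name="face-search-processor")
#   rekognition.stop_stream_processor(Name="face-search-processor")
```

Separating request construction from the client call keeps the parameter shape easy to inspect and test without touching AWS.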
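Option F's consumer reads JSON records that the stream processor writes to the Kinesis data stream. A minimal sketch of the parsing step is shown below; the field names follow the output layout AWS documents for face-search results, and the sample record is synthetic with made-up values.

```python
import json


def extract_matches(record_data: bytes):
    """Return the face matches found in one Kinesis record emitted by a
    Rekognition Video face-search stream processor."""
    payload = json.loads(record_data)
    matches = []
    for detection in payload.get("FaceSearchResponse", []):
        detected = detection["DetectedFace"]
        for match in detection.get("MatchedFaces", []):
            matches.append({
                "face_id": match["Face"]["FaceId"],
                "similarity": match["Similarity"],
                "bounding_box": detected["BoundingBox"],
                "confidence": detected["Confidence"],
            })
    return matches


# Synthetic record in the documented shape (all values invented).
sample = json.dumps({
    "FaceSearchResponse": [{
        "DetectedFace": {
            "BoundingBox": {"Left": 0.1, "Top": 0.2,
                            "Width": 0.3, "Height": 0.4},
            "Confidence": 99.2,
        },
        "MatchedFaces": [
            {"Similarity": 97.5, "Face": {"FaceId": "face-1234"}}
        ],
    }]
}).encode("utf-8")

print(extract_matches(sample))  # one match for FaceId "face-1234"
```

In the architecture option F describes, this function would run inside each consumer process on the Auto Scaling group's EC2 instances, with the extracted matches persisted to S3 or RDS as option E suggests.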