Answer: D
Option A is incorrect. The instance type is simply the type of EC2 instance on which your notebook runs. This won't help you understand production inference performance.
Option B is incorrect. A lifecycle configuration lets you customize your notebook environment with default scripts and plugins that run when the instance is created or started. Default Jupyter notebook scripts and plugins won't give you insight into production performance.
Option C is incorrect. The volume size is just the size, in GB, of the ML storage volume attached to the notebook instance. This won't give you insight into production performance.
Option D is correct. From the Amazon SageMaker developer guide titled Amazon SageMaker Elastic Inference (EI): “By using Amazon Elastic Inference (EI), you can speed up the throughput and decrease the latency of getting real-time inferences from your deep learning models … You can also add an EI accelerator to an Amazon SageMaker notebook instance so that you can test and evaluate inference performance when you are building your models”. Therefore, while you are still in the development stage working in Jupyter notebooks, attaching an Elastic Inference accelerator gives you insight into the production inference performance of your model before it is deployed.
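For illustration, here is a minimal boto3 sketch of creating a notebook instance with an EI accelerator attached. The notebook name, role ARN, and accelerator type are placeholder assumptions; note that the InstanceType and VolumeSizeInGB parameters correspond to options A and C, which configure the notebook itself rather than inference performance.

import boto3

sagemaker = boto3.client("sagemaker")

# Hypothetical names and role ARN, for illustration only.
sagemaker.create_notebook_instance(
    NotebookInstanceName="dev-notebook",
    InstanceType="ml.t3.medium",        # option A: the EC2 instance the notebook runs on
    VolumeSizeInGB=10,                  # option C: size of the attached ML storage volume
    RoleArn="arn:aws:iam::123456789012:role/SageMakerExecutionRole",
    AcceleratorTypes=["ml.eia2.medium"],  # option D: attach an Elastic Inference accelerator
)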
Option E is incorrect. From the Amazon SageMaker developer guide titled CreateModel: “... you name the model and describe a primary container. For the primary container, you specify the docker image containing inference code, artifacts (from prior training), and custom environment map that the inference code uses when you deploy the model for predictions.
Use this API to create a model if you want to use Amazon SageMaker hosting services or run a batch transform job.” So the primary container is a parameter used in the CreateModel request when you are creating a model in SageMaker. It is not used when setting up your Jupyter notebook instance.
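For context, a minimal boto3 sketch of a CreateModel request showing where the primary container fits; the model name, image URI, S3 artifact path, environment variable, and role ARN below are placeholder assumptions.

import boto3

sagemaker = boto3.client("sagemaker")

# Hypothetical names, URIs, and role ARN, for illustration only.
sagemaker.create_model(
    ModelName="my-model",
    PrimaryContainer={
        # Docker image containing the inference code
        "Image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-inference:latest",
        # Model artifacts from prior training
        "ModelDataUrl": "s3://my-bucket/model/model.tar.gz",
        # Custom environment map used by the inference code
        "Environment": {"MODEL_SERVER_WORKERS": "2"},
    },
    ExecutionRoleArn="arn:aws:iam::123456789012:role/SageMakerExecutionRole",
)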
References:
Please see the Amazon SageMaker developer guide titled Amazon SageMaker Elastic Inference (EI), the AWS FAQ titled Amazon Elastic Inference FAQs, and the AWS Machine Learning blog titled Optimizing costs in Amazon Elastic Inference with TensorFlow.