Box 1: Yes - Big data solutions often use long-running batch jobs to filter, aggregate, and otherwise prepare the data for analysis. Usually these jobs involve reading source files from scalable storage (like HDFS, Azure Data Lake Store, and Azure Storage), processing them, and writing the output to new files in scalable storage.

Box 2: No

Box 3: No

Reference: https://docs.microsoft.com/en-us/azure/architecture/data-guide/big-data/batch-processing
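
To make the Box 1 pattern concrete, here is a minimal PySpark sketch of such a batch job. The storage URIs, container names, and column names are hypothetical; the point is only the read-filter/aggregate-write shape that the referenced article describes.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("batch-prep").getOrCreate()

# Read source files from scalable storage (a hypothetical Azure Data Lake
# Storage path here; HDFS or Azure Storage URIs follow the same pattern).
raw = spark.read.csv(
    "abfss://source@example.dfs.core.windows.net/events/*.csv",
    header=True,
    inferSchema=True,
)

# Filter and aggregate to prepare the data for analysis.
prepared = (
    raw.filter(F.col("status") == "complete")
       .groupBy("customer_id")
       .agg(F.sum("amount").alias("total_amount"))
)

# Write the output to new files in scalable storage.
prepared.write.mode("overwrite").parquet(
    "abfss://curated@example.dfs.core.windows.net/customer-totals/"
)
```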