ci_20230921
Quiz by Michael Kirchner
- Q1
Meltdown and Spectre were ...
vulnerabilities in Intel processors that allowed reading contents of the CPU cache that belong to other processes. This was a potential threat to multi-tenant cloud computing setups.
an announcement of Google Cloud (GCP) to discontinue their IoT Core Service, which has gained wide attention in the cloud industry.
a major fire at the French cloud service provider OVH that made cloud services unavailable to customers for an extended period of time.
a short term for the pricing characteristics of cloud service providers (overpaying for pay-on-demand services, also known as "billing meltdown").
60s - Q2
Worldwide regulation prohibits cloud service providers from using the data that customers upload in any way. For example, they are not allowed to train their own machine learning models on data that customers originally provided.
false
true
True or False
60s - Q3
You are operating a workload on AWS and have a batch process that runs once per week. The process takes roughly 30 minutes to complete. It is not CPU-intensive, but it reads some files from S3 and updates RDS database records based on them. The process should not be interrupted, to avoid data consistency problems. What would be a suitable compute solution?
Run on EC2 instances or containers
Run on EC2 spot instances
Run on AWS Lambda
Run on CloudShell
60s - Q4
The Azure offering in the area of serverless functions is called
Azure Mobile
Azure Lambda
Azure Cloud Run
Azure Functions
60s - Q5
Select the statement about serverless functions that is NOT correct
Serverless functions only charge based on the time they actually execute.
Serverless functions execute based on specific events that the cloud service provider passes into them.
Serverless functions are embedded into a multi-tenant environment: functions of multiple customers of the cloud service provider may run on the same underlying hardware.
Serverless functions execute 24/7, but with very little CPU and RAM assignments, so that costs are kept low. If an event comes in, CPU and RAM allocation is quickly dialed up to allow the function to run.
60s - Q6
You run a workload that is backed by AWS Lambda functions. After a Lambda function has calculated a result, you want that result to be cached to speed up the response time in case the very same request comes in again. All instances of the Lambda function running in the backend should benefit from the cache. What is a suitable caching solution for this?
Enable function result caching in the configuration of the Lambda function. If the same kind of request comes in, AWS will automatically return the previously computed response, without invoking your function again.
Cache function results in RAM (global variables you use in the Lambda function code)
Cache function results in the /tmp directory that Lambda functions see
Cache function results in a location that all Lambda function instances can reach over the network, for example RDS, S3, or a Redis cache
60s - Q7
In stream processing, the data volume processed is unknown and potentially infinite.
true
false
True or False
60s - Q8
A stream processing solution basically acts as a queue for data records that data producers have sent into a stream. Once a stream consumer application has read an item from the stream, the item is removed and is no longer visible to other stream consumers.
false
true
True or False
60s - Q9
If a stream consumer application always moves forward by an entire window size, so that the analyzed time periods do not overlap, this approach is called a [...]
Sliding window
Tumbling window
Matching window
Simple window
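The windowing behavior this question describes can be sketched in Python. This is an illustrative example, not tied to any particular stream processing framework: the window always advances by its full size, so the analyzed time periods never overlap and each event falls into exactly one window.

```python
def tumbling_windows(events, window_size):
    """Group (timestamp, value) events into non-overlapping windows.

    The window start is the timestamp rounded down to a multiple of
    window_size, so consecutive windows never overlap.
    """
    windows = {}
    for timestamp, value in events:
        window_start = (timestamp // window_size) * window_size
        windows.setdefault(window_start, []).append(value)
    return windows


# Events at seconds 1, 4, 12, 14 with a 10-second window:
events = [(1, "a"), (4, "b"), (12, "c"), (14, "d")]
# tumbling_windows(events, 10) → {0: ['a', 'b'], 10: ['c', 'd']}
```

A sliding window, by contrast, would advance by less than the window size, so one event could be counted in several overlapping windows.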
60s - Q10
In stream processing, the term "shard" refers to:
An AWS mechanism by which Lambda functions can be automatically triggered when new items are written into a stream.
A way of grouping stream items and of dynamically increasing or decreasing the capacity the overall stream can handle.
An internal organizational detail of a stream processing solution; producers and consumers do not need to know or care about shard information.
A way for consumers to read more items from a stream with a single API call.
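The grouping idea behind shards can be sketched in Python. This is a simplified illustration in the spirit of hash-based shard routing (as in Amazon Kinesis, where a partition key is hashed to select a shard); the function name and keys are made up for the example.

```python
import hashlib


def shard_for(partition_key, shard_count):
    """Map a record's partition key onto one of shard_count shards.

    Records with the same partition key always land on the same shard,
    so per-key ordering is preserved; raising shard_count increases the
    total capacity the stream can handle in parallel.
    """
    digest = hashlib.md5(partition_key.encode()).hexdigest()
    return int(digest, 16) % shard_count


# All records for one device are routed to the same shard,
# while different devices spread across the available shards.
```

Note that real streaming services recompute routing when shards are split or merged, which this fixed modulo scheme does not model.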
60s