Adrian’s top AWS updates — re:Invent 2019 digest
re:Invent is now over, and as always, digesting all the updates takes time and a lot of coffee!
So, let me help you with a curated list of the announcements I found most important: anything related to architecture, scalability, reliability, performance, resiliency, DevOps, or security, plus whatever else caught my eye.
Again, no discrimination intended: I can’t learn, like, or talk about everything :)
The Amazon Builders’ Library
Probably my favorite launch of re:Invent — The Amazon Builders’ Library is a collection of living articles that take readers under the hood of how Amazon architects, releases, and operates the software underpinning Amazon.com and AWS. The Builders’ Library articles are written by Amazon’s senior technical leaders and engineers, covering topics across architecture, software delivery, and operations.
>> If you want to learn how Amazon automates software delivery to achieve over 150 million deployments a year or how Amazon’s engineers implement principles such as shuffle sharding to build resilient systems that are highly available and fault-tolerant, this is for you!
- Avoiding insurmountable queue backlogs
- Challenges with distributed systems
- Going faster with continuous delivery
- Ensuring rollback safety during deployments
- Static stability using availability zones
- Avoiding fallback in distributed systems
- Caching challenges and strategies
- Leader election in distributed systems
- Timeouts, retries, and backoff with jitter
- Using load shedding to avoid overload
- Implementing health checks
- Workload isolation using shuffle-sharding
- Instrumenting distributed systems for operational visibility
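To make the flavor of these articles concrete, here is a minimal Python sketch of the "full jitter" strategy discussed in the timeouts, retries, and backoff article. The function names and defaults are mine, not from the article:

```python
import random
import time

def backoff_with_jitter(attempt, base=0.1, cap=5.0):
    """'Full jitter': sleep a random duration between 0 and the
    capped exponential backoff for this attempt."""
    return random.uniform(0, min(cap, base * (2 ** attempt)))

def call_with_retries(operation, max_attempts=5):
    """Run `operation`, retrying on any exception with jittered backoff."""
    for attempt in range(max_attempts):
        try:
            return operation()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # retries exhausted; surface the last error
            time.sleep(backoff_with_jitter(attempt))
```

The point of the jitter is that retries from many clients spread out over time instead of synchronizing into waves of load, which is exactly the kind of failure mode these articles dissect.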
HTTP APIs for Amazon API Gateway (preview)
HTTP APIs for Amazon API Gateway enables customers to quickly build high-performance RESTful APIs that are up to 71% cheaper than the standard REST APIs from API Gateway. HTTP APIs are optimized for building APIs that proxy to AWS Lambda functions or HTTP backends, making them ideal for serverless workloads.
>> In other words, you get the core features of API Gateway at a lower price along with improved developer experience. I can’t stress how good that is! Cheaper, better, easier :-)
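To give a sense of how little is needed, here is a hypothetical "quick create" request that fronts a Lambda function with an HTTP API; the fields mirror the ApiGatewayV2 CreateApi call, and the name and ARN are placeholders of mine:

```python
# Hypothetical example; the API name and Lambda ARN are placeholders.
http_api_request = {
    "Name": "my-http-api",
    "ProtocolType": "HTTP",  # HTTP API, as opposed to REST or WEBSOCKET
    # "Quick create": route all requests straight to this Lambda function.
    "Target": "arn:aws:lambda:us-east-1:123456789012:function:my-handler",
}
# With boto3: boto3.client("apigatewayv2").create_api(**http_api_request)
```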
Amazon RDS Proxy (preview)
Amazon RDS Proxy acts as an intermediary between your serverless application and an RDS database. RDS Proxy establishes and manages the necessary connection pools to your database so that your application creates fewer database connections, thus improving database efficiency and application scalability. In case of a failure, RDS Proxy automatically connects to a standby database instance while preserving connections from your application and reduces failover times for RDS and Aurora Multi-AZ databases by up to 66%. Finally, with RDS Proxy, database credentials and access can be managed through AWS Secrets Manager and AWS Identity and Access Management (IAM), eliminating the need to embed database credentials in application code.
>> This is simply an excellent feature. If you are using RDS via Lambda, this is a must, as it is always challenging to ensure that Lambda invocations do not overload relational databases with too many connections.
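As a sketch of what setting this up might look like, here is a hypothetical request body for creating a proxy that pulls credentials from Secrets Manager; the fields follow the RDS CreateDBProxy API, and every name and ARN is a placeholder:

```python
# Hypothetical example; every ARN and identifier below is a placeholder.
proxy_request = {
    "DBProxyName": "app-proxy",
    "EngineFamily": "MYSQL",
    "Auth": [{
        "AuthScheme": "SECRETS",  # credentials come from Secrets Manager, not app code
        "SecretArn": "arn:aws:secretsmanager:us-east-1:123456789012:secret:db-creds",
        "IAMAuth": "DISABLED",
    }],
    "RoleArn": "arn:aws:iam::123456789012:role/rds-proxy-role",
    "VpcSubnetIds": ["subnet-aaaa1111", "subnet-bbbb2222"],
    "RequireTLS": True,
}
# With boto3: boto3.client("rds").create_db_proxy(**proxy_request)
```

Your Lambda functions then connect to the proxy endpoint instead of the database endpoint, and the proxy multiplexes those short-lived connections onto a stable pool.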
Provisioned Concurrency for AWS Lambda
Provisioned concurrency is a feature that provides greater control over the performance of serverless applications. Functions using provisioned concurrency execute with consistent start-up latency, making them ideal for building interactive mobile or web backends, latency-sensitive microservices, and synchronously invoked APIs. Instead of waiting for new requests to come in before provisioning the underlying resources required to serve them, AWS Lambda proactively provisions them in advance.
>> This is great for customers that need consistent performance up through the higher percentiles of customer experience. With provisioned concurrency, you don’t have to write boilerplate code to pre-warm your function yourself — Lambda takes care of it for you.
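Enabling it is a single API call; here is a hypothetical request following the Lambda PutProvisionedConcurrencyConfig API (the function name, alias, and count are placeholders of mine):

```python
# Hypothetical example; function name, alias, and count are placeholders.
pc_request = {
    "FunctionName": "checkout-api",
    "Qualifier": "prod",  # applies to an alias or a published version
    "ProvisionedConcurrentExecutions": 50,  # execution environments kept warm
}
# With boto3: boto3.client("lambda").put_provisioned_concurrency_config(**pc_request)
```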
Failure-handling features for AWS Lambda asynchronous invocations
AWS Lambda now supports two new features for processing asynchronous invocations: Maximum Event Age and Maximum Retry Attempts. When you invoke a function asynchronously, Lambda sends the event to a queue. A separate process reads events from the queue and runs your function. These two new features let you control how many times an event is retried and how long it is retained in the queue.
>> More failure-handling features are only a good thing, my friends!
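Both knobs live on the function's event invoke config; a hypothetical request following the Lambda PutFunctionEventInvokeConfig API might look like this (the function name and limits are placeholders of mine):

```python
# Hypothetical example; function name and limits are placeholders.
invoke_config = {
    "FunctionName": "event-processor",
    "MaximumEventAgeInSeconds": 3600,  # discard events older than an hour
    "MaximumRetryAttempts": 1,         # retry once instead of the default twice
}
# With boto3: boto3.client("lambda").put_function_event_invoke_config(**invoke_config)
```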
Amazon ECS Cluster Auto Scaling
ECS Cluster Auto Scaling lets you automatically scale your ECS clusters as needed to meet the resource demands of all tasks and services in your cluster, including scaling to and from zero. Previously, ECS didn’t let you directly manage the scaling of an Auto Scaling group (ASG). Instead, you had to set up scaling policies on your ASG manually outside of ECS, and the metrics available for scaling did not account for the desired task count, only the tasks already running. With ECS Cluster Auto Scaling, the scaling policy of your ASG is managed by ECS through an ECS Capacity Provider. You can configure the Capacity Provider to enable managed scaling of the ASG, reserve excess capacity in the ASG, and manage the termination of instances in the ASG.
>> This will improve the reliability, scalability, and cost of running containerized workloads on ECS — That’s just Bueno!
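As a sketch, here is a hypothetical payload for the ECS CreateCapacityProvider API that wires an ASG into a cluster with managed scaling; the ARN and names are placeholders of mine:

```python
# Hypothetical example; the ASG ARN and all names are placeholders.
capacity_provider = {
    "name": "asg-capacity-provider",
    "autoScalingGroupProvider": {
        "autoScalingGroupArn": (
            "arn:aws:autoscaling:us-east-1:123456789012:autoScalingGroup:"
            "example-uuid:autoScalingGroupName/ecs-asg"
        ),
        "managedScaling": {
            "status": "ENABLED",
            "targetCapacity": 90,  # percent; below 100 keeps spare capacity in reserve
        },
        # Don't terminate instances that still have tasks running on them.
        "managedTerminationProtection": "ENABLED",
    },
}
# With boto3: boto3.client("ecs").create_capacity_provider(**capacity_provider)
```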
AWS Fargate Spot
AWS Fargate now supports Fargate Spot, a new deployment option to run fault-tolerant applications at up to a 70% discount compared to regular Fargate prices. ECS tasks running on Fargate Spot leverage spare compute capacity available in the AWS cloud.
>> Fargate Spot is ideal for fault-tolerant use cases such as big data, CI/CD, and batch processing. So, don’t miss that one and start saving money now!
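Spot capacity is requested through a capacity provider strategy on the task launch; here is a hypothetical ECS RunTask payload that keeps a small on-demand base and spills the rest onto Spot (cluster, task definition, and counts are placeholders of mine):

```python
# Hypothetical example; cluster, task definition, and counts are placeholders.
run_task_request = {
    "cluster": "batch-cluster",
    "taskDefinition": "batch-job:1",
    "count": 10,
    "capacityProviderStrategy": [
        # Keep at least 2 tasks on regular Fargate, then split 1:3 with Spot.
        {"capacityProvider": "FARGATE", "base": 2, "weight": 1},
        {"capacityProvider": "FARGATE_SPOT", "weight": 3},
    ],
}
# With boto3: boto3.client("ecs").run_task(**run_task_request)
```

Because Spot tasks can be interrupted when AWS needs the capacity back, the workload itself still has to tolerate a task disappearing mid-run.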
AWS AppConfig
AppConfig is a new capability within AWS Systems Manager that makes it easy for customers to quickly roll out application configurations across applications hosted on EC2 instances, containers, Lambda functions, mobile apps, IoT devices, and on-premises servers in a validated, controlled, and monitored way. AppConfig gives you the ability to validate an application’s configuration against a schema or pass it through a Lambda function. Adding this validation logic to your application configurations helps ensure your configuration data is syntactically and semantically correct before making it available to your application. The deployment proceeds only if the validation is successful. AppConfig also supports rolling out configuration changes over a defined period while monitoring for errors, and it will roll back the changes if an error occurs, minimizing the impact on end-users.
>> I can’t remember how many outages I have witnessed because of configurations — I stopped counting. All this can stop now — Say hello to Configuration-as-Code, my friends!
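A configuration rollout is kicked off with a single deployment call; here is a hypothetical payload following the AppConfig StartDeployment API (all IDs are placeholders of mine):

```python
# Hypothetical example; all IDs are placeholders.
deployment_request = {
    "ApplicationId": "abc123",
    "EnvironmentId": "def456",
    "ConfigurationProfileId": "ghi789",
    "ConfigurationVersion": "2",
    "DeploymentStrategyId": "jkl012",  # controls roll-out rate, duration, and bake time
}
# With boto3: boto3.client("appconfig").start_deployment(**deployment_request)
```

The deployment strategy is where the safety lives: it defines how gradually the new configuration reaches your fleet and how long AppConfig watches your alarms before calling it done.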
AWS Compute Optimizer
AWS Compute Optimizer is a new machine learning-based recommendation service that makes it easy for you to ensure that you are using optimal compute resources.
Over-provisioning resources can lead to unnecessary infrastructure costs, and under-provisioning can lead to poor application performance. Compute Optimizer delivers easily actionable EC2 instance recommendations so that you can identify optimal EC2 instance types, including those that are part of Auto Scaling groups, for your workloads, without requiring specialized knowledge or investing substantial time and money.
>> I am all for paying less and for optimization, and I am sure you are too, so that’s good! And I am pretty sure it won’t put Corey out of a job :)
Amazon Aurora Global Database
Amazon Aurora Global Database is a single database that spans multiple AWS Regions, enabling low-latency global reads and disaster recovery from Region-wide outages. You can add as many as five secondary AWS Regions to your global cluster, expanding the reach of your database worldwide.
Aurora Global Database replicates writes from the primary AWS Region to the secondary Regions with a typical latency of under 1 second. In a disaster-recovery situation, you can promote a secondary Region to become the new primary in under a minute.
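As a sketch, a global cluster is created around an existing regional Aurora cluster; here is a hypothetical payload following the RDS CreateGlobalCluster API, with placeholder identifiers of mine:

```python
# Hypothetical example; cluster identifiers are placeholders.
global_cluster_request = {
    "GlobalClusterIdentifier": "orders-global",
    # Use an existing regional Aurora cluster as the primary of the global one.
    "SourceDBClusterIdentifier": "arn:aws:rds:us-east-1:123456789012:cluster:orders",
}
# With boto3: boto3.client("rds").create_global_cluster(**global_cluster_request)
```

Secondary Regions are then attached by creating read-only clusters in those Regions that reference the same global cluster identifier.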
Run Kubernetes Pods Using Amazon EKS and AWS Fargate
You can now use Amazon Elastic Kubernetes Service (EKS), the managed service that makes it easy to run Kubernetes on AWS, to run Kubernetes pods on AWS Fargate. With Fargate, Kubernetes pods run with just the compute capacity they request, and each pod runs in its own VM-isolated environment without sharing resources with other pods. You only pay for the pods you run, while they run, improving the utilization and cost-efficiency of your apps without any additional work.
>> This is a big deal and a long-awaited feature! Who wants to manage Kubernetes worker nodes? I know I don’t.
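Which pods land on Fargate is decided by a Fargate profile on the cluster; here is a hypothetical payload following the EKS CreateFargateProfile API (names, ARN, and subnets are placeholders of mine):

```python
# Hypothetical example; names, ARN, and subnets are placeholders.
fargate_profile = {
    "clusterName": "prod-cluster",
    "fargateProfileName": "serverless-pods",
    "podExecutionRoleArn": "arn:aws:iam::123456789012:role/eks-fargate-pod-role",
    "subnets": ["subnet-aaaa1111", "subnet-bbbb2222"],
    "selectors": [
        # Pods in this namespace carrying this label are scheduled onto Fargate.
        {"namespace": "default", "labels": {"compute": "fargate"}},
    ],
}
# With boto3: boto3.client("eks").create_fargate_profile(**fargate_profile)
```

Pods that don't match any selector keep running on your regular worker nodes, so you can mix Fargate and EC2 capacity in one cluster.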
Amazon Managed Apache Cassandra Service (preview)
Amazon Managed Apache Cassandra Service enables you to run Cassandra workloads in the AWS Cloud using the same Cassandra application code, Apache 2.0–licensed drivers, and tools that you use today.
With Managed Cassandra Service, you don’t have to provision, patch, or manage servers, and you don’t have to install, maintain, or operate the software. Tables can scale up and down automatically based on actual request traffic, with virtually unlimited throughput and storage. Amazon Managed Cassandra Service provides consistent, single-digit-millisecond performance at any scale. Tables are encrypted by default, and data is replicated across multiple AWS Availability Zones for durability and high availability.
>> I used Cassandra in the past and loved it, but hated managing it at scale. Problem solved! I am pretty sure this is going to be a top-rated service.
IAM Access Analyzer
IAM Access Analyzer is a new feature that makes it simple for security teams and administrators to check that their policies provide only the intended access to resources. Customers can enable IAM Access Analyzer across their account to continuously analyze permissions granted using policies associated with their Amazon S3 buckets, AWS KMS keys, Amazon SQS queues, AWS IAM roles, and AWS Lambda functions.
IAM Access Analyzer continuously monitors policies for changes, meaning customers no longer need to rely on periodic manual checks to catch issues as policies are added or updated.
>> This is such a big deal for security and governance teams! I have no idea why it is not all over the place.
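Turning it on for an account is one call; here is a hypothetical payload following the Access Analyzer CreateAnalyzer API (the analyzer name is a placeholder of mine):

```python
# Hypothetical example; the analyzer name is a placeholder.
analyzer_request = {
    "analyzerName": "account-analyzer",
    "type": "ACCOUNT",  # analyze resources in this account for external access
}
# With boto3: boto3.client("accessanalyzer").create_analyzer(**analyzer_request)
```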