AWS re:Invent 2020 digest — Part 1

Curated list of my favorite AWS updates from re:Invent 2020

re:Invent has only just started, and the first keynote, from Andy Jassy, already packed in a lot of new launches. I know that digesting all the updates takes time and a lot of coffee, so let me help you.

What follows is a curated list of the announcements I found most important: anything related to architecture, scalability, reliability, performance, resiliency, DevOps, and security that caught my eye, and that I hope will catch yours too.

Amazon S3 now delivers strong read-after-write consistency automatically for all applications

This is hands-down my favorite launch!

Amazon S3 now delivers strong read-after-write consistency automatically for all applications for any storage request, without changes to performance or availability, without sacrificing regional isolation for applications, and at no additional cost.

OK — but what does strong read-after-write consistency mean?

After successfully writing a new object or overwriting an existing one, any subsequent read request immediately receives the object’s latest version. S3 also provides strong consistency for list operations, so after a write, you can immediately list the objects in a bucket and see the changes reflected.
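In practice, this means a GET or LIST issued right after a successful PUT sees the new data, with no eventual-consistency window to work around. A minimal AWS CLI sketch (bucket and key names are placeholders, not from the announcement):

```shell
# Write (or overwrite) an object; bucket and key are hypothetical placeholders.
aws s3api put-object --bucket my-bucket --key reports/latest.csv --body latest.csv

# With strong read-after-write consistency, this GET immediately
# returns the version just written.
aws s3api get-object --bucket my-bucket --key reports/latest.csv /tmp/latest.csv

# LIST operations are strongly consistent too: the new key shows up right away.
aws s3api list-objects-v2 --bucket my-bucket --prefix reports/
```

Before this launch, code like the above had to tolerate a stale GET or a listing that did not yet include the new key.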

To learn more, hear the GM of Amazon S3, Kevin Miller, and Ashish Gandhi of Dropbox discuss the benefits of strong consistency for S3.


Continuing with S3, there were a couple more updates that you might find useful for your DR or multi-region strategy:

AWS Lambda now supports container images as a packaging format

This one is interesting, even controversial, because I know some of the serverless purists out there are feeling betrayed :) But to me, it is a testament to AWS’s obsession with listening to customers. And customers wanted this.

You can now package and deploy AWS Lambda functions as a container image of up to 10 GB.

This means that you can now build Lambda-based applications using your familiar container tooling and workflows, with either a set of AWS base images for Lambda or your preferred community or enterprise base images.

If you are familiar with container development tools such as the Docker CLI, you can build and test your Lambda-based application locally and push your container image to Amazon ECR. You can then deploy your Lambda function by specifying the Amazon ECR image tag or digest from the repository.
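The workflow above can be sketched end to end with the Docker and AWS CLIs. All names here are placeholders I made up for illustration (the account ID, repository, function name, and IAM role):

```shell
# Build the image locally from a Dockerfile based on an AWS Lambda base image.
docker build -t my-function:latest .

# Authenticate Docker against a private Amazon ECR registry and push the image.
aws ecr get-login-password --region us-east-1 \
  | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
docker tag my-function:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-function:latest
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-function:latest

# Create the Lambda function from the container image (up to 10 GB).
aws lambda create-function \
  --function-name my-function \
  --package-type Image \
  --code ImageUri=123456789012.dkr.ecr.us-east-1.amazonaws.com/my-function:latest \
  --role arn:aws:iam::123456789012:role/my-lambda-role
```

The key difference from a ZIP-based function is `--package-type Image` together with an `ImageUri` pointing at ECR instead of a code archive.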

And by the way, Amazon ECR just launched Amazon ECR Public, a fully managed registry that makes it easy for developers to share container software publicly, for anyone in the world to download.
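For example, the AWS base images for Lambda are published on ECR Public, so they can be pulled without any AWS credentials (the Python 3.8 tag is just one of the published runtimes):

```shell
# Pull an AWS-provided Lambda base image from the public registry;
# no AWS account or authentication is required for public pulls.
docker pull public.ecr.aws/lambda/python:3.8
```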

For more information and a deep dive on container image support for Lambda, please read this very detailed post from Danilo.

Babelfish for Amazon Aurora PostgreSQL is Available for Preview

Babelfish for Amazon Aurora is a new translation layer for Amazon Aurora that enables Aurora to understand queries from applications written for Microsoft SQL Server.

By using Babelfish, your applications running on SQL Server can now run directly on Aurora PostgreSQL with little to no code changes. Babelfish understands the SQL Server wire protocol and T-SQL, Microsoft SQL Server’s query language, so you don’t have to switch database drivers or rewrite all of your application queries.
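Concretely, because Babelfish speaks the SQL Server wire protocol, an existing SQL Server client such as sqlcmd should be able to point at the Aurora cluster and keep issuing T-SQL. A hedged sketch, in which the endpoint, credentials, and query are all placeholders of mine rather than anything from the announcement:

```shell
# Connect with the standard SQL Server client to a hypothetical Aurora
# cluster endpoint with Babelfish enabled, on the usual TDS port (1433),
# and run a T-SQL query (TOP is T-SQL syntax, not standard PostgreSQL).
sqlcmd -S my-cluster.cluster-abc123.us-east-1.rds.amazonaws.com,1433 \
  -U admin -P 'my-password' \
  -Q "SELECT TOP 5 name FROM sys.tables;"
```

The point is that neither the driver nor the query changes; only the endpoint does.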

This announcement is huge for many of our customers!

And by the way, AWS is open-sourcing Babelfish in 2021. Until then, you can use Babelfish on Amazon Aurora in a preview to see how it works and to get a sense of whether this is the right approach for you.

Here is a full write-up of the launch by Matt Asay!

Introducing the next version of Amazon Aurora Serverless in preview

No secrets here — I love Amazon Aurora, so I am biased.

For those who don’t know, Amazon Aurora is a MySQL- and PostgreSQL-compatible relational database built for the cloud.

Amazon Aurora Serverless is, as the name implies, the serverless version of Aurora. AWS is now releasing version 2, with support for the full breadth of Aurora features, including Global Database, Multi-AZ deployments, and read replicas.

Amazon Aurora Serverless v2, currently in preview, scales instantly from hundreds to hundreds of thousands of transactions in a fraction of a second. As Aurora Serverless scales, it adjusts its capacity in fine-grained increments to provide just the right amount of database resources that the application needs. There is no database capacity for you to manage; you pay only for the capacity your application consumes.

Note: Aurora Serverless v2 (Preview) is currently available in preview for Aurora with MySQL compatibility.

Amazon EKS adds support for EC2 Spot Instances in managed node groups

First, for those who don’t know, a Spot Instance is an unused EC2 instance that is available for less than the On-Demand price, often at steep discounts, which lets you lower your EC2 bill significantly. Amazon EC2 sets each instance type’s Spot price in each Availability Zone and adjusts it gradually based on the long-term supply of and demand for Spot Instances.
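You can see how the Spot price for a given instance type has moved using the EC2 API; the instance type and Availability Zone below are just examples:

```shell
# Show the five most recent Spot prices for one instance type
# in one Availability Zone, for Linux/UNIX workloads.
aws ec2 describe-spot-price-history \
  --instance-types m5.large \
  --availability-zone us-east-1a \
  --product-descriptions "Linux/UNIX" \
  --max-items 5
```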

Second, Amazon EKS is a managed service that makes it easy for you to run Kubernetes on AWS.

And now, Amazon EKS supports creating and managing Amazon EC2 Spot Instances using Amazon EKS managed node groups. This lets you take advantage of the steep savings and scale that Spot Instances provide.

Until now, Amazon EKS customers had to configure Amazon EC2 Auto Scaling groups manually, manage graceful draining of Spot nodes, and upgrade the Spot nodes to the latest Kubernetes versions. With managed node groups, customers get native support for Spot Instances.
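With managed node groups, requesting Spot capacity comes down to a single flag on node group creation. A sketch, in which the cluster name, node group name, subnets, and IAM role ARN are all hypothetical:

```shell
# Create a managed node group backed by Spot Instances.
# --capacity-type SPOT tells EKS to provision Spot capacity and handle
# graceful draining of interrupted nodes; diversifying across several
# instance types improves the odds of obtaining Spot capacity.
aws eks create-nodegroup \
  --cluster-name my-cluster \
  --nodegroup-name spot-workers \
  --capacity-type SPOT \
  --instance-types m5.large m5a.large m4.large \
  --subnets subnet-0abc1234 subnet-0def5678 \
  --node-role arn:aws:iam::123456789012:role/my-eks-node-role
```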

Speaking of EKS, the new Amazon EKS Distro, an open-source Kubernetes distribution used by Amazon EKS, was launched too!

Principal, EC2 Core @awscloud ☁️ I break stuff .. mostly. Opinions here are my own.