How AWS re:Invent 2018 made our lives easier, and will make yours easier too! Seven takeaways from real customer use cases.

Building apps on AWS will become easier. I think…

This year’s re:Invent was all about one theme: an easy button for builders. A ton of announcements centered on this key theme.

Here are seven areas that simplified our lives, with features in action based on actual customer use cases. It’s about time, Amazon! #WhySoLate?

#1 AWS Well-Architected Tool

Over the years, AWS (and partners like us) have carefully curated best practices and lessons learned from integrating various services into a stack, collectively called the Well-Architected Framework. These were captured in whitepapers and shared with partners and customers. Customers would typically sit through a review with an APN partner (lucky us!) or with an AWS Solutions Architect to answer questions mapped to the framework’s pillars, understand areas of improvement, and develop an action plan. That process was usually an afterthought and a point-in-time view. It looks like AWS was doing so many Well-Architected Reviews (“WARs”) that they decided to automate it and create a tool. Now you can run a review yourself, at any point in the development process, using the AWS Well-Architected Tool. No need to schedule time with an AWS SA or an APN Partner.

Reviews can happen frequently and in a frictionless, self-service manner. We don’t have to fight for these anymore. Let the power be with the builder.

#2 AWS Lake Formation

Until now, you had to bring your own toolset to build a data lake. We recently helped a customer prep data for their lake, define processing patterns, create a visualization design, and develop a POC. It is a lot of work. What is unique about AWS Lake Formation is the speed at which we can import, process, query, and visualize the data. What used to take 6-8 months can be stood up in as little as three sprints.

The fastest way to a data lake MVP. Time for our data lake team to become more creative.
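One piece of that speed is the centralized permission model. Here is a minimal sketch of the kind of fine-grained grant Lake Formation manages for you, expressed as the parameters you would pass to the boto3 `lakeformation` client’s `grant_permissions` call in a real account. The role, database, and table names are hypothetical, and the service was still in preview at announcement:

```python
# Hypothetical parameters for a Lake Formation grant.
# Real call: boto3.client("lakeformation").grant_permissions(**grant_params)
grant_params = {
    # Who gets access (illustrative IAM role):
    "Principal": {
        "DataLakePrincipalIdentifier": "arn:aws:iam::123456789012:role/analyst"
    },
    # What they get access to: a single Glue catalog table,
    # instead of a sprawl of S3 bucket policies.
    "Resource": {
        "Table": {"DatabaseName": "sales", "Name": "transactions"}
    },
    # Table-level SELECT only; no write or admin rights.
    "Permissions": ["SELECT"],
}
```

The point of the sketch: access is granted per table (or even per column), in one place, rather than hand-maintained across bucket policies and IAM documents.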

#3 Amazon Quantum Ledger Database (QLDB) and Managed Blockchain

Recently, we experimented with Hyperledger Fabric for a customer. The goal was to create a Distributed Ledger Technology (DLT) solution for their distributed operation needs. Setting up Fabric was easy; making it functional, resilient, scalable, and performant was not. It reminded me of building a complex piece of furniture from Ikea: all the pieces are there, but if you miss a step, you are screwed. Our experiment was a success, though it was not easy. With Amazon Managed Blockchain for Fabric networks, and QLDB when a central, trusted ledger is enough, Amazon has signed up to do all the heavy lifting.

AWS has made standing up a managed ledger as simple as spinning up an RDS instance.
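To illustrate the contrast with our Fabric experiment, here is a minimal sketch of creating a QLDB ledger, expressed as the parameters for the boto3 `qldb` client’s `create_ledger` call; the ledger name is hypothetical:

```python
# Hypothetical parameters for creating a QLDB ledger.
# Real call: boto3.client("qldb").create_ledger(**ledger_params)
ledger_params = {
    "Name": "supply-chain-ledger",   # illustrative ledger name
    "PermissionsMode": "ALLOW_ALL",  # coarse-grained access, fine for a first experiment
    "DeletionProtection": True,      # guard against accidental deletes
}
# Compare with the Fabric setup this replaces: no peers, orderers,
# channels, or chaincode packaging to assemble -- one API call.
```

No assembly instructions, no missing steps: the journal, cryptographic verification, and scaling are the service’s problem, not yours.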

#4 DynamoDB On-Demand

If your workloads have high and spiky throughput, this feature will make your day. While working on a problem for a customer using DynamoDB, we were faced with two bad options: pre-provision read and write capacity (implying higher cost), or engineer workarounds for the blips caused by autoscaling. We picked the latter. With DynamoDB on-demand, you no longer have to worry about engineering around spikes and autoscaling blips.

DynamoDB becomes truly serverless. Our SAs are drooling over this!
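As a sketch of what this looks like in practice (table and attribute names are illustrative), on-demand is a single billing-mode switch at table creation, expressed here as the parameters for the boto3 `dynamodb` client’s `create_table` call:

```python
# Hypothetical table definition using on-demand billing.
# Real call: boto3.client("dynamodb").create_table(**table_params)
table_params = {
    "TableName": "orders",
    "AttributeDefinitions": [
        {"AttributeName": "order_id", "AttributeType": "S"}
    ],
    "KeySchema": [
        {"AttributeName": "order_id", "KeyType": "HASH"}
    ],
    # PAY_PER_REQUEST replaces ProvisionedThroughput entirely:
    # no pre-provisioned RCUs/WCUs, no autoscaling lag to engineer around.
    "BillingMode": "PAY_PER_REQUEST",
}
```

Note what is absent: there is no `ProvisionedThroughput` block at all, which is exactly the capacity-planning exercise (and the workarounds) this removes.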

#5 Lambda behind ALB

A use case we worked on uses Lambda pretty extensively. As a startup, your main goal in life is to stretch every dollar. One of our customers needs to expose APIs (deployed as Lambda functions), currently served out of a custom API Gateway solution. Fronting the Lambda fleet with API Gateway did not help the cost-optimization cause. We are really excited to use Application Load Balancers (ALBs) to invoke Lambda functions to serve HTTP requests (and to try all the other new Lambda features).

Lambda becomes mainstream with serious enterprise appeal.
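Here is a minimal, locally runnable sketch of a Lambda handler behind an ALB target group. The ALB invokes the function with an HTTP-shaped event (method, path, query string), and the response must carry `statusCode`, `headers`, `body`, and `isBase64Encoded`; the handler logic and the trimmed-down sample event are illustrative:

```python
import json

def handler(event, context):
    """Respond to an HTTP request forwarded by an ALB target group."""
    # queryStringParameters may be absent or None on requests without a query string.
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    # ALB requires statusCode, isBase64Encoded, headers, and body in the response.
    return {
        "statusCode": 200,
        "isBase64Encoded": False,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello {name}"}),
    }

# Local smoke test with a trimmed-down version of an ALB event.
event = {
    "httpMethod": "GET",
    "path": "/hello",
    "queryStringParameters": {"name": "builder"},
}
response = handler(event, None)
print(response["statusCode"])  # → 200
```

The appeal for the cost-conscious: the ALB you likely already run for the rest of the stack invokes the function directly, with no separate gateway tier to pay for.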

#6 S3 Intelligent-Tiering

At our largest customer, we regularly push the limits of S3. From high throughput and extreme parallelization to millisecond response times for 99.9% of millions of transactions, we have fallen in love with all things S3. (@mailant, thanks for taking us seriously and continuously evolving the service.) With this new announcement, our largest customer no longer needs to worry about object tagging and lifecycle policies for infrequently accessed data. We are talking about billions of objects a day, which translates into huge cost savings with Intelligent-Tiering.

S3 tiering goes self-driving.
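As a sketch (bucket and key are hypothetical), opting in is just a storage class on upload, expressed here as the parameters for the boto3 `s3` client’s `put_object` call:

```python
# Hypothetical upload straight into the new storage class.
# Real call: boto3.client("s3").put_object(**put_params)
put_params = {
    "Bucket": "analytics-raw",                      # illustrative bucket
    "Key": "events/2018/12/01/part-0000.json",      # illustrative key
    "Body": b"{}",
    # S3 monitors access patterns and moves each object between the
    # frequent and infrequent tiers on its own -- no object tags to
    # apply, no lifecycle rules to maintain at billions-of-objects scale.
    "StorageClass": "INTELLIGENT_TIERING",
}
```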

#7 Aurora Global Database

One thing that repeatedly comes up when we work with our enterprise customers is immunity to region failure. With active migration to Aurora (from Oracle), the key question is how to get cross-region RPO down to minutes. This has become such a common ask that our most popular blueprint is the one that lays out the options and solutions for this problem. With Aurora Global Database, a single database spanning regions with typically under one second of replication lag between them solves most of these multi-region problems. (@awgupta: you just made our SAs’ lives so much more interesting.)

When can we get this for Aurora Postgres?
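A minimal sketch of promoting an existing Aurora cluster into a global database, expressed as the parameters for the boto3 `rds` client’s `create_global_cluster` call (a secondary cluster in the DR region is then added to the global cluster); all identifiers are hypothetical:

```python
# Hypothetical parameters for creating an Aurora global database from
# an existing primary cluster.
# Real call: boto3.client("rds").create_global_cluster(**global_params)
global_params = {
    "GlobalClusterIdentifier": "orders-global",
    # ARN of the existing regional cluster that becomes the primary:
    "SourceDBClusterIdentifier":
        "arn:aws:rds:us-east-1:123456789012:cluster:orders-primary",
    "Engine": "aurora",  # MySQL-compatible Aurora only at launch
}
```

Compare that with the blueprint it replaces: binlog replication plumbing, monitoring, and failover runbooks, all hand-built per customer.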

How Can Terafuze Help?

We help our customers deliver on the promise of cloud. Recently, we have worked on some really interesting problems that push the limits of AWS. We have helped Fortune 50 companies adopt Cloud 2.0 patterns. Let us help you realize the true benefits of an integrated cloud strategy. Contact us today to run your current challenge by one of our Solution Architects for free. Let’s compare notes and build something together.