Serverless:
the next major shift in cloud computing
Doug Vanderweide
@dougvdotcom
doug@linuxacademy.com
I am Doug Vanderweide
MCSE, MCSD, CTT+
20+ years in software architecture,
development and DevOps
Azure SME and instructor for
LinuxAcademy.com
@dougvdotcom
linkedin.com/in/dougvdotcom
HELLO!
Containers are the hot new thing
And microservices are what makes them so great
1.
TRADITIONAL N-TIER WEB APP
MICROSERVICES
» Break work into
smaller steps
» Create APIs to handle
each of these smaller
steps
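The split above can be sketched as a pair of tiny handlers behind a dispatch table. This is a minimal illustration only; the route paths and handler names are made up for the example, not taken from the deck:

```python
# Two small, focused handlers -- each one is a candidate microservice.

def reserve_room(payload):
    """Hypothetical reservations service: one small step, one clear job."""
    return {"status": "reserved", "room": payload["room"]}

def subscribe_email(payload):
    """Hypothetical mailing-list service, independent of the others."""
    return {"status": "subscribed", "email": payload["email"]}

# Each handler sits behind its own API route; a gateway dispatches by path.
ROUTES = {
    "/reservations": reserve_room,
    "/subscriptions": subscribe_email,
}

def dispatch(path, payload):
    handler = ROUTES.get(path)
    if handler is None:
        return {"status": "error", "code": 404}
    return handler(payload)
```

Because each handler owns one step, any of them can be replaced, scaled, or redeployed without touching the others.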
MICROSERVICES ARCHITECTURE
MICROSERVICE BENEFITS
» Manage functionality independently
» Streamlines development
» Makes code reusable among solutions
» Works best when it runs in small, virtualized
environments like containers
“You simply pack your code and its dependencies into a
container that can then run anywhere — and because
they are usually pretty small, you can pack lots of
containers onto a single computer.
-- TechCrunch
CONTAINERS AND MICROSERVICES
THE BENEFITS OF CONTAINERS
» Fast deployment
» Cheap to run
» Quick to scale
» Automated
» Versioning
» Easily orchestrated
CONTAINERS HAVE THEIR PROBLEMS
» Rapidly changing ecosystem
» Difficult security management
» Sprawl
» Broken dependencies
» Networking difficulties
Serverless is the future
And for even better reasons
2.
WHAT IS SERVERLESS?
Yes, there's
a server!
But you don't manage it
-- you just create code
that will run on it.
WHAT IS SERVERLESS COMPUTING
» Anonymous, generalized virtual machine instances
» Completely managed by the cloud provider
» Provisioned when you need them, deprovisioned
when you're done
» Billed based on executions and resource
consumption, not an hourly rate
SERVERLESS IS MADE FOR MICROSERVICES
» Microservices != monoliths
» Focus on triggers, inputs and outputs
» Scale fast to demand
» Highly available
FUNCTION WORKFLOW
SERVERLESS FUNCTION FEATURES
» Base OS (Linux, Windows) with a generalized config
» Supports any code written in a given language:
Node.js, Python, .NET Core, Java, etc.
» Provider can quickly provision these instances
because they're all the same
» Instance started > code retrieved > code executed >
instance deprovisioned
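That lifecycle can be sketched in a few lines. The handler below follows the common AWS Lambda convention of an `(event, context)` signature; `simulate_invocation` is a stand-in for what the platform does, not a real API:

```python
# Your code: one function, triggered by an event.
def handler(event, context):
    # Parse the trigger's input...
    name = event.get("name", "world")
    # ...do one small unit of work, then return the output.
    return {"statusCode": 200, "body": f"Hello, {name}!"}

# Roughly what the provider does around it (illustrative, not a real API):
def simulate_invocation(event):
    # 1. generic instance started from a pre-built image
    # 2. your code retrieved from storage and loaded
    # 3. your code executed against the event
    response = handler(event, context=None)
    # 4. instance kept warm briefly, then deprovisioned
    return response
```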
PRICING DIFFERENCES
» VMs: Pay per CPU core, memory use, disk storage,
software fees
» Containers: Also pay for VM use, but pack more work
into the same VM
» Serverless: Pay for the resources you actually use
Serverless is best for TCO
Cheap to run, easy to manage
3.
VM vs FUNCTION PRICING
Azure VM D2v2: $104.16 | AWS VM t2.large: $69.94 | Azure Function: $25.60 | AWS Lambda: $26.67
Assumption: 500,000 executions per month, 4 GB-seconds for each execution
VM vs FUNCTION PRICING
Azure VM A4m v2: $220.97 | AWS VM t2.2xlarge: $279.74 | Azure Function: $121.80 | AWS Lambda: $129.86
Assumption: 2 million executions per month, 4 GB-seconds for each execution
VM vs FUNCTION PRICING
Azure VM A1v2: $31.99 | AWS VM t2.small: $17.12 | Azure Function: FREE | AWS Lambda: FREE
Assumption: 20,000 executions per month, 512 MB-seconds for each execution
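The Lambda figures above can be roughly reproduced from AWS's published rates at the time: $0.20 per million requests and $0.0000166667 per GB-second, with 1 million requests and 400,000 GB-seconds free each month. Treat those rates as assumptions and check the current pricing pages:

```python
def lambda_monthly_cost(executions, gb_seconds_each,
                        request_rate=0.20 / 1_000_000,   # per request
                        compute_rate=0.0000166667,       # per GB-second
                        free_requests=1_000_000,
                        free_gb_seconds=400_000):
    """Back-of-the-envelope Lambda-style bill: pay only past the free tier."""
    billable_requests = max(0, executions - free_requests)
    billable_compute = max(0, executions * gb_seconds_each - free_gb_seconds)
    return billable_requests * request_rate + billable_compute * compute_rate

# First scenario: 500,000 executions at 4 GB-seconds each.
print(round(lambda_monthly_cost(500_000, 4), 2))  # ≈ 26.67, as in the first table
# Third scenario stays entirely inside the free tier.
print(lambda_monthly_cost(20_000, 0.5))           # 0 -- free
```

Note how the bill tracks actual work done, not provisioned capacity: halve the executions and the bill roughly halves, while a VM's invoice stays flat.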
THE LONG TAIL OF SERVERLESS
» Everything the same = dirt cheap to provide
» Each new instance is effectively profitable
» Only a small number of users need to exceed the free
threshold periodically to turn a major profit
» Long-tail pricing model
SERVERLESS VS VMS/CONTAINERS
» Similar workloads cost
less on functions
» You don't pay for
unused capacity
» No more zombies!
Not quite No Ops, but close
Drastically reduce lead times and staffing requirements
NO OPS
» Automation, abstraction and cloud vendor services
eliminate several DevOps tasks (and positions)
» Sprint-based develop, build, test and deploy cycles
» Focus is shifted to rapid development
» Continuous integration / deployment
SERVERLESS IS HIGHLY AVAILABLE AND SCALABLE
» Downtime from bad code is contained by microservice isolation
» Functions scale automatically and quickly
» High availability is built in
» Regional outage is the only real threat
SERVERLESS BENEFITS: RECAP
» Lower real infrastructure costs
» Easier SDLC via modular workloads/microservices
» No servers to manage
» Faster deployment via CI/CD/automation
» HA/DR built in
» Usual cloud-based business continuity strategies
Sold on serverless
They believe!
4.
“With AWS Lambda, we eliminate the need to worry
about operations. We just write code, deploy it, and it
scales infinitely; no one really has to deal with
infrastructure management. The size of our team is half
of what is normally needed to build and operate a site
of this scale.
-- Tyler Love, CTO, Bustle
“In 5 years, every modern business will have a
substantial portion of their systems running in the cloud.
But that’s only the first step.
The next step comes when you free your developers
from the tedious work of configuring and deploying
even virtual cloud-based servers.
-- Greg DeMichillie, Head of Developer Platform and
Infrastructure, Adobe
WORKFLOWS FOR THE MASSES
» What if everyone could program?
» Microservices are the building blocks of workflows
» AI/big data are already tackling semantics
» Orchestrate your vision, yourself
“The combination of multi-device, AI
everywhere and serverless computing is
driving this new era of intelligent cloud and
intelligent edge.
-- Microsoft
Serverless isn't for everything
A wholesale change with big up-front costs
5.
BIG UP-FRONT COSTS
» Microservices mean
rebuilding workloads
» Huge up-front costs
» Requires revisiting
existing partnerships
» N-tier ports well to
containers
NOT FOR EVERYTHING
» Small, non-scaling workloads
» Solutions that depend on the environment/many
services
» Massive, constant computing power requirements
WEAKNESSES OF SERVERLESS
» Laggy cold starts for infrequently run code
» Lag/drops in microservice communication
» Immature technology
» Somewhat wedded to the vendor
» Restrictions in code you can run
» Somewhat limited library access
» Event-input-output model might not work
SUMMARY
» Serverless is the next wave in cloud computing
» Huge time and cost savings, low TCO
» Significant benefits to cloud vendors
» Built-in HA/DR, business continuity is simple
» Fast deployment and sensible architecture
» But it's not for every workload
LINUX ACADEMY IS HIRING!
» DevOps pros wanted to teach with passion for
student success
» AWS, CloudFoundry and Chef are immediate needs
» 100 percent remote if you want
» Generous pay, great benefits, bonuses, training &
conferences
» See me or stop by Booth 416
CREDITS
Special thanks to all the people who made and released these awesome
resources for free:
» Presentation template by SlidesCarnival
» Photographs by Unsplash and Pixabay
Any questions?
You can find me at:
» @dougvdotcom
» doug@linuxacademy.com
» linkedin.com/in/dougvdotcom
THANKS!


Editor's Notes

  • #2: Welcome! Today, we’re going to talk about the next big innovation in cloud computing: Serverless.
  • #3: First, about me. I'm Doug Vanderweide and I am a Microsoft Certified Solutions Expert: Cloud Platform and Infrastructure, Microsoft Certified Solutions Developer: Azure Solutions Architect, and a CompTIA CTT+ Certified Technical Trainer. I have 20+ years experience in software architecture, development and DevOps. My current role is teaching the Microsoft Azure platform to students and enterprises at LinuxAcademy.com You can find me on Twitter and LinkedIn through the username dougvdotcom
  • #4: Seems like these days, every time someone talks about cloud technology, they're talking about containers. I'm sure you're going to hear a lot about them over the course of Cloud Expo. And that's great! Because containers are, without question, a nearly perfect fix to the challenges around devops.
  • #5: Let me illustrate what a serverless solution might look like and how it’s different from traditional and containerized applications. First, let’s consider a traditional n-tier application. You have a Web server farm on the front end, and a series of databases on the back end. In the middle is your middleware, with application servers that process the inbound web requests and work with your back-end data store. In this model, each of your web and app servers is an actual bare-metal server or virtual machine. But we can replicate this same pattern to the cloud, fairly easily, using containers.
  • #6: The next step is to use a microservice, to break apart these monoliths. In this pattern, rather than having servers that render all our user interfaces and, for example, handle all of our business objects and logic with a single application, we would create several application programming interfaces, or APIs.
  • #7: Suppose I run a hotel chain. I need public-facing services to allow users to log in to my website, to reserve rooms, to sign up for promotional emails through a third-party mailing list provider, and a general means of delivering web and mobile app pages. I could create an API that would handle user data requests, shown here in white. Rather than having a traditional, server-rendered user interface, we could create single-page web applications and mobile apps that use this API to get the information needed to present it in a way that's appropriate for that platform. That removes entire development chains from my workload: My designers and front-end developers can make their interfaces look nice and perform efficiently, and my back-end team need only focus on creating a single means of providing them both with the same information! We might further break our application up into additional APIs: One that handles user logins. Another that handles reservation requests. A third that leverages a partner API to sign people up for promotional emails. And so on.
  • #8: That would give me complete control over each of these units. I can manage them independently. If I need to completely overhaul the reservation system, I don’t need to worry about how that affects the user interface API or email API or authentication API, because they are separated from that reservations API. This also simplifies development because I can focus on only those features that need improvement, without having to test and QA the entire functionality of my application. And I can reuse these parts, too. I could develop an entirely different solution that needs authentication, and use this authentication API microservice to provide it. Or I could build automated systems that seek out unreserved, soon-to-expire room availability in my reservation API, and sells remaining inventory at a discount. And so on. Each of these smaller components is a microservice. It would be wildly impractical, not to mention expensive, for each of these services to live on their own bare-metal servers, or even their own virtual machines. But because I can size containers to the work they need to do, I can deliver this kind of architecture via containers, reduce my infrastructure footprint and lower costs significantly.
  • #9: What's not to love about containers from the business viewpoint? Containers are great because they let us package up everything -- operating system, code, services, all the stuff our application needs to work -- and run them just about anyplace. This not only streamlines the DevOps pipeline significantly, providing a huge cost savings in time and staff, it also gives us options on how to deploy solutions, many of which are now free of specific vendors. Plus, as TechCrunch notes here, we can run several containers on a single host machine, which means we’re wasting far fewer resources and, in turn, money!
  • #10: So I can use containers to power these microservices. That is, my user interface API could be running in Container A, and my reservations API in container B, and my email newsletter API in Container C, and my authentication API in Container D. As you'll learn throughout this conference, containers make all of this highly portable and, if properly built, highly scalable and even fairly resilient to failure.
  • #11: And what's not to love about container technology from the DevOps perspective? You can deploy them quickly. From some base configurations, you can install the software and services you need, configure the environment to support your application, and script the entire process. It doesn't take a team of engineers entire days to get each new environment set up; you script it once and it's done. That makes them relatively inexpensive to operate. Provided your application is properly engineered to scale to demand, scaling with containers is as simple as starting up another instance of your application image. You can even automate this process; your applications can respond to real-time demands and scale to meet that demand. But more importantly, you can automate the build and deployment processes. Containers work great within automated build processes and application lifecycle management. You can version containers easily, making reversion simple and again, simplifying your build process. And all of this is easily orchestrated using open-source or low-cost tools and processes.
  • #12: But there are downsides to containers, not the least of them being that containerization is still a young technology, so there's a lot of change. I don't think you need to worry about Docker or Mesos going away anytime soon, but they could change significantly. And there's anything but consensus on the best way to manage container images within a deployment pipeline. By their nature, containers need to run with elevated permissions on their host machines. This makes them more dangerous if they are successfully exploited than, for example, a single virtual machine being compromised on a host server. It's also easy to wind up with more containers than you need, and orphaned containers that once performed some workload, but are no longer needed and were never cleaned up. There are some studies out there that estimate a quarter to a third of all cloud-based virtual machines are zombies -- and it's not difficult to see containers having the same problem. If anything, because containers are so quick and easy to create, and are designed in large part to handle disposable workloads, the problem could be even greater. Most containers require external assemblies -- that is, "helper" software that enables their primary applications to run. It can be difficult to ensure the repositories where these assemblies live are online when a container is created from an image, and we can run into circumstances where a container's dependence on these assemblies can become broken. Finally, making secure and efficient network connections with certain container technologies, such as Docker, requires a practiced hand.
  • #13: So what if I told you there is a third way to the cloud -- one that eliminates almost all of the problems with traditional infrastructure and most of the problems with containers, and does so for a total cost of ownership that represents a fraction of the costs of either? That's the promise of serverless technology.
  • #14: Let's be clear: Serverless technology doesn't mean there aren't any servers running your code. Of course there are! Instead, the promise of serverless is that you no longer need to manage servers. Instead, you will focus on your code alone -- and the provider will handle the rest.
  • #15: Like all buzzwords, serverless computing has some shifting definitions, depending on to whom you're speaking. But the central concepts are the same among all the vendors: A serverless computing offering provides a generalized and anonymous operating system on which your code will run. We'll get into this more in a moment. These instances are completely managed by the cloud provider. They do all the patching, tuning, service installations, whatever. You just write code that runs on this service, and the cloud provider makes sure that your code will run in that environment. Each of these anonymous, generalized environments is provisioned only when it's needed. And as soon as your code is no longer needed, that environment is deprovisioned. Finally, you pay only for your code's actual use: The number of times your code has been executed, and the amount of computing resources it required.
  • #16: But if I have all these relatively simple tasks, or microservices, that I mix and match into solutions, what sense does it make to continue to deliver them as though they are entire server applications? Sure, a container gives us flexibility in how we create and manage our server-based solutions -- but it's still managing the architecture as though a single instance is managing the entire workflow. That's where serverless technology comes in. Serverless is a new approach to workflows. It says, "There will be some event that invokes my code. And when that event happens, I will likely retrieve input from something, and likely create some sort of output. And that's all I need to concern myself with." Because of this, a serverless application can scale quickly to demand. And serverless is, because of its design, highly available. Let me delve into this with an example.
  • #17: Here’s a typical example of how a serverless function works. In serverless technology, the on-demand running of code is called functions as a service, and they are called that because each of these on-demand executions serves a specific purpose, or function. In this example, we have created a function that handles a web request. First, when the inbound request is received, the cloud provider looks to see if there is an available instance of our function already running. If not, it creates one; but if so, it hands this request off to that available function. The function code then parses the web request, processes it somehow, and returns a response to the requesting client. Typically, the cloud provider will then leave that instance running for a few minutes, to speed up the processing of the next request by eliminating the need to spawn another instance. But after about five minutes or so, if there isn't a second request for the same function, the cloud provider will deprovision the instance.
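  That request-handling workflow can be sketched in code. Below is an illustrative Python handler in the AWS Lambda style for an HTTP-triggered function; the event shape and the greeting logic are my own assumptions for the example, not code from the talk:

```python
import json

def handler(event, context):
    """Minimal HTTP-triggered function: parse the request, process it,
    and return a response. The cloud provider decides whether to reuse
    a warm instance or provision a new one before invoking this code."""
    # Pull a query-string parameter out of the inbound web request.
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    # "Process" the request -- here, just build a greeting.
    body = {"message": f"Hello, {name}!"}
    # Return the response the provider will relay to the client.
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(body),
    }
```

  The provider wires the trigger to the function for you; locally, you can exercise the same code by calling `handler` directly with a test event.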
  • #18: I mentioned earlier that serverless functions are hosted on anonymous, generalized operating system instances. That's the key to how they work. The cloud provider customizes a base operating system -- such as Linux or Windows -- with environment and service settings that are designed to work with a specific set of programming languages, such as Node.js, or Java, or Python, or .NET Core, or the like. The cloud provider can create the instances very quickly because they are all exactly the same. There's nothing special about them; each works exactly the same as any other instance. When you deploy your code, it's saved to the cloud provider's storage service. When your code needs to run, the provider retrieves the code, starts one of these instances, drops your code onto it, and then executes that code. And as we noted before, once the code is done executing, the provider usually leaves the instance running for a little bit, to handle any subsequent requests, but then deprovisions the instance once demand falls off.
  • #19: This methodology leads to a different pricing model than that used for virtual machines or containers. In the case of VMs, you generally pay an hourly fee based on the capacity of the VM in terms of CPU cores and memory. You'll often have software licensing fees bundled in there, too. And you have to pay to store your data and OS disks. The same is generally true of container pricing: You pay for the VMs that host the containers. The difference is, where traditional code on a VM might leave a lot of unused resources, you can pack several containers onto that same VM, to handle multiple workloads for the same price. In the case of serverless functions, you pay for the number of times your code runs, and the amount of computing resources each execution uses.
  • #20: Which brings us to the payoff: What's in it for you? In short, lower actual infrastructure costs, a drastically simplified deployment pipeline, and a total cost of ownership that's a fraction of your current costs.
  • #21: Let's look at the direct hosting costs for functions versus virtual machines. Here, I am going to assume a workload that includes about a half-million executions every month. That's about one request every five seconds, on average. Each execution is going to consume about 4 gigabyte-seconds of memory to get its work done. For the virtual machines, I therefore need to provision at least eight gigabytes of memory each. In Azure, the cheapest option that meets that need is a D3 v2 VM, which costs about one hundred and four dollars per month to operate, if I am running Linux. For AWS, the least expensive EC2 instance is a T2 large, which costs about seventy dollars per month. But I can cut my costs dramatically if I use serverless functions. In the case of Azure, I can cut my actual infrastructure costs to about a quarter. And using Lambda, I can better than halve my costs versus EC2.
  • #22: If I increase the number of executions to 2 million, I get an even better savings. My VM costs significantly increase to meet the 32 GB memory requirement. My cost for serverless also increases, narrowing the percentage savings I realize over VMs; but the actual dollar savings is even better. In the case of Azure, I save nearly one hundred dollars per month by using functions. And in the case of AWS, I save almost one hundred and fifty dollars per month.
  • #23: The savings are also pronounced if I have a smaller workload. Let’s change the assumption to be 20,000 requests per month, or about one request every two minutes, with each request consuming about a half gigabyte-second of memory. In that case, I can use an Azure standard A1 v2 VM for about thirty-two dollars per month, or a t2 small EC2 instance in AWS for about seventeen dollars per month. My cost for Azure functions for that workload? Nothing. Zip. Zilch. Free. And the same is true for Lambda.
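  As a rough check on the cost examples above, here is a sketch of the consumption-based formula both Lambda and Azure Functions bill with: a per-execution fee plus a fee per gigabyte-second of memory-time. The rates and free grants below reflect the Lambda pricing published around the time of this talk (Azure Functions advertised the same figures); treat them as assumptions and verify against the provider's current pricing page:

```python
# Consumption-billing sketch. Rates and free grants are assumptions
# based on 2017 published Lambda pricing -- check current pricing pages.
PRICE_PER_MILLION_EXECUTIONS = 0.20   # USD per 1M requests
PRICE_PER_GB_SECOND = 0.00001667      # USD per GB-second of memory-time
FREE_EXECUTIONS = 1_000_000           # monthly free request grant
FREE_GB_SECONDS = 400_000             # monthly free compute grant

def monthly_cost(executions, gb_seconds_per_execution):
    """Estimated monthly bill after subtracting the free grants."""
    gb_seconds = executions * gb_seconds_per_execution
    billable_execs = max(0, executions - FREE_EXECUTIONS)
    billable_gbs = max(0, gb_seconds - FREE_GB_SECONDS)
    return (billable_execs / 1_000_000 * PRICE_PER_MILLION_EXECUTIONS
            + billable_gbs * PRICE_PER_GB_SECOND)
```

  Plugging in the first workload, `monthly_cost(500_000, 4)` comes to roughly $27 -- about a quarter of the $104 D3 v2 -- while `monthly_cost(20_000, 0.5)` lands entirely inside the free grants and costs nothing.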
  • #24: You're probably asking yourself, "How can AWS and Azure make this service free?" Again, it goes back to the fact that serverless instances are all the same. Because the underlying images are exactly alike, the cloud providers can provision them easily, and each instance actually becomes cheaper to provide than the previous one. So there's huge scale at play. Much as it is with Facebook, in that each new user costs virtually nothing to create and thus, is actually profitable just by virtue of creating the account, the same is true, ultimately, of serverless instances. Each workload costs virtually nothing to deploy, so giving you a large amount of free compute time and instances is profitable, because getting even a minority of users to exceed that free service threshold, just periodically, pays off handsomely. It's a long tail, but one where the profit parabola is very steep. You get a lot of free executions and compute time because it only takes going over those limits a little for the cloud provider to realize a profit.
  • #25: So, we get significant price savings when we go serverless. Regardless of workloads, big or small, it costs less to host the same amount of work on a serverless function versus a virtual machine. And better yet, with serverless, there are no zombies. Because the service provider handles provisioning and deprovisioning automatically, and you only pay for the number of times your code runs and how much compute resource it uses when running, you never pay for server instances that are doing nothing but eating electricity.
  • #26: Which brings us to the practical benefits of serverless computing: If there are no servers to manage, isn't there less work on your end for building and deploying your solutions? There sure is. It's not really NoOps, but your lead times will be drastically reduced, and the overall work-hours per employee needed to get your solution changes pushed to production will be starkly lowered.
  • #27: So what do I mean by No Ops? Again, it's a buzzword, so definitions vary. But I think most people would agree that at the core of No Ops is the idea that we can use automation, service abstraction and vendor-provided services to reduce or eliminate the tasks we traditionally need to perform in DevOps. For example, you may have an agile process today, in which work is assigned to sprints. Every week or two weeks or whatever, your team implements its changes, which are then built and tested. If they test OK, they get deployed. This involves the work of QA engineers, build engineers, systems engineers, the development team, and so on. And it tends to generate a lot of anxiety, especially when builds fail. In No Ops, we incorporate a much faster pace of development. Using continuous integration and continuous deployment, we deploy changes immediately. Automated builds and testing are used to ensure each feature or fix does not break the solution, and if it does, we quickly adjust the change, repeating this automated build and test process until the feature or fix works. It's then pushed immediately to staging and production.
  • #28: This fast pace works because our architecture, through microservices, is intentionally distributed. By treating our application as a combination of several small tasks, we can safely work on each of those small tasks without significant concern about any change in a given microservice breaking a different microservice. That lowers our downtime. Also, because functions are automatically provisioned and deprovisioned to meet demand, applications based on them tend to scale quickly and be highly available. That is, we don't really need to monitor a microservice to see if it's online or meeting a demand spike. If the cloud provider's underlying serverless service is running properly, they are taking care of demand spikes and sick instances. So by definition, there cannot be downtime in a serverless application, unless the cloud provider is experiencing a service outage. Even that we can protect against by deploying the exact same code base to a different region, and using DNS-based routing to ensure failover protection.
  • #29: So to recap the benefits of serverless functions: Actual infrastructure costs are lower. They work through microservices architecture, which is intentionally designed to separate business logic concerns from one another. This makes your software development processes simpler because you can manage your solution's functionality as independent units. Because there are no servers to manage, we can focus solely on code, reducing the staffing and lead times for each code change. In fact, this allows us to adopt a very fast development process focused on automation, abstraction and continuous delivery and continuous integration. And because the cloud provider is handling when instances are provisioned and deprovisioned, we effectively have high availability and disaster recovery built into our serverless-based solution. True, the underlying service could encounter problems, but we can use traditional cloud-based business continuity techniques to manage that circumstance.
  • #30: It's not surprising at all that, given the benefits to cloud provider and customer alike, the big cloud providers, several industry leaders and many startups are all pushing serverless tech.
  • #31: Here's a quote from Tyler Love, the CTO of Bustle, a news and entertainment website. They rebuilt their monolith websites to be powered by AWS Lambda, using a microservices architecture similar to the one I described earlier. Note the key points: No longer does his team focus on operations. They focus on code. It goes into production and it just plain works, regardless of demand. That's led to staffing savings and productivity far beyond what similarly sized teams are managing.
  • #32: And here's Greg DeMicillie, Adobe's DevOps chief, who notes that moving to the cloud is inevitable. But it's also only the beginning. Like him, I believe the days of having to configure and deploy even virtual machines and containers are numbered, because they inhibit productivity and progress. And if there's anything technology does well on the whole, it's eliminating anything that slows or prevents innovation and speed.
  • #33: In fact, I see serverless computing as a stepping stone, itself: One that promises to make everyone a programmer. Because once we've done away with the arcane work of configuring and deploying servers, we can focus on the automation of code itself. Microservices architecture is, of itself, just the chaining together of several small tasks into a workflow. We can combine these tasks together in new ways, based on new inputs, to create all-new solutions to problems. Already, artificial intelligence and intelligent devices -- Siri, Google search and Alexa, for example -- can understand relatively unstructured requests and produce meaningful responses. Why can't these same kinds of services be used to create code, or, at least, intelligent workflows? Why shouldn't creative people who have visions of producing new and important solutions be able to create these workflows themselves, using the power of modern computing?
  • #34: I wish I could take credit for what I just said. But it's largely borrowed from Satya Nadella's keynote presentation at Microsoft Build, the annual developer conference held in May. Nadella unveiled a vision for the future of computing that consists of two realms: The intelligent edge and the intelligent cloud. The intelligent edge is all the devices we have that are connected to the Internet and, usually, are powerful enough to think on their own, too. Not only our computers and smart phones and tablets. But our televisions, our appliances, our cars, our medical monitors, our toys, even our tools. From all the input of all these devices, there is a unifying force: The intelligent cloud. As each of our devices speaks to the cloud about us and our lives, the cloud uses artificial intelligence and big data to derive insights and actions, which it then feeds back to each of our devices on the edge, instructing them on how, and why, to act. In Microsoft's vision, this cloud processing is conducted on serverless platforms, which are infinitely scalable.
  • #35: The combination of multi-device, AI everywhere and serverless computing is driving this new era of intelligent cloud and intelligent edge, Microsoft says. And that's a pretty powerful thought leader.
  • #36: So, now that I've told you serverless is the inevitable robot overlord and your resistance is futile … let's talk about what it can't do, and why virtual machines, containers and the like aren't going away anytime soon.
  • #37: Obviously, changing your monolith to a microservices architecture is no small thing. There are huge up-front costs. And chances are you have partnerships or other obligations and requirements, such as data privacy and sovereignty, that limit or prevent your ability to simply abandon the way you’ve been building software to date. Even if you have a good basic n-tier architecture, the benefits you'd get from simply moving it into containers could vastly outweigh what you'd gain from rebuilding your application as serverless microservices.
  • #38: Also, not every workload is appropriate for microservices. If your task doesn't really need to scale -- if it has a predictable workload -- there's a strong argument against using microservices. That’s especially true if your solution is heavily dependent on other services, or you need highly specialized control over your runtime environment. Don’t try to work around the limitations of serverless runtime environments or compensate for a lot of external requests and responses; just build your solution as a package and deploy it inside a container. The same goes if your software needs massive computing resources running all the time to accomplish its work. Sure, you might save a little money in terms of hosting, but the reliability of constantly provisioned VMs or containers could outweigh those savings, given the code limitations inherent to serverless functions.
  • #39: Which is a good segue to talk about the limitations of serverless solutions. An obvious problem is that because instances are only provisioned when they are needed, there can be some lag dealing with cold starts on rarely used serverless code. You can fix this to some degree by running probes to keep that code warm, but that's kind of a hack. Maybe it's best to simply put infrequently accessed code into an always-warm container, especially if performance is critical on those rare occasions when the code is called. Additionally, you need to prepare your solution to deal with lag and dropped connections between microservices. For example, if you have an authentication API that runs as a microservice for all your solutions, even a 400 microsecond lag in its communication can become amplified into a crippling, systemwide bug. While containers might be an immature technology, serverless is even more so. Azure Functions aren't yet a year old in general availability. Google's serverless solution is still in beta, and IBM's effort to create an open serverless standard is still in its infancy. Which leads to another concern: Whatever serverless approach you take, it will be somewhat wedded to a vendor at this time. You can pretty much run a container anywhere, but how you get serverless code to run will be somewhat dependent on the languages supported by your cloud provider and the means they use to create serverless instances. You are limited in terms of what code you can run to the runtimes and versions your cloud provider supports, and it can be difficult to bring in certain libraries or assemblies as a result, which might further complicate programming. Finally, the serverless programming model -- of an event that triggers the receipt of some input and the creation of some output -- might not be the right solution for every need.
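  The keep-warm hack mentioned above usually amounts to a scheduled probe. Here is a minimal Python sketch; the idle window and the idea of tracking the last real invocation are assumptions about typical provider recycling behavior, not any vendor's documented contract:

```python
import urllib.request

# Assume the provider recycles idle instances after roughly five minutes,
# so we probe a bit inside that window.
WARM_WINDOW_S = 240

def needs_ping(last_hit_ts, now_ts, window_s=WARM_WINDOW_S):
    """True if the function has been idle long enough that the next real
    request would likely pay a cold-start penalty."""
    return (now_ts - last_hit_ts) >= window_s

def keep_warm(url):
    """Fire a lightweight GET so the provider keeps an instance warm.
    Schedule this via a timer trigger or cron, rather than looping."""
    with urllib.request.urlopen(url, timeout=5) as resp:
        return resp.status
```

  In practice you would wire `keep_warm` to a timer that fires inside the idle window, and skip the ping whenever real traffic has already kept the instance warm -- which is exactly why it's a hack rather than a guarantee.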
  • #40: So, to recap: Serverless is the next wave in cloud computing. It offers you huge time and cost savings, and an exceptionally low total cost of ownership, even over containers. You pay only for the compute you use, and there are no more zombie services sucking up your profits. Cloud providers also receive significant benefits by provisioning what are essentially the same servers to everyone who needs them, which promises to keep costs low and improve performance. Because of the automated, generic aspects of serverless technology, high availability and disaster recovery are built in. And you can use traditional cloud-based business continuity techniques to keep running in the event of regional service outages. The entire microservice concept is built around fast deployment, continuous integration, continuous delivery and separated concerns, making serverless an ideal approach to improved service delivery and fast adaptation. But serverless isn't for every workload. It may well be that your monolith is better off in a container, or that a new solution is better built in containers. But five years hence, expect serverless to be as hot as containers are now.
  • #43: And that's it! I'm happy to take your questions.