Building Cloud-Native Applications Using Azure Kubernetes Service (AKS)
Dennis Moon
Agenda
• Cloud Native Application Development
• Containers
• Docker
• Container Registries
• Azure Container Registry (ACR)
• Kubernetes
• Azure Kubernetes Service (AKS)
• Azure DevOps
What Is Cloud Native Development?
Cloud native technologies empower organizations to build and run scalable applications in modern, dynamic environments such as:
• Public, private, and hybrid clouds
• Containers
• Service meshes
• Microservices
• Serverless apps
• Immutable infrastructure
• Declarative APIs
What Is A 12-Factor Application?
• Codebase
• Dependencies
• Config
• Backing Services
• Build, Release, Run
• Processes
• Port Binding
• Concurrency
• Disposability
• Dev/Prod Parity
• Logs
• Admin Processes
12-Factor Applications
What Is A Modern Application?
• Supports multiple client types
• Provides an API for accessing data through services
• Provides data in a generic, consumable format, such as JSON
• Built on top of a modern stack that directly supports this type of application
• Typically uses a microservices-based application approach
• However, some may prefer to use serverless applications
• Can use combinations of all of the above
Key Principles Of Modern Application Architecture
Small
Developer-oriented
Networked
Key Principles Of Modern Application Architecture
Small
Developer-oriented
Networked
Apps are decomposed into small pieces to:
- Lighten the cognitive load on developers
- Make testing faster and easier
- Speed delivery of application changes
Key Principles Of Modern Application Architecture
Small
Developer-oriented
Networked
The development and delivery system is oriented to the developer:
- Development environment is easier to work in
- Architecture and code are easier to understand
- DevOps creates easy-to-use tooling for delivery
Key Principles Of Modern Application Architecture
Small
Developer-oriented
Networked
Inter-application communication happens over the network rather than in memory to:
- Accommodate distributed teams
- Make deployment easier
- Make applications richer and more resilient
What Is A Container?
• Holds things
• Is portable
• Has clear interfaces for access
• Can be obtained from a remote location
What Is A Container?
A container consists of an entire runtime environment (e.g., a tiny Linux OS), an application, plus the following:
• Dependencies
• Libraries
• Other binaries
• Configuration files needed to run it
Docker Images And Layers
• The Dockerfile defines how the image is built: which system libraries, tools, and other files and dependencies it includes
• The image starts from a base platform (OS) image
• Docker creates a read-only intermediate layer for each Dockerfile statement
• The container (writable application layer) holds newly written files, modifications to existing files, and deleted files (see the Dockerfile sketch below)
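To make the layering concrete, here is a minimal multi-stage Dockerfile sketch for an ASP.NET Core web application. The project name MyWebApp and the .NET image tags are illustrative assumptions, not taken from the deck; each instruction below produces one of the read-only intermediate layers described above.

```dockerfile
# Build stage: starts from a base SDK image; each instruction adds a read-only layer
FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build
WORKDIR /src
COPY MyWebApp.csproj .
RUN dotnet restore
COPY . .
RUN dotnet publish -c Release -o /app/publish

# Runtime stage: a smaller ASP.NET base image plus the published output
FROM mcr.microsoft.com/dotnet/aspnet:6.0
WORKDIR /app
COPY --from=build /app/publish .
ENTRYPOINT ["dotnet", "MyWebApp.dll"]
```

When a container is started from this image, Docker adds the thin writable layer on top; everything below it stays read-only and can be shared by other containers.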
What's The Difference Between Containers And Virtualization?
What Other Benefits Do Containers Offer?
Docker For Windows
• What Is Docker?
• Overview Of Docker Setup
• Developer Firewall Proxy Mitigation
• How To Dockerize An ASP.NET Core Web Application
• How To Create A Container
• How To Run A Container (see the sketch below)
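A rough sketch of the create-and-run steps against the Dockerfile above; the image name mywebapp, the tag, and the port mapping are assumptions, and the app is assumed to listen on port 80 inside the container.

```bash
# Create (build) the image from the Dockerfile in the current directory
docker build -t mywebapp:1.0 .

# Run a container from that image, mapping host port 8080 to container port 80
docker run -d --name mywebapp -p 8080:80 mywebapp:1.0

# Check that it is running, then stop and remove it when finished
docker ps
docker stop mywebapp && docker rm mywebapp
```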
Building Cloud Native Applications Using Azure Kubernetes Service
Azure Container Registry
• What Is A Container Registry?
• Overview Of Azure Container Registry
• How To Build An Image For A Container Registry
• How To Push An Image To A Container Registry (see the sketch below)
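A hedged sketch of building and pushing with the Azure CLI; the registry name myregistry, resource group my-rg, and image name are placeholders.

```bash
# Create a registry (the name must be globally unique)
az acr create --resource-group my-rg --name myregistry --sku Basic

# Option 1: build in Azure with ACR Tasks, tagging the result into the registry
az acr build --registry myregistry --image mywebapp:1.0 .

# Option 2: build locally, then tag and push with the Docker CLI
az acr login --name myregistry
docker tag mywebapp:1.0 myregistry.azurecr.io/mywebapp:1.0
docker push myregistry.azurecr.io/mywebapp:1.0
```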
Kubernetes
• What Is Kubernetes?
• What Are Its Features?
Origin Of Kubernetes
In ancient Greek, "kubernan" means to steer, whereas "kubernetes" means helmsman.
Barebones AKS
• Azure Kubernetes Service With Basic Networking
• 1–3 Nodes (Virtual Machines) (see the sketch below)
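A minimal sketch of creating such a barebones cluster with the Azure CLI, assuming kubenet (basic) networking; the resource group, cluster name, region, and node count are illustrative.

```bash
az group create --name my-rg --location eastus

az aks create \
  --resource-group my-rg \
  --name my-aks \
  --node-count 2 \
  --network-plugin kubenet \
  --generate-ssh-keys
```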
Kubernetes Objects
• What Is A Pod?
• What Is A Service?
• What Is A Namespace?
• What Is A ReplicaSet?
• What Is A Deployment?
• What Is A Node?
• What Is An Ingress?
What Is A Pod?
A Pod is the basic building block of Kubernetes: the smallest and simplest unit in the Kubernetes object model that you create or deploy.
A Pod represents a running process (app) on your cluster.
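A minimal Pod manifest sketch for the image pushed earlier; the names and registry are assumptions, and in practice you would normally let a Deployment create Pods like this rather than applying one directly.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mywebapp
  labels:
    app: mywebapp
spec:
  containers:
  - name: mywebapp
    image: myregistry.azurecr.io/mywebapp:1.0   # placeholder image
    ports:
    - containerPort: 80
```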
What Is A Service?
A Service is an abstraction which defines a logical set of Pods and a policy by which to access them, sometimes called a micro-service.
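A sketch of a Service that selects the Pods labeled above; ClusterIP keeps it internal to the cluster, while changing the type to LoadBalancer would give it an external IP on AKS. Names are placeholders.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mywebapp
spec:
  selector:
    app: mywebapp      # targets Pods carrying this label
  ports:
  - port: 80           # port exposed by the Service
    targetPort: 80     # port the Pods listen on
  type: ClusterIP
```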
What Is A Namespace?
A namespace provides a way to support multiple virtual clusters backed by the same physical cluster.
Namespaces are a way to divide cluster resources between multiple users.
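For example, each team or environment can get its own namespace and deploy the same manifests into it; the namespace and file names below are illustrative.

```bash
kubectl create namespace team-a
kubectl apply -f deployment.yaml --namespace team-a
kubectl get pods --namespace team-a
```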
What Is A ReplicaSet?
A ReplicaSet's purpose is to maintain a stable set of replica Pods running at any given time.
As such, it is often used to guarantee the availability of a specified number of identical Pods.
What Is A Deployment?
A Deployment controller provides declarative updates for Pods and ReplicaSets.
You describe a desired state in a Deployment object, and the Deployment controller manages the changes for you.
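A Deployment manifest sketch tying the previous objects together: the replicas field is the desired state, and the ReplicaSet that the Deployment creates keeps that many Pods running. Names and image are placeholders.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mywebapp
spec:
  replicas: 3                  # desired state; the managed ReplicaSet maintains 3 Pods
  selector:
    matchLabels:
      app: mywebapp
  template:                    # Pod template used for each replica
    metadata:
      labels:
        app: mywebapp
    spec:
      containers:
      - name: mywebapp
        image: myregistry.azurecr.io/mywebapp:1.0   # placeholder image
        ports:
        - containerPort: 80
```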
What Is A Node?
A node is a worker machine in Kubernetes, previously known as a minion.
A node may be a VM or physical machine, depending on the cluster.
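In an AKS cluster the nodes are the Azure VMs created with the cluster; a quick way to see them from the command line (the node name is a placeholder):

```bash
kubectl get nodes -o wide            # list worker nodes with OS, IP, and version details
kubectl describe node <node-name>    # capacity, conditions, and the Pods scheduled on it
```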
What Is Ingress?
An API object that manages external access to the services in a cluster, typically HTTP.
Ingress can provide load balancing, SSL termination, and name-based virtual hosting.
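An Ingress sketch that routes a host name to the Service above; it assumes an NGINX ingress controller is installed in the cluster under the class name nginx, and the host name is a placeholder.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: mywebapp
spec:
  ingressClassName: nginx            # assumes the NGINX ingress controller is installed
  rules:
  - host: myapp.example.com          # name-based virtual hosting (placeholder host)
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: mywebapp           # the Service defined earlier
            port:
              number: 80
```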
Building Cloud Native Applications Using Azure Kubernetes Service
Azure Kubernetes Service
• How Do Containers Get Deployed? (see the sketch below)
• How Does Autoscaling Work?
• What Happens When A Container Dies?
• How Do I Manage An Application In A Container?
• Where Can I See Application Performance Data?
• Where Can I See Application Security Access Logs?
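A hedged end-to-end sketch against the cluster created earlier: connect kubectl, apply the manifests, then let a HorizontalPodAutoscaler and the cluster autoscaler handle scaling (a dead container is simply replaced by the Deployment's ReplicaSet). Names, file names, and thresholds are illustrative.

```bash
# Connect kubectl to the AKS cluster, then deploy the manifests sketched above
az aks get-credentials --resource-group my-rg --name my-aks
kubectl apply -f deployment.yaml -f service.yaml -f ingress.yaml

# Pod-level autoscaling: keep CPU around 50%, between 3 and 10 replicas
kubectl autoscale deployment mywebapp --cpu-percent=50 --min=3 --max=10

# Node-level autoscaling for the cluster itself
az aks update --resource-group my-rg --name my-aks \
  --enable-cluster-autoscaler --min-count 1 --max-count 3
```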
AKS With Service Integrations (architecture diagram): an Azure resource group containing a virtual network with private and public subnets, private and public network security groups, the Azure Kubernetes Service (AKS) cluster, an NGINX ingress controller with a load balancer, an Azure Application Gateway with WAF and a public IP, and an IBM API Connect endpoint.
Azure DevOps Pipeline
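A minimal azure-pipelines.yml sketch of such a pipeline, assuming a Docker registry service connection named acr-connection and a Kubernetes service connection named aks-connection already exist in the Azure DevOps project; paths, names, and the branch are placeholders.

```yaml
trigger:
- main

pool:
  vmImage: ubuntu-latest

steps:
- task: Docker@2                     # build the image and push it to ACR
  inputs:
    containerRegistry: acr-connection             # assumed service connection name
    repository: mywebapp
    command: buildAndPush
    Dockerfile: '**/Dockerfile'
    tags: $(Build.BuildId)

- task: KubernetesManifest@0         # roll the new image out to AKS
  inputs:
    action: deploy
    kubernetesServiceConnection: aks-connection   # assumed service connection name
    manifests: |
      kubernetes/deployment.yaml
      kubernetes/service.yaml
    containers: myregistry.azurecr.io/mywebapp:$(Build.BuildId)
```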
How To Manage Access, Security, and Monitoring in AKS
• For improved security and management, AKS lets you integrate with Azure Active Directory and use Kubernetes role-based access control (RBAC).
• You can also monitor the health of your cluster and resources (see the sketch below).
• Azure Monitor for containers overview
• Enable and review Kubernetes master node logs in Azure Kubernetes Service (AKS)
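A short sketch of turning on monitoring and checking workloads from the CLI; the resource group, cluster, and deployment names are the placeholders used earlier.

```bash
# Enable Azure Monitor for containers (Container insights) on an existing cluster
az aks enable-addons --resource-group my-rg --name my-aks --addons monitoring

# Pull credentials and spot-check workload health alongside the portal views
az aks get-credentials --resource-group my-rg --name my-aks
kubectl get pods --all-namespaces
kubectl logs deployment/mywebapp
```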
What Else Should I Do To Secure My Cloud Applications?
• Security concepts for applications and clusters in Azure Kubernetes Service (AKS)
• Cluster operator and developer best practices to build and manage applications on Azure Kubernetes Service (AKS)
• Use a third-party SaaS solution, such as Twistlock, to implement cloud-native security for containers
OK, So How Do I Get Started With AKS?
• Tutorial: Deploy ASP.NET Core apps to Azure Kubernetes Service with Azure DevOps Projects
• Quickstart: Deploy an Azure Kubernetes Service (AKS) cluster using the Azure portal
• Quickstart: Deploy an Azure Kubernetes Service (AKS) cluster using the Azure CLI
• Microsoft Azure Developer: Deploying and Managing Containers
• Azure CLI: Getting Started
And then practice, practice, practice.


Editor's Notes

  • #4: The following information is from the Cloud Native Computing Foundation (https://www.cncf.io/) Cloud native technologies empower organizations to build and run scalable applications in modern, dynamic environments such as: Public, private, and hybrid clouds Containers Service meshes Microservices Immutable infrastructure Declarative APIs These techniques enable loosely coupled systems that are resilient, manageable, and observable. Combined with robust automation, they allow engineers to make high-impact changes frequently and predictably with minimal toil. Cloud Native Computing Foundation
  • #5: The following is from “The Twelve-Factor App” (https://12factor.net/) Introduction In the modern era, software is commonly delivered as a service: called web apps, or software-as-a-service. The twelve-factor app is a methodology for building software-as-a-service apps that: Use declarative formats for setup automation, to minimize time and cost for new developers joining the project; Have a clean contract with the underlying operating system, offering maximum portability between execution environments; Are suitable for deployment on modern cloud platforms, obviating the need for servers and systems administration; Minimize divergence between development and production, enabling continuous deployment for maximum agility; And can scale up without significant changes to tooling, architecture, or development practices. The twelve-factor methodology can be applied to apps written in any programming language, and which use any combination of backing services (database, queue, memory cache, etc). The Twelve Factors I. Codebase - One codebase tracked in revision control, many deploys II. Dependencies - Explicitly declare and isolate dependencies III. Config - Store config in the environment IV. Backing services - Treat backing services as attached resources V. Build, release, run - Strictly separate build and run stages VI. Processes - Execute the app as one or more stateless processes VII. Port binding - Export services via port binding VIII. Concurrency - Scale out via the process model IX. Disposability - Maximize robustness with fast startup and graceful shutdown X. Dev/prod parity - Keep development, staging, and production as similar as possible XI. Logs - Treat logs as event streams XII. Admin processes Run admin/management tasks as one-off processes
  • #6: The following few slides contain information is from the blog post titled “Principles of Modern Application Development” located at https://www.nginx.com/blog/principles-of-modern-application-development/ What Is a Modern App? A modern application is one that supports multiple clients – whether the client is a UI using the Angular library, a mobile app running on Android or iOS, or a downstream application that connects to the application through an API. Modern applications expect to have an undefined number of clients consuming the data and services it provides. A modern application provides an API for accessing that data and those services. The API is consistent, rather than bespoke to different clients accessing the application. The API is available over HTTP(S) and provides access to all the features and functionality available through the GUI or CLI. Data is available in a generic, consumable format, such as JSON. APIs represent the objects and services in a clear, organized manner – RESTful APIs or GraphQL do a good job of providing the appropriate kind of interface. Modern applications are built on top of a modern stack, and the modern stack is one that directly supports this type of application – the stack helps the developer easily create an app with an HTTP interface and clear API endpoints. It enables the app to easily consume and emit JSON data. In other words, it conforms to the relevant elements of the Twelve‑Factor App for Microservices. Keep in mind that we are not advocating a strictly microservices‑based application approach. Many of you are working with monoliths that need to evolve, while others have SOA applications that are being extended and evolved to be microservices applications. Still others are moving toward serverless applications, and some of you are implementing a combination of all of the above. The principles outlined in this discussion can be applied to each of these systems with some minor tweaks.
  • #7: The Principles Now that we have a shared understanding of the modern application and the modern stack, let’s dive into the architectural and developmental principles that will assist you in designing, implementing, and maintaining a modern application. One of the core principles of modern development is keep it small, or just small for short. We have applications that are incredibly complex with many, many moving parts. Building the application out of small, discrete components makes the overall application easier to design, maintain, and manage. (Notice we’re saying “easier”, not “easy”.) The second principle is that we can maximize developer productivity by helping them focus on the features they are developing and freeing them from concerns about infrastructure and CI/CD during implementation. So, our approach is developer‑oriented. Finally, everything about your application should be networked. As networks have gotten faster, and applications more complex, over the past 20 years, we’ve been moving toward a networked future. As discussed earlier, the modern application is used in a network context by multiple different clients. Applying a networking mindset throughout the architecture has significant benefits that mesh well with small and developer‑oriented. If you keep the principles of small, developer‑oriented, and networked in mind as you design and implement your application, you will have a leg up in evolving and delivering your application.
  • #8: Principle 1: Small The human brain has difficulty trying to consume too much information. In psychology, cognitive load refers to the total amount of mental effort being used to retain information in working memory. Reducing the cognitive load on developers is beneficial because it means that they can focus their energy on solving the problem at hand, instead of maintaining a complex model of the entire application, and its future features, in their minds as they solve specific problems. Three ways to reduce cognitive load on your development team are: Reduce the timeframe that they must consider in building a new feature – the shorter the timeframe, the lower the cognitive load Reduce the size of the code that is being worked on – less code means a lower cognitive load Simplify the process for making incremental changes to the application – the simpler the process, the lower the cognitive load
  • #9: Principle 2: Developer-Oriented The biggest bottleneck to rapid development is often not the architecture or your development process, but how much time your engineers spend focusing on the business logic of the feature they are working on. Byzantine and inscrutable code bases, excessive tooling/harnessing, and common, social distractions are all productivity killers for your engineering team. You can make the development process more developer‑oriented – that is, you can free developers from distractions, making it easier for them to focus on the task at hand. To get the best work out of your team, it is critical that your application ecosystem focuses on the following: Systems and processes that are easy to work with Architecture and code that are easy to understand DevOps support for managing tooling
  • #10: Principle 3: Networked Application design has been shifting over time. It used to be that applications were used and run on the systems that hosted them. Mainframe/minicomputer applications, desktop applications, and even Unix CLI applications ran in a local context. Connecting to these systems via a network interface gradually became a feature, but was often thought of as a necessary evil, and was generally considered to be “slower”. The benefits of a networked architecture: Makes your application more resilient Easier to deploy Easier to manage More Resilient By incorporating networking deeply in your architecture, you make it more resilient, especially if you design using the principles described in the Twelve‑Factor App for Microservices. By implementing twelve‑factor principles in your application components, you get an application that can easily scale horizontally and that is easy to distribute your request load against. This means that you can easily have multiple instances of all of your application components running simultaneously, without fear that the failure of one of them might cause an outage of the entire application. Using a load balancer like NGINX, you can monitor your services, and make sure that requests go to healthy instances. You can also easily scale up the application based on the bottlenecks in the system that are actually being taxed: you don’t have to scale up all of the application components at the same time, as you would with a monolithic application. Easier to Deploy By networking your application, you also make deployment simpler. The testing regime for a single service is significantly smaller (or simpler) than for an entire monolithic application. Service testing is much more like unit or functional testing than the full regression‑testing process required by a monolith. If you are embracing microservices, it means that your application code is packaged in an immutable container that is built once (by your trusted DevOps team), that moves through the CI/CD pipeline without modification, and that runs in production as built. Easier to Manage Networking your application also makes management easier. Compared to a single monolith which can fail or need scaling in a variety of ways, a networked, microservices‑oriented application is easier to manage. The component parts are now discrete and can be monitored more easily. The intercommunication between the parts is conducted via HTTP, making it easy to monitor, utilize, and test.
  • #11: The following metaphors are from this blog post: https://towardsdatascience.com/learn-enough-docker-to-be-useful-b7ba70caeb4b Container Like a physical plastic container, a Docker container: Holds things Something is either inside the container or outside the container. Is portable It can be used on your local machine, your coworker’s machine, or a cloud provider’s servers (e.g. Azure). Sort of like all that stuff you keep moving with you from home to home. Has clear interfaces for access Our physical container has a lid for opening and putting things in and taking things out. Similarly, a Docker container has several mechanisms for interfacing with the outside world. It has ports that can be opened for interacting through the browser. You can configure it to interact with data through the command line. Can be obtained from a remote location You can get another empty plastic container from Amazon.com when you need it. Amazon gets its plastic containers from manufacturers who stamp them out by the thousands from a single mold. In the case of a Docker container, an offsite registry keeps an image, which is like a mold, for your container. Then when you need a container you can make one from the image.
  • #13: Docker Image A Docker image is a file, comprised of multiple layers, used to execute code in a Docker container. An image is essentially built from the instructions for a complete and executable version of an application, which relies on the host OS kernel. When the Docker user runs an image, it becomes one or multiple instances of that container. Docker images and layers A Docker image is made up of multiple layers. A user composes each Docker image to include system libraries, tools, and other files and dependencies for the executable code. Image developers can reuse static image layers for different projects. Reuse saves time, because a user does not have to create everything in an image. Most Docker images start with a base image, although a user can build one entirely from scratch, if desired. Each image has one readable/writable top layer over static layers. Layers are added to the base image to tailor the code to run in a container. Each layer of a Docker image is viewable under /var/lib/docker/aufs/diff, or via the Docker history command in the command line interface (CLI). By default, Docker shows all top-layer images, such as the repository, tags and file sizes. Intermediate layers are cached, which makes top layers easier to view. Docker utilizes storage drivers to manage contents of image layers. When a new container is created from an image, a writable layer is also created. This layer is called the container layer, and it hosts all changes made to the running container. This layer can store newly written files, modifications to existing files and newly deleted files. The writable layer allows customization of the container. Changes made to the writable layer are saved on that layer. Multiple containers can share the same underlying base image and have their own data state thanks to the writable layer.
  • #14: Virtualization (i.e. Virtual Machines) With virtualization technology, the package that can be passed around is a virtual machine, and it includes an entire operating system as well as the application. A physical server running three virtual machines would have a hypervisor and three separate operating systems running on top of it. Containers By contrast a server running three containerized applications with Docker runs a single operating system, and each container shares the operating system kernel with the other containers. Shared parts of the operating system are read only, while each container has its own mount (i.e., a way to access the container) for writing. That means the containers are much more lightweight and use far fewer resources than virtual machines.
  • #15: Size A container may be only tens of megabytes in size, whereas a virtual machine with its own entire operating system may be several gigabytes in size. Because of this, a single server can host far more containers than virtual machines. Startup Speed Another major benefit is that virtual machines may take several minutes to boot up their operating systems and begin running the applications they host, while containerized applications can be started almost instantly. That means containers can be instantiated in a "just in time" fashion when they are needed and can disappear when they are no longer required, freeing up resources on their hosts. Integration (Modularity) A third benefit is that containerization allows for greater modularity. Rather than run an entire complex application inside a single container, the application can be split in to modules. This is the so-called microservices approach. Applications built in this way are easier to manage because each module is relatively simple, and changes can be made to modules without having to rebuild the entire application.
  • #16: What Is Docker? Docker is a tool designed to make it easier to create, deploy, and run applications by using containers. Containers allow a developer to package up an application with all of the parts it needs, such as libraries and other dependencies, and ship it all out as one package. DEMO Show location of Docker installation instructions: Enable system support for virtualization at BIOS level Enable Windows 10 Hyper-V using PowerShell Install Docker for WIndows Show location of Docker training materials Create a simple web app Add a Dockerfile to the web app project Create and edit a release version of the Dockerfile Create a Kubernetes folder in the web app Add az-login.ps1 to the Kubernetes folder Show how to configure firewall For more information, see the following: https://towardsdatascience.com/learn-enough-docker-to-be-useful-b7ba70caeb4b https://towardsdatascience.com/learn-enough-docker-to-be-useful-1c40ea269fa8
  • #18: What is a Container Registry? Container registries are useful A container registry is a repository for storing container images. A container image consists of many files, which encapsulate an application. After a host puts an image into a registry, other hosts can download it from the registry server. This allows the same application to be shipped from a host to another. Who should use a container registry Developers, testers and CI/CD systems need to use a registry to store images created during the application development process. Container images placed in the registry can be used in various phases of the development. How organizations are using a container registry To begin with, users usually use public registry service such as Docker Hub because it is simple and easy to use. However, when they are getting serious in using containers, organizations often wonder whether to continue to use a public registry service or not. For security and efficiency reasons, a private registry should be set up within their organization. Container registries are secure and efficient For security and efficiency purposes, many choose to set up their own instance of private registry within their organizations. Once they do that, the next question is how they can protect their images. Protecting images in container registries By assigning role-based access control (RBAC) to the images using a user identity already established in their organization, such as LDAP and Active Directory. For additional security layers, images should be digitally signed to ensure their authenticity from trusted authors. Furthermore, images should be scanned for vulnerabilities and patches can be applied accordingly. By using a SaaS such as Twistlock, users can achieve goals to secure their images. Cloud-native applications and container registries Cloud-native applications are often built using container technology. Therefore, people running cloud-native applications should use a registry during their application lifecycle. What Is Azure Container Registry (ACR)? Azure Container Registry is a managed Docker registry service based on the open-source Docker Registry 2.0. Create and maintain Azure container registries to store and manage your private Docker container images. Use container registries in Azure with your existing container development and deployment pipelines. Use Azure Container Registry Build (ACR Build) to build container images in Azure. Build on demand, or fully automate builds with source code commit and base image update build triggers.
  • #19: The following information is from https://kubernetes.io/ What Is Kubernetes? Kubernetes (k8s) is an open-source system for automating deployment, scaling, and management of containerized applications. It groups containers that make up an application into logical units for easy management and discovery. Kubernetes builds upon 15 years of experience of running production workloads at Google, combined with best-of-breed ideas and practices from the community. Planet Scale Designed on the same principles that allows Google to run billions of containers a week, Kubernetes can scale without increasing your ops team. Never Outgrow Whether testing locally or running a global enterprise, Kubernetes flexibility grows with you to deliver your applications consistently and easily no matter how complex your need is. Run Anywhere Kubernetes is open source giving you the freedom to take advantage of on-premises, hybrid, or public cloud infrastructure, letting you effortlessly move workloads to where it matters to you. *** Kubernetes Features *** Service discovery and load balancing No need to modify your application to use an unfamiliar service discovery mechanism. Kubernetes gives containers their own IP addresses and a single DNS name for a set of containers, and can load-balance across them. Automatic binpacking Automatically places containers based on their resource requirements and other constraints, while not sacrificing availability. Mix critical and best-effort workloads in order to drive up utilization and save even more resources. Storage orchestration Automatically mount the storage system of your choice, whether from local storage, a public cloud provider such as GCP or AWS, or a network storage system such as NFS, iSCSI, Gluster, Ceph, Cinder, or Flocker. Self-healing Restarts containers that fail, replaces and reschedules containers when nodes die, kills containers that don’t respond to your user-defined health check, and doesn’t advertise them to clients until they are ready to serve. Automated rollouts and rollbacks Kubernetes progressively rolls out changes to your application or its configuration, while monitoring application health to ensure it doesn’t kill all your instances at the same time. If something goes wrong, Kubernetes will rollback the change for you. Take advantage of a growing ecosystem of deployment solutions. Secret and configuration management Deploy and update secrets and application configuration without rebuilding your image and without exposing secrets in your stack configuration. Batch execution In addition to services, Kubernetes can manage your batch and CI workloads, replacing containers that fail, if desired. Horizontal scaling Scale your application up and down with a simple command, with a UI, or automatically based on CPU usage.
  • #22: The following information is from https://kubernetes.io/
  • #23: What Is A Pod? A Pod is the basic building block of Kubernetes–the smallest and simplest unit in the Kubernetes object model that you create or deploy. A Pod represents a running process on your cluster. A Pod encapsulates an application container (or, in some cases, multiple containers), storage resources, a unique network IP, and options that govern how the container(s) should run. A Pod represents a unit of deployment: a single instance of an application in Kubernetes, which might consist of either a single container or a small number of containers that are tightly coupled and that share resources. Docker is the most common container runtime used in a Kubernetes Pod, but Pods support other container runtimes as well. Pods in a Kubernetes cluster can be used in two main ways: Pods that run a single container. The “one-container-per-Pod” model is the most common Kubernetes use case; in this case, you can think of a Pod as a wrapper around a single container, and Kubernetes manages the Pods rather than the containers directly. Pods that run multiple containers that need to work together. A Pod might encapsulate an application composed of multiple co-located containers that are tightly coupled and need to share resources. These co-located containers might form a single cohesive unit of service–one container serving files from a shared volume to the public, while a separate “sidecar” container refreshes or updates those files. The Pod wraps these containers and storage resources together as a single manageable entity. Pods and Controllers A Controller can create and manage multiple Pods for you, handling replication and rollout and providing self-healing capabilities at cluster scope. For example, if a Node fails, the Controller might automatically replace the Pod by scheduling an identical replacement on a different Node.
  • #24: What Is A Service? A Kubernetes Service is an abstraction which defines a logical set of Pods and a policy by which to access them - sometimes called a micro-service. The set of Pods targeted by a Service is (usually) determined by a Label selector. Why Problem Do Services Solve? Kubernetes Pods are mortal. They are born and when they die, they are not resurrected. ReplicaSets in particular create and destroy Pods dynamically (e.g. when scaling out or in). While each Pod gets its own IP address, even those IP addresses cannot be relied upon to be stable over time. This leads to a problem: if some set of Pods (let’s call them backends) provides functionality to other Pods (let’s call them frontends) inside the Kubernetes cluster, how do those frontends find out and keep track of which backends are in that set? As an example, consider an image-processing backend which is running with 3 replicas. Those replicas are fungible - frontends do not care which backend they use. While the actual Pods that compose the backend set may change, the frontend clients should not need to be aware of that or keep track of the list of backends themselves. The Service abstraction enables this decoupling. For Kubernetes-native applications, Kubernetes offers a simple Endpoints API that is updated whenever the set of Pods in a Service changes. For non-native applications, Kubernetes offers a virtual-IP-based bridge to Services which redirects to the backend Pods.
  • #25: What Is A Namespace? Kubernetes supports multiple virtual clusters backed by the same physical cluster. These virtual clusters are called namespaces. Namespaces are intended for use in environments with many users spread across multiple teams, or projects. For clusters with a few to tens of users, you should not need to create or think about namespaces at all. Start using namespaces when you need the features they provide. Namespaces provide a scope for names. Names of resources need to be unique within a namespace, but not across namespaces. Namespaces are a way to divide cluster resources between multiple users (via resource quota). In future versions of Kubernetes, objects in the same namespace will have the same access control policies by default. It is not necessary to use multiple namespaces just to separate slightly different resources, such as different versions of the same software: use labels to distinguish resources within the same namespace.
  • #26: What Is A ReplicaSet? A ReplicaSet’s purpose is to maintain a stable set of replica Pods running at any given time. As such, it is often used to guarantee the availability of a specified number of identical Pods. A ReplicaSet is defined with fields, including a selector that specifies how to identify Pods it can acquire, a number of replicas indicating how many Pods it should be maintaining, and a pod template specifying the data of new Pods it should create to meet the number of replicas criteria. A ReplicaSet then fulfills its purpose by creating and deleting Pods as needed to reach the desired number. When a ReplicaSet needs to create new Pods, it uses its Pod template. When to use a ReplicaSet A ReplicaSet ensures that a specified number of pod replicas are running at any given time. However, a Deployment is a higher-level concept that manages ReplicaSets and provides declarative updates to Pods along with a lot of other useful features. Therefore, we recommend using Deployments instead of directly using ReplicaSets, unless you require custom update orchestration or don’t require updates at all. This actually means that you may never need to manipulate ReplicaSet objects: use a Deployment instead, and define your application in the spec section.
  • #27: What Is A Deployment? A Deployment controller provides declarative updates for Pods and ReplicaSets. You describe a desired state in a Deployment object, and the Deployment controller changes the actual state to the desired state at a controlled rate. You can define Deployments to create new ReplicaSets, or to remove existing Deployments and adopt all their resources with new Deployments. Use Cases The following are typical use cases for Deployments: Create a Deployment to rollout a ReplicaSet. The ReplicaSet creates Pods in the background. Check the status of the rollout to see if it succeeds or not. Declare the new state of the Pods by updating the PodTemplateSpec of the Deployment. A new ReplicaSet is created and the Deployment manages moving the Pods from the old ReplicaSet to the new one at a controlled rate. Each new ReplicaSet updates the revision of the Deployment. Rollback to an earlier Deployment revision if the current state of the Deployment is not stable. Each rollback updates the revision of the Deployment. Scale up the Deployment to facilitate more load. Pause the Deployment to apply multiple fixes to its PodTemplateSpec and then resume it to start a new rollout. Use the status of the Deployment as an indicator that a rollout has stuck. Clean up older ReplicaSets that you don’t need anymore.
  • #28: What Is A Node? A node is a worker machine in Kubernetes, previously known as a minion. A node may be a VM or physical machine, depending on the cluster. Each node contains the services necessary to run pods and is managed by the master components. The services on a node include the container runtime, kubelet and kube-proxy.
  • #29: What Is Ingress? An API object that manages external access to the services in a cluster, typically HTTP. Ingress can provide load balancing, SSL termination and name-based virtual hosting. What Is An Ingress Controller? An ingress controller exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. Traffic routing is controlled by rules defined on the Ingress resource. internet | [ Ingress ] --|-----|-- [ Services ] An Ingress can be configured to give services externally-reachable URLs, load balance traffic, terminate SSL, and offer name based virtual hosting. An Ingress controller is responsible for fulfilling the Ingress, usually with a load balancer, though it may also configure your edge router or additional frontends to help handle the traffic. An Ingress does not expose arbitrary ports or protocols.
  • #31: What Is Azure Kubernetes Service (AKS)? Azure Kubernetes Service (AKS) makes it simple to deploy a managed Kubernetes cluster in Azure. AKS reduces the complexity and operational overhead of managing Kubernetes by offloading much of that responsibility to Azure. As a hosted Kubernetes service, Azure handles critical tasks like health monitoring and maintenance for you.