Quickstart for Calico Enterprise on Kubernetes
Big picture
This quickstart gets you a single-host Kubernetes cluster with Calico Enterprise in approximately 15 minutes.
Value
Use this quickstart to quickly and easily try Calico Enterprise features. To deploy a cluster suitable for production, refer to Calico Enterprise on Kubernetes.
Concepts
Operator-based installation
This quickstart guide uses the Tigera operator to install Calico Enterprise. The operator provides lifecycle management for Calico Enterprise, exposed via the Kubernetes API as a custom resource definition.
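For example, the operator watches an Installation custom resource and reconciles the cluster to match it. The following is a minimal sketch of that resource (the full version is installed as part of the custom-resources manifest later in this guide):
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  # Install the Calico Enterprise (Tigera Secure) variant rather than open source Calico.
  variant: TigeraSecureEnterprise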
Before you begin
Required
A Linux host that meets the following requirements.
- x86-64
- 2 CPUs
- 12 GB RAM
- 50 GB free disk space
- Ubuntu Server 18.04
- Internet access
- Sufficient virtual memory
How to
The geeky details of what you get:
Kubernetes network policies are implemented by network plugins rather than Kubernetes itself. Simply creating a network policy resource without a network plugin to implement it will have no effect on network traffic.
The Calico Enterprise plugin implements the full set of Kubernetes network policy features. In addition, Calico Enterprise supports Calico Enterprise network policies, providing additional features and capabilities beyond Kubernetes network policies. Kubernetes and Calico Enterprise network policies work together seamlessly, so you can choose whichever is right for you, and mix and match as desired.
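As an illustration only (not a step in this quickstart), a minimal Kubernetes network policy that denies all ingress traffic to pods in a namespace looks like the following; the namespace name is just an example. Calico Enterprise network policies use the projectcalico.org/v3 API instead and add capabilities such as tiers and cluster-wide global policies.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: example-ns    # example namespace
spec:
  podSelector: {}          # select all pods in the namespace
  policyTypes:
  - Ingress                # no ingress rules defined, so all ingress is denied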
How Kubernetes assigns IP addresses to pods is determined by the IPAM (IP Address Management) plugin being used.
The Calico Enterprise IPAM plugin dynamically allocates small blocks of IP addresses to nodes as required, to give efficient overall use of the available IP address space. In addition, Calico Enterprise IPAM supports advanced features such as multiple IP pools, the ability to specify a specific IP address range that a namespace or pod should use, or even the specific IP address a pod should use.
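For illustration, a pod can request a specific address from Calico IPAM with an annotation. This is a minimal sketch, assuming the default 192.168.0.0/16 pool used later in this quickstart; the pod name and address are examples.
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
  annotations:
    # Ask Calico IPAM for this specific address; it must fall within a configured IP pool.
    cni.projectcalico.org/ipAddrs: '["192.168.0.50"]'
spec:
  containers:
  - name: app
    image: nginx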
The CNI (Container Network Interface) plugin being used by Kubernetes determines the details of exactly how pods are connected to the underlying network.
The Calico Enterprise CNI plugin connects pods to the host networking using L3 routing, without the need for an L2 bridge. This is simple and easy to understand, and more efficient than other common alternatives such as kubenet or flannel.
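If you are curious, once the cluster is running you can see this on a node: each local pod gets its own /32 route via a cali* veth interface. You can list those routes with (interface names will vary on your host):
ip route show | grep cali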
An overlay network allows pods to communicate between nodes without the underlying network being aware of the pods or pod IP addresses.
Packets between pods on different nodes are encapsulated using IPIP, wrapping each original packet in an outer packet that uses node IPs, and hiding the pod IPs of the inner packet. This can be done very efficiently by the Linux kernel, but it still represents a small overhead, which you might want to avoid if running particularly network intensive workloads.
For completeness, in contrast, operating without using an overlay provides the highest performance network. The packets that leave your pods are the packets that go on the wire.
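With the operator-based install used in this quickstart, encapsulation is set per IP pool in the Installation resource. A minimal sketch, assuming the default pod CIDR from this guide (adjust the CIDR if you changed it):
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  calicoNetwork:
    ipPools:
    - cidr: 192.168.0.0/16
      encapsulation: IPIP      # set to None to run without an overlay
      natOutgoing: Enabled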
BGP (Border Gateway Protocol) is used to dynamically program routes for pod traffic between nodes.
BGP is a standards-based routing protocol used to build the internet. It scales exceptionally well, and even the largest Kubernetes clusters represent a tiny amount of load compared to what BGP can cope with.
Calico Enterprise can run BGP in three modes:
- Full mesh - where each node talks BGP to each other, easily scaling to 100 nodes, on top of an underlying L2 network or using IPIP overlay
- With route reflectors - where each node talks to one or more BGP route reflectors, scaling beyond 100 nodes, on top of an underlying L2 network or using IPIP overlay (see the peering sketch after this list)
- Peered with TOR (Top of Rack) routers - in a physical data center where each node talks to routers in the top of the corresponding rack, scaling to the limits of your physical data center.
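As a rough illustration of the route reflector mode, the resources below disable the default full node-to-node mesh and peer every node with nodes labelled as route reflectors; the label, AS number, and names are examples, not required values.
apiVersion: projectcalico.org/v3
kind: BGPConfiguration
metadata:
  name: default
spec:
  nodeToNodeMeshEnabled: false   # turn off the full mesh
  asNumber: 64512                # example private AS number
---
apiVersion: projectcalico.org/v3
kind: BGPPeer
metadata:
  name: peer-with-route-reflectors
spec:
  nodeSelector: all()                       # every node...
  peerSelector: route-reflector == 'true'   # ...peers with labelled route reflector nodes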
Calico Enterprise stores the operational and configuration state of your cluster in a central datastore. If the datastore is unavailable, your Calico Enterprise network continues operating, but cannot be updated (no new pods can be networked, no policy changes can be applied, etc.).
Calico Enterprise has two datastore drivers you can choose from:
- etcd - for direct connection to an etcd cluster
- Kubernetes - for connection to a Kubernetes API server
The advantages of using Kubernetes as the datastore are:
- It doesn’t require an extra datastore, so is simpler to install and manage
- You can use Kubernetes RBAC to control access to Calico Enterprise resources (see the sketch after this list)
- You can use Kubernetes audit logging to generate audit logs of changes to Calico Enterprise resources
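For example, because Calico Enterprise resources are served through the Kubernetes API, standard RBAC objects apply to them. A minimal sketch (the role name and resource list are illustrative) granting read-only access to Calico Enterprise network policies:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: calico-policy-reader
rules:
- apiGroups: ["projectcalico.org"]
  resources: ["networkpolicies", "globalnetworkpolicies"]
  verbs: ["get", "list", "watch"]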
For completeness, the advantages of using etcd as the datastore are:
- Allows you to run Calico Enterprise on non-Kubernetes platforms (e.g. OpenStack)
- Allows separation of concerns between Kubernetes and Calico Enterprise resources, for example allowing you to scale the datastores independently
- Allows you to run a Calico Enterprise cluster that contains more than just a single Kubernetes cluster, for example, bare metal servers with Calico Enterprise host protection interworking with a Kubernetes cluster or multiple Kubernetes clusters.
Calico Enterprise’s flexible modular architecture supports a wide range of deployment options, so you can select the best networking and network policy options for your specific environment. This includes the ability to run with a variety of CNI and IPAM plugins, and underlying networking options.
The Calico Enterprise Getting Started guides default to the options most commonly used in each environment, so you don’t have to dive into the details unless you want to.
- Install Kubernetes
- Install Calico Enterprise
- Install the Calico Enterprise license
- Log in to Calico Enterprise Manager
- Secure Calico Enterprise with network policy
Install Kubernetes
- Follow the Kubernetes instructions to install kubeadm.
  Note: After installing kubeadm, do not power down or restart the host. Instead, continue directly to the next step.
- As a regular user with sudo privileges, open a terminal on the host that you installed kubeadm on.
- Initialize the master using the following command.
  sudo kubeadm init --pod-network-cidr=192.168.0.0/16 \
    --apiserver-cert-extra-sans=127.0.0.1
  Note: If 192.168.0.0/16 is already in use within your network, you must select a different pod network CIDR, replacing 192.168.0.0/16 in the above command.
- Execute the following commands to configure kubectl (also returned by kubeadm init).
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
- Remove the master taint so that Kubernetes can schedule pods on the master node.
  kubectl taint nodes --all node-role.kubernetes.io/master-
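As an optional check (not part of the original steps), confirm that kubectl can reach the cluster and that your node is registered. The node will typically show NotReady until Calico Enterprise is installed in the next section, because no CNI plugin is present yet.
kubectl get nodes -o wide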
Install Calico Enterprise
- Install the Tigera operator and custom resource definitions.
  kubectl create -f https://docs.tigera.io/v3.11/manifests/tigera-operator.yaml
- Install the Prometheus operator and related custom resource definitions. The Prometheus operator will be used to deploy Prometheus server and Alertmanager to monitor Calico Enterprise metrics.
  Note: If you have an existing Prometheus operator in your cluster that you want to use, skip this step. To work with Calico Enterprise, your Prometheus operator must be v0.40.0 or higher.
  kubectl create -f https://docs.tigera.io/v3.11/manifests/tigera-prometheus-operator.yaml
- Install your pull secret.
  kubectl create secret generic tigera-pull-secret \
    --from-file=.dockerconfigjson=<path/to/pull/secret> \
    --type=kubernetes.io/dockerconfigjson -n tigera-operator
- Install the Tigera custom resources. For more information on configuration options available in this manifest, see the installation reference.
  kubectl create -f https://docs.tigera.io/v3.11/manifests/custom-resources.yaml
  You can now monitor progress with the following command:
  watch kubectl get tigerastatus
  Wait until the apiserver shows a status of Available, then proceed to the next section.
Install the Calico Enterprise license
In order to use Calico Enterprise, you must install the license provided to you by Tigera.
kubectl create -f </path/to/license.yaml>
You can now monitor progress with the following command:
watch kubectl get tigerastatus
When all components show a status of Available, proceed to the next section.
Log in to Calico Enterprise Manager
- Create the network admin user "jane".
  kubectl create sa jane -n default
  kubectl create clusterrolebinding jane-access --clusterrole tigera-network-admin --serviceaccount default:jane
- Extract the login token for use with the Calico Enterprise UI.
  kubectl get secret $(kubectl get serviceaccount jane -o jsonpath='{range .secrets[*]}{.name}{"\n"}{end}' | grep token) -o go-template='{{.data.token | base64decode}}' && echo
  Copy the above token to your clipboard for use in the next step. (If the command prints nothing, see the note at the end of this section.)
- Set up a channel from your local computer to the Calico Enterprise UI.
  kubectl port-forward -n tigera-manager svc/tigera-manager 9443
  Visit https://localhost:9443/ to log in to the Calico Enterprise UI. Use the token from the previous step to authenticate.
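Note (an addition for newer clusters): on Kubernetes 1.24 and later, token Secrets are no longer created automatically for service accounts, so the extraction command above may print nothing. In that case, you can request a token directly with kubectl 1.24 or later:
kubectl create token jane -n default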
Secure Calico Enterprise with network policy
To secure Calico Enterprise component communications, install the following set of network policies.
kubectl create -f https://docs.tigera.io/v3.11/manifests/tigera-policies.yaml
Congratulations! You now have a single-host Kubernetes cluster with Calico Enterprise.
Next steps
- By default, your cluster networking uses IP in IP encapsulation with BGP routing. To review other networking options, see Determine best networking option.
- Get started with Calico Enterprise tiered network policy