To find valid values for the apiVersion: key in manifests, print the API versions supported by the server:
kubectl api-versions
KYAML aims to be a safer, less ambiguous subset of YAML that stays compatible with existing tooling (alpha since Kubernetes 1.34).
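A sketch of trying it, assuming the 1.34 alpha where the feature is gated behind the KUBECTL_KYAML environment variable (the deployment name is a placeholder):

KUBECTL_KYAML=true kubectl get deployment nginx -o kyaml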
source <(kubectl completion bash)
alias k=kubectl
complete -o default -F __start_kubectl k
Convert manifests between different API versions.
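One way to do this is the kubectl convert plugin (distributed separately from kubectl itself since v1.22); a sketch, with the file name as a placeholder:

kubectl convert -f deployment-old.yaml --output-version apps/v1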
Namespaces help different projects, teams, or customers share a Kubernetes cluster by providing a scope for names and a mechanism to attach authorization and policy to a subsection of the cluster. For example, define one context per namespace:
kubectl config set-context dev --namespace=development \
--cluster=super_kubernetes \
--user=super_kubernetes
kubectl config set-context prod --namespace=production \
--cluster=super_kubernetes \
--user=super_kubernetes
kubectl config view
namespace-dev.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: development
  labels:
    name: development
namespace-prod.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: production
  labels:
    name: production
kubectl config use-context dev
kubectl config current-context
Show help for a subcommand:
kubectl config --help
Apply manifest:
kubectl apply -f namespace-dev.yaml
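Apply the production manifest the same way, then confirm both namespaces and their labels:

kubectl apply -f namespace-prod.yaml
kubectl get namespaces --show-labels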
kubectx - switch contexts (clusters) faster
kubens - switch namespaces faster
Labels are key-value pairs used to identify and filter objects through the API. They are intended to specify identifying attributes that are relevant to users.
Annotations are key-value pairs that undergo no validation and are meant for notes and other non-identifying metadata.
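A minimal sketch of the difference (the pod name, label, and annotation values are hypothetical):

apiVersion: v1
kind: Pod
metadata:
  name: example-pod
  labels:
    environment: production        # identifying: you can select on this
  annotations:
    notes: "manually scaled during the 2024 incident"   # non-identifying, free-form
spec:
  containers:
    - name: app
      image: nginx

Labels then drive filtering, e.g. kubectl get pods -l environment=production; annotations cannot be used in selectors.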
Finalizers are namespaced keys that tell Kubernetes to wait until specific conditions are met before it fully deletes resources that are marked for deletion. Finalizers can get in the way of deleting resources in Kubernetes, especially when there are parent-child relationships between objects. When you create a resource using a manifest file, you can specify finalizers in the metadata.finalizers field. When you attempt to delete the resource, the API server handling the delete request notices the values and does the following: instead of removing the object, it sets the metadata.deletionTimestamp field and prevents the object from being removed until its finalizers field is emptied.
The relevant controller sees the deletionTimestamp and attempts to satisfy the requirement. On success, the controller removes its key from the finalizers field. Once the finalizers field is empty, an object with a deletionTimestamp set is deleted automatically.
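A sketch mirroring the upstream docs example: a ConfigMap with a finalizer set. Deleting it hangs until the finalizer is removed, e.g. with a JSON patch (the name mymap and the kubernetes finalizer key are placeholders):

apiVersion: v1
kind: ConfigMap
metadata:
  name: mymap
  finalizers:
    - kubernetes

kubectl delete configmap/mymap &          # blocks: the object only gets a deletionTimestamp
kubectl patch configmap/mymap --type json \
  --patch='[{"op":"remove","path":"/metadata/finalizers"}]'   # deletion now completes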
kind create cluster # blocks until the control plane reaches a ready status
cat ~/.kube/config
kubectx
kind delete cluster
minikube start
Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
minikube pause # pause minikube without affecting deployed applications
minikube stop
minikube dashboard # opens the local url in your browser when ready
minikube addons list
kubectl cluster-info dump | less
Aims to minimize vulnerabilities by providing container images; the latest versions are free.
Evaluates policies against your Kubernetes files; manifest scanning.
Deployable as a single binary that can be configured to perform different roles within the Jaeger architecture.
Runs a bunch of networking concerns/features inside a sidecar that each service can use, instead of implementing them directly in each service.
The usefulness of this scales with the number of services you have. If you have just 2 services talking to each other, implement the features you care about in the services directly. The decision to use a service mesh needs to be made at the beginning of the project: is the extra complexity worth the benefits? Consider capacity planning (increased resource usage), networking design, qualified people, etc.
cilium
istio - Bookinfo application sample
Some of the things service meshes typically do out of the box (not all of them are equal) include mutual TLS between services, telemetry collection, and basic traffic management.
Service meshes can also do traffic shifting and steering, retries with back-offs, canary releases, and A/B testing using a rather simple approach. Instead of implementing these things in the application code (especially retries) or using pieces of your infrastructure to perform canary releases or traffic shifting, you define your policies in a YAML file and send it to the mesh control plane, which programs the proxies running alongside your workloads. It is very important that other members of the project are familiar with the mesh's capabilities!
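A sketch of such a policy with Istio (host and subset names are borrowed from the Bookinfo sample, and a DestinationRule defining the v1/v2 subsets is assumed to exist); this shifts 90% of traffic to v1 and 10% to v2:

apiVersion: networking.istio.io/v1
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
    - reviews
  http:
    - route:
        - destination:
            host: reviews
            subset: v1
          weight: 90
        - destination:
            host: reviews
            subset: v2
          weight: 10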
Service Discovery - a tool like Istio uses Kubernetes Services for service discovery. Since each pod has a little proxy running alongside it (called a sidecar), each pod in the mesh is aware of all the other pods' IPs. When two pods talk to each other, your application uses the Service you created to find the server it wants to reach, but traffic going out of your application container is intercepted by the sidecar proxy, policies are applied to it, and traffic is sent directly to the receiving end using the pod IP; the Service's virtual IP does no load balancing here.
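To inspect what a given sidecar has discovered, istioctl can dump its endpoint table (pod name and namespace are placeholders):

istioctl proxy-config endpoints <pod-name>.<namespace>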
Ambient mesh: a service mesh without sidecars. Since Istio doesn't use eBPF, it could be useful to integrate it with Cilium.