Deployments

A Deployment manages a set of Pods to run an application workload, usually one that doesn't maintain state.

A Deployment provides declarative updates for Pods and ReplicaSets.

You describe a desired state in a Deployment, and the Deployment Controller changes the actual state to the desired state at a controlled rate. You can define Deployments to create new ReplicaSets, or to remove existing Deployments and adopt all their resources with new Deployments.

Note:

Do not manage ReplicaSets owned by a Deployment. Consider opening an issue in the main Kubernetes repository if your use case is not covered below.

Use Case

The following are typical use cases for Deployments:

  • Create a Deployment to rollout a ReplicaSet. The ReplicaSet creates Pods in the background. Check the status of the rollout to see if it succeeds or not.
  • Declare the new state of the Pods by updating the PodTemplateSpec of the Deployment. A new ReplicaSet is created and the Deployment manages moving the Pods from the old ReplicaSet to the new one at a controlled rate. Each new ReplicaSet updates the revision of the Deployment.
  • Rollback to an earlier Deployment revision if the current state of the Deployment is not stable. Each rollback updates the revision of the Deployment.
  • Scale up the Deployment to facilitate more load.
  • Pause the rollout of a Deployment to apply multiple fixes to its PodTemplateSpec and then resume it to start a new rollout.
  • Use the status of the Deployment as an indicator that a rollout has stuck.
  • Clean up older ReplicaSets that you don't need anymore.

Creating a Deployment

The following is an example of a Deployment. It creates a ReplicaSet to bring up three nginx Pods:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80

In this example:

  • A Deployment named nginx-deployment is created, indicated by the .metadata.name field. This name will become the basis for the ReplicaSets and Pods which are created later. See Writing a Deployment Spec for more details.

  • The Deployment creates a ReplicaSet that creates three replicated Pods, indicated by the .spec.replicas field.

  • The .spec.selector field defines how the created ReplicaSet finds which Pods to manage. In this case, you select a label that is defined in the Pod template (app: nginx). However, more sophisticated selection rules are possible, as long as the Pod template itself satisfies the rule.

    Note:

    The .spec.selector.matchLabels field is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". All of the requirements, from both matchLabels and matchExpressions, must be satisfied in order to match. (A concrete comparison of the two forms is sketched after this list.)

  • The template field contains the following sub-fields:

    • The Pods are labeled app: nginx using the .metadata.labels field.
    • The Pod template's specification, or .template.spec field, indicates that the Pods run one container, nginx, which runs the nginx Docker Hub image at version 1.14.2.
    • Create one container and name it nginx using the .spec.template.spec.containers[0].name field.
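
To make the selector note above concrete, here is a minimal sketch (not part of the example manifest) of two equivalent ways to write the same selector; the matchExpressions form is simply the expanded version of the matchLabels form:

selector:
  matchLabels:
    app: nginx

# The selector above is equivalent to:
selector:
  matchExpressions:
  - key: app
    operator: In
    values:
    - nginx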

Before you begin, make sure your Kubernetes cluster is up and running. Follow the steps given below to create the above Deployment:

  1. Create the Deployment by running the following command:

    kubectl apply -f https://k8s.io/examples/controllers/nginx-deployment.yaml
  2. Run kubectl get deployments to check if the Deployment was created.

    If the Deployment is still being created, the output is similar to the following:

    NAME               READY   UP-TO-DATE   AVAILABLE   AGE
    nginx-deployment   0/3     0            0           1s

    When you inspect the Deployments in your cluster, the following fields are displayed:

    • NAME lists the names of the Deployments in the namespace.
    • READY displays how many replicas of the application are available to your users. It follows the pattern ready/desired.
    • UP-TO-DATE displays the number of replicas that have been updated to achieve the desired state.
    • AVAILABLE displays how many replicas of the application are available to your users.
    • AGE displays the amount of time that the application has been running.

    Notice how the number of desired replicas is 3, according to the .spec.replicas field.

  3. To see the Deployment rollout status, run kubectl rollout status deployment/nginx-deployment.

    The output is similar to:

    Waiting for rollout to finish: 2 out of 3 new replicas have been updated...
    deployment "nginx-deployment" successfully rolled out
  4. Run kubectl get deployments again a few seconds later. The output is similar to this:

    NAME               READY   UP-TO-DATE   AVAILABLE   AGE
    nginx-deployment   3/3     3            3           18s

    Notice that the Deployment has created all three replicas, and all replicas are up-to-date (they contain the latest Pod template) and available.

  5. To see the ReplicaSet (rs) created by the Deployment, run kubectl get rs. The output is similar to this:

    NAME                          DESIRED   CURRENT   READY   AGE
    nginx-deployment-75675f5897   3         3         3       18s

    ReplicaSet output shows the following fields:

    • NAME lists the names of the ReplicaSets in the namespace.
    • DESIRED displays the desired number of replicas of the application, which you define when you create the Deployment. This is the desired state.
    • CURRENT displays how many replicas are currently running.
    • READY displays how many replicas of the application are available to your users.
    • AGE displays the amount of time that the application has been running.

    Notice that the name of the ReplicaSet is always formatted as [DEPLOYMENT-NAME]-[HASH]. This name will become the basis for the Pods which are created.

    The HASH string is the same as the pod-template-hash label on the ReplicaSet.

  6. To see the labels automatically generated for each Pod, run kubectl get pods --show-labels. The output is similar to:

    NAME                                READY   STATUS    RESTARTS   AGE   LABELS
    nginx-deployment-75675f5897-7ci7o   1/1     Running   0          18s   app=nginx,pod-template-hash=75675f5897
    nginx-deployment-75675f5897-kzszj   1/1     Running   0          18s   app=nginx,pod-template-hash=75675f5897
    nginx-deployment-75675f5897-qqcnn   1/1     Running   0          18s   app=nginx,pod-template-hash=75675f5897

    The created ReplicaSet ensures that there are three nginx Pods.

Note:

You must specify an appropriate selector and Pod template labels in a Deployment (in this case, app: nginx).

Do not overlap labels or selectors with other controllers (including other Deployments and StatefulSets). Kubernetes doesn't stop you from overlapping, and if multiple controllers have overlapping selectors those controllers might conflict and behave unexpectedly.

Pod-template-hash label

Caution:

Do not change this label.

The pod-template-hash label is added by the Deployment controller to every ReplicaSet that a Deployment creates or adopts.

This label ensures that child ReplicaSets of a Deployment do not overlap. It is generated by hashing the PodTemplate of the ReplicaSet and using the resulting hash as the label value that is added to the ReplicaSet selector, Pod template labels, and in any existing Pods that the ReplicaSet might have.
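
For example (a quick check rather than part of the walkthrough above), you can display this label on the ReplicaSets with the --show-labels flag; the LABELS column then contains entries such as app=nginx,pod-template-hash=75675f5897:

kubectl get rs --show-labels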

Updating a Deployment

Note:

A Deployment's rollout is triggered if and only if the Deployment's Pod template (that is, .spec.template) is changed, for example if the labels or container images of the template are updated. Other updates, such as scaling the Deployment, do not trigger a rollout.

Follow the steps given below to update your Deployment:

  1. Let's update the nginx Pods to use the nginx:1.16.1 image instead of the nginx:1.14.2 image.

    kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.16.1

    or use the following command:

    kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1

    where deployment/nginx-deployment indicates the Deployment, nginx indicates the container in which the update will take place, and nginx:1.16.1 indicates the new image and its tag.

    The output is similar to:

    deployment.apps/nginx-deployment image updated

    Alternatively, you can edit the Deployment and change .spec.template.spec.containers[0].image from nginx:1.14.2 to nginx:1.16.1:

    kubectl edit deployment/nginx-deployment

    The output is similar to:

    deployment.apps/nginx-deployment edited
  2. To see the rollout status, run:

    kubectl rollout status deployment/nginx-deployment

    The output is similar to this:

    Waiting for rollout to finish: 2 out of 3 new replicas have been updated...

    or

    deployment "nginx-deployment" successfully rolled out

Get more details on your updated Deployment:

  • After the rollout succeeds, you can view the Deployment by running kubectl get deployments. The output is similar to this:

    NAME               READY   UP-TO-DATE   AVAILABLE   AGE
    nginx-deployment   3/3     3            3           36s
  • Run kubectl get rs to see that the Deployment updated the Pods by creating a new ReplicaSet and scaling it up to 3 replicas, as well as scaling down the old ReplicaSet to 0 replicas.

    kubectl get rs

    The output is similar to this:

    NAME                          DESIRED   CURRENT   READY   AGE
    nginx-deployment-1564180365   3         3         3       6s
    nginx-deployment-2035384211   0         0         0       36s
  • Running get pods should now show only the new Pods:

    kubectl get pods

    The output is similar to this:

    NAME                                READY   STATUS    RESTARTS   AGE
    nginx-deployment-1564180365-khku8   1/1     Running   0          14s
    nginx-deployment-1564180365-nacti   1/1     Running   0          14s
    nginx-deployment-1564180365-z9gth   1/1     Running   0          14s

    Next time you want to update these Pods, you only need to update the Deployment's Pod template again.

    Deployment ensures that only a certain number of Pods are down while they are being updated. By default, it ensures that at least 75% of the desired number of Pods are up (25% max unavailable).

    Deployment also ensures that only a certain number of Pods are created above the desired number of Pods. By default, it ensures that at most 125% of the desired number of Pods are up (25% max surge).

    For example, if you look at the above Deployment closely, you will see that it first creates a new Pod, then deletes an old Pod, and creates another new one. It does not kill old Pods until a sufficient number of new Pods have come up, and does not create new Pods until a sufficient number of old Pods have been killed. It makes sure that at least 3 Pods are available and that at max 4 Pods in total are available. In case of a Deployment with 4 replicas, the number of Pods would be between 3 and 5.

  • Get details of your Deployment:

    kubectl describe deployments

    The output is similar to this:

    Name:                   nginx-deployment
    Namespace:              default
    CreationTimestamp:      Thu, 30 Nov 2017 10:56:25 +0000
    Labels:                 app=nginx
    Annotations:            deployment.kubernetes.io/revision=2
    Selector:               app=nginx
    Replicas:               3 desired | 3 updated | 3 total | 3 available | 0 unavailable
    StrategyType:           RollingUpdate
    MinReadySeconds:        0
    RollingUpdateStrategy:  25% max unavailable, 25% max surge
    Pod Template:
      Labels:  app=nginx
      Containers:
       nginx:
        Image:        nginx:1.16.1
        Port:         80/TCP
        Environment:  <none>
        Mounts:       <none>
      Volumes:        <none>
    Conditions:
      Type           Status  Reason
      ----           ------  ------
      Available      True    MinimumReplicasAvailable
      Progressing    True    NewReplicaSetAvailable
    OldReplicaSets:  <none>
    NewReplicaSet:   nginx-deployment-1564180365 (3/3 replicas created)
    Events:
      Type    Reason             Age   From                   Message
      ----    ------             ----  ----                   -------
      Normal  ScalingReplicaSet  2m    deployment-controller  Scaled up replica set nginx-deployment-2035384211 to 3
      Normal  ScalingReplicaSet  24s   deployment-controller  Scaled up replica set nginx-deployment-1564180365 to 1
      Normal  ScalingReplicaSet  22s   deployment-controller  Scaled down replica set nginx-deployment-2035384211 to 2
      Normal  ScalingReplicaSet  22s   deployment-controller  Scaled up replica set nginx-deployment-1564180365 to 2
      Normal  ScalingReplicaSet  19s   deployment-controller  Scaled down replica set nginx-deployment-2035384211 to 1
      Normal  ScalingReplicaSet  19s   deployment-controller  Scaled up replica set nginx-deployment-1564180365 to 3
      Normal  ScalingReplicaSet  14s   deployment-controller  Scaled down replica set nginx-deployment-2035384211 to 0

    Here you see that when you first created the Deployment, it created a ReplicaSet (nginx-deployment-2035384211) and scaled it up to 3 replicas directly. When you updated the Deployment, it created a new ReplicaSet (nginx-deployment-1564180365) and scaled it up to 1 and waited for it to come up. Then it scaled down the old ReplicaSet to 2 and scaled up the new ReplicaSet to 2 so that at least 3 Pods were available and at most 4 Pods were created at all times. It then continued scaling up and down the new and the old ReplicaSet, with the same rolling update strategy. Finally, you'll have 3 available replicas in the new ReplicaSet, and the old ReplicaSet is scaled down to 0.

Note:

Kubernetes doesn't count terminating Pods when calculating the number of availableReplicas, which must be between replicas - maxUnavailable and replicas + maxSurge. As a result, you might notice that there are more Pods than expected during a rollout, and that the total resources consumed by the Deployment is more than replicas + maxSurge until the terminationGracePeriodSeconds of the terminating Pods expires.

Rollover (aka multiple updates in-flight)

Each time a new Deployment is observed by the Deployment controller, a ReplicaSet is created to bring up the desired Pods. If the Deployment is updated, the existing ReplicaSets that control Pods whose labels match .spec.selector but whose template does not match .spec.template are scaled down. Eventually, the new ReplicaSet is scaled to .spec.replicas and all old ReplicaSets are scaled to 0.

If you update a Deployment while an existing rollout is in progress, the Deployment creates a new ReplicaSet as per the update and starts scaling that up, and rolls over the ReplicaSet that it was scaling up previously -- it will add it to its list of old ReplicaSets and start scaling it down.

For example, suppose you create a Deployment to create 5 replicas of nginx:1.14.2, but then update the Deployment to create 5 replicas of nginx:1.16.1, when only 3 replicas of nginx:1.14.2 had been created. In that case, the Deployment immediately starts killing the 3 nginx:1.14.2 Pods that it had created, and starts creating nginx:1.16.1 Pods. It does not wait for the 5 replicas of nginx:1.14.2 to be created before changing course.

Label selector updates

It is generally discouraged to make label selector updates and it is suggested to plan your selectors up front. In any case, if you need to perform a label selector update, exercise great caution and make sure you have grasped all of the implications.

Note:

In API version apps/v1, a Deployment's label selector is immutable after it gets created.

  • Selector additions require the Pod template labels in the Deployment spec to be updated with the new label too, otherwise a validation error is returned. This change is a non-overlapping one, meaning that the new selector does not select ReplicaSets and Pods created with the old selector, resulting in orphaning all old ReplicaSets and creating a new ReplicaSet.
  • Selector updates change the existing value in a selector key and result in the same behavior as additions.
  • Selector removals remove an existing key from the Deployment selector and do not require any changes in the Pod template labels. Existing ReplicaSets are not orphaned, and a new ReplicaSet is not created, but note that the removed label still exists in any existing Pods and ReplicaSets.

Rolling Back a Deployment

Sometimes, you may want to roll back a Deployment; for example, when the Deployment is not stable, such as crash looping. By default, all of the Deployment's rollout history is kept in the system so that you can roll back anytime you want (you can change that by modifying the revision history limit).

Note:

A Deployment's revision is created when a Deployment's rollout is triggered. This means that the new revision is created if and only if the Deployment's Pod template (.spec.template) is changed, for example if you update the labels or container images of the template. Other updates, such as scaling the Deployment, do not create a Deployment revision, so that you can facilitate simultaneous manual- or auto-scaling. This means that when you roll back to an earlier revision, only the Deployment's Pod template part is rolled back.

  • Suppose that you made a typo while updating the Deployment, by putting the image name as nginx:1.161 instead of nginx:1.16.1:

    kubectl set image deployment/nginx-deployment nginx=nginx:1.161

    The output is similar to this:

    deployment.apps/nginx-deployment image updated
  • The rollout gets stuck. You can verify it by checking the rollout status:

    kubectl rollout status deployment/nginx-deployment

    The output is similar to this:

    Waiting for rollout to finish: 1 out of 3 new replicas have been updated...
  • Press Ctrl-C to stop the above rollout status watch. For more information on stuck rollouts, see the Deployment status section below.

  • You see that the number of old replicas (adding the replica count from nginx-deployment-1564180365 and nginx-deployment-2035384211) is 3, and the number of new replicas (from nginx-deployment-3066724191) is 1.

    kubectl get rs

    The output is similar to this:

    NAME                          DESIRED   CURRENT   READY   AGE
    nginx-deployment-1564180365   3         3         3       25s
    nginx-deployment-2035384211   0         0         0       36s
    nginx-deployment-3066724191   1         1         0       6s
  • Looking at the Pods created, you see that 1 Pod created by the new ReplicaSet is stuck in an image pull loop.

    kubectl get pods

    The output is similar to this:

    NAME                                READY   STATUS             RESTARTS   AGE
    nginx-deployment-1564180365-70iae   1/1     Running            0          25s
    nginx-deployment-1564180365-jbqqo   1/1     Running            0          25s
    nginx-deployment-1564180365-hysrc   1/1     Running            0          25s
    nginx-deployment-3066724191-08mng   0/1     ImagePullBackOff   0          6s

    Note:

    The Deployment controller stops the bad rollout automatically, and stops scaling up the new ReplicaSet. This depends on the rollingUpdate parameters (maxUnavailable specifically) that you have specified. Kubernetes by default sets the value to 25%.

  • Get the description of the Deployment:

    kubectl describe deployment

    The output is similar to this:

    Name:                   nginx-deployment
    Namespace:              default
    CreationTimestamp:      Tue, 15 Mar 2016 14:48:04 -0700
    Labels:                 app=nginx
    Selector:               app=nginx
    Replicas:               3 desired | 1 updated | 4 total | 3 available | 1 unavailable
    StrategyType:           RollingUpdate
    MinReadySeconds:        0
    RollingUpdateStrategy:  25% max unavailable, 25% max surge
    Pod Template:
      Labels:  app=nginx
      Containers:
       nginx:
        Image:        nginx:1.161
        Port:         80/TCP
        Host Port:    0/TCP
        Environment:  <none>
        Mounts:       <none>
      Volumes:        <none>
    Conditions:
      Type           Status  Reason
      ----           ------  ------
      Available      True    MinimumReplicasAvailable
      Progressing    True    ReplicaSetUpdated
    OldReplicaSets:  nginx-deployment-1564180365 (3/3 replicas created)
    NewReplicaSet:   nginx-deployment-3066724191 (1/1 replicas created)
    Events:
      FirstSeen  LastSeen  Count  From                     SubObjectPath  Type    Reason             Message
      ---------  --------  -----  ----                     -------------  ------  ------             -------
      1m         1m        1      {deployment-controller }                Normal  ScalingReplicaSet  Scaled up replica set nginx-deployment-2035384211 to 3
      22s        22s       1      {deployment-controller }                Normal  ScalingReplicaSet  Scaled up replica set nginx-deployment-1564180365 to 1
      22s        22s       1      {deployment-controller }                Normal  ScalingReplicaSet  Scaled down replica set nginx-deployment-2035384211 to 2
      22s        22s       1      {deployment-controller }                Normal  ScalingReplicaSet  Scaled up replica set nginx-deployment-1564180365 to 2
      21s        21s       1      {deployment-controller }                Normal  ScalingReplicaSet  Scaled down replica set nginx-deployment-2035384211 to 1
      21s        21s       1      {deployment-controller }                Normal  ScalingReplicaSet  Scaled up replica set nginx-deployment-1564180365 to 3
      13s        13s       1      {deployment-controller }                Normal  ScalingReplicaSet  Scaled down replica set nginx-deployment-2035384211 to 0
      13s        13s       1      {deployment-controller }                Normal  ScalingReplicaSet  Scaled up replica set nginx-deployment-3066724191 to 1

    To fix this, you need to roll back to a previous revision of the Deployment that is stable.

Checking Rollout History of a Deployment

Follow the steps given below to check the rollout history:

  1. First, check the revisions of this Deployment:

    kubectl rollout history deployment/nginx-deployment

    The output is similar to this:

    deployments "nginx-deployment"REVISION CHANGE-CAUSE1 kubectl apply --filename=https://k8s.io/examples/controllers/nginx-deployment.yaml2 kubectl set image deployment/nginx-deployment nginx=nginx:1.16.13 kubectl set image deployment/nginx-deployment nginx=nginx:1.161

    CHANGE-CAUSE is copied from the Deployment annotation kubernetes.io/change-cause to its revisions upon creation. You can specify the CHANGE-CAUSE message by:

    • Annotating the Deployment with kubectl annotate deployment/nginx-deployment kubernetes.io/change-cause="image updated to 1.16.1"
    • Manually editing the manifest of the resource.
  2. To see the details of each revision, run:

    kubectl rollout history deployment/nginx-deployment --revision=2

    The output is similar to this:

    deployments "nginx-deployment" revision 2 Labels: app=nginx pod-template-hash=1159050644 Annotations: kubernetes.io/change-cause=kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1 Containers: nginx: Image: nginx:1.16.1 Port: 80/TCP QoS Tier: cpu: BestEffort memory: BestEffort Environment Variables: <none> No volumes.

Rolling Back to a Previous Revision

Follow the steps given below to roll back the Deployment from the current version to the previous version, which is version 2.

  1. Now you've decided to undo the current rollout and roll back to the previous revision:

    kubectl rollout undo deployment/nginx-deployment

    The output is similar to this:

    deployment.apps/nginx-deployment rolled back

    Alternatively, you can rollback to a specific revision by specifying it with --to-revision:

    kubectl rollout undo deployment/nginx-deployment --to-revision=2

    The output is similar to this:

    deployment.apps/nginx-deployment rolled back

    For more details about rollout related commands, read kubectl rollout.

    The Deployment is now rolled back to a previous stable revision. As you can see, a DeploymentRollback event for rolling back to revision 2 is generated by the Deployment controller.

  2. To check if the rollback was successful and the Deployment is running as expected, run:

    kubectl get deployment nginx-deployment

    The output is similar to this:

    NAME               READY   UP-TO-DATE   AVAILABLE   AGE
    nginx-deployment   3/3     3            3           30m
  3. Get the description of the Deployment:

    kubectl describe deployment nginx-deployment

    The output is similar to this:

    Name:                   nginx-deployment
    Namespace:              default
    CreationTimestamp:      Sun, 02 Sep 2018 18:17:55 -0500
    Labels:                 app=nginx
    Annotations:            deployment.kubernetes.io/revision=4
                            kubernetes.io/change-cause=kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1
    Selector:               app=nginx
    Replicas:               3 desired | 3 updated | 3 total | 3 available | 0 unavailable
    StrategyType:           RollingUpdate
    MinReadySeconds:        0
    RollingUpdateStrategy:  25% max unavailable, 25% max surge
    Pod Template:
      Labels:  app=nginx
      Containers:
       nginx:
        Image:        nginx:1.16.1
        Port:         80/TCP
        Host Port:    0/TCP
        Environment:  <none>
        Mounts:       <none>
      Volumes:        <none>
    Conditions:
      Type           Status  Reason
      ----           ------  ------
      Available      True    MinimumReplicasAvailable
      Progressing    True    NewReplicaSetAvailable
    OldReplicaSets:  <none>
    NewReplicaSet:   nginx-deployment-c4747d96c (3/3 replicas created)
    Events:
      Type    Reason              Age   From                   Message
      ----    ------              ----  ----                   -------
      Normal  ScalingReplicaSet   12m   deployment-controller  Scaled up replica set nginx-deployment-75675f5897 to 3
      Normal  ScalingReplicaSet   11m   deployment-controller  Scaled up replica set nginx-deployment-c4747d96c to 1
      Normal  ScalingReplicaSet   11m   deployment-controller  Scaled down replica set nginx-deployment-75675f5897 to 2
      Normal  ScalingReplicaSet   11m   deployment-controller  Scaled up replica set nginx-deployment-c4747d96c to 2
      Normal  ScalingReplicaSet   11m   deployment-controller  Scaled down replica set nginx-deployment-75675f5897 to 1
      Normal  ScalingReplicaSet   11m   deployment-controller  Scaled up replica set nginx-deployment-c4747d96c to 3
      Normal  ScalingReplicaSet   11m   deployment-controller  Scaled down replica set nginx-deployment-75675f5897 to 0
      Normal  ScalingReplicaSet   11m   deployment-controller  Scaled up replica set nginx-deployment-595696685f to 1
      Normal  DeploymentRollback  15s   deployment-controller  Rolled back deployment "nginx-deployment" to revision 2
      Normal  ScalingReplicaSet   15s   deployment-controller  Scaled down replica set nginx-deployment-595696685f to 0

Scaling a Deployment

You can scale a Deployment by using the following command:

kubectl scale deployment/nginx-deployment --replicas=10

The output is similar to this:

deployment.apps/nginx-deployment scaled

Assuming horizontal Pod autoscaling is enabled in your cluster, you can set up an autoscaler for your Deployment and choose the minimum and maximum number of Pods you want to run based on the CPU utilization of your existing Pods.

kubectl autoscale deployment/nginx-deployment --min=10 --max=15 --cpu-percent=80

The output is similar to this:

deployment.apps/nginx-deployment scaled

Proportional scaling

RollingUpdate Deployments support running multiple versions of an application at the same time. When you or an autoscaler scales a RollingUpdate Deployment that is in the middle of a rollout (either in progress or paused), the Deployment controller balances the additional replicas in the existing active ReplicaSets (ReplicaSets with Pods) in order to mitigate risk. This is called proportional scaling.

For example, you are running a Deployment with 10 replicas, maxSurge=3, and maxUnavailable=2.
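
For reference, the relevant portion of such a Deployment manifest might look like the following sketch (only the fields that matter for this scenario are shown):

spec:
  replicas: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 3
      maxUnavailable: 2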

  • Ensure that the 10 replicas in your Deployment are running.

    kubectl get deploy

    The output is similar to this:

    NAME               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
    nginx-deployment   10        10        10           10          50s
  • You update to a new image which happens to be unresolvable from inside the cluster.

    kubectl set image deployment/nginx-deployment nginx=nginx:sometag

    The output is similar to this:

    deployment.apps/nginx-deployment image updated
  • The image update starts a new rollout with ReplicaSet nginx-deployment-1989198191, but it's blocked due to the maxUnavailable requirement that you mentioned above. Check out the rollout status:

    kubectl get rs

    The output is similar to this:

    NAME                          DESIRED   CURRENT   READY   AGE
    nginx-deployment-1989198191   5         5         0       9s
    nginx-deployment-618515232    8         8         8       1m
  • Then a new scaling request for the Deployment comes along. The autoscaler increments the Deployment replicas to 15. The Deployment controller needs to decide where to add these new 5 replicas. If you weren't using proportional scaling, all 5 of them would be added in the new ReplicaSet. With proportional scaling, you spread the additional replicas across all ReplicaSets. Bigger proportions go to the ReplicaSets with the most replicas and lower proportions go to ReplicaSets with fewer replicas. Any leftovers are added to the ReplicaSet with the most replicas. ReplicaSets with zero replicas are not scaled up.

In our example above, 3 replicas are added to the old ReplicaSet and 2 replicas are added to the new ReplicaSet. The rollout process should eventually move all replicas to the new ReplicaSet, assuming the new replicas become healthy. To confirm this, run:

kubectl get deploy

The output is similar to this:

NAME               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   15        18        7            8           7m

The rollout status confirms how the replicas were added to each ReplicaSet.

kubectl get rs

The output is similar to this:

NAME                          DESIRED   CURRENT   READY   AGE
nginx-deployment-1989198191   7         7         0       7m
nginx-deployment-618515232    11        11        11      7m

Pausing and Resuming a rollout of a Deployment

When you update a Deployment, or plan to, you can pause rollouts for that Deployment before you trigger one or more updates. When you're ready to apply those changes, you resume rollouts for the Deployment. This approach allows you to apply multiple fixes in between pausing and resuming without triggering unnecessary rollouts.

  • For example, with a Deployment that was created:

    Get the Deployment details:

    kubectl get deploy

    The output is similar to this:

    NAME    DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
    nginx   3         3         3            3           1m

    Get the rollout status:

    kubectl get rs

    The output is similar to this:

    NAME               DESIRED   CURRENT   READY   AGE
    nginx-2142116321   3         3         3       1m
  • Pause by running the following command:

    kubectl rollout pause deployment/nginx-deployment

    The output is similar to this:

    deployment.apps/nginx-deployment paused
  • Then update the image of the Deployment:

    kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1

    The output is similar to this:

    deployment.apps/nginx-deployment image updated
  • Notice that no new rollout started:

    kubectl rollout history deployment/nginx-deployment

    The output is similar to this:

    deployments "nginx"REVISION CHANGE-CAUSE1 <none>
  • Get the rollout status to verify that the existing ReplicaSet has not changed:

    kubectl get rs

    The output is similar to this:

    NAME               DESIRED   CURRENT   READY   AGE
    nginx-2142116321   3         3         3       2m
  • You can make as many updates as you wish, for example, update the resources that will be used:

    kubectl set resources deployment/nginx-deployment -c=nginx --limits=cpu=200m,memory=512Mi

    The output is similar to this:

    deployment.apps/nginx-deployment resource requirements updated

    The initial state of the Deployment prior to pausing its rollout will continue its function, but new updates to the Deployment will not have any effect as long as the Deployment rollout is paused.

  • Eventually, resume the Deployment rollout and observe a new ReplicaSet coming up with all the new updates:

    kubectl rollout resume deployment/nginx-deployment

    The output is similar to this:

    deployment.apps/nginx-deployment resumed
  • Watch the status of the rollout until it's done.

    kubectl get rs --watch

    The output is similar to this:

    NAME               DESIRED   CURRENT   READY   AGE
    nginx-2142116321   2         2         2       2m
    nginx-3926361531   2         2         0       6s
    nginx-3926361531   2         2         1       18s
    nginx-2142116321   1         2         2       2m
    nginx-2142116321   1         2         2       2m
    nginx-3926361531   3         2         1       18s
    nginx-3926361531   3         2         1       18s
    nginx-2142116321   1         1         1       2m
    nginx-3926361531   3         3         1       18s
    nginx-3926361531   3         3         2       19s
    nginx-2142116321   0         1         1       2m
    nginx-2142116321   0         1         1       2m
    nginx-2142116321   0         0         0       2m
    nginx-3926361531   3         3         3       20s
  • Get the status of the latest rollout:

    kubectl get rs

    The output is similar to this:

    NAME               DESIRED   CURRENT   READY   AGE
    nginx-2142116321   0         0         0       2m
    nginx-3926361531   3         3         3       28s

Note:

You cannot roll back a paused Deployment until you resume it.

Deployment status

A Deployment enters various states during its lifecycle. It can be progressing while rolling out a new ReplicaSet, it can be complete, or it can fail to progress.

Progressing Deployment

Kubernetes marks a Deployment as progressing when one of the following tasks is performed:

  • The Deployment creates a new ReplicaSet.
  • The Deployment is scaling up its newest ReplicaSet.
  • The Deployment is scaling down its older ReplicaSet(s).
  • New Pods become ready or available (ready for at least MinReadySeconds).

When the rollout becomes “progressing”, the Deployment controller adds a condition with the following attributes to the Deployment's .status.conditions:

  • type: Progressing
  • status: "True"
  • reason: NewReplicaSetCreated | reason: FoundNewReplicaSet | reason: ReplicaSetUpdated

You can monitor the progress for a Deployment by using kubectl rollout status.
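
If you want to inspect the condition itself rather than watch the rollout, one option (a sketch that assumes your kubectl supports JSONPath filter expressions, which current versions do) is:

kubectl get deployment/nginx-deployment \
  -o jsonpath='{.status.conditions[?(@.type=="Progressing")].reason}'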

Complete Deployment

Kubernetes marks a Deployment as complete when it has the following characteristics:

  • All of the replicas associated with the Deployment have been updated to the latest version you've specified, meaning anyupdates you've requested have been completed.
  • All of the replicas associated with the Deployment are available.
  • No old replicas for the Deployment are running.

When the rollout becomes “complete”, the Deployment controller sets a condition with the following attributes to the Deployment's .status.conditions:

  • type: Progressing
  • status: "True"
  • reason: NewReplicaSetAvailable

This Progressing condition will retain a status value of "True" until a new rollout is initiated. The condition holds even when availability of replicas changes (which does instead affect the Available condition).

You can check if a Deployment has completed by using kubectl rollout status. If the rollout completed successfully, kubectl rollout status returns a zero exit code.

kubectl rollout status deployment/nginx-deployment

The output is similar to this:

Waiting for rollout to finish: 2 of 3 updated replicas are available...
deployment "nginx-deployment" successfully rolled out

and the exit status from kubectl rollout is 0 (success):

echo $?
0

Failed Deployment

Your Deployment may get stuck trying to deploy its newest ReplicaSet without ever completing. This can occur due to some of the following factors:

  • Insufficient quota
  • Readiness probe failures
  • Image pull errors
  • Insufficient permissions
  • Limit ranges
  • Application runtime misconfiguration

One way you can detect this condition is to specify a deadline parameter in your Deployment spec: (.spec.progressDeadlineSeconds). .spec.progressDeadlineSeconds denotes the number of seconds the Deployment controller waits before indicating (in the Deployment status) that the Deployment progress has stalled.

The following kubectl command sets the spec with progressDeadlineSeconds to make the controller report lack of progress of a rollout for a Deployment after 10 minutes:

kubectl patch deployment/nginx-deployment -p '{"spec":{"progressDeadlineSeconds":600}}'

The output is similar to this:

deployment.apps/nginx-deployment patched

Once the deadline has been exceeded, the Deployment controller adds a DeploymentCondition with the following attributes to the Deployment's .status.conditions:

  • type: Progressing
  • status: "False"
  • reason: ProgressDeadlineExceeded

This condition can also fail early and is then set to a status value of "False" due to reasons such as ReplicaSetCreateError. Also, the deadline is not taken into account anymore once the Deployment rollout completes.

See the Kubernetes API conventions for more information on status conditions.

Note:

Kubernetes takes no action on a stalled Deployment other than to report a status condition with reason: ProgressDeadlineExceeded. Higher level orchestrators can take advantage of it and act accordingly, for example, roll back the Deployment to its previous version.

Note:

If you pause a Deployment rollout, Kubernetes does not check progress against your specified deadline. You can safely pause a Deployment rollout in the middle of a rollout and resume without triggering the condition for exceeding the deadline.

You may experience transient errors with your Deployments, either due to a low timeout that you have set or due to any other kind of error that can be treated as transient. For example, let's suppose you have insufficient quota. If you describe the Deployment you will notice the following section:

kubectl describe deployment nginx-deployment

The output is similar to this:

<...>
Conditions:
  Type            Status  Reason
  ----            ------  ------
  Available       True    MinimumReplicasAvailable
  Progressing     True    ReplicaSetUpdated
  ReplicaFailure  True    FailedCreate
<...>

If you run kubectl get deployment nginx-deployment -o yaml, the Deployment status is similar to this:

status:
  availableReplicas: 2
  conditions:
  - lastTransitionTime: 2016-10-04T12:25:39Z
    lastUpdateTime: 2016-10-04T12:25:39Z
    message: Replica set "nginx-deployment-4262182780" is progressing.
    reason: ReplicaSetUpdated
    status: "True"
    type: Progressing
  - lastTransitionTime: 2016-10-04T12:25:42Z
    lastUpdateTime: 2016-10-04T12:25:42Z
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  - lastTransitionTime: 2016-10-04T12:25:39Z
    lastUpdateTime: 2016-10-04T12:25:39Z
    message: 'Error creating: pods "nginx-deployment-4262182780-" is forbidden: exceeded quota:
      object-counts, requested: pods=1, used: pods=3, limited: pods=2'
    reason: FailedCreate
    status: "True"
    type: ReplicaFailure
  observedGeneration: 3
  replicas: 2
  unavailableReplicas: 2

Eventually, once the Deployment progress deadline is exceeded, Kubernetes updates the status and the reason for the Progressing condition:

Conditions:
  Type            Status  Reason
  ----            ------  ------
  Available       True    MinimumReplicasAvailable
  Progressing     False   ProgressDeadlineExceeded
  ReplicaFailure  True    FailedCreate

You can address an issue of insufficient quota by scaling down your Deployment, by scaling down other controllers you may be running, or by increasing quota in your namespace. If you satisfy the quota conditions and the Deployment controller then completes the Deployment rollout, you'll see the Deployment's status update with a successful condition (status: "True" and reason: NewReplicaSetAvailable).

Conditions:
  Type          Status  Reason
  ----          ------  ------
  Available     True    MinimumReplicasAvailable
  Progressing   True    NewReplicaSetAvailable

type: Available with status: "True" means that your Deployment has minimum availability. Minimum availability is dictated by the parameters specified in the deployment strategy. type: Progressing with status: "True" means that your Deployment is either in the middle of a rollout and it is progressing or that it has successfully completed its progress and the minimum required new replicas are available (see the Reason of the condition for the particulars - in our case reason: NewReplicaSetAvailable means that the Deployment is complete).

You can check if a Deployment has failed to progress by using kubectl rollout status. kubectl rollout status returns a non-zero exit code if the Deployment has exceeded the progression deadline.

kubectl rollout status deployment/nginx-deployment

The output is similar to this:

Waiting for rollout to finish: 2 out of 3 new replicas have been updated...
error: deployment "nginx" exceeded its progress deadline

and the exit status from kubectl rollout is 1 (indicating an error):

echo $?
1

Operating on a failed deployment

All actions that apply to a complete Deployment also apply to a failed Deployment. You can scale it up/down, roll back to a previous revision, or even pause it if you need to apply multiple tweaks in the Deployment Pod template.

Clean up Policy

You can set the .spec.revisionHistoryLimit field in a Deployment to specify how many old ReplicaSets for this Deployment you want to retain. The rest will be garbage-collected in the background. By default, it is 10.

Note:

Explicitly setting this field to 0 will result in cleaning up all the history of your Deployment, so that Deployment will not be able to roll back.

Canary Deployment

If you want to roll out releases to a subset of users or servers using the Deployment, you can create multiple Deployments, one for each release, following the canary pattern described in managing resources.
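
As a rough sketch of that pattern (the track label, the replica count, and the canary Deployment name below are illustrative, not taken from the example above), you might run a second Deployment that shares the app: nginx label with the stable one but adds a distinguishing track: canary label and the candidate image, while a Service that selects only app: nginx splits traffic across both:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment-canary
  labels:
    app: nginx
    track: canary
spec:
  replicas: 1                  # small slice of the total capacity
  selector:
    matchLabels:
      app: nginx
      track: canary
  template:
    metadata:
      labels:
        app: nginx
        track: canary
    spec:
      containers:
      - name: nginx
        image: nginx:1.16.1    # candidate image under test
        ports:
        - containerPort: 80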

Writing a Deployment Spec

As with all other Kubernetes configs, a Deployment needs .apiVersion, .kind, and .metadata fields. For general information about working with config files, see the deploying applications, configuring containers, and using kubectl to manage resources documents.

When the control plane creates new Pods for a Deployment, the .metadata.name of the Deployment is part of the basis for naming those Pods. The name of a Deployment must be a valid DNS subdomain value, but this can produce unexpected results for the Pod hostnames. For best compatibility, the name should follow the more restrictive rules for a DNS label.

A Deployment also needs a .spec section.

Pod Template

The .spec.template and .spec.selector are the only required fields of the .spec.

The .spec.template is a Pod template. It has exactly the same schema as a Pod, except it is nested and does not have an apiVersion or kind.

In addition to required fields for a Pod, a Pod template in a Deployment must specify appropriate labels and an appropriate restart policy. For labels, make sure not to overlap with other controllers. See selector.

Only a .spec.template.spec.restartPolicy equal to Always is allowed, which is the default if not specified.

Replicas

.spec.replicas is an optional field that specifies the number of desired Pods. It defaults to 1.

Should you manually scale a Deployment, for example via kubectl scale deployment deployment --replicas=X, and then you update that Deployment based on a manifest (for example, by running kubectl apply -f deployment.yaml), then applying that manifest overwrites the manual scaling that you previously did.
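
For instance (a hypothetical sequence using the nginx-deployment example, assuming its manifest still declares replicas: 3):

kubectl scale deployment/nginx-deployment --replicas=5   # manual scale to 5
kubectl apply -f https://k8s.io/examples/controllers/nginx-deployment.yaml   # replicas reverts to 3 from the manifest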

If a HorizontalPodAutoscaler (or any similar API for horizontal scaling) is managing scaling for a Deployment, don't set .spec.replicas.

Instead, allow the Kubernetes control plane to manage the .spec.replicas field automatically.
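
For illustration, a HorizontalPodAutoscaler targeting the example Deployment could look like the following sketch (a manifest alternative to the kubectl autoscale command shown earlier; the exact thresholds are arbitrary):

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-deployment
spec:
  scaleTargetRef:             # the Deployment whose replicas the HPA manages
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-deployment
  minReplicas: 3
  maxReplicas: 15
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80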

Selector

.spec.selector is a required field that specifies a label selector for the Pods targeted by this Deployment.

.spec.selector must match .spec.template.metadata.labels, or it will be rejected by the API.

In API version apps/v1, .spec.selector and .metadata.labels do not default to .spec.template.metadata.labels if not set. So they must be set explicitly. Also note that .spec.selector is immutable after creation of the Deployment in apps/v1.

A Deployment may terminate Pods whose labels match the selector if their template is differentfrom .spec.template or if the total number of such Pods exceeds .spec.replicas. It brings up newPods with .spec.template if the number of Pods is less than the desired number.

Note:

You should not create other Pods whose labels match this selector, either directly, by creating another Deployment, or by creating another controller such as a ReplicaSet or a ReplicationController. If you do so, the first Deployment thinks that it created these other Pods. Kubernetes does not stop you from doing this.

If you have multiple controllers that have overlapping selectors, the controllers will fight with each other and won't behave correctly.

Strategy

.spec.strategy specifies the strategy used to replace old Pods by new ones. .spec.strategy.type can be "Recreate" or "RollingUpdate". "RollingUpdate" is the default value.

Recreate Deployment

All existing Pods are killed before new ones are created when .spec.strategy.type==Recreate.
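
For example, the strategy portion of a Deployment manifest using this type looks like:

spec:
  strategy:
    type: Recreate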

Note:

This will only guarantee Pod termination previous to creation for upgrades. If you upgrade a Deployment, all Pods of the old revision will be terminated immediately. Successful removal is awaited before any Pod of the new revision is created. If you manually delete a Pod, the lifecycle is controlled by the ReplicaSet and the replacement will be created immediately (even if the old Pod is still in a Terminating state). If you need an "at most" guarantee for your Pods, you should consider using a StatefulSet.

Rolling Update Deployment

The Deployment updates Pods in a rolling update fashion when .spec.strategy.type==RollingUpdate. You can specify maxUnavailable and maxSurge to control the rolling update process.

Max Unavailable

.spec.strategy.rollingUpdate.maxUnavailable is an optional field that specifies the maximum number of Pods that can be unavailable during the update process. The value can be an absolute number (for example, 5) or a percentage of desired Pods (for example, 10%). The absolute number is calculated from the percentage by rounding down. The value cannot be 0 if .spec.strategy.rollingUpdate.maxSurge is 0. The default value is 25%.

For example, when this value is set to 30%, the old ReplicaSet can be scaled down to 70% of desired Pods immediately when the rolling update starts. Once new Pods are ready, the old ReplicaSet can be scaled down further, followed by scaling up the new ReplicaSet, ensuring that the total number of Pods available at all times during the update is at least 70% of the desired Pods.

Max Surge

.spec.strategy.rollingUpdate.maxSurge is an optional field that specifies the maximum number of Pods that can be created over the desired number of Pods. The value can be an absolute number (for example, 5) or a percentage of desired Pods (for example, 10%). The value cannot be 0 if MaxUnavailable is 0. The absolute number is calculated from the percentage by rounding up. The default value is 25%.

For example, when this value is set to 30%, the new ReplicaSet can be scaled up immediately when the rolling update starts, such that the total number of old and new Pods does not exceed 130% of desired Pods. Once old Pods have been killed, the new ReplicaSet can be scaled up further, ensuring that the total number of Pods running at any time during the update is at most 130% of desired Pods.

Here are some Rolling Update Deployment examples that use the maxUnavailable and maxSurge:

  • Max Unavailable

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1

  • Max Surge

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1

  • Hybrid

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1

Progress Deadline Seconds

.spec.progressDeadlineSeconds is an optional field that specifies the number of seconds you want to wait for your Deployment to progress before the system reports back that the Deployment has failed progressing - surfaced as a condition with type: Progressing, status: "False" and reason: ProgressDeadlineExceeded in the status of the resource. The Deployment controller will keep retrying the Deployment. This defaults to 600. In the future, once automatic rollback is implemented, the Deployment controller will roll back a Deployment as soon as it observes such a condition.

If specified, this field needs to be greater than .spec.minReadySeconds.

Min Ready Seconds

.spec.minReadySeconds is an optional field that specifies the minimum number of seconds for which a newly created Pod should be ready without any of its containers crashing, for it to be considered available. This defaults to 0 (the Pod will be considered available as soon as it is ready). To learn more about when a Pod is considered ready, see Container Probes.
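
As a placement sketch (the value is illustrative), minReadySeconds sits at the top level of the Deployment spec, alongside replicas:

spec:
  minReadySeconds: 10   # require 10 seconds of readiness before counting a Pod as available
  replicas: 3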

Revision History Limit

A Deployment's revision history is stored in the ReplicaSets it controls.

.spec.revisionHistoryLimit is an optional field that specifies the number of old ReplicaSets to retain to allow rollback. These old ReplicaSets consume resources in etcd and crowd the output of kubectl get rs. The configuration of each Deployment revision is stored in its ReplicaSets; therefore, once an old ReplicaSet is deleted, you lose the ability to rollback to that revision of Deployment. By default, 10 old ReplicaSets will be kept, however its ideal value depends on the frequency and stability of new Deployments.

More specifically, setting this field to zero means that all old ReplicaSets with 0 replicas will be cleaned up. In this case, a new Deployment rollout cannot be undone, since its revision history is cleaned up.
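
For example, to keep only the three most recent old ReplicaSets (an illustrative value), you would set:

spec:
  revisionHistoryLimit: 3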

Paused

.spec.paused is an optional boolean field for pausing and resuming a Deployment. The only difference between a paused Deployment and one that is not paused is that any changes into the PodTemplateSpec of the paused Deployment will not trigger new rollouts as long as it is paused. A Deployment is not paused by default when it is created.
