
ISSU using Server Manager


With Contrail 4.0 it is possible to perform an In-Service Software Upgrade (ISSU) using Server Manager (SM). The basic workflow is described below.

- provision a new set of controllers for the new cluster
- set up BGP peering between the control nodes of the old and new clusters
- sync configuration between the old and new cluster config nodes (API servers)
- run an active sync task between the old and new controllers so that objects created in the old cluster are also created in the new cluster
- migrate all compute nodes from the old cluster to the new cluster; only the vrouter and agent are reconfigured to point to the new control nodes, nova-compute is not upgraded or re-configured. While computes are being migrated, VM connectivity between computes in the old and new clusters stays intact.
- tear down BGP peering and remove old controller node references from the new cluster configuration
- upgrade the openstack node and reconfigure the neutron endpoint to the new controller; reconfigure nova-compute on the compute nodes to point to the new neutron IP and rabbit hosts

The above ISSU procedure can be triggered from SM as below:

  1. Ensure SM has the old and new clusters, the servers associated with each cluster, and the contrail images added. For more details, refer to the SM wiki on 4.0 provisioning. In the new cluster configuration, specify the existing openstack server IP. The new cluster should contain only controller nodes, no compute or openstack roles. Specify the provision_type ISSU in the contrail_4 section of the cluster configuration, as shown below (a sketch of where this section sits in the full cluster JSON follows the snippet):

                     "contrail_4": {
                         "global_config": {
                         },
                         "provision_type": "ISSU"
                     },
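
The fragment above shows only the contrail_4 section. As a rough sketch of where this section sits in the full cluster JSON (the exact nesting can vary between SM releases, so verify it against your SM version), it lives under the cluster's provision parameters and can be pushed to SM with "server-manager add cluster -f <file>"; the file name used below is just a placeholder:

    {
        "cluster": [
            {
                "id": "cluster-v2",
                "parameters": {
                    "provision": {
                        "contrail_4": {
                            "global_config": {
                            },
                            "provision_type": "ISSU"
                        }
                    }
                }
            }
        ]
    }

    server-manager add cluster -f cluster-v2.json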
    
  2. Execute the command below from SM. This provisions the new controllers as specified in the new cluster; cluster-v1 is the old cluster and cluster-v2 the new one. The computes to be migrated to the new cluster are specified with the options "--all" (migrate all computes from the old cluster), "--tag" (migrate computes with a specific tag) or "--server_id" (migrate the server specified by server_id); a tag based example follows the command below. If none of these options is given, no compute is migrated to the new cluster. Command help is available with "server-manager help issu".

    server-manager issu --cluster_id_old cluster-v1 --cluster_id_new cluster-v2 --new_image contrail_4_0_0_20 --all
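
The example above migrates all computes with "--all". A tag based invocation would look like the sketch below; the tag name and value are placeholders and must match a tag actually assigned to the compute servers in SM:

    server-manager issu --cluster_id_old cluster-v1 --cluster_id_new cluster-v2 --new_image contrail_4_0_0_20 --tag datacenter=dc1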

Monitor the provision progress using the "server-manager status <server_id>" command. The BGP peering and the sync task are run once the provisioning of the new cluster is complete. Look for the ISSU related parameters in the new cluster using the command "server-manager display cluster --cluster_id cluster-v2 -d"; "issu_clusters_synced" should show true once the two clusters are synced. If any computes were specified with the above command, their status should show "provision_completed" and they should now show up in the new cluster (cluster-v2). Use the "server-manager display server --cluster_id cluster-v2" command to see the servers belonging to cluster-v2.
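
As a quick reference, the checks described above can be run as follows, using the same cluster name as in this example:

    # status of a migrated compute, should reach provision_completed
    server-manager status <server_id>
    # detailed cluster view, issu_clusters_synced should show true
    server-manager display cluster --cluster_id cluster-v2 -d
    # servers now belonging to the new cluster
    server-manager display server --cluster_id cluster-v2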

  3. Issue the finalize command to tear down the BGP peering and clean up the old controllers from the configuration, using the command below:

    server-manager issu-finalize --cluster_id_old cluster-v1 --cluster_id_new cluster-v2

On completion of finalize, the new cluster should show "issu_finalized" as true. This can be confirmed using the command "server-manager display cluster --cluster_id cluster-v2 -d".

  4. At this point the openstack nodes need to be re-configured to point to the new neutron server, and nova-compute needs to be reconfigured with the new rabbit host and neutron server. If SM is used to provision openstack, the user can simply add the openstack nodes to the new cluster (cluster-v2), set the option below in the contrail_4 section of the cluster configuration, and issue provision again (see the example after the snippet); this provisions the openstack and nova-compute nodes with the new contrail image.

                     "contrail_4": {
                         "global_config": {
                         },
                         "provision_type": "contrail_cloud"
                     },
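
A sketch of the subsequent provision run, assuming the standard SM provision syntax and the same target image as in step 2, would be:

    server-manager provision --cluster_id cluster-v2 contrail_4_0_0_20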
    
  5. The user has the option to roll back computes (downgrade them and put them back in the old cluster) if something goes wrong with the compute upgrade. Below is the command to use for compute roll-back; an example of rolling back a single compute follows it. The command accepts "--all" (all computes are rolled back to the old cluster), "--tag" (computes with the specified tag are rolled back) or "--server_id" (the compute with the specified server_id is rolled back). Ensure the provision_type in the cluster configuration is set to "ISSU".

    server-manager issu-rollback --cluster_id_old cluster-v1 --cluster_id_new cluster-v2 --old_image b32_38_mitaka --all
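
The example above rolls back all computes. Rolling back a single compute would look like the sketch below, where "compute-01" is a hypothetical server_id:

    server-manager issu-rollback --cluster_id_old cluster-v1 --cluster_id_new cluster-v2 --old_image b32_38_mitaka --server_id compute-01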

The "server-manager display server --select roles,id,cluster_id,status" can be used to check the server information. And "server-manager display cluster --cluster_id -d" for ISSU status.
