Shutting down and Restarting a Nutanix Cluster

In the world of Nutanix, Controller VMs (CVMs) are king. Nutanix is a hypervisor-agnostic platform; it supports AHV, Hyper-V, ESXi and Xen, and on all of them the CVMs provide the storage layer. So when the time comes to restart a node or a CVM, you should take a little care and do it properly. You don't want it to all go banana on you and leave you with a broken CVM.

It's probably best to point out here that before you do ANYTHING, call Support. They are there for a reason and are very good at their job. Again, if it's all too much, or you want to play it on the safe side, call Support. However, if you want to do this yourself, read on.

Caution: Verify the data resiliency status of your cluster before you shut anything down.

Restarting a single CVM

Before you reboot a CVM you need to stop it gracefully. Stopping the CVM gracefully allows all of its services to stop cleanly and, in the event this CVM is the leader, lets the cluster elect a new leader. This ensures you have no issues with your cluster when you reboot the CVM.

So, go ahead and SSH (or open a console) to your CVM. Now all you need to do is:

nutanix@cvm$ cvm_shutdown -P now

The cvm_shutdown -P now command gracefully stops all services on the CVM, allowing you to reboot the CVM (or the node if you need to) cleanly. Once your CVM is back up, you can initiate NCC to run some checks across your cluster and make sure everything is okay. That's it.

Shutting down a node in an ESXi cluster

1. Verify the data resiliency status of your cluster.
2. Shut down guest VMs that are running on the node, or move them to other nodes in the cluster.
3. Using the vSphere client, place the ESXi host into maintenance mode, or do it from the CVM command line:
   nutanix@cvm$ ~/serviceability/bin/esx-enter-maintenance-mode -s
4. SSH into the CVM and issue the following command:
   nutanix@cvm$ cvm_shutdown -P now
5. Once the CVM is powered down, shut down the host. Ping the hypervisor IP and confirm that it is powered down.

Starting the node again:

1. Power on the host. Using the vSphere client, take the ESXi host out of maintenance mode.
2. If the Controller VM is shut off, start it. All of my CVMs are configured to power on with the host (a vSphere per-VM setting, and I believe the recommended best practice), as the NFS datastore on a node with a powered-off CVM will remain inaccessible until the CVM is powered up.
3. Log on to another Controller VM in the cluster with SSH and verify that all services are up on all Controller VMs:
   nutanix@cvm$ cluster status | grep -A 15 cvm_ip_addr
   If the cluster is running properly, the output lists every service as up for each node in the cluster.
4. Validate that the datastores are available and connected to all hosts within the cluster.
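To tie the ESXi steps together, here is a minimal sketch of the shutdown half of the sequence. It assumes the guest VMs have already been moved off the node; cvm_ip_addr and 10.10.10.31 are placeholders for the local CVM's IP and the hypervisor IP of the node being shut down, so substitute your own addresses.

# On the CVM of the node being shut down:
nutanix@cvm$ cluster status | grep -A 15 cvm_ip_addr              # confirm every service is up before starting
nutanix@cvm$ ~/serviceability/bin/esx-enter-maintenance-mode -s   # put the local ESXi host into maintenance mode
nutanix@cvm$ cvm_shutdown -P now                                  # gracefully stop services and power off this CVM

# From another CVM, after you have shut down the ESXi host itself:
nutanix@cvm$ ping 10.10.10.31                                     # no replies confirms the host is powered down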
Nutanix commands for gracefully shutting down and starting an AHV node

Shutting the node down:

1. Verify the data resiliency status of your cluster, and shut down or migrate the guest VMs running on the node.
2. Put the AHV host into maintenance mode:
   nutanix@cvm$ acli host.enter_maintenance_mode <hypervisor-address> [wait="{ true | false }"]
3. If the Controller VM is running, shut it down gracefully. SSH into that CVM and issue:
   nutanix@cvm$ cvm_shutdown -P now
4. Once the CVM is powered down, shut down the host.

Starting the node again:

1. Power on the host. If the node is in maintenance mode, log on to a Controller VM and take the node out of maintenance mode:
   nutanix@cvm$ acli host.exit_maintenance_mode <AHV-hypervisor-IP-address>
2. Log on to the AHV host and list its VMs (for example with virsh list --all). Make a note of the Controller VM name in the second column; if the Controller VM is off, its state is shown as shut off, and if it is on, its state is shown as running.
3. If the Controller VM is shut off, start it:
   root@ahv# virsh start cvm_name
   Replace cvm_name with the name of the Controller VM that you found from the preceding command.
4. Verify that all services are up on all Controller VMs.

Restart CVMs one at a time

If you ever need to restart every CVM in the cluster (for example, after setting the cluster time zone with the ncli command, all of the CVMs have to be restarted), do it in serial, one CVM after another, as the cluster can only tolerate one CVM being off at any one time.

If the node does not rejoin the cluster

Upon a CVM (Controller VM)/host restart, you might see that the node is not coming back into the cluster. Check the CVM boot log; on ESXi it is at /vmfs/volumes/NTNX-*/ServiceVM_Centos/ServiceVM_Centos.0.out. If the boot-up log shows messages about a duplicate IP, something else on the network has taken the CVM's address and the conflict needs to be resolved before the CVM will come up cleanly. A single service that is down on one CVM (for example, a ClusterHealth service that is down will fail the AOS pre-upgrade checks) is another common reason to intervene: run cluster status on the affected CVM to see which services are down, and cluster start to bring any stopped services back up without touching the ones already running.

A couple of related notes:

- If you are building a new Nutanix cluster, you need at least three nodes for an RF2 cluster, and all of the CVM, node and IPMI IP addresses must be reachable (pingable) from one another before you create the cluster.
- If you use the balance-slb algorithm, it is configured for each bond on all AHV nodes in the cluster with the following command, run from each node's CVM (192.168.5.1 is the internal address of the local AHV host as seen from its CVM):
  nutanix@cvm$ ssh root@192.168.5.1 "ovs-vsctl set port bond0 bond_mode=balance-slb"

Shutting down and restarting the whole cluster

Shutting down and restarting an entire Nutanix cluster requires some consideration, and the proper steps need to be followed in order to bring your VMs and data back up in a healthy and consistent state. This makes it all the more important to read the following Nutanix KB, which details the steps required to gracefully shut down and restart a Nutanix cluster with any of the hypervisors: Nutanix KB: How to Shut Down a Cluster and Start it Again.
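Finally, whichever hypervisor you are running, the health checks after a node comes back are the same. A rough sketch of what that looks like from any CVM is below; cvm_ip_addr is a placeholder for the IP of the CVM you restarted, and ncc health_checks run_all kicks off a full NCC run, which you can trim down to specific checks if a full run is too heavy.

# Run from any CVM once the restarted node is back online:
nutanix@cvm$ cluster status | grep -A 15 cvm_ip_addr   # that CVM's services should all report as up
nutanix@cvm$ ncc health_checks run_all                 # full NCC health-check run across the cluster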