Friday, 17 September 2021

How to Upgrade VMware ESXi 6.5 to 6.7


Execute the command below:

[[email protected]:~] esxcli software vib update -d /vmfs/volumes/datastore1/Upgrade\ Esxi/VMware-ESXi-6.7.0-8169922-LNV-20180408.zip
Installation Result
   Message: The update completed successfully, but the system needs to be rebooted for the changes to be effective.
   Reboot Required: true
   VIBs Installed: ELX_bootbank_elx-esx-libelxima.so_11.4.1210.0-03, EMU_bootbank_brcmfcoe_11.4.1216.0-1OEM.650.0.0.4598673, EMU_bootbank_elxiscsi_11.4.1210.0-1OEM.650.0.0.4598673, EMU_bootbank_elxnet_11.4.1205.0-1OEM.650.0.0.4598673, Lenovo_bootbank_lnvcustomization_6.70-20.1, VMW_bootbank_ata-libata-92_3.00.9.2-16vmw.670.0.0.8169922, VMW_bootbank_ata-pata-amd_0.3.10-3vmw.670.0.0.8169922, VMW_bootbank_ata-pata-atiixp_0.4.6-4vmw.670.0.0.8169922, VMW_bootbank_ata-pata-cmd64x_0.2.5-3vmw.670.0.0.8169922, VMW_bootbank_ata-pata-hpt3x2n_0.3.4-3vmw.670.0.0.8169922, VMW_bootbank_ata-pata-pdc2027x_1.0-3vmw.670.0.0.8169922, VMW_bootbank_ata-pata-serverworks_0.4.3-3vmw.670.0.0.8169922, VMW_bootbank_ata-pata-sil680_0.4.8-3vmw.670.0.0.8169922, VMW_bootbank_ata-pata-via_0.3.3-2vmw.670.0.0.8169922, VMW_bootbank_block-cciss_3.6.14-10vmw.670.0.0.8169922, VMW_bootbank_bnxtnet_20.6.101.7-11vmw.670.0.0.8169922, VMW_bootbank_char-random_1.0-3vmw.670.0.0.8169922, VMW_bootbank_ehci-ehci-hcd_1.0-4vmw.670.0.0.8169922, VMW_bootbank_hid-hid_1.0-3vmw.670.0.0.8169922, VMW_bootbank_ima-qla4xxx_2.02.18-1vmw.670.0.0.8169922, VMW_bootbank_ipmi-ipmi-devintf_39.1-4vmw.670.0.0.8169922, VMW_bootbank_ipmi-ipmi-msghandler_39.1-4vmw.670.0.0.8169922, VMW_bootbank_ipmi-ipmi-si-drv_39.1-4vmw.670.0.0.8169922, VMW_bootbank_lpfc_11.4.33.1-6vmw.670.0.0.8169922, VMW_bootbank_lsi-mr3_7.702.13.00-4vmw.670.0.0.8169922, VMW_bootbank_lsi-msgpt2_20.00.04.00-4vmw.670.0.0.8169922, VMW_bootbank_lsi-msgpt35_03.00.01.00-10vmw.670.0.0.8169922, VMW_bootbank_lsi-msgpt3_16.00.01.00-1vmw.670.0.0.8169922, VMW_bootbank_misc-cnic-register_1.78.75.v60.7-1vmw.670.0.0.8169922, VMW_bootbank_misc-drivers_6.7.0-0.0.8169922, VMW_bootbank_mtip32xx-native_3.9.6-1vmw.670.0.0.8169922, VMW_bootbank_ne1000_0.8.3-4vmw.670.0.0.8169922, VMW_bootbank_nenic_1.0.11.0-1vmw.670.0.0.8169922, VMW_bootbank_net-bnx2_2.2.4f.v60.10-2vmw.670.0.0.8169922, VMW_bootbank_net-cdc-ether_1.0-3vmw.670.0.0.8169922, 
VMW_bootbank_net-cnic_1.78.76.v60.13-2vmw.670.0.0.8169922, VMW_bootbank_net-e1000_8.0.3.1-5vmw.670.0.0.8169922, VMW_bootbank_net-e1000e_3.2.2.1-2vmw.670.0.0.8169922, VMW_bootbank_net-enic_2.1.2.38-2vmw.670.0.0.8169922, VMW_bootbank_net-fcoe_1.0.29.9.3-7vmw.670.0.0.8169922, VMW_bootbank_net-forcedeth_0.61-2vmw.670.0.0.8169922, VMW_bootbank_net-libfcoe-92_1.0.24.9.4-8vmw.670.0.0.8169922, VMW_bootbank_net-mlx4-core_1.9.7.0-1vmw.670.0.0.8169922, VMW_bootbank_net-mlx4-en_1.9.7.0-1vmw.670.0.0.8169922, VMW_bootbank_net-nx-nic_5.0.621-5vmw.670.0.0.8169922, VMW_bootbank_net-tg3_3.131d.v60.4-2vmw.670.0.0.8169922, VMW_bootbank_net-usbnet_1.0-3vmw.670.0.0.8169922, VMW_bootbank_net-vmxnet3_1.1.3.0-3vmw.670.0.0.8169922, VMW_bootbank_nhpsa_2.0.22-1vmw.670.0.0.8169922, VMW_bootbank_nmlx4-core_3.17.9.12-1vmw.670.0.0.8169922, VMW_bootbank_nmlx4-en_3.17.9.12-1vmw.670.0.0.8169922, VMW_bootbank_nmlx4-rdma_3.17.9.12-1vmw.670.0.0.8169922, VMW_bootbank_nmlx5-core_4.17.9.12-1vmw.670.0.0.8169922, VMW_bootbank_ntg3_4.1.3.0-1vmw.670.0.0.8169922, VMW_bootbank_nvme_1.2.1.34-1vmw.670.0.0.8169922, VMW_bootbank_nvmxnet3_2.0.0.27-1vmw.670.0.0.8169922, VMW_bootbank_ohci-usb-ohci_1.0-3vmw.670.0.0.8169922, VMW_bootbank_pvscsi_0.1-2vmw.670.0.0.8169922, VMW_bootbank_qedentv_2.0.6.4-8vmw.670.0.0.8169922, VMW_bootbank_qfle3_1.0.50.11-9vmw.670.0.0.8169922, VMW_bootbank_qflge_1.1.0.11-1vmw.670.0.0.8169922, VMW_bootbank_sata-ahci_3.0-26vmw.670.0.0.8169922, VMW_bootbank_sata-ata-piix_2.12-10vmw.670.0.0.8169922, VMW_bootbank_sata-sata-nv_3.5-4vmw.670.0.0.8169922, VMW_bootbank_sata-sata-promise_2.12-3vmw.670.0.0.8169922, VMW_bootbank_sata-sata-sil24_1.1-1vmw.670.0.0.8169922, VMW_bootbank_sata-sata-sil_2.3-4vmw.670.0.0.8169922, VMW_bootbank_sata-sata-svw_2.3-3vmw.670.0.0.8169922, VMW_bootbank_scsi-aacraid_1.1.5.1-9vmw.670.0.0.8169922, VMW_bootbank_scsi-adp94xx_1.0.8.12-6vmw.670.0.0.8169922, VMW_bootbank_scsi-aic79xx_3.1-6vmw.670.0.0.8169922, VMW_bootbank_scsi-bnx2fc_1.78.78.v60.8-1vmw.670.0.0.8169922, 
VMW_bootbank_scsi-bnx2i_2.78.76.v60.8-1vmw.670.0.0.8169922, VMW_bootbank_scsi-fnic_1.5.0.45-3vmw.670.0.0.8169922, VMW_bootbank_scsi-hpsa_6.0.0.84-3vmw.670.0.0.8169922, VMW_bootbank_scsi-ips_7.12.05-4vmw.670.0.0.8169922, VMW_bootbank_scsi-iscsi-linux-92_1.0.0.2-3vmw.670.0.0.8169922, VMW_bootbank_scsi-libfc-92_1.0.40.9.3-5vmw.670.0.0.8169922, VMW_bootbank_scsi-megaraid-mbox_2.20.5.1-6vmw.670.0.0.8169922, VMW_bootbank_scsi-megaraid-sas_6.603.55.00-2vmw.670.0.0.8169922, VMW_bootbank_scsi-megaraid2_2.00.4-9vmw.670.0.0.8169922, VMW_bootbank_scsi-mpt2sas_19.00.00.00-2vmw.670.0.0.8169922, VMW_bootbank_scsi-mptsas_4.23.01.00-10vmw.670.0.0.8169922, VMW_bootbank_scsi-mptspi_4.23.01.00-10vmw.670.0.0.8169922, VMW_bootbank_scsi-qla4xxx_5.01.03.2-7vmw.670.0.0.8169922, VMW_bootbank_shim-iscsi-linux-9-2-1-0_6.7.0-0.0.8169922, VMW_bootbank_shim-iscsi-linux-9-2-2-0_6.7.0-0.0.8169922, VMW_bootbank_shim-libata-9-2-1-0_6.7.0-0.0.8169922, VMW_bootbank_shim-libata-9-2-2-0_6.7.0-0.0.8169922, VMW_bootbank_shim-libfc-9-2-1-0_6.7.0-0.0.8169922, VMW_bootbank_shim-libfc-9-2-2-0_6.7.0-0.0.8169922, VMW_bootbank_shim-libfcoe-9-2-1-0_6.7.0-0.0.8169922, VMW_bootbank_shim-libfcoe-9-2-2-0_6.7.0-0.0.8169922, VMW_bootbank_shim-vmklinux-9-2-1-0_6.7.0-0.0.8169922, VMW_bootbank_shim-vmklinux-9-2-2-0_6.7.0-0.0.8169922, VMW_bootbank_shim-vmklinux-9-2-3-0_6.7.0-0.0.8169922, VMW_bootbank_uhci-usb-uhci_1.0-3vmw.670.0.0.8169922, VMW_bootbank_usb-storage-usb-storage_1.0-3vmw.670.0.0.8169922, VMW_bootbank_usbcore-usb_1.0-3vmw.670.0.0.8169922, VMW_bootbank_vmkata_0.1-1vmw.670.0.0.8169922, VMW_bootbank_vmkplexer-vmkplexer_6.7.0-0.0.8169922, VMW_bootbank_vmkusb_0.1-1vmw.670.0.0.8169922, VMW_bootbank_vmw-ahci_1.2.0-6vmw.670.0.0.8169922, VMW_bootbank_xhci-xhci_1.0-3vmw.670.0.0.8169922, VMware_bootbank_cpu-microcode_6.7.0-0.0.8169922, VMware_bootbank_esx-base_6.7.0-0.0.8169922, VMware_bootbank_esx-dvfilter-generic-fastpath_6.7.0-0.0.8169922, VMware_bootbank_esx-ui_1.25.0-7872652, 
VMware_bootbank_esx-xserver_6.7.0-0.0.8169922, VMware_bootbank_lsu-hp-hpsa-plugin_2.0.0-13vmw.670.0.0.8169922, VMware_bootbank_lsu-lsi-lsi-mr3-plugin_1.0.0-12vmw.670.0.0.8169922, VMware_bootbank_lsu-lsi-lsi-msgpt3-plugin_1.0.0-8vmw.670.0.0.8169922, VMware_bootbank_lsu-lsi-megaraid-sas-plugin_1.0.0-9vmw.670.0.0.8169922, VMware_bootbank_lsu-lsi-mpt2sas-plugin_2.0.0-7vmw.670.0.0.8169922, VMware_bootbank_native-misc-drivers_6.7.0-0.0.8169922, VMware_bootbank_qlnativefc_3.0.1.0-5vmw.670.0.0.8169922, VMware_bootbank_rste_2.0.2.0088-7vmw.670.0.0.8169922, VMware_bootbank_vmware-esx-esxcli-nvme-plugin_1.2.0.32-0.0.8169922, VMware_bootbank_vsan_6.7.0-0.0.8169922, VMware_bootbank_vsanhealth_6.7.0-0.0.8169922, VMware_locker_tools-light_10.2.0.7253323-8169922
   VIBs Removed: Avago_bootbank_lsi-mr3_7.700.24.00-1OEM.650.0.0.4598673, Avago_bootbank_lsi-msgpt35_01.00.04.00-1OEM.650.0.0.4598673, BCM_bootbank_bnxtnet_20.6.101.0-1OEM.650.0.0.4598673, ELX_bootbank_elx-esx-libelxima.so_11.2.1152.0-03, EMU_bootbank_brcmfcoe_11.2.1153.13-1OEM.650.0.0.4240417, EMU_bootbank_elxiscsi_11.2.1152.0-1OEM.650.0.0.4240417, EMU_bootbank_elxnet_11.2.1149.0-1OEM.650.0.0.4240417, EMU_bootbank_lpfc_11.2.156.20-1OEM.650.0.0.4240417, Lenovo_bootbank_lnvcustomization_6.5-02, MEL_bootbank_nmlx4-core_3.15.5.5-1OEM.600.0.0.2768847, MEL_bootbank_nmlx4-en_3.15.5.5-1OEM.600.0.0.2768847, MEL_bootbank_nmlx4-rdma_3.15.5.5-1OEM.600.0.0.2768847, MEL_bootbank_nmlx5-core_4.16.8.8-1OEM.650.0.0.4598673, QLC_bootbank_qfle3_1.0.28.0-1OEM.650.0.0.4240417, QLogic_bootbank_qlnativefc_2.1.50.0-1OEM.600.0.0.2768847, VMW_bootbank_ata-libata-92_3.00.9.2-16vmw.650.0.0.4564106, VMW_bootbank_ata-pata-amd_0.3.10-3vmw.650.0.0.4564106, VMW_bootbank_ata-pata-atiixp_0.4.6-4vmw.650.0.0.4564106, VMW_bootbank_ata-pata-cmd64x_0.2.5-3vmw.650.0.0.4564106, VMW_bootbank_ata-pata-hpt3x2n_0.3.4-3vmw.650.0.0.4564106, VMW_bootbank_ata-pata-pdc2027x_1.0-3vmw.650.0.0.4564106, VMW_bootbank_ata-pata-serverworks_0.4.3-3vmw.650.0.0.4564106, VMW_bootbank_ata-pata-sil680_0.4.8-3vmw.650.0.0.4564106, VMW_bootbank_ata-pata-via_0.3.3-2vmw.650.0.0.4564106, VMW_bootbank_block-cciss_3.6.14-10vmw.650.0.0.4564106, VMW_bootbank_char-random_1.0-3vmw.650.0.0.4564106, VMW_bootbank_ehci-ehci-hcd_1.0-4vmw.650.0.14.5146846, VMW_bootbank_hid-hid_1.0-3vmw.650.0.0.4564106, VMW_bootbank_ima-qla4xxx_2.02.18-1vmw.650.0.0.4564106, VMW_bootbank_ipmi-ipmi-devintf_39.1-4vmw.650.0.0.4564106, VMW_bootbank_ipmi-ipmi-msghandler_39.1-4vmw.650.0.0.4564106, VMW_bootbank_ipmi-ipmi-si-drv_39.1-4vmw.650.0.0.4564106, VMW_bootbank_lsi-msgpt2_20.00.01.00-3vmw.650.0.0.4564106, VMW_bootbank_lsi-msgpt3_12.00.02.00-11vmw.650.0.0.4564106, VMW_bootbank_misc-cnic-register_1.78.75.v60.7-1vmw.650.0.0.4564106, 
VMW_bootbank_misc-drivers_6.5.0-0.14.5146846, VMW_bootbank_mtip32xx-native_3.9.5-1vmw.650.0.0.4564106, VMW_bootbank_ne1000_0.8.0-11vmw.650.0.14.5146846, VMW_bootbank_nenic_1.0.0.2-1vmw.650.0.0.4564106, VMW_bootbank_net-bnx2_2.2.4f.v60.10-2vmw.650.0.0.4564106, VMW_bootbank_net-cdc-ether_1.0-3vmw.650.0.0.4564106, VMW_bootbank_net-cnic_1.78.76.v60.13-2vmw.650.0.0.4564106, VMW_bootbank_net-e1000_8.0.3.1-5vmw.650.0.0.4564106, VMW_bootbank_net-e1000e_3.2.2.1-2vmw.650.0.0.4564106, VMW_bootbank_net-enic_2.1.2.38-2vmw.650.0.0.4564106, VMW_bootbank_net-fcoe_1.0.29.9.3-7vmw.650.0.0.4564106, VMW_bootbank_net-forcedeth_0.61-2vmw.650.0.0.4564106, VMW_bootbank_net-libfcoe-92_1.0.24.9.4-8vmw.650.0.0.4564106, VMW_bootbank_net-mlx4-core_1.9.7.0-1vmw.650.0.0.4564106, VMW_bootbank_net-mlx4-en_1.9.7.0-1vmw.650.0.0.4564106, VMW_bootbank_net-nx-nic_5.0.621-5vmw.650.0.0.4564106, VMW_bootbank_net-tg3_3.131d.v60.4-2vmw.650.0.0.4564106, VMW_bootbank_net-usbnet_1.0-3vmw.650.0.0.4564106, VMW_bootbank_net-vmxnet3_1.1.3.0-3vmw.650.0.0.4564106, VMW_bootbank_nhpsa_2.0.6-3vmw.650.0.0.4564106, VMW_bootbank_ntg3_4.1.0.0-1vmw.650.0.0.4564106, VMW_bootbank_nvme_1.2.0.32-2vmw.650.0.0.4564106, VMW_bootbank_nvmxnet3_2.0.0.22-1vmw.650.0.0.4564106, VMW_bootbank_ohci-usb-ohci_1.0-3vmw.650.0.0.4564106, VMW_bootbank_pvscsi_0.1-1vmw.650.0.0.4564106, VMW_bootbank_qedentv_2.0.3.29-1vmw.650.0.0.4564106, VMW_bootbank_qflge_1.1.0.3-1vmw.650.0.0.4564106, VMW_bootbank_sata-ahci_3.0-22vmw.650.0.0.4564106, VMW_bootbank_sata-ata-piix_2.12-10vmw.650.0.0.4564106, VMW_bootbank_sata-sata-nv_3.5-4vmw.650.0.0.4564106, VMW_bootbank_sata-sata-promise_2.12-3vmw.650.0.0.4564106, VMW_bootbank_sata-sata-sil24_1.1-1vmw.650.0.0.4564106, VMW_bootbank_sata-sata-sil_2.3-4vmw.650.0.0.4564106, VMW_bootbank_sata-sata-svw_2.3-3vmw.650.0.0.4564106, VMW_bootbank_scsi-aacraid_1.1.5.1-9vmw.650.0.0.4564106, VMW_bootbank_scsi-adp94xx_1.0.8.12-6vmw.650.0.0.4564106, VMW_bootbank_scsi-aic79xx_3.1-5vmw.650.0.0.4564106, 
VMW_bootbank_scsi-bnx2fc_1.78.78.v60.8-1vmw.650.0.0.4564106, VMW_bootbank_scsi-bnx2i_2.78.76.v60.8-1vmw.650.0.0.4564106, VMW_bootbank_scsi-fnic_1.5.0.45-3vmw.650.0.0.4564106, VMW_bootbank_scsi-hpsa_6.0.0.84-1vmw.650.0.0.4564106, VMW_bootbank_scsi-ips_7.12.05-4vmw.650.0.0.4564106, VMW_bootbank_scsi-iscsi-linux-92_1.0.0.2-3vmw.650.0.0.4564106, VMW_bootbank_scsi-libfc-92_1.0.40.9.3-5vmw.650.0.0.4564106, VMW_bootbank_scsi-megaraid-mbox_2.20.5.1-6vmw.650.0.0.4564106, VMW_bootbank_scsi-megaraid-sas_6.603.55.00-2vmw.650.0.0.4564106, VMW_bootbank_scsi-megaraid2_2.00.4-9vmw.650.0.0.4564106, VMW_bootbank_scsi-mpt2sas_19.00.00.00-1vmw.650.0.0.4564106, VMW_bootbank_scsi-mptsas_4.23.01.00-10vmw.650.0.0.4564106, VMW_bootbank_scsi-mptspi_4.23.01.00-10vmw.650.0.0.4564106, VMW_bootbank_scsi-qla4xxx_5.01.03.2-7vmw.650.0.0.4564106, VMW_bootbank_shim-iscsi-linux-9-2-1-0_6.5.0-0.0.4564106, VMW_bootbank_shim-iscsi-linux-9-2-2-0_6.5.0-0.0.4564106, VMW_bootbank_shim-libata-9-2-1-0_6.5.0-0.0.4564106, VMW_bootbank_shim-libata-9-2-2-0_6.5.0-0.0.4564106, VMW_bootbank_shim-libfc-9-2-1-0_6.5.0-0.0.4564106, VMW_bootbank_shim-libfc-9-2-2-0_6.5.0-0.0.4564106, VMW_bootbank_shim-libfcoe-9-2-1-0_6.5.0-0.0.4564106, VMW_bootbank_shim-libfcoe-9-2-2-0_6.5.0-0.0.4564106, VMW_bootbank_shim-vmklinux-9-2-1-0_6.5.0-0.0.4564106, VMW_bootbank_shim-vmklinux-9-2-2-0_6.5.0-0.0.4564106, VMW_bootbank_shim-vmklinux-9-2-3-0_6.5.0-0.0.4564106, VMW_bootbank_uhci-usb-uhci_1.0-3vmw.650.0.0.4564106, VMW_bootbank_usb-storage-usb-storage_1.0-3vmw.650.0.0.4564106, VMW_bootbank_usbcore-usb_1.0-3vmw.650.0.0.4564106, VMW_bootbank_vmkata_0.1-1vmw.650.0.0.4564106, VMW_bootbank_vmkplexer-vmkplexer_6.5.0-0.0.4564106, VMW_bootbank_vmkusb_0.1-1vmw.650.0.14.5146846, VMW_bootbank_vmw-ahci_1.0.0-34vmw.650.0.14.5146846, VMW_bootbank_xhci-xhci_1.0-3vmw.650.0.0.4564106, VMware_bootbank_cpu-microcode_6.5.0-0.0.4564106, VMware_bootbank_esx-base_6.5.0-0.14.5146846, VMware_bootbank_esx-dvfilter-generic-fastpath_6.5.0-0.0.4564106, 
VMware_bootbank_esx-tboot_6.5.0-0.0.4564106, VMware_bootbank_esx-ui_1.15.0-5069532, VMware_bootbank_esx-xserver_6.5.0-0.0.4564106, VMware_bootbank_lsu-hp-hpsa-plugin_2.0.0-3vmw.650.0.0.4564106, VMware_bootbank_lsu-lsi-lsi-mr3-plugin_1.0.0-7vmw.650.0.0.4564106, VMware_bootbank_lsu-lsi-lsi-msgpt3-plugin_1.0.0-6vmw.650.0.0.4564106, VMware_bootbank_lsu-lsi-megaraid-sas-plugin_1.0.0-7vmw.650.0.0.4564106, VMware_bootbank_lsu-lsi-mpt2sas-plugin_2.0.0-5vmw.650.0.0.4564106, VMware_bootbank_native-misc-drivers_6.5.0-0.0.4564106, VMware_bootbank_rste_2.0.2.0088-4vmw.650.0.0.4564106, VMware_bootbank_vmware-esx-esxcli-nvme-plugin_1.2.0.10-0.0.4564106, VMware_bootbank_vsan_6.5.0-0.14.5146846, VMware_bootbank_vsanhealth_6.5.0-0.14.5146846, VMware_locker_tools-light_6.5.0-0.0.4564106
   VIBs Skipped: INT_bootbank_i40en_1.5.6-1OEM.650.0.0.4598673, INT_bootbank_igbn_1.4.1-1OEM.600.0.0.2768847, INT_bootbank_ixgben_1.6.5-1OEM.600.0.0.2768847, VMW_bootbank_iavmd_1.2.0.1011-2vmw.670.0.0.8169922, VMW_bootbank_iser_1.0.0.0-1vmw.670.0.0.8169922, VMW_bootbank_lpnic_11.4.59.0-1vmw.670.0.0.8169922, VMW_bootbank_net-bnx2x_1.78.80.v60.12-2vmw.670.0.0.8169922, VMW_bootbank_net-igb_5.0.5.1.1-5vmw.670.0.0.8169922, VMW_bootbank_net-ixgbe_3.7.13.7.14iov-20vmw.670.0.0.8169922, VMW_bootbank_nmlx5-rdma_4.17.9.12-1vmw.670.0.0.8169922, VMW_bootbank_nvmxnet3-ens_2.0.0.21-1vmw.670.0.0.8169922, VMW_bootbank_qcnic_1.0.2.0.4-1vmw.670.0.0.8169922, VMW_bootbank_qfle3f_1.0.25.0.2-14vmw.670.0.0.8169922, VMW_bootbank_qfle3i_1.0.2.3.9-3vmw.670.0.0.8169922, VMW_bootbank_smartpqi_1.0.1.553-10vmw.670.0.0.8169922, VMW_bootbank_vmkfcoe_1.0.0.0-1vmw.670.0.0.8169922
[[email protected]:~]
[[email protected]:~] esxcli --version
Script 'esxcli' version: 6.5.0
Note that esxcli still reports the old version; the update takes effect only after rebooting. Reboot the server:
[[email protected]:~] reboot
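As a side note, the offline bundle's filename itself encodes the target version and build number. A small parsing sketch (it assumes the naming convention shown in the command above):

```shell
# Parse the target version and build number out of the bundle filename,
# so you can confirm which image you staged before upgrading.
bundle="VMware-ESXi-6.7.0-8169922-LNV-20180408.zip"
version=$(echo "$bundle" | cut -d- -f3)   # third dash-separated field
build=$(echo "$bundle" | cut -d- -f4)     # fourth dash-separated field
echo "Upgrading to ESXi $version build $build"
```

After the reboot, `esxcli system version get` should report a matching version and build.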

Thursday, 29 July 2021

Rancher Rodeo XII - RKE - Helm3


 Introduction

Welcome to the Rancher Rodeo RKE Edition.

In this scenario, we will be walking through installing Rancher in HA mode and deploying several workloads to a cluster provisioned by Rancher.

This scenario will be following the general HA installation instructions available here: High Availability (HA) Install

We will be using two virtual machines today, cluster01 and rancher01, which are located in the tabs in the panel to the right. rancher01 will run a Kubernetes cluster and Rancher, and cluster01 will run a Kubernetes cluster and the corresponding user workloads.

Note that there are two separate Kubernetes clusters at play here, the Rancher Kubernetes Cluster is dedicated to running Rancher, while the Workload Cluster is managed by Rancher and runs on a separate virtual machine.

Important Note: HobbyFarm will tear down your provisioned resources within 10 minutes if your laptop goes to sleep or you navigate away from the HobbyFarm page. Please ensure that you do not do this (for example, during lunch), or you will need to restart your scenario.

There is Pause/Resume functionality built into HobbyFarm that allows you to pause your scenario if you have to put your laptop to sleep temporarily. Please note that pausing the scenario will not extend the lifetime of your resources beyond this Rodeo session.

Generate an SSH Keypair for use with RKE

To start out, we will generate a new SSH Keypair and place this keypair on the node we will install Kubernetes for Rancher onto. As we will be using the rancher01 node to run Rancher + Kubernetes, we will simply copy the public key into the authorized_keys file of this node.

The following commands will generate the keypair and append the public key to the authorized_keys file.

ssh-keygen -b 2048 -t rsa -f \
/home/ubuntu/.ssh/id_rsa -N ""
cat /home/ubuntu/.ssh/id_rsa.pub \
>> /home/ubuntu/.ssh/authorized_keys
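If you want to rehearse the keypair flow above without touching your real SSH configuration, here is a self-contained sketch that runs in a throwaway directory (the paths are temporary, not the scenario's):

```shell
# Generate a demo keypair and append the public key to an authorized_keys
# file, all inside a temporary directory; the fingerprints printed for
# both files should match.
demo=$(mktemp -d)
ssh-keygen -q -b 2048 -t rsa -f "$demo/id_rsa" -N ""
cat "$demo/id_rsa.pub" >> "$demo/authorized_keys"
ssh-keygen -lf "$demo/id_rsa.pub"
ssh-keygen -lf "$demo/authorized_keys"
rm -rf "$demo"
```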

Download RKE

Rancher Kubernetes Engine (RKE) is an extremely simple, lightning fast Kubernetes installer that works everywhere.

In this step, we will download the RKE CLI to the Rancher01 node.

sudo wget -O /usr/local/bin/rke \
https://github.com/rancher/rke/releases/download/v1.2.1/rke_linux-amd64

In order to execute RKE, we need to mark it as executable.

sudo chmod +x /usr/local/bin/rke

Next, let's validate that RKE is installed properly:

rke -v

You should have an output similar to:

rke version v1.2.1

If you receive the output as expected, you can continue on to the next step.

Install kubectl

In order to interact with our Kubernetes cluster after we install it using rke, we need to install kubectl.

The following commands download the kubectl binary and mark it as executable.

sudo curl -L https://dl.k8s.io/release/v1.21.0/bin/linux/amd64/kubectl -o /usr/local/bin/kubectl
sudo chmod +x /usr/local/bin/kubectl

After the curl command finishes, we can test kubectl and make sure it is properly installed.

kubectl version --client

Install Helm

Helm 3 is a very popular package manager for Kubernetes. It is used as the installation tool for Rancher when deploying Rancher onto a Kubernetes cluster. In order to download Helm, we need to download the Helm tar.gz, move it into the appropriate directory, and mark it as executable. Finally, we will clean up the helm artifacts that are not necessary.

sudo wget -O helm.tar.gz \
https://get.helm.sh/helm-v3.4.0-linux-amd64.tar.gz
sudo tar -zxf helm.tar.gz
sudo mv linux-amd64/helm /usr/local/bin/helm
sudo chmod +x /usr/local/bin/helm
sudo rm -rf linux-amd64
sudo rm -f helm.tar.gz
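When downloading release tarballs like this, it is good practice to verify them with `sha256sum -c` against the publisher's checksum file (Helm publishes one per release; the exact URL is omitted here). The following self-contained sketch fabricates a local checksum file so the mechanics are visible end to end:

```shell
# Demonstrate checksum verification: create a file, record its checksum,
# then verify it the same way you would verify a downloaded tarball.
tmp=$(mktemp -d)
echo "pretend this is helm.tar.gz" > "$tmp/helm.tar.gz"
( cd "$tmp" \
  && sha256sum helm.tar.gz > helm.tar.gz.sha256sum \
  && sha256sum -c helm.tar.gz.sha256sum )
rm -rf "$tmp"
```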

After a successful installation of Helm, we should check our installation to ensure that we are ready to install Rancher.

helm version --client

Create a rancher-cluster.yml file

The RKE CLI uses a YAML-formatted file to describe the configuration of our cluster. The following command uses a heredoc to write the corresponding file onto your rancher01 node so that RKE can install Kubernetes.

RKE uses SSH tunneling, which is why we generated the keypair in the first part of this scenario.

cat << EOF > rancher-cluster.yml
nodes:
  - address: 52.54.77.218
    internal_address: 172.31.7.230
    user: ubuntu
    role: [controlplane,etcd,worker]
addon_job_timeout: 120
EOF
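For reference, a production-style config typically lists several nodes with roles split between them. The following is a hypothetical sketch; the addresses are documentation-range placeholders, not part of this scenario:

```shell
# Hypothetical multi-node variant of rancher-cluster.yml: control plane
# and etcd on one node, workloads on another. Addresses are placeholders.
cat << EOF > rancher-cluster-multinode.yml
nodes:
  - address: 203.0.113.10
    internal_address: 172.31.7.10
    user: ubuntu
    role: [controlplane,etcd]
  - address: 203.0.113.11
    internal_address: 172.31.7.11
    user: ubuntu
    role: [worker]
addon_job_timeout: 120
EOF
```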

Run rke up

We are now ready to run rke up to install Kubernetes onto our Rancher01 node.

The following command will run rke up which will install Kubernetes onto our node.

rke up --config rancher-cluster.yml

Testing your cluster

RKE will have generated two important files:

  • kube_config_rancher-cluster.yml
  • rancher-cluster.clusterstate

in addition to your

  • rancher-cluster.yml

All of these files are extremely important for future maintenance of our cluster. When running rke on your own machines to install Kubernetes/Rancher, make sure you keep current copies of all three files; otherwise, you can run into errors when running rke up.

The kube_config_rancher-cluster.yml file will contain a kube-admin kubernetes context that can be used to interact with your Kubernetes cluster that you've installed Rancher on.

We can symlink the kube_config_rancher-cluster.yml file to /home/ubuntu/.kube/config so that kubectl can interact with our cluster:

mkdir -p /home/ubuntu/.kube
ln -s /home/ubuntu/kube_config_rancher-cluster.yml /home/ubuntu/.kube/config
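As an alternative to the symlink, kubectl also honors the KUBECONFIG environment variable; a minimal sketch:

```shell
# Point kubectl at the generated file directly instead of ~/.kube/config.
# This is per-shell; add it to ~/.bashrc to persist across sessions.
export KUBECONFIG=/home/ubuntu/kube_config_rancher-cluster.yml
```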

In order to test that we can properly interact with our cluster, we can execute two commands:

kubectl get nodes
kubectl get pods --all-namespaces

Install cert-manager

cert-manager is a Kubernetes add-on to automate the management and issuance of TLS certificates from various issuing sources.

The following set of steps will install cert-manager which will be used to manage the TLS certificates for Rancher.

The following commands will create the cert-manager namespace and apply the cert-manager custom resource definitions.

kubectl create namespace cert-manager
kubectl apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/v0.15.0/cert-manager.crds.yaml

Next, we'll add the Jetstack helm repository:

helm repo add jetstack https://charts.jetstack.io

Update your helm repository cache:

helm repo update

Now, we can install cert-manager version 0.15.0:

helm install \
  cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --version v0.15.0

Once the helm chart has installed, you can monitor the rollout status of both cert-manager and cert-manager-webhook:

kubectl -n cert-manager rollout status deploy/cert-manager

You should eventually receive output similar to:

Waiting for deployment "cert-manager" rollout to finish: 0 of 1 updated replicas are available...

deployment "cert-manager" successfully rolled out

kubectl -n cert-manager rollout status deploy/cert-manager-webhook

You should eventually receive output similar to:

Waiting for deployment "cert-manager-webhook" rollout to finish: 0 of 1 updated replicas are available...

deployment "cert-manager-webhook" successfully rolled out

Install Rancher

We will now install Rancher in HA mode onto our Rancher01 Kubernetes cluster. The following commands will add rancher-stable as a helm repository and update our local repository cache.

helm repo add rancher-stable https://releases.rancher.com/server-charts/stable
helm repo update

Next, we need to create the cattle-system namespace in our Kubernetes cluster to install Rancher into.

kubectl create namespace cattle-system

Finally, we can install Rancher using our helm install command.

helm install rancher rancher-stable/rancher \
  --namespace cattle-system \
  --set hostname=rancher.52.54.77.218.on.hobbyfarm.io \
  --set replicas=1

Verify Rancher is Ready to Access

Before we access Rancher, we need to make sure that cert-manager has issued Rancher's serving certificate (the check below looks for the dynamiclistener CA in the TLS handshake), so that our connection to Rancher does not get interrupted. The following shell loop waits until that certificate is being served.

while true; do
  curl -kv https://rancher.52.54.77.218.on.hobbyfarm.io 2>&1 | grep -q "dynamiclistener-ca"
  if [ $? != 0 ]; then
    echo "Rancher isn't ready yet"
    sleep 5
    continue
  fi
  break
done
echo "Rancher is Ready"


Accessing Rancher

Note: Rancher may not be immediately available at the link below, as it may still be starting up. Please continue to refresh until Rancher is available.

  1. Access Rancher Server at https://rancher.52.54.77.218.on.hobbyfarm.io
  2. Enter a password for the default admin user when prompted.
  3. Select the default view of "I want to create or manage multiple clusters"
  4. Make sure to agree to the Terms & Conditions
  5. When prompted, the Rancher Server URL should be rancher.52.54.77.218.on.hobbyfarm.io, which is the hostname you used to access the server.
  6. Once you log in, you'll see a message similar to "Waiting for server-url to be set". Click the ellipses on the right of the local cluster, click Edit, then click Save.

You will see the Rancher UI, with the local cluster in it. The local cluster is the cluster where Rancher itself runs, and should not be used for deploying your demo workloads.

Configure Wildcard DNS Domain

For testing purposes, Rancher can integrate with Wildcard DNS services. This allows you to quickly create a publicly resolvable DNS entry that points to the ingress controller of a cluster that is managed by Rancher.

To configure the Wildcard DNS service:

  • Go to Settings
  • Edit the ingress-ip-domain setting
  • Change the value to sslip.io

Creating a Kubernetes Lab Cluster within Rancher

In this step, we will create a Kubernetes lab environment within Rancher. In production you would normally create a Kubernetes cluster with multiple nodes; in this lab environment, we will use only one virtual machine for the cluster.

  1. Hover over the top left dropdown, then click Global
    • The current context is shown in the upper left, and should say 'Global'
  2. Click Add Cluster
    • Note the multiple types of Kubernetes cluster Rancher supports. We will be using Existing nodes for this lab, but there are many other possibilities with Rancher.
  3. Click on the Existing nodes cluster box
  4. Enter a name in the Cluster Name box
  5. Set the Kubernetes Version to a v1.18 version
  6. Click Next at the bottom
  7. Make sure the boxes etcd, Control Plane, and Worker are all ticked
  8. Click Show advanced options to the bottom right of the Worker checkbox
  9. Enter the Public Address (3.236.182.92) and Internal Address (172.31.0.15)
    • IMPORTANT: It is VERY important that you use the correct External and Internal addresses from the Cluster01 machine for this step, and run it on the correct machine. Failure to do so will cause later steps to fail.
  10. Click the clipboard icon to copy the docker run command to your clipboard
  11. Proceed to the next step of this scenario

Start the Rancher Kubernetes Cluster Bootstrapping Process

IMPORTANT NOTE: Make sure you have selected the Cluster01 tab in HobbyFarm in the window to the right. If you run this command on Rancher01, you will cause problems for your scenario session.

  1. Take the copied docker command and run it on Cluster01
  2. Once the docker run command completes, you should see a message similar to "1 node has registered"
  3. Within the Rancher UI, click on <YOUR_CLUSTER_NAME>, the name you entered during cluster creation
  4. You can watch the state of the cluster as your Kubernetes node Cluster01 registers with Rancher, both here and on the Nodes tab
  5. Your cluster state on the Global page will change to Active
  6. Once your cluster is Active, you can click on it and start exploring

Interacting with the Kubernetes Cluster

In this step, we will demonstrate basic interaction with our Kubernetes cluster.

  1. Click into your newly active cluster.
  2. Note the three dials, which illustrate cluster capacity.
  3. Click the Launch kubectl button in the top right corner of the Cluster overview page, enter kubectl get pods --all-namespaces, and observe that you can interact with your Kubernetes cluster using kubectl.
  4. Also take note of the Kubeconfig File button, which will generate a kubeconfig file that can be used from your local desktop or within your deployment pipelines.
  5. Click the ellipses in the top right corner and note the various operational options available for your cluster. We will explore these in a later step.

Enable Rancher Monitoring

To deploy the Rancher Monitoring feature, we need to navigate to the Cluster Explorer.

  1. On your newly-created cluster, click the "Explorer" button to open the Cluster Explorer.
  2. Once the Cluster Explorer loads, use the dropdown in the upper-left section of the page and navigate to "Apps & Marketplace."
  3. Locate the "Monitoring" chart and click on it.
  4. Select "Chart Options" on the left. Change Resource Limits > Requested CPU from 750m to 250m. This is required because our scenario virtual machine has limited CPU available.
  5. Click "Install" at the bottom of the page, and wait for the helm install operation to complete.

Once Monitoring has been installed, you can click on the application under "Installed Apps" to view the various resources that were deployed.

Working with Rancher Monitoring

Once Rancher Monitoring has been deployed, we can view its components and interact with them.

  1. In the dropdown in the upper-left corner of the Cluster Explorer, select "Monitoring"
  2. On the Monitoring Dashboard page, identify the "Grafana" link. Clicking it will proxy you to the installed Grafana server.

Once you have opened Grafana, feel free to explore the various dashboards and visualizations that have been set up by default.

These options (metrics and graphs) can be customized, but doing so is out of scope for this scenario.

Create a Deployment and Service

In this step, we will create a Kubernetes Deployment and Service for an arbitrary workload. For the purposes of this lab, we will use the docker image rancher/hello-world:latest, but you can use your own docker image if you have one for testing.

When we deploy our container in a pod, we probably want to make sure it stays running in case of failure or other disruption. Pods by nature are not replaced when they terminate, so for a web service, or anything we intend to be always running, we should use a Deployment.

A Deployment is a factory for pods, so you'll notice many similarities with the Pod spec. When a Deployment is created, it first creates a ReplicaSet, which in turn creates pod objects, and then continues to supervise those pods in case one or more fails.

Note: These steps need to be executed within the Cluster Manager. To access it, click on the Cluster Manager button at the top of the page.

  1. Hover over the dropdown next to the Rancher logo in the top left corner, hover over your cluster name, then select Default as the project.
  2. Under the Workloads tab, press Deploy in the top right corner and enter the following criteria:
    • Name - helloworld
    • Docker Image - rancher/hello-world:latest
    • Click Add Port and enter 80 for the container port
    • Note: Take note of the other capabilities you have for deploying your container. We won't cover these in this Rodeo, but you have plenty of options here.
  3. Scroll down and click Launch
  4. You should see one pod get deployed, and a TCP endpoint under your workload name exposing the NodePort service that has been created.
  5. Click the arrow next to the workload you just created, then note the + and - buttons under the replica count to the right. You can click these and refresh your browser on the NodePort to see the changes in the pod name.
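For reference, the UI steps above map roughly onto a plain Kubernetes manifest. This is a sketch: the helloworld name, image, and container port come from the steps above, while the labels and the exact shape of the NodePort service are illustrative assumptions (the UI-created resources may differ in detail).

```shell
# Hypothetical YAML equivalent of the UI deployment and NodePort service;
# apply with `kubectl apply -f helloworld.yml` on a cluster you can access.
cat << EOF > helloworld.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: helloworld
spec:
  replicas: 1
  selector:
    matchLabels:
      app: helloworld
  template:
    metadata:
      labels:
        app: helloworld
    spec:
      containers:
        - name: helloworld
          image: rancher/hello-world:latest
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: helloworld-nodeport
spec:
  type: NodePort
  selector:
    app: helloworld
  ports:
    - port: 80
      targetPort: 80
EOF
```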

Create a Kubernetes Ingress

In this step, we will create a Layer 7 ingress to access the workload we deployed in the previous step. For this example, we will use sslip.io to provide a DNS hostname for our workload; Rancher will automatically generate a hostname that resolves to the workload's IP.

  1. Hover over the dropdown next to the Rancher logo in the top left corner, hover over your cluster name, then select Default as the project.
  2. In the Default project, in the Workloads section, click on the Load Balancing tab
  3. Click Add Ingress and enter the following criteria:
    • Name - helloworld
    • Leave 'Automatically generate a .sslip.io hostname' selected
    • Click the minus button on the right to remove the empty backend rule
    • Click the + Service button
    • Pick the helloworld-nodeport service from the dropdown under Target
  4. Click Save and wait for the sslip.io hostname to register; you should see the rule become Active within a few minutes.
  5. Click on the hostname and browse to the workload.

Note: You may receive transient 404/502/503 errors while the workload stabilizes. This is because we did not set a proper readiness probe on the workload, so Kubernetes simply assumes the workload is healthy.
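The ingress created in the UI corresponds roughly to a manifest like the following sketch. The networking.k8s.io/v1beta1 API version matches the v1.18 cluster used in this scenario, and the hostname is a placeholder for the auto-generated .sslip.io name:

```shell
# Hypothetical YAML equivalent of the UI ingress; the host below is a
# placeholder, since Rancher generates the real .sslip.io hostname.
cat << EOF > helloworld-ingress.yml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: helloworld
spec:
  rules:
    - host: helloworld.203.0.113.20.sslip.io
      http:
        paths:
          - backend:
              serviceName: helloworld-nodeport
              servicePort: 80
EOF
```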

Upgrading your Kubernetes Cluster

This step shows how easy it is to upgrade your Kubernetes clusters within Rancher.

  1. Hover over the dropdown next to the Rancher logo in the top left corner, then select your cluster.
  2. Click the ellipses (...) next to the Kubeconfig File button
  3. Click Edit
  4. Scroll down and select the dropdown under Kubernetes Version
  5. Select a newer version of Kubernetes
  6. Scroll down and hit Save

Observe that your Kubernetes cluster will now be upgraded.

Congratulations

Congratulations, you have finished the scenario. If you would like to tear down your lab environment, click the "Finish" button; otherwise, continue to work with your Kubernetes cluster while keeping HobbyFarm in the background.