FusionSphere
V100R003C00
Quick Installation and Configuration Guide
Issue 01
Date: 2013-08-15
Feedback: Send us your thoughts, suggestions, or any other feedback using the technical support contact information.
Component: Description
Server: A physical server. After the virtualization software is installed on a server, the server functions as a host in the FusionCompute system, providing virtualized computing resources, including virtual CPUs (VCPUs), memory, and network ports.
Storage device: Connects to a host through network devices and provides virtual storage resources to VMs.
Switch: A network device that connects a server and a storage device. It also connects to the external network, thereby implementing internal and external communications for the FusionSphere system.
FusionCompute: A component of the virtualization software. FusionCompute consists of two hosts and the virtualization resource management module, that is, the active and standby virtualization resource management (VRM) VMs. FusionCompute virtualizes physical resources and creates and manages VMs.
FusionManager: FusionSphere management software consisting of two FusionManager VMs (active and standby). It manages and maintains virtual resources, physical resources, and services in a centralized manner.
Prerequisites
You have obtained the integration design plan.
All hardware has been installed and is working properly.
You have configured network and storage devices based on the design plan.
Servers used in the FusionSphere system must meet the following requirements.
Item: Requirement
Hardware: Intel or AMD 64-bit CPU with the virtualization function enabled; memory ≥ 8 GB; single disk space ≥ 360 GB
RAID: Use RAID 1 that consists of hard disks 1 and 2 to install the OS and software on the host, thereby enhancing host reliability.
Boot device: The first boot device for the host is set to Hard disk.
The local PC used for FusionSphere installation must meet the following requirements.
Item: Requirement
Hardware: CPU: 32-bit; memory ≥ 2 GB; system disk free space ≥ 1 GB; user disk free space ≥ 30 GB
Network: Communication between the local PC and the management and host BMC planes is normal.
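As a quick sanity check of the Network requirement, you can ping one address on each plane from the local PC before starting the installation. This is only an informal suggestion, not part of the official procedure; the first address is the sample VRM management IP from the data plan in this guide, and the BMC address must come from your integration design plan:
ping 188.100.200.31      # management plane (sample VRM management IP, D1)
ping <host BMC IP>       # BMC plane address from the integration design plan (A1)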
Obtain the following software packages by visiting http://support.huawei.com and choosing Enterprise > Software > IT > FusionCloud > FusionSphere > FusionCompute > FusionCompute V100R003C00SPC300:
• FusionCompute host OS: FusionCompute V100R003C00SPC300_CNA.iso
• FusionCompute installation wizard: FusionCompute V100R003C00SPC300_Tools.zip
• VRM VM template file: FusionCompute V100R003C00SPC300_VRM.zip
[Preparation flow: Check installation conditions > Obtain software > Obtain data > Learn installation process]
Before installation, finish the integration design plan and obtain all data required for installation by the plan.
The following figures show a sample networking plan. The table on the next page lists sample data for this
plan.
[Figure: Sample networking plan. A management cluster runs the active and standby FusionManager VMs (reachable at the FusionManager floating IP address) and the active and standby VRM VMs (reachable at the VRM floating IP address); a service cluster runs the other hosts. Each host connects to the management plane through eth0 (management NIC 1), to the storage plane through eth3 (storage NIC 2) over four storage links to the storage controller (Controller B), and to the service plane through eth4 (service NIC 1); the BMC plane carries the management link.]
NOTE: If a parameter SN (such as A4) is mentioned in a procedure below, use the planned value for this
parameter.
(A2) Host BMC username and password: Used to log in to the host BMC. Example: username root, password root. Planned value: ____ / ____
D. VRM node information:
(D1) Management IP addresses: Management IP addresses of the active and standby VRMs. Required during the VRM installation phase. Example: 188.100.200.31 (active)/188.100.200.32 (standby). Planned value: ____ / ____
(D2) Floating IP address: Management plane floating IP address of the active and standby VRMs. Required during the VRM installation phase. Example: 188.100.200.30. Planned value: ____
Type / Parameter / Example and Planned Value
G. DVS information:
(G1) Name: Example: ServiceDVS. Planned value: ____
(G3) VLAN pool: Example: 97-99. Planned value: ____
H. FusionManager node information:
(H4) Management IP addresses and subnet mask of the active and standby FusionManager VMs: Example: 188.100.200.91 (active)/188.100.200.92 (standby), 255.255.255.0. Planned value: ____
(H5) FusionManager floating IP address: Example: 188.100.200.90. Planned value: ____
(H6) Management plane gateway address: Example: 188.100.200.1. Planned value: ____
(H8) NTP clock source IP address: Example: 188.100.200.81. Planned value: ____
(H9) Physical device information: Decompress FusionManager V100R003C00SPC200_PhyDevAccessTemplate.zip and enter the required data in phyDevAccessTemplate_zh.
Procedure
FusionSphere installation includes the installation and configuration of FusionCompute and FusionManager.
The following figure shows the procedure for installing FusionSphere.
[Flow: Install FusionCompute > Configure FusionCompute > Install FusionManager > Configure FusionManager > End]
In the following table, the left column describes the operation guidelines and the right column provides the
operation background, principles, and other descriptions. The configuration process includes the icons
pictured below. It is recommended to familiarize yourself with the meanings of the icons before proceeding.
Example icon: the numbered markers 1, 2, 3 in screenshots indicate the operation sequence.
Note: The screenshots in the steps below include sample parameter values, which may vary
from the actual parameter values onsite.
3 Install FusionCompute
3.1 Install an OS on the Host
2. Open the web browser on the PC, then enter the BMC IP address into the address bar to access the host BMC. (Huawei RH2285 is used as an example host.)
Step 2: Load a Host OS Installation File
2. Set DVD-ROM as the boot device. (NOTE: The DVD-ROM name and the configuration screen may vary depending on the server used. Use the number pad on the keyboard to enter numbers.)
3. Configure the gateway for eth0 (A4) on the Network Configuration page. -
Step 3: Configure Host Information
4. Select Hostname, then set the host name.
5. Select Timezone, then set the local time zone and time.
6. Select Password, then set the password for user root (A6).
NOTE: Enter the password carefully. The password must contain at least:
• Eight characters
• One space or one of the following special characters: `~!@#$%^&*()-_=+\|[{}];:'",<.>/?
• At least two of the following: lowercase letters, uppercase letters, and numbers
7. Configure Domain 0 if FusionCompute works with FusionAccess. For details, see Appendix 5.
3. If, in the data plan, the two VRM hosts are used to create user VMs and the user VMs use Huawei SAN storage devices, set the multipathing mode to Huawei mode by running commands after the OSs are installed on the two hosts. For details, see Appendix 6. You can set the multipathing mode for non-VRM hosts on the management web client.
(TIP: After OS installation on the two hosts is complete, you can install VRMs on the two hosts. During VRM installation, you can install OSs on other hosts.)
3.2 Install the VRM
Step 1: Decompress the Software Packages
1. Decompress FusionCompute V100R003C00SPC300_Tools.zip onto the PC to obtain the Installer, ModCNABFileTool, PlugIn, and ToDPS folders.
2. Decompress FusionCompute V100R003C00SPC300_VRM.zip into a new folder to obtain two VRM VM template files whose file name extensions are .xml and .vhd, respectively. Share the folder to the current PC user. (The path of this folder cannot contain Chinese characters. After sharing the folder, the host can access the VRM VM template files.)
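If you want to confirm that the folder containing the VRM template files is actually shared before starting the installation wizard, one generic check on a Windows local PC (an assumption; use the equivalent command on your OS) is to list the PC's shared resources:
net share      # the decompressed VRM template folder should appear in the list of shared resources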
2. Select a language.
4. Click Next after the components are installed. The Configure Host page is displayed.
Step 5: Configure a Data Store
1. Select Local disk/FC SAN, then click Next. (You are advised to use local disks to install VRMs.)
2. After you refresh storage devices, click Add to add a data store. (If you select local disks, the active and standby VRM VMs will use disks of different hosts. Local disks are independent from the network and are therefore highly reliable.)
Step 6: Configure VRM VM Information
(VM to the right of "By deployment scale" refers to the VM to be created; PM refers to the host on which it is deployed.)
4. Click Verify Configuration and click Next after the verification succeeds. The configuration page for the standby VRM is displayed.
2. Click Verify Configuration and click Next after the verification succeeds. -
Step 9: Install the VRMs
2. Click the link on the displayed page and log in to FusionCompute. (FusionCompute login: enter the VRM floating IP address in the browser. Default username: admin; password: Admin@123.)
4 Configure FusionCompute
4.1 Create a Service Cluster on a Site
1. Go to the Host and Cluster page. (Add all hosts other than the ...)
Step 1: Add Hosts
2. Go to the Add Host page. (Right-click the cluster.)
3. Set the host parameters, then click OK to add the host. (Host information: A4 and A5. After you set the host BMC information (A1 and A2), you can power on or off the host on FusionCompute.)
1. Go to the page for binding network ports. (If ports eth0 and eth1 are both common ports on the management plane of the host, bind them in active/standby mode to improve network reliability.)
2. Locate the row that contains Mgnt_Aggr, then open the Add Network Port dialog box. (Mgnt_Aggr is a bound port automatically created during host OS installation that contains the management port eth0 you configured. Select the other management port, eth1, and bind it to eth0 as Mgnt_Aggr to improve network reliability.)
Step 2: Bind Host Management Ports
5. Bind eth0 and eth1 on the active and standby VRM hosts. For details, see 1 to 4. -
1. Open the Add Storage Resource dialog box. (For details about network communication between hosts and storage devices, see Appendix 3.)
2. Set the storage device parameters (E1 to E3), then click Add. (The values displayed in the figure are sample values. The management IP address specifies the storage device. A storage area network (SAN) device connects to a host through multiple storage links. Enter the information for all links at the same time.)
4.4 Add a Data Store to a Host (SAN Storage)
Step 2: Add Storage Ports to a Host
2. Select the storage ports, then click Next. The page for connection configuration is displayed. (PORTX indicates network port ethX on the host.)
3. Set the storage port parameters (E4 and E5), then click Next. The page for information confirmation is displayed. (The IP address is used to communicate with storage devices. Set it to an idle IP address on the storage plane.)
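Once a storage port has its storage-plane IP address, a basic reachability check from the host can catch cabling or VLAN mistakes early. This is only an informal suggestion; it assumes you have shell access to the host (for example, a session like the one used in section 8) and that ICMP is permitted on the storage plane. Replace the placeholder with the storage device address from your data plan (E1 to E3):
ping -c 3 <storage device service IP>      # run on the host; successful replies confirm the storage plane link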
1. Go to the Associate Storage Resource page of the host. (After a storage device is associated with a host, the host can discover all storage resources on the device. For details about network communication between hosts and storage devices, see Appendix 3.)
2. Select an added storage resource, then click Association. The storage resource is associated with the host. Click OK on the displayed dialog box. (All associated storage resources are displayed on the Storage Resource page.)
1. Record the world wide name (WWN) of the host displayed in the bottom right corner of the Configuration > Storage Resource page. (Storage devices identify hosts using the WWN. For details about network communication between hosts and storage devices, see Appendix 3.)
Step 4: Configure the Initiator
2. Enter the storage device management IP address in the address bar of the browser, then press Enter to load the storage device management software. (NOTE: The initiator can only be configured using the storage device management software.)
5. Enter the administrator username and password, then scan the storage device. (The address entered is the storage IP address of the storage device.)
6. Enter the logical host view, then click Initiator Configuration. (Select the logical host that matches the logical unit number (LUN). The host selected in the figure is an example.)
7. Add the host WWN to the initiator. (Select the host WWN you recorded in 1.)
3. Select a LUN on a SAN device, then add it to the host as a data store. (Virtualized or Not: A virtualized data store supports advanced features, such as thin provisioning disks, snapshots, and live storage migration, and provides high resource utilization. However, disk creation on it is slow. If you set this parameter to Yes, you can select whether to format the data store.)
4.5 Add Virtual Network Resources to a Site
NOTE: If the service plane and the management plane are deployed on the same network plane, no service DVS is
required. Add a service plane VLAN pool to the management plane DVS. For details, see Appendix 7.
2. Set the DVS name (G1), then check the boxes next to Add uplink and Add VLAN pool. (If intelligent network interface cards (iNICs) are used, set Switching type to Cut-through mode.)
3. Select a service port for each host on the Add Uplink page. (An uplink is a physical port on a host that connects to a DVS. VMs connect to the network through the physical ports on the host. Add all the hosts in a cluster to the same uplink group on a DVS. This allows VMs in the cluster to have the most migration options if a fault occurs.)
Step 1: Create a DVS
4. Set the DVS VLAN pool (G3) on the Add VLAN Pool page. (A virtual local area network (VLAN) pool defines the allowed VLAN range for a DVS. During service provisioning, create port groups on the DVS for each service. This allows each port group to use VLANs from different VLAN pools. The VLAN pool set on the DVS must be the same as the VLANs set on the physical switch to which the host connects.)
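Because the DVS VLAN pool must match the VLANs allowed on the physical switch port facing the host's service NIC, it can help to compare the pool against the switch-side configuration. The following is only a hedged sketch of what that configuration might look like on a Huawei switch running VRP; the interface name is illustrative, and your switch model, syntax, and VLAN range (G3) may differ:
interface GigabitEthernet0/0/1           # example port facing the host service NIC (eth4)
 port link-type trunk                    # tagged VLANs toward the host
 port trunk allow-pass vlan 97 to 99     # must cover the DVS VLAN pool (G3 sample: 97-99)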
6. Install FusionManager
6.1. Install FusionManager
Step 1: Create a FusionManager VM
1. Go to the page for creating VMs on FusionCompute.
2. Select a host for creating a FusionManager VM, then click Next. (Select a host in the management cluster. Bind the VM to the host to prevent the active and standby ...)
3. Set the VM specifications and QoS parameters based on the planned data. (The figure shows the recommended values for a system with fewer than 200 VMs. If more than 200 VMs are to be deployed, see the FusionManager Software Installation Guide. Quality of service (QoS) parameters ensure that FusionManager VMs have optimal performance and that they release resources to other VMs when their loads are light. Figure callouts: "Fixed value"; "Set it to 1/3 of the maximum value.")
4. Use the default values for other advanced attributes and click Next. -
5. Set the VM NIC and disk attributes, then click Next. (Select a planned data store for FusionManager and set its capacity to 280 GB.)
6. Confirm the information, select Start the VM after creation, and click Finish. (You can click Query Task to view the creation progress.)
3. Wait for a few seconds, then click OK to go to the OS page. If the connection is closed, click OK to reconnect. Do not close or refresh the VNC window during the mounting process.
4. Within 30 seconds, select Install and press Enter. (The automatic installation starts and takes about 40 minutes. The installation is complete when a login message is displayed.)
6.2 Configure FusionManager
Step 1: Configure the Active FusionManager VM
1. Enter the username and password to log in to the active FusionManager VM using VNC. (Password for user galaxmanager: GalaxManager_2012)
3. Set the network information as prompted based on the planned data. Information to be set:
• Names of the active and standby FusionManager VMs (H2)
• Management IP addresses and subnet mask of the two VMs (H4)
• Management plane gateway address (H6)
• Standby FusionManager VM IP address (H4)
• Floating IP address (H5)
• Arbitration IP address (H7)
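After the network information is applied, you can confirm the resulting addresses and gateway from inside the FusionManager VM. This is an informal check rather than part of the official procedure; it assumes the VNC shell session is still open and that the standard iproute2 tools are available on the VM's OS:
ip addr show       # confirm the management IP address and subnet mask (H4)
ip route show      # confirm the management plane gateway (H6) is the default route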
5. Run the following command to initialize the FusionManager components (perform this operation only on the active node):
sh /opt/UHM/Runtime/bin/uhmConfBeforeRun.sh
6. Run the following commands to install the hardware adapter, which allows FusionManager to manage more physical devices (for details about the supported devices, see the hardware compatibility table in the FusionManager product description):
cd /opt/UHM/Runtime/LegoRuntime/uhm/adaptor
sh install_adaptor.sh
The installation is complete if "install adaptor OK" is displayed.
3. Set the network information as prompted based on the planned data. (NOTE: local node name and physical IP on local node specify the name and IP address of the current VM, respectively. Enter the information about the standby FusionManager VM based on the data plan.)
Step 1: Log in to FusionManager
1. Enter the FusionManager floating IP address in the address bar of the web browser, then log in to FusionManager as user admin. (Default password: Admin@123. The system asks you to change the password upon your first login.)
2. Import the phyDevAccessTemplate_en.xlsm template you filled in. (If you did not obtain the template during data planning, click Download Template to obtain it and fill in the information about the devices to be added.)
Step 1: Add a Hypervisor
2. Set the hypervisor parameters, then click Add.
• IP address: VRM floating IP address (D2)
• Port: 7443
• Username: gmsysman
• Password: GMEnginE@123
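If adding the hypervisor fails, a quick reachability check of the VRM interface from the FusionManager VM can help isolate network problems. This is an informal suggestion; it assumes shell access to the FusionManager VM and that curl is available on its OS, and it uses the sample floating IP address (D2):
curl -kv https://188.100.200.30:7443     # -k skips certificate validation; any HTTP response indicates port 7443 is reachable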
2. Configure DST.
Step 2: Check the System Time
1. Check the system time.
• If the offset between the system time and the standard local time is greater than 1 minute, go to Step 3.
• If the offset is less than or equal to 1 minute, go to Step 4.
(Password for user galaxmanager: GalaxManager_2012. For details about time synchronization, see Appendix 4.)
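One generic way to measure the offset against the planned NTP clock source without changing the clock is shown below. This is only a suggestion; it assumes the ntpdate utility is present on the node you are checking and uses the sample NTP clock source address (H8):
ntpdate -q 188.100.200.81      # query mode: reports the offset but does not set the time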
8 Configure Alarm Reporting
1. Use PuTTY to log in to the VRM through the management IP address of the active VRM (D1). (Configure FusionCompute to report alarms to FusionManager for centralized management.)
2. Log in as user gandalf, switch to user root, and run the TMOUT=0 command to disable logout on timeout:
login as: gandalf
gandalf@VRM01:~> su - root
Password: //Default password: Galax@8800
VRM01:~ # TMOUT=0
VRM01:~ #
For example:
VRM01:~# sed -i '/.*\bssoserver\b/d' /etc/hosts
VRM01:~# echo -e "188.100.200.60\tssoserver" >> /etc/hosts
VRM01:~# sh /opt/galax/vrm/om/fms/bin/modGMIp.sh 188.100.200.60
modify GM IP Success.........
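After the example commands above are run, you can quickly confirm that the ssoserver entry in /etc/hosts now points to the FusionManager address used in those commands. This is an informal check using a standard grep; the expected output assumes the sample address from the example:
VRM01:~# grep ssoserver /etc/hosts
188.100.200.60  ssoserver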
4. Configure the alarm reporting function on the standby VRM. For details, see 1 to 3. -
Appendix 1 Principles of VM Network Access
A virtual NIC of a VM communicates with an external network by connecting to the DVS through the port group,
then by connecting to the physical NIC of a host through the DVS uplink. These connections are shown in the
following figure.
[Figure: On each host (Host 1 and Host 2), a VM's virtual NIC connects to a port group, the port group connects to the DVS, and the DVS uplink connects to the host's physical NIC and the physical network; virtual resources sit above the physical resources.]
Network Element (NE): Description
Port group: A port group is a virtual logical port similar to a template with network attributes. A port group is used to define VM NIC attributes and uses a DVS to connect to the network:
• Subnet: FusionCompute automatically allocates an IP address in the subnet IP address pool to each NIC on VMs that use the port group.
• VLAN: Users must manually assign IP addresses to VM NICs. VMs connect to the VLAN defined by the port group.
DVS: A DVS is similar to a switch used for communications on the layer 2 network. A DVS links the port group to the VM and connects to the physical network through the uplink.
Uplink: An uplink connects the DVS to the physical network. An uplink is used for VM upstream data transmission.
Logical layer:
[Figure: A host with a host bus adapter (HBA) and WWN connects over the IP SAN to a SAN device; on the SAN device, an initiator is logically associated with a logical host that contains LUN01, LUN02, and further LUNs behind the controller.]
After a storage device is associated with a host, the host bus adapter (HBA) generates a WWN for the host. If an initiator is added to the logical host on the storage device using the host WWN, the host can access the storage resources (LUNs) that match the logical host on the storage device and add the LUNs as data stores.
[Figure: Hosts, VRM VMs, and FusionManager VMs synchronizing time from the NTP clock source.]
After the NTP clock source is configured, all hosts, VRM VMs, and FusionManager VMs synchronize time from the NTP clock source to ensure system time accuracy.
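A generic way to confirm that a Linux node is actually tracking the configured clock source is to list its NTP peers. This is an informal check; it assumes the standard ntpq utility is present on the node (host, VRM VM, or FusionManager VM) you are inspecting:
ntpq -p      # the peer marked with '*' is the clock source currently selected for synchronization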
Appendix 5 Domain 0 Configuration
1. Select Dom0 setting, then set the Domain 0 parameters. (For details about configurations for Domain 0, see Deploying the Virtualization Platform (Non-FusionCube) in the FusionCloud Desktop V100R005C00 Software Deployment Instruction.)
2. Run the following command to set the multipathing mode to Huawei mode:
sh /opt/uvp/multipath/change_multipath_mode.sh 1
3. Run reboot to restart the host for the mode to take effect.
(Right-click the management plane DVS.)
2. Set the start ID and end ID of the VLAN pool for the service plane in the displayed dialog box.