
FusionSphere

V100R003C00
Quick Installation and Configuration
Issue 01
Date: 2013-08-15

FusionSphere virtualizes the hardware resources of a physical server by deploying virtualization software on it. This allows one physical server to provide the functionality of multiple virtual servers and run multiple virtual machines (VMs).

FusionSphere shortens the time needed to bring services online, increases the usage of server hardware resources, simplifies maintenance, and lowers maintenance costs.

[Figure: FusionSphere solution overview, showing FusionCompute, FusionManager, and the client]

"Quick" Documentation Series


The FusionSphere V100R003C00 Quick series of documentation, including the Quick Installation and
Configuration Guide and the Quick Service Provisioning Guide, instructs engineers on how to rapidly
complete project implementation by focusing on common installation, deployment, and service provisioning
scenarios.
In this document, you will learn how to quickly install and configure FusionSphere. For details, see
FusionSphere Software Commissioning Instructions.
Contact Information
Technical Support
• Official Website: http://support.huawei.com/enterprise/
• Documentation: http://support.huawei.com/enterprise/productsupport
• Forum: http://support.huawei.com/ecommunity/
• Local Offices: http://enterprise.huawei.com/cn/about/contact/all/index.htm
• Service Hotline: 400-822-9999
• Email: support_e@huawei.com

Feedback
Send us your thoughts, suggestions, or any other feedback using the technical support contact information listed above.

Copyright © Huawei Technologies Co., Ltd. 2013. All rights reserved.
1 Introduction
The following figure shows components in the FusionSphere solution.

[Figure: FusionSphere solution components: management software (FusionManager), virtualization software for hosts, the virtualization resource management node (VRM), servers (hosts), switches, and storage devices. A dashed box marks the coverage of this document.]
Component Description
Server A physical server. After the virtualization software is installed on a server, the server
functions as a host in the FusionCompute system, providing virtualized computing
resources, including virtual CPUs (VCPUs), memory, and network ports.
Storage device Connects to a host through network devices and provides virtual storage resources to VMs.

Switch A network device that connects a server and a storage device. It also connects to the
external network, thereby implementing internal and external communications for the
FusionSphere system.
FusionCompute A component of the virtualization software. FusionCompute consists of two hosts and the
virtualization resource management module, that is, active and standby virtualization
resource management (VRM) VMs. FusionCompute virtualizes physical resources and
creates and manages VMs.
FusionManager FusionSphere management software consisting of two FusionManager VMs (active and
standby). It manages and maintains virtual resources, physical resources, and services
in a centralized manner.

2 FusionSphere Installation Preparations

Check installation conditions → Obtain software → Obtain data → Learn installation process

Prerequisites
• You have obtained the integration design plan.
• All hardware has been installed and is working properly.
• You have configured network and storage devices based on the design plan.

Servers used in the FusionSphere system must meet the following requirements.

Item          Requirement
Hardware      Intel or AMD 64-bit CPU with the virtualization function enabled
              Memory ≥ 8 GB
              Single disk space ≥ 360 GB
RAID          Use RAID 1 consisting of hard disks 1 and 2 to install the OS and software on the host, thereby enhancing host reliability.
Boot device   The first boot device for the host is set to Hard disk.
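The virtualization requirement above can be spot-checked from a Linux shell before installation. The following is an illustrative sketch, not part of the official procedure: it greps a CPU-info file for the vmx (Intel VT-x) or svm (AMD-V) flags.

```shell
has_virt_flags() {
    # Look for hardware-virtualization CPU flags in the given file.
    if grep -Eq '(vmx|svm)' "$1"; then
        echo "virtualization capable"
    else
        echo "virtualization flags not found"
    fi
}

# On the host itself, one would check the live CPU info:
has_virt_flags /proc/cpuinfo
```

Even when the flag is present, the feature may still be disabled in the BIOS, so verify it in the BIOS setup as well.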

The local PC used for FusionSphere installation must meet the following requirements.

Item                   Requirement
Hardware               CPU: 32-bit; memory ≥ 2 GB
                       System disk free space ≥ 1 GB; user disk free space ≥ 30 GB
Operating system (OS)  32-bit Windows XP or Windows 7, with the Windows firewall disabled
Network                Communication between the local PC and the management and host BMC planes is normal.

Obtain the following software files for FusionCompute installation.
Item                               Software Package                            How to Obtain
FusionCompute host OS              FusionCompute                               Visit http://support.huawei.com and choose
                                   V100R003C00SPC300_CNA.iso                   Enterprise > Software > IT > FusionCloud >
FusionCompute installation wizard  FusionCompute                               FusionSphere > FusionCompute >
                                   V100R003C00SPC300_Tools.zip                 V100R003C00SPC300.
VRM VM template file               FusionCompute
                                   V100R003C00SPC300_VRM.zip
Remote access tool                 PuTTY                                       Visit http://www.putty.org.

Obtain the following software files for FusionManager installation.

Item                                Software Package                                   How to Obtain
FusionManager installation file     FusionManager                                      Visit http://support.huawei.com and go to
                                    V100R003C00SPC300_GMN_FS.iso                       Enterprise > Software > IT > FusionCloud >
Tool for importing physical device  FusionManager                                      FusionSphere > FusionManager >
information in batches              V100R003C00SPC300_PhyDevAccessTemplate.zip         V100R003C00SPC300.
Before installation, complete the integration design plan and obtain all installation data that the plan requires. The following figure shows a sample networking plan; the table that follows lists sample data for this plan.

[Figure: Sample networking plan. A management cluster contains active and standby FusionManager VMs behind a FusionManager floating IP address and active and standby VRM VMs behind a VRM floating IP address; a service cluster holds user VMs. Each host provides eth0/eth1 (management NICs 1 and 2), eth2/eth3 (storage NICs 1 and 2), and eth4/eth5 (service NICs 1 and 2). Storage controllers A and B each terminate four storage links. The planes shown are the management, storage, service, and BMC planes; the local PC attaches over a management link.]
NOTE: When a parameter number (such as A4) is referenced in a procedure below, use the planned value for that parameter.

Type: A. Host information

(A1) IP address of the host BMC                         Example: 188.100.200.71
     Used to access the host BMC.                       Planned value: _________________

(A2) Host BMC username and password                     Example: Username: root / Password: root
     Used to log in to the host BMC.                    Planned value: _________________

(A3) Host management plane ports                        Example: eth0 and eth1
     During host OS installation, set a management      Planned value: _________________
     plane port for the host. If multiple ports are
     deployed, bind the second port to the first port
     in the configuration phase.

(A4) Host management plane port parameters              Example: IP address: 188.100.200.21
     The parameters are required during host OS         Subnet mask: 255.255.255.0
     installation.                                      Gateway: 188.100.200.1
                                                        Planned value: _________________

(A5) Host name                                          Example: Host01
     Used to identify the host.                         Planned value: _________________

(A6) Password of user root                              Example: huawei_123
     Used to log in to the host.                        Planned value: _________________

Type: B. Service cluster information

(B1) Name                                               Example: ServiceCluster
     A management cluster is automatically created      Planned value: _________________
     after VRM installation. Create service clusters
     for users during the configuration phase.

Type: C. Local PC information

(C1) IP address                                         Example: 188.100.4.241
     Used to communicate with the host.                 Planned value: _________________

(C2) User information                                   Example: Username: administrator / Password: 123456
     PC OS user account, required when the VRM VM       Planned value: _________________
     template file is obtained from the PC.

Type: D. VRM node information

(D1) Management IP addresses                            Example: 188.100.200.31 (active) / 188.100.200.32 (standby)
     Management IP addresses of the active and          Planned value: ______________ / ______________
     standby VRMs. Required during the VRM
     installation phase.

(D2) Floating IP address                                Example: 188.100.200.30
     Management plane floating IP address of the        Planned value: _________________
     active and standby VRMs. Required during the
     VRM installation phase.

Type: E. Storage device information

(E1) Storage device name                                Example: IPSAN01
     Required when adding a storage device to a site.   Planned value: _________________

(E2) Storage device management IP address               Example: 188.100.200.203
     Used to identify the device and access its         Planned value: _________________
     management software.

(E3) Storage device network parameters                  Example: 172.20.100.100 VLAN ID: 4
     Storage IP address and VLAN ID of a storage                 172.30.100.100 VLAN ID: 5
     device. If multipathing mode is selected,                   172.40.100.100 VLAN ID: 6
     obtain all IP addresses and VLAN IDs of the                 172.50.100.100 VLAN ID: 7
     storage plane.                                     Planned value: ______________ VLAN ID: ______
                                                                       ______________ VLAN ID: ______
                                                                       ______________ VLAN ID: ______
                                                                       ______________ VLAN ID: ______
Type: E. Storage device information (continued)

(E4) Storage plane network parameters                   Example: 172.20.X.X – VLAN 5 – 255.255.255.0
     All network segments, VLAN IDs, and subnet                  172.30.X.X – VLAN 6 – 255.255.255.0
     masks of the storage plane.                                 172.40.X.X – VLAN 7 – 255.255.255.0
                                                                 172.50.X.X – VLAN 8 – 255.255.255.0
                                                        Planned value: _________ – VLAN ___ – ____________
                                                                       _________ – VLAN ___ – ____________
                                                                       _________ – VLAN ___ – ____________
                                                                       _________ – VLAN ___ – ____________

(E5) Host storage ports                                 Example: StoragePort1 172.20.123.123
     Set host storage port names and IP addresses                StoragePort2 172.30.123.123
     used to communicate with storage devices. Each              StoragePort3 172.40.123.123
     IP address must be an available IP address on               StoragePort4 172.50.123.123
     the storage plane.                                 Planned value: ___________ ________________
                                                                       ___________ ________________
                                                                       ___________ ________________
                                                                       ___________ ________________

Type: F. Clock source information

(F1) NTP server IP address                              Example: 188.100.200.81
                                                        Planned value: _________________

Type: G. Service plane DVS

(G1) Name                                               Example: ServiceDVS
                                                        Planned value: _________________

(G2) Port                                               Example: eth4
     Serves as the unique uplink of a distributed       Planned value: _________________
     virtual switch (DVS).

(G3) VLAN pool                                          Example: 97-99
                                                        Planned value: _________________

Type: H. FusionManager node information

(H1) Names of the hosts where the FusionManager         Example: Host01/Host02
     VMs are located                                    Planned value: _________________

(H2) Names of the active and standby FusionManager      Example: FM01/FM02
     VMs                                                Planned value: _________________

(H3) Data stores used by the active and standby         Example: Datastore001/Datastore002
     FusionManager VMs                                  Planned value: _________________

(H4) Management IP addresses and subnet mask of the     Example: 188.100.200.91 (active) / 188.100.200.92 (standby)
     active and standby FusionManager VMs                        255.255.255.0
                                                        Planned value: _________________

(H5) FusionManager floating IP address                  Example: 188.100.200.90
                                                        Planned value: _________________

(H6) Management plane gateway address                   Example: 188.100.200.1
                                                        Planned value: _________________

(H7) Arbitration IP address (default: management        Example: 188.100.200.1
     plane gateway address)                             Planned value: _________________

(H8) NTP clock source IP address                        Example: 188.100.200.81
                                                        Planned value: _________________

(H9) Physical device information
     Decompress FusionManager V100R003C00SPC300_PhyDevAccessTemplate.zip and enter the required data in phyDevAccessTemplate_zh.

Procedure
FusionSphere installation includes the installation and configuration of FusionCompute and FusionManager.
The following figure shows the procedure for installing FusionSphere.

FusionSphere installation procedure:
1. Install FusionCompute.
2. Configure FusionCompute.
3. Install FusionManager.
4. Configure FusionManager.
5. Configure the alarm reporting function.

In the procedures that follow, the left column describes the operation steps and the right column provides the operation background, principles, and other descriptions. The screenshots use the icons described below; familiarize yourself with their meanings before proceeding.

Icon                      Description                                  Example
Numbered circles (1 2 3)  Operation sequence                           Step 1: Click VDC Management.
Arrow                     Transition between different sections        Step 2: Click Network Management.
                          of the interface
Dashed arrow              Skip to a step
3 Install FusionCompute

3.1 Install an OS on the Host (ISO Method)

Note: The screenshots in the steps below include sample parameter values, which may vary from the actual parameter values onsite.
Step 1: Mount an ISO File to the Host

1. Enable communication between the local PC and the host where FusionCompute is to be installed. Connect the PC and the host to the same switch.
2. Open the web browser on the PC, then enter the BMC IP address in the address bar to access the host BMC. A Huawei RH2285 is used as the example host.
3. Log in to the BMC as an administrator. The administrator account is root.
4. Open the Remote Control window. On some Huawei server BMCs, you may need to click Remote Virtual Console (requiring JRE) on the Remote Control page.
5. Mount the ISO image so that an OS can be installed on the host. Select FusionCompute V100R003C00SPC300_CNA.iso. When the ISO file is mounted successfully, Connect changes to Disconnect.
Step 2: Load a Host OS Installation File

1. Restart the host after the ISO file is mounted. When the host begins to restart, press F11 repeatedly until the boot device selection screen is displayed.
   Note: The host BIOS password may be required during the restart. Obtain the password in advance.
2. Set DVD-ROM as the boot device.
   Note: The DVD-ROM name and the configuration screen may vary depending on the server used.
3. Select Install. Host loading begins and takes about 2 minutes.

Step 3: Configure Host Information

TIP: When configuring host data on the configuration screen:
• Press Tab or the up and down arrow keys to move the cursor.
• Press Enter to expand or execute the highlighted item.
• Press the space bar to toggle between options.

1. Select Network, then select a management plane port, for example, eth0.
2. Configure the IP address and subnet mask for eth0 (A4). Use the number pad on the keyboard to enter numbers.
3. Configure the gateway for eth0 (A4) on the Network Configuration page.
4. Select Hostname, then set the host name (A5).
5. Select Timezone, then set the local time zone and time.
6. Select Password, then set the password for user root (A6).
   NOTE: Enter the password carefully. The password must contain at least eight characters, at least one space or one of the special characters `~!@#$%^&*()-_=+\|[{}];:'",<.>/?, and at least two of the following: lowercase letters, uppercase letters, and numbers.
7. Configure Domain 0 if FusionCompute works with FusionAccess. For details, see Appendix 5.
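The password rules for user root can be checked mechanically before the value is typed into the installer. The following is an illustrative sketch (not an official tool) that applies the three stated rules to a candidate password:

```shell
check_root_password() {
    # Rules from the guide: >= 8 characters; at least one space or
    # special character; at least two of {lowercase, uppercase, digits}.
    pw=$1
    [ "${#pw}" -ge 8 ] || { echo "too short"; return 1; }
    case $pw in
        *[!a-zA-Z0-9]*) : ;;   # contains a space or special character
        *) echo "needs a space or special character"; return 1 ;;
    esac
    classes=0
    case $pw in *[a-z]*) classes=$((classes + 1)) ;; esac
    case $pw in *[A-Z]*) classes=$((classes + 1)) ;; esac
    case $pw in *[0-9]*) classes=$((classes + 1)) ;; esac
    [ "$classes" -ge 2 ] || { echo "needs two character classes"; return 1; }
    echo "password OK"
}
```

For example, the sample value huawei_123 from the data plan (A6) passes all three checks.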

Step 4: Install an OS on the Host

1. Select OK in the bottom right corner of the screen, then select Yes in the displayed dialog box to install the OS on the host. Ensure that the OS is installed on the default disk partition, confirm the configuration, then select Yes. The installation takes about 10 minutes; the host restarts when it is complete.
2. Repeat the above procedure to install OSs on additional hosts. Once the OS installation progress bar for one host is displayed, you can begin installing an OS on another host.
3. If, in the data plan, the two VRM hosts are used to create user VMs and the user VMs use Huawei SAN storage devices, set the multipathing mode to Huawei mode by running commands after the OSs are installed on the two hosts. For details, see Appendix 6. You can set the multipathing mode for non-VRM hosts on the management web client.
   TIP: After OS installation on the two hosts is complete, you can install VRMs on them. During VRM installation, you can install OSs on other hosts.
3.2 Install the VRM

Step 1: Decompress the Software Packages

1. Decompress FusionCompute V100R003C00SPC300_Tools.zip onto the PC to obtain the Installer, ModCNABFileTool, PlugIn, and ToDPS folders.
2. Decompress FusionCompute V100R003C00SPC300_VRM.zip into a new folder to obtain two VRM VM template files whose file name extensions are .xml and .vhd, respectively. Share the folder with the current PC user.
   The path of this folder cannot contain Chinese characters. After the folder is shared, the host can access the VRM VM template files.

Step 2: Check the System Environment

1. Run FusionCompute_EnvCheck.bat in the PlugIn folder to check whether the PC has .NET 4.0 and JRE installed and whether the file sharing service is enabled. If not, the system installs the plugins and configures the service; check the output to confirm.
   If "Press any key to continue..." is displayed, the check is complete and the PC meets the requirements.

Step 3: Initialize the Installation Wizard

1. Run FusionComputeInstaller.exe in the Installer folder. The installation wizard opens.
2. Select a language.
3. Select an installation mode. The system begins installing components, which takes about 30 seconds.
4. Click Next after the components are installed. The Configure Host page is displayed.

Step 4: Configure the Host

1. Configure the host where the active VRM VM is deployed, then click Next.
   • Enter the management IP address you set during host installation (A4).
   • Enter the password you set during host installation (A6).
   Note: If the message "The host is unreachable" is displayed, ensure that the IP address is correct and can be pinged from the local PC.
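A common cause of "The host is unreachable" is an IP/gateway/mask mismatch entered during host OS installation. The following is a hedged sketch (not from the official procedure) that checks whether two IPv4 addresses, such as the (A4) host management IP and its gateway, fall in the same subnet under a given mask:

```shell
ip_to_int() {
    # Convert a dotted-quad IPv4 address to a single integer.
    IFS=. read -r a b c d <<EOF
$1
EOF
    echo $(( (a << 24) + (b << 16) + (c << 8) + d ))
}

same_subnet() {
    # $1, $2: IPv4 addresses; $3: netmask.
    # True if both addresses are in the same subnet under the mask.
    m=$(ip_to_int "$3")
    [ $(( $(ip_to_int "$1") & m )) -eq $(( $(ip_to_int "$2") & m )) ]
}

# Sample plan values: host (A4) 188.100.200.21, gateway 188.100.200.1.
if same_subnet 188.100.200.21 188.100.200.1 255.255.255.0; then
    echo "host and gateway share a subnet"
fi
```

If the check fails, correct the network settings on the host before retrying the wizard.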
Step 5: Configure a Data Store

1. Select Local disk/FC SAN, then click Next. You are advised to use local disks to install VRMs.
2. After you refresh the storage devices, click Add to add a data store.
   If you select local disks, the active and standby VRM VMs use disks on different hosts. Local disks are independent of the network and are therefore highly reliable.
3. Add a data store.
   • You are advised to select disks using RAID 1 to enhance reliability. The name of a disk using RAID 1 is twice as long as that of other disks.
   • Enable virtualization: You are advised not to check this box; do not enable virtualization during VRM installation. Virtualized local storage supports thin provisioning and snapshots, but provides lower I/O performance and disk creation speed.
   For details about VM storage access, see Appendix 2.
Step 6: Configure VRM VM Information

1. Configure the VRM VM template information.
   • Select FusionCompute_V100R003C00SPC300_VRM.xml in the shared folder.
   • Enter the information about the current PC user (C2).
2. Set the management IP address of the VRM (D1).
3. Set the VRM VM specifications. You can set the specifications based on the deployment scale or customize them. In the deployment scale options, VM refers to the VMs to be created and PM refers to the hosts deployed.
4. Click Verify Configuration, then click Next after the verification succeeds. The configuration page for the standby VRM is displayed.

Step 7: Configure the Standby VRM

1. Configure the standby VRM. For details, see Step 4 to Step 6. The configuration of the standby VRM is consistent with that of the active VRM. The Configure Active/Standby Parameters page is then displayed.
   Caution: Do not deploy the active and standby VRMs on the same host.

Step 8: Configure Active/Standby Parameters

1. Set the management plane floating IP address of the active and standby VRMs (D2). The parameters are displayed automatically; ensure that they are consistent with the planned data.
2. Click Verify Configuration, then click Next after the verification succeeds.

Step 9: Install the VRMs

1. Click Next to start the installation. The installation takes about 50 minutes. During VRM installation, you can install OSs on other hosts.
2. Click the link on the displayed page and log in to FusionCompute. To log in, enter the VRM floating IP address in the browser. The default username is admin and the default password is Admin@123. Log in to FusionCompute as prompted.
3. Click Finish to finish the installation.

4 Configure FusionCompute

4.1 Create a Service Cluster on a Site

1. Go to the page for creating a cluster.
   After the VRM installation, a management cluster named ManagementCluster containing the two VRM hosts is automatically created.
2. Create a service cluster. Obtain the cluster name (B1) from the data plan.

4.2 Add Hosts to Clusters

Step 1: Add Hosts

1. Go to the Host and Cluster page. Add all hosts other than the two VRM hosts to clusters.
2. Go to the Add Host page by right-clicking the cluster.
3. Set the host parameters, then click OK to add the host.
   • Host information (A4 and A5)
   • After you set the host BMC information (A1 and A2), you can power the host on or off from FusionCompute.

Step 2: Bind Host Management Ports

1. Go to the page for binding network ports.
   If ports eth0 and eth1 are both common ports on the management plane of the host, bind them in active/standby mode to improve network reliability.
2. Locate the row that contains Mgnt_Aggr, then open the Add Network Port dialog box.
   Mgnt_Aggr is a bound port automatically created during host OS installation that contains the management port eth0 you configured. Select the other management port, eth1, and bind it to eth0 in Mgnt_Aggr to improve network reliability.
3. Select eth1, then click OK to bind it to eth0.
4. Click OK on the Bind Network Port page.
5. Bind eth0 and eth1 on the active and standby VRM hosts. For details, see 1 to 4.

4.3 Add Storage Resources to a Site (SAN Storage)

1. Open the Add Storage Resource dialog box.
   For details about network communication between hosts and storage devices, see Appendix 3.
2. Set the storage device parameters (E1 to E3), then click Add.
   The management IP address identifies the storage device. A storage area network (SAN) device connects to a host through multiple storage links; enter the information for all links at the same time.
   Note: The values displayed in the figure are sample values.
4.4 Add a Data Store to a Host (SAN Storage)

Step 1: Change the Host Multipathing Mode

1. Go to the Getting Started page of the host.
   The default multipathing mode for FusionCompute is Universal. If Huawei IP SAN devices are deployed, change the mode to Huawei to improve storage performance. If non-Huawei storage devices are deployed, skip this step.
2. Open the Advanced Settings dialog box.
3. Select Huawei. Restart the host for the settings to take effect. The restart takes about 3 minutes.
   You can ping the IP address of the host to check whether it has restarted; if the host responds to ping, it has restarted successfully.
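The "ping until the host is back" check above can be automated with a small polling loop. This is a generic sketch, not part of the guide; the probe command and the host IP address (the sample A4 value) are illustrative:

```shell
wait_until_up() {
    # $1: max attempts; $2: seconds between attempts;
    # remaining args: probe command to run until it succeeds.
    attempts=$1
    interval=$2
    shift 2
    i=0
    while [ "$i" -lt "$attempts" ]; do
        if "$@" > /dev/null 2>&1; then
            echo "host is up"
            return 0
        fi
        i=$((i + 1))
        sleep "$interval"
    done
    echo "host did not come back within the timeout"
    return 1
}

# In practice (188.100.200.21 is the sample A4 address):
# wait_until_up 60 3 ping -c 1 -W 2 188.100.200.21
```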

Step 2: Add Storage Ports to a Host

1. Go to the Add Storage Port page of the host. If you changed the multipathing mode of the host, perform this step after the host restarts.
   After storage ports are added to the host, the host can communicate with storage devices. For details about network communication between hosts and storage devices, see Appendix 3.
2. Select the storage ports, then click Next. PORTX indicates network port ethX on the host. The connection configuration page is displayed.
3. Set the storage port parameters (E4 and E5), then click Next. The information confirmation page is displayed.
   The IP address is used to communicate with storage devices. Set it to an idle IP address on the storage plane.
4. Confirm the information, then click Add.
5. If multipathing mode is selected, click Continue and repeat 2 to 4 to add the remaining storage ports to the host. Click Close on the Finish page after all ports are added.

Step 3: Associate Storage Resources with the Host

1. Go to the Associate Storage Resource page of the host.
   After a storage device is associated with a host, the host can discover all storage resources on the device. For details about network communication between hosts and storage devices, see Appendix 3.
2. Select an added storage resource, then click Association. Click OK in the displayed dialog box.
   The storage resource is associated with the host. All associated storage resources are displayed on the Storage Resource page.

Step 4: Configure the Initiator

1. Record the world wide name (WWN) of the host displayed in the bottom right corner of the Configuration > Storage Resource page.
   Storage devices identify hosts by WWN. For details about network communication between hosts and storage devices, see Appendix 3.
2. Enter the storage device management IP address in the address bar of the browser, then press Enter to load the storage device management software.
   The initiator can only be configured using the storage device management software.
3. Select a language, then click OK. The management software opens.
4. Click Discover Device.
5. Enter the administrator username and password, then scan the storage device. Use the storage IP address of the storage device.
6. Enter the logical host view, then click Initiator Configuration.
   Select the logical host that matches the logical unit number (LUN). The host selected in the figure is an example.
7. Add the host WWN to the initiator. Select the host WWN you recorded in 1.

Step 5: Add a Data Store

1. Switch to FusionCompute and scan the available storage devices of the host.
   A data store is used to create user disks on VMs. You can view the task progress on the Task Tracing page. For details about network communication between hosts and storage devices, see Appendix 3.
2. After the scan is complete, go to the Add Data Store page.
3. Select a LUN on a SAN device, then add it to the host as a data store.
   Virtualized or Not: A virtualized data store supports advanced features such as thin provisioning disks, snapshots, and live storage migration, and provides high resource utilization; however, disk creation on it is slow. If you set this parameter to Yes, you can select whether to format the data store.
4.5 Add Virtual Network Resources to a Site
NOTE: If the service plane and the management plane are deployed on the same network plane, no service DVS is
required. Add a service plane VLAN pool to the management plane DVS. For details, see Appendix 7.

Step 1: Create a DVS

1. Go to the Create DVS page.
   Create a DVS that connects to the service plane to provide network resources for user VMs. A DVS named ManagementDVS and a port group named managePortgroup for the management plane were automatically created during VRM installation, and the port Mgnt_Aggr was added to the DVS uplink group. For details about VM network access, see Appendix 1.
2. Set the DVS name (G1), then check the boxes next to Add uplink and Add VLAN pool.
   If intelligent network interface cards (iNICs) are used, set Switching type to Cut-through mode.
3. Select a service port for each host on the Add Uplink page.
   An uplink is a physical port on a host that connects to a DVS. VMs connect to the network through the physical ports on the host. Add all the hosts in a cluster to the same uplink group on a DVS; this gives VMs in the cluster the most migration options if a fault occurs.
4. Set the DVS VLAN pool (G3) on the Add VLAN Pool page.
   A virtual local area network (VLAN) pool defines the allowed VLAN range for a DVS. During service provisioning, create port groups on the DVS for each service; this allows each port group to use VLANs from different VLAN pools. The VLAN pool set on the DVS must be the same as the VLANs set on the physical switch to which the host connects.
5. Confirm the information, then click Create.
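The VLAN pool is entered as a range such as the sample value 97-99 (G3). A small illustrative sketch (not an official tool) that expands and sanity-checks such a range before it is configured on the DVS and the physical switch:

```shell
check_vlan_pool() {
    # Validate a VLAN pool given as "start-end" (e.g. "97-99").
    # Valid VLAN IDs are 1-4094, and start must not exceed end.
    start=${1%-*}
    end=${1#*-}
    case $start$end in *[!0-9]*) echo "not numeric"; return 1 ;; esac
    if [ "$start" -lt 1 ] || [ "$end" -gt 4094 ] || [ "$start" -gt "$end" ]; then
        echo "invalid range"
        return 1
    fi
    echo "VLAN pool $start-$end ($((end - start + 1)) VLANs) OK"
}
```

Running check_vlan_pool 97-99 on the sample plan value reports a valid three-VLAN pool.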

4.6 Configure the NTP Clock Source and Time Zone

Step 1: Configure the Network Time Protocol (NTP) Clock Source

1. Go to the Time Synchronization page.
   After an NTP clock source is set, the system time is the same on all nodes in FusionCompute. If multiple NTP servers are to be deployed, ensure that all of them use the same upper-layer clock source so that time stays consistent.
2. Set the NTP server information (F1).
   This configuration requires a service restart, which takes about 2 minutes. After the restart, log in to FusionCompute and continue to the next step. For details about time synchronization, see Appendix 4.

Step 2: Set the Time Zone

1. Set the time zone on the Time Zone page.
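Time consistency across nodes, which the NTP configuration above is meant to guarantee, can be spot-checked after the restart. The sketch below is illustrative only: it assumes clock offsets (in milliseconds) have already been collected from each node, for example with an NTP query tool, and checks that they all stay within a tolerance.

```shell
offsets_within() {
    # $1: tolerance in ms; remaining args: per-node clock offsets
    # in ms (may be negative or fractional).
    tol=$1
    shift
    for off in "$@"; do
        abs=${off#-}      # drop a leading minus sign
        int=${abs%.*}     # drop any fractional part
        if [ "$int" -gt "$tol" ]; then
            echo "offset $off ms exceeds ${tol} ms tolerance"
            return 1
        fi
    done
    echo "all nodes within ${tol} ms"
}
```

For example, offsets_within 50 3 -12 4.7 confirms three nodes are within a 50 ms tolerance.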
6 Install FusionManager

6.1 Install FusionManager

Step 1: Create a FusionManager VM

1. Go to the page for creating VMs on FusionCompute.

2. Select a host for creating a FusionManager VM, then click Next.
   Select a host in the management cluster and bind the VM to the host to prevent the active and standby FusionManager VMs from being migrated to the same host.
3. Set the VM specifications and QoS parameters based on the planned data.
   The figure shows the recommended values for a system with fewer than 200 VMs. If more than 200 VMs are to be deployed, see the FusionManager Software Installation Guide.
   Quality of service (QoS) parameters ensure that FusionManager VMs have optimal performance and that they release resources to other VMs when their loads are light. Set the indicated QoS value to 1/3 of the maximum value; values marked as fixed must not be changed.
4. Use the default values for the other advanced attributes, then click Next.

5. Set the VM NIC and disk attributes, then click Next. Select a planned data store for FusionManager (H3) and set the disk capacity to 280 GB.
6. Confirm the information, select Start the VM after creation, then click Finish. You can click Query Task to view the creation progress.

Step 2: Install an OS and Software

1. Log in to the FusionManager VM using Virtual Network Computing (VNC). Click OK when the security warning is displayed.
2. Mount an ISO file to the VM in the VNC window. Select the FusionManager V100R003C00SPC300_GMN_FS.iso file. After the drive is mounted, a dialog box is displayed indicating that the connection is closed.
3. Wait a few seconds, then click OK to go to the OS page. If the connection is closed, click OK to reconnect. Do not close or refresh the VNC window during the mounting process.
4. Within 30 seconds, select Install and press Enter. The automatic installation starts and takes about 40 minutes. The installation is complete when a login message is displayed.

Step 3: Install the Second FusionManager VM

1. Repeat Step 1 to Step 2 to create and install the second FusionManager VM.
   NOTE: Do not install the two FusionManager VMs on the same host.

Step 4: Unmount the ISO File

1. Unmount the ISO file from the VM in the VNC window.
6.2 Configure FusionManager

Step 1: Configure the Active FusionManager VM

1. Enter the username and password to log in to the active FusionManager VM using VNC.
   Password for user galaxmanager: GalaxManager_2012
   Password for user root: Huawei@001

2. Run the gmninit command to configure the FusionManager VMs to active/standby -


mode.

Enter

3. Set the network information as prompted based on the planned data. Information to be set:
• Names of the active and standby FusionManager VMs (H2)
• Management IP addresses and subnet mask of the two VMs (H4)
• Management plane gateway address (H6)
• Standby FusionManager VM IP address (H4)
• Floating IP address (H5)
• Arbitration IP address (H7)

4. Confirm the configurations.
NOTE:
• If you enter the wrong information, press Ctrl+C to exit the configuration and run the gmninit command to reconfigure.
• If any configurations are incorrect, run the sh /opt/GalaxManager/bin/stopALL.sh command to stop the process, then run the gmninit command to reconfigure.

5. Run the following command to initialize the FusionManager components. Perform this operation only on the active node.
sh /opt/UHM/Runtime/bin/uhmConfBeforeRun.sh

6. Run the following commands to install the hardware adapter, which allows FusionManager to manage more physical devices. For details about the supported devices, see the hardware compatibility table in the FusionManager product description.
cd /opt/UHM/Runtime/LegoRuntime/uhm/adaptor
sh install_adaptor.sh
The installation is complete if "install adaptor OK" is displayed.

Step 2: Configure the Standby FusionManager VM


1. Enter the username and password to log in to the standby FusionManager VM using VNC.

2. Run the gmninit command to configure the FusionManager VMs to active/standby mode.

3. Set the network information as prompted based on the planned data.
NOTE: local node name and physical IP on local node specify the name and IP address of the current VM, respectively. Enter the information about the standby FusionManager VM based on the data plan.

4. Confirm the configurations, then press Enter.

5. Run the following commands to install the hardware adapter:
cd /opt/UHM/Runtime/LegoRuntime/uhm/adaptor
sh install_adaptor.sh

7 Initial FusionManager Configuration

7.1 Add Physical Devices to FusionManager


Prerequisites
• The Secure Shell (SSH) function has been enabled on the switch to be added.
• The Simple Network Management Protocol (SNMP) service has been enabled on the storage devices to be added.
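Before importing devices, it can help to confirm that each management interface answers on the expected port. The following shell sketch is not part of the guide: the IP address is a placeholder, and it only probes TCP (SSH on switches). SNMP runs over UDP port 161, so it is better verified with an SNMP client such as snmpwalk.

```shell
# Hypothetical pre-check: probe a device's management port over TCP.
# The IP address below is a placeholder; replace it with your planned data.
check_tcp() {    # usage: check_tcp HOST PORT
    if timeout 3 bash -c ">/dev/tcp/$1/$2" 2>/dev/null; then
        echo "$1:$2 reachable"
    else
        echo "$1:$2 NOT reachable"
    fi
}

check_tcp 188.100.200.10 22    # switch to be added: SSH
```

A closed or filtered port is reported within the 3-second timeout, so the whole device list can be swept before starting the import.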

Step 1: Log in to FusionManager
1. Enter the FusionManager floating IP address in the address bar of the web browser, then log in to FusionManager as user admin. The default password is Admin@123. The system asks you to change the password upon your first login.

2. Read and accept the user license agreement.

Step 2: Import Device Information to FusionManager


1. Go to the Import Physical Devices page.

2. Import the phyDevAccessTemplate_en.xlsm template you filled in. If you did not obtain the template during data planning, click Download Template to obtain it, then fill in the information about the devices to be added.
When the task progress reaches 100%, the template is imported. The data center and zone to which the devices belong are automatically created.

7.2 Add a Hypervisor


Step 1: Add a Hypervisor
1. Go to the page for adding a hypervisor. FusionManager can use and manage resources in the added hypervisors.

2. Set the hypervisor parameters, then click Add.
• IP address: VRM floating IP address (D2)
• Port: 7443
• Username: gmsysman
• Password: GMEnginE@123
Check the box next to Update data immediately after data saving.

Step 2: Associate a Resource Cluster with a Zone

1. Go to the page for associating a resource cluster. Associate the detected resource clusters with the planned zone.

2. Select the resource cluster to be associated, then click OK.

7.3 Configure the NTP Clock Source and Time Zone


Step 1: Configure the Time Zone
1. Set the local time zone.
• Daylight Saving Time (DST) not required: Go to Step 2.
• DST required: Go to 2.

2. Configure DST.
Step 2: Check the System Time
1. Check the system time.
• If the offset between the system time and the standard local time is greater than 1 minute: Go to Step 3.
• If the offset is less than or equal to 1 minute: Go to Step 4.

Step 3: Modify the System Time


1. Modify the system time.
Password for user galaxmanager: GalaxManager_2012
Password for user root: Huawei@001

Step 4: Configure the NTP Clock Source


1. Go to the Time Synchronization page and configure the parameters. Set the time servers to the planned external NTP clock sources to ensure time consistency.
Do not set the NTP clock source as the host clock source.
For details about time synchronization, see Appendix 4.

8 Configure Alarm Reporting

This section configures FusionCompute to report alarms to FusionManager for centralized management.

1. Use PuTTY to log in to the VRM through the management IP address of the active VRM (D1).

2. Log in as user gandalf, switch to user root, and run the TMOUT=0 command to disable logout on timeout:
login as: gandalf
Authorized users only. All activity may be monitored and reported.
gandalf@188.100.200.31's password:    //Default password: Pwd8800_magic$
Last login: Fri Jun 28 00:42:12 2013 from 182.168.17.5
Authorized users only. All activity may be monitored and reported.
gandalf@VRM01:~> su - root
Password:    //Default password: Galax@8800
VRM01:~ # TMOUT=0
VRM01:~ #

3. Run the following commands to configure FusionCompute to report alarms to FusionManager:
sed -i '/.*\bssoserver\b/d' /etc/hosts
echo -e "FusionManager floating IP address\tssoserver" >> /etc/hosts
sh /opt/galax/vrm/om/fms/bin/modGMIp.sh FusionManager floating IP address

For example:
VRM01:~# sed -i '/.*\bssoserver\b/d' /etc/hosts
VRM01:~# echo -e "188.100.200.60\tssoserver" >> /etc/hosts
VRM01:~# sh /opt/galax/vrm/om/fms/bin/modGMIp.sh 188.100.200.60
modify GM IP Success.........
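The sed and echo pair above is idempotent: sed first deletes any existing ssoserver entry, and the append then adds exactly one fresh mapping, so re-running the commands never leaves duplicates. A minimal sketch of the same pattern against a scratch copy of a hosts file (the IP address is the example value from the guide; printf is used as a portable stand-in for echo -e):

```shell
# Demonstrate the delete-then-append pattern on a scratch hosts file.
HOSTS=$(mktemp)
printf '127.0.0.1\tlocalhost\n188.100.200.50\tssoserver\n' > "$HOSTS"   # stale entry

GM_IP=188.100.200.60                              # FusionManager floating IP (example)
sed -i '/.*\bssoserver\b/d' "$HOSTS"              # remove any old ssoserver mapping
printf '%s\tssoserver\n' "$GM_IP" >> "$HOSTS"     # append the new mapping

grep -c ssoserver "$HOSTS"                        # prints 1: exactly one entry remains
rm -f "$HOSTS"
```

Because the delete step matches any line containing the ssoserver alias, the same two commands work both for the first configuration and for later changes of the floating IP.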

4. Configure the alarm reporting function on the standby VRM. For details, see steps 1 to 3.

Appendix 1 Principles of VM Network Access
A virtual NIC of a VM communicates with an external network by connecting to the DVS through the port group,
then by connecting to the physical NIC of a host through the DVS uplink. These connections are shown in the
following figure.
(Figure: on each host, the VM's virtual NIC connects to a port group, the port group connects to the DVS, and the DVS uplink connects through the host's physical NIC to the physical network. The VM, port group, and DVS are virtual resources; the NIC and network are physical resources.)

Network element (NE) descriptions:
• Port group: A port group is a virtual logical port, similar to a template with network attributes. A port group is used to define VM NIC attributes and uses a DVS to connect to the network:
  - Subnet: FusionCompute automatically allocates an IP address in the subnet IP address pool to each NIC on VMs that use the port group.
  - VLAN: Users must manually assign IP addresses to VM NICs. VMs connect to the VLAN defined by the port group.
• DVS: A DVS is similar to a switch used for communications on the layer 2 network. A DVS links the port group to the VM and connects to the physical network through the uplink.
• Uplink: An uplink connects the DVS to the physical network. An uplink is used for VM upstream data transmission.

Appendix 2 Principles of VM Storage Access


A data store is a logical storage unit on FusionCompute. A data store is used to universally define all types of storage resources, such as host local disks and LUNs on IP SAN shared storage.
You can create a virtual disk on a data store for use by a VM.
(Figure: a VM's virtual disk resides on a data store on the host; the data store is backed by a host local disk or by a LUN on IP SAN shared storage.)


Appendix 3 Network Communication Between Hosts and IP SAN Devices

Logical layer:
After a storage device is associated with a host, the host bus adapter (HBA) generates a WWN for the host.
If an initiator is added to the logical host on the storage device using the host WWN, the host can access the storage resources (LUNs) that match the logical host on the storage device and add the LUNs as data stores.

Physical layer:
A host connects to an IP SAN device through multiple links. Storage NICs on the host connect to storage NICs on the storage device controller.

(Figure: logically, the host's HBA WWN is registered as an initiator on the logical host of the IP SAN device, which maps LUNs; physically, the host's storage NICs connect through switches to the NICs on the storage device controller.)

Appendix 4 Principle of Clock Synchronization

After the NTP clock source is configured, all hosts, VRM VMs, and FusionManager VMs synchronize time from the NTP clock source to ensure system time accuracy.
(Figure: FusionManager, the VRM, hosts in the management cluster, and other hosts all synchronize with the external NTP clock source.)
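As a hypothetical illustration only (FusionSphere configures NTP through its own web UIs rather than by editing files), a classic ntp.conf stanza pointing a client at two external clock sources looks like this; the server addresses are placeholders, not values from the guide:

```
# /etc/ntp.conf fragment (illustrative; addresses are placeholders)
server 188.100.200.1 iburst    # primary external NTP clock source
server 188.100.200.2 iburst    # secondary external NTP clock source
```

Listing two external sources on every node matches the guide's intent: all hosts, VRM VMs, and FusionManager VMs track the same clocks, so logs and alarms across the system carry consistent timestamps.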

Appendix 5 Domain0 Configuration
1. Select Dom0 setting, then set the Domain 0 parameters. For details about configurations for Domain 0, see Deploying the Virtualization Platform (Non-FusionCube) in the FusionCloud Desktop V100R005C00 Software Deployment Instruction.

Appendix 6 Setting Multipathing Mode for a Host By Running Commands


1. After the OS is installed on the host and the host restarts, log in to the host as user root in the remote control window, and run TMOUT=0 to disable logout on timeout. The password of user root is the password you set during host OS installation.

2. Run the following command to set the multipathing mode to Huawei mode:
sh /opt/uvp/multipath/change_multipath_mode.sh 1
The command is executed successfully if the message "Enable ultrapath successfully" is displayed.

3. Run reboot to restart the host for the mode to take effect.
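Since the reboot is only useful after the mode change succeeds, a small wrapper can gate it on the success message the script prints. This is a sketch, not part of the guide: run_and_check is a hypothetical helper, and only the script path and the success string come from the steps above.

```shell
# Hypothetical wrapper: run a command and check its output for the expected
# success message, so the operator reboots only after a successful change.
run_and_check() {    # usage: run_and_check CMD [ARGS...]
    out=$("$@" 2>&1)
    case "$out" in
        *"Enable ultrapath successfully"*)
            echo "multipath mode set; reboot the host now"
            return 0 ;;
        *)
            echo "multipath change failed: $out" >&2
            return 1 ;;
    esac
}

# On the host, the real invocation would be:
# run_and_check sh /opt/uvp/multipath/change_multipath_mode.sh 1 && reboot
```

Gating on the return code this way prevents an unnecessary reboot when the mode change fails, which matters on a host that is still being commissioned.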

Appendix 7 Adding a VLAN Pool to the Management Plane DVS


1. On the navigation tree on the Network Management page, right-click the management plane DVS and select Add VLAN Pool.

2. Set the start ID and end ID of the VLAN pool for the service plane in the displayed dialog box.