
Deploying OpenStack Lab on GCP - v3


Building our own lab on Google Cloud

This is a procedure to install a two-node OpenStack cluster on top of Google Cloud
compute machines. It is assumed that you have already set up your GCP account using a
credit card. The GCP free tier (trial account) is sufficient for this deployment; the free tier
gives you a quota of 12 CPUs. Make sure that at least 8 CPUs (& 30 GB RAM) are
available before starting this deployment.
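If you want to verify the available CPU quota up front, it can be checked from Cloud Shell (or the SDK, installed below). A quick sketch, using us-east1 as the example region from this guide:

#gcloud compute regions describe us-east1 | grep -B1 -A1 'metric: CPUS' → shows the CPU limit & current usage for the region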

Creating a Google Cloud free tier account –

• Go to cloud.google.com → Sign In → Go to Console

• Subscribe to the 3-month free tier trial by providing your basic details &
credit card details. INR 1/- will be deducted & credited back by Google.
• There is no automatic deduction after 3 months unless you upgrade manually.
• Once completed, you will get access to the Google Cloud console and an activation
message at the top of the screen showing the available balance & remaining days.

Installing the Google Cloud SDK terminal –

• Go to https://cloud.google.com/sdk/docs/install & follow the instructions
• Download the installer & install it on your PC
• Once installed, it will ask for some basic account & project information
o Provide the Gmail account
o Provide the project name (check it in the Google Cloud console)
o When asked for a default zone – select NO
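If the installer does not start this configuration step automatically, the same setup can be done by hand from the SDK terminal. A minimal sketch (replace the project ID placeholder with your own):

#gcloud auth login
#gcloud config set project <your-project-id>
#gcloud config list → verify the active account & project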
Deploying Google compute machines as OpenStack infrastructure nodes:-

You can perform this deployment either using the SDK or the GCP cloudshell.

1. Create a CentOS 7 image with nested virtualization enabled. Note the --licenses flag below.

#gcloud compute images create nested-vm-image --source-image-project=centos-cloud --source-image-family=centos-7 --licenses "https://www.googleapis.com/compute/v1/projects/vm-options/global/licenses/enable-vmx"
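To confirm the image exists and carries the nested-virtualization license, a quick check (same image name as above):

#gcloud compute images describe nested-vm-image | grep -A1 licenses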

2. Create a controller VM using the above image. The machine flavor is n1-standard-4 (4
CPUs & 15 GB RAM). Replace --project with your project ID & --zone with a zone of your
choice. You can get your project ID from the command #gcloud projects list

#gcloud compute instances create controller --zone us-east1-b --project winter-wonder-329711 --image nested-vm-image --machine-type n1-standard-4 --boot-disk-size 60 --can-ip-forward --network default --tags http-server,https-server,novnc,openstack-apis

3. Create a compute VM using the above image. The machine flavor is again n1-standard-4 (4
CPUs & 15 GB RAM).

#gcloud compute instances create compute --zone us-east1-b --project winter-wonder-329711 --image nested-vm-image --machine-type n1-standard-4 --boot-disk-size 60 --can-ip-forward --network default --tags http-server,https-server,novnc,openstack-apis
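Both instances, along with their internal & external IPs and running state, can also be listed from the SDK:

#gcloud compute instances list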

4. Once all this is done, the VMs should be assigned internal & external IP
addresses. Verify via GCP Dashboard → Compute Engine.

Click the VM name & in the details section check that both firewall flags (Allow HTTP
traffic / Allow HTTPS traffic) are set. If not, edit the VM, check both of them & save.

5. Go to GCP Dashboard → VPC network → External IP addresses. Reserve a static IP
address for both machines, otherwise the external IP may change after a reboot.
Give any name & click Reserve.
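The same reservation can be done from the SDK by promoting the ephemeral IP that is already attached. A sketch using this guide's example name, region & controller IP (repeat with the compute VM's IP):

#gcloud compute addresses create controller-ip --region us-east1 --addresses 34.139.86.196
#gcloud compute addresses list → verify both reservations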
6. If this is your first ever project on GCP, you need to enable the Compute Engine
API to take things forward. Go to GCP Dashboard → APIs & Services & enable it.

7. Now open 2 separate SSH sessions to the controller & compute VMs via GCP
Dashboard → Compute Engine.

Gain root access in both windows with #sudo -i

Repeat steps 8 to 18 on both VMs.

8. Check VT support using the below command.

#egrep --color 'vmx|svm' /proc/cpuinfo | wc -l

Output (should not be zero):

4
9. Edit the SSH configuration & set the following parameters, followed by an SSHD
restart, to allow external root login to the servers.

#sed -i s/'PasswordAuthentication no'/'PasswordAuthentication yes'/g /etc/ssh/sshd_config


#sed -i s/'PermitRootLogin no'/'PermitRootLogin yes'/g /etc/ssh/sshd_config
#systemctl restart sshd
#cat /etc/ssh/sshd_config | grep -iE 'permitrootlogin|passwordauthentication' | grep -v "#"
#systemctl status sshd

Check the status of the service; it should show as recently restarted.

10. Disable SELinux so that the servers remain reachable after a STOP/START
#setenforce 0

Edit the /etc/selinux/config file and set the SELINUX mode to "disabled"

#sed -i s/'SELINUX=enforcing'/'SELINUX=disabled'/g /etc/selinux/config


#cat /etc/selinux/config | grep SELINUX=

11. Change the root password on both machines. Set any password of your choice &
remember it.
#passwd

Changing password for user root.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.

12. Check the connectivity between the controller & compute nodes via ping. Replace the IP
address with that of your machines.
[root@controller ~]# ping 10.142.0.6
PING 10.142.0.6 (10.142.0.6) 56(84) bytes of data.
64 bytes from 10.142.0.6: icmp_seq=1 ttl=64 time=1.22 ms
64 bytes from 10.142.0.6: icmp_seq=2 ttl=64 time=0.300 ms
64 bytes from 10.142.0.6: icmp_seq=3 ttl=64 time=0.228 ms

Now it's time to log in to both machines via any external tool like PuTTY, SecureCRT,
MobaXterm, etc. Use username = root & the password you just set for the root user.

13. Update the system & install some Python packages - Make sure the machine is
running the latest version of CentOS 7.

#yum -y update
#yum -y install https://kojipkgs.fedoraproject.org//packages/qpid-proton/0.28.0/1.el7/x86_64/qpid-proton-c-0.28.0-1.el7.x86_64.rpm
#yum -y install https://kojipkgs.fedoraproject.org//packages/qpid-proton/0.28.0/1.el7/x86_64/python2-qpid-proton-0.28.0-1.el7.x86_64.rpm
#yum -y install https://download-ib01.fedoraproject.org/pub/epel/7/aarch64/Packages/p/python2-pyngus-2.3.0-1.el7.noarch.rpm
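To confirm all three RPMs installed cleanly (package names taken from the URLs above):

#rpm -q qpid-proton-c python2-qpid-proton python2-pyngus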

14. Download & install the OpenStack (Train) repository on the machines.

#yum install -y centos-release-openstack-train

15. Also, disable the firewall and NetworkManager in the VM & enable the network service
#systemctl disable firewalld
#systemctl stop firewalld
#systemctl disable NetworkManager
#systemctl stop NetworkManager
#systemctl enable network
#systemctl start network

16. Install the Packstack installer - Let us first install Packstack, which provides an
easy way to install OpenStack on the system. Use yum to install it.

#yum install -y openstack-packstack

17. Downgrade the leatherman package.

#yum -y downgrade leatherman

Verify that the installed version is 1.3.0-9.el7:

[root@openstack-1 ~]# yum list | grep leatherman
leatherman.x86_64 1.3.0-9.el7 @openstack-train

18. Install the tmux application, which protects the installation from terminal
session disconnections.

#yum install -y tmux

Steps 19 to 21 are to be executed on the controller machine only.

19. Generate OpenStack answer file using packstack installer.

#packstack --gen-answer-file=/root/answer.txt

20. Edit the answer file.

#vim answer.txt

Change the following parameters only. Do not change any other parameters.

# Skip the provisioning of the demo project
CONFIG_PROVISION_DEMO=n

# Change the admin password - used to log in to the OpenStack dashboard
CONFIG_KEYSTONE_ADMIN_PW=<any password of your choice>

# Specify 'y' to install OpenStack Orchestration (heat). ['y', 'n']
CONFIG_HEAT_INSTALL=y

# List the servers on which to install the Compute service. Add both machines' internal IPs here
CONFIG_COMPUTE_HOSTS=10.142.0.5,10.142.0.6
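If you prefer to apply these four changes non-interactively instead of via vim, sed works too. A sketch, where the password & internal IPs are examples to be replaced with your own:

#sed -i 's/^CONFIG_PROVISION_DEMO=.*/CONFIG_PROVISION_DEMO=n/' /root/answer.txt
#sed -i 's/^CONFIG_KEYSTONE_ADMIN_PW=.*/CONFIG_KEYSTONE_ADMIN_PW=MySecretPass/' /root/answer.txt
#sed -i 's/^CONFIG_HEAT_INSTALL=.*/CONFIG_HEAT_INSTALL=y/' /root/answer.txt
#sed -i 's/^CONFIG_COMPUTE_HOSTS=.*/CONFIG_COMPUTE_HOSTS=10.142.0.5,10.142.0.6/' /root/answer.txt
#grep -E '^CONFIG_(PROVISION_DEMO|KEYSTONE_ADMIN_PW|HEAT_INSTALL|COMPUTE_HOSTS)=' /root/answer.txt → verify the edits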

21. Run the Packstack installer (from the tmux session) with the answer file we just
modified according to our requirements.

#tmux
#packstack --answer-file=/root/answer.txt

The installation of OpenStack will take around 40 mins. Take a break.

In case of a disconnection use the following tmux command to attach to the session
#tmux a -t 0 → to attach to the target session id 0
#tmux kill-session -t 0 → to kill the tmux session id 0

On completion, you should get a success message (Packstack prints the dashboard URL & the location of the keystonerc_admin file).

22. Now edit the following configuration file & add the external IP address of your
controller VM to the server alias list, followed by an httpd restart

#vim /etc/httpd/conf.d/15-horizon_vhost.conf

#systemctl restart httpd
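For reference, the relevant part of that vhost file is a list of ServerAlias entries inside the <VirtualHost> block; the edit amounts to adding one more line with your controller's external IP (shown here with this guide's example IP):

## Server aliases
ServerAlias 34.139.86.196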

23. Now access the CLI in the controller's root home directory. Source the keystonerc_admin
file & start using the #openstack commands

[root@openstack-1 ~]# ls -ltr
total 56
drwxr-xr-x. 4 root root 31 May 25 12:10 packstackca
-rw-------. 1 root root 362 May 25 12:14 keystonerc_admin
-rw-------. 1 root root 51746 May 25 13:55 answer.txt

[root@openstack-1 ~]# source keystonerc_admin

[root@openstack-1 ~(keystone_admin)]# openstack flavor list


To access the GUI, i.e. the Horizon dashboard –
Open any web browser with the controller's external IP. In my case it is visible at
http://34.139.86.196/dashboard

Use username = admin & the password you set in the answer file during installation.
Whatever password was set is also visible in the rc file, as below -

[root@openstack-1 ~(keystone_admin)]# more keystonerc_admin
unset OS_SERVICE_TOKEN
export OS_USERNAME=admin
export OS_PASSWORD='password'
You should also see the 2 compute nodes we configured.
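The same can be confirmed from the CLI; either of these standard commands should show both the controller & compute hypervisors:

#openstack hypervisor list
#openstack compute service list --service nova-compute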

You can STOP the machines from the Google dashboard after use. This avoids
billing (even on the free trial). Whenever required, start the machines & after about 5
mins the OpenStack dashboard will be reachable again.
"Enjoy practicing your OpenStack".
