Red Hat EX310 Exam
Question: 1
You are working in an OpenStack environment using Open vSwitch with Neutron, and you need
to quickly identify existing bridges. What command can you run to locate bridges?
Choose the correct answer:
A. openvswitch report
B. ovs-vsctl list-ifaces
C. ovs-vsctl list-br
D. ip netns
Answer: C
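For reference, ovs-vsctl list-br prints one bridge name per line. In a typical Neutron deployment
(exact names vary by configuration), the output might look like:
ovs-vsctl list-br
br-ex
br-int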
Question: 2
Linux bridges can be created persistently by ______ or temporarily using _____.
Choose the correct answer:
Answer: B
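For reference, persistent bridges are typically defined in ifcfg-* files under
/etc/sysconfig/network-scripts/, while a temporary bridge can be created with either of the
following (br0 is an illustrative name):
brctl addbr br0
ip link add br0 type bridge
brctl comes from the legacy bridge-utils package, while ip is part of iproute2.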
Question: 3
What command can you run to list database contents in a Linux environment running
Open vSwitch?
Choose the correct answer:
A. ovs show-databases
C. ovs-vsctl show
D. ovs-vsctl dbdump
Answer: C
Explanation:
ovs-vsctl, the command-line client for Open vSwitch, can be used to view, create, manage, and
delete bridges created by OpenStack Neutron.
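As an illustration, ovs-vsctl show prints a summary of the database contents (bridges, ports, and
interfaces), and individual tables can be dumped directly, for example:
ovs-vsctl list Bridge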
Question: 4
Linux ________ are a kernel feature the networking service uses to support multiple isolated
layer-2 networks with overlapping IP address ranges.
Choose the correct answer:
A. network namespaces
B. IP addresses
C. bridges
D. virtual NICs
Answer: A
Question: 5
_____ is the process of transparently connecting two network segments together so that packets
can pass between the two as if they were a single logical network.
Choose the correct answer:
A. Carrying
B. Networking
C. Bridging
D. Datalinking
Answer: C
Question: 6
What command do you run to list all network namespaces in an environment?
Choose the correct answer:
A. netns ls
B. brctl show
C. ip netns
D. route -ns
Answer: C
Explanation:
The command "ip netns" will list network namespaces for virtual routers, DHCP spaces for
virtual machines, & LBaaS instances in OpenStack environments.
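As an illustration (the UUIDs below are hypothetical), the output on a Neutron network node
might look like:
ip netns
qrouter-7a44de32-3839-4bdb-a538-0d0b2627a0ae
qdhcp-8e3b2f5a-1c2d-4e6f-9a7b-3c4d5e6f7a8b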
Question: 7
Where do network startup scripts live on a Linux server?
Choose the correct answer:
A. /boot/networks
B. ~/.net-config
C. /etc/sysconfig/network-scripts/
D. /etc/network/
Answer: C
Explanation:
On Red Hat-based systems, startup scripts for network interfaces (ifcfg-* files) are located under
/etc/sysconfig/network-scripts/.
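As an illustration, a minimal interface script such as ifcfg-eth0 (values are examples) might
contain:
DEVICE=eth0
BOOTPROTO=dhcp
ONBOOT=yes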
Question: 8
You have been tasked with integrating OpenStack Nova, Cinder volumes, and Glance with Ceph.
Your datacenter will have 25 servers dedicated to each service, and your employer has
requested 3 replicas per pool. The company wants 50% of resources dedicated to Cinder
volumes, 30% to VMs, and 20% to Glance. Using the Ceph PGCalc (http://ceph.com/pgcalc/),
calculate the placement group count for each pool.
Answer: D
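For reference, PGCalc applies roughly the following formula, assuming its default target of 100
PGs per OSD:
pg_count = (num_osds × 100 × data_percent) / replica_count, rounded to a nearby power of two
For a hypothetical 100-OSD cluster, the 50% Cinder pool at 3 replicas works out to
(100 × 100 × 0.50) / 3 ≈ 1667, which PGCalc rounds up to 2048.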
Question: 9
You have been asked to create a new OSD pool named cinder-backups on a node named `node4`
with 128 placement groups. What is the correct command syntax?
Choose the correct answer:
Answer: C
Explanation:
Run ceph --help if you ever need a reminder on how to create an OSD pool.
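A likely form of the expected command, run on node4 (an optional second number would set
pgp_num, which defaults to pg_num):
ceph osd pool create cinder-backups 128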
Question: 10
How many OSDs can exist per disk?
Choose the correct answer:
A. 3
B. 1
C. 8
D. The number of OSDs per disk can be adjusted by the administrator in ceph.conf
Answer: B
Explanation:
In a standard Ceph deployment, exactly one OSD daemon runs per disk; capacity is scaled by
adding disks (and therefore OSDs), not by configuring multiple OSDs on a single device.
Solution:
OpenStack Controller:
4. As admin, upload the image to the Glance image service using the OpenStack CLI. Make sure
the image is visible to all tenants in your environment, and that the container and disk formats
are appropriate for the image format.
openstack image create CentOS-7-x86_64-GenericCloud --file CentOS-7-x86_64-GenericCloud-1503.qcow2 --disk-format qcow2 --container-format bare --public
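Note that with a Ceph RBD backend, images are normally converted to raw first so Nova and
Cinder can create copy-on-write clones; a sketch of that variant (the .raw filename is
illustrative):
qemu-img convert -f qcow2 -O raw CentOS-7-x86_64-GenericCloud-1503.qcow2 CentOS-7-x86_64-GenericCloud-1503.raw
openstack image create CentOS-7-x86_64-GenericCloud --file CentOS-7-x86_64-GenericCloud-1503.raw --disk-format raw --container-format bare --public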
6. List the images stored in RBD, paying special attention to the UUID returned.
sudo rbd ls images
7. Print out the image details on the Ceph admin node using the UUID returned in the previous step.
sudo rbd info images/$UUID
Solution:
OpenStack Controller:
6. Using the OpenStack CLI, create a new m1.small fedora-atomic server named "cephvm-2"
attached to the Public network.
openstack server create cephvm-2 --flavor m1.small --image fedora-atomic --nic net-id=$PUBLIC_UUID
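Here $PUBLIC_UUID is assumed to hold the UUID of the Public network; it can be looked up
with, for example:
openstack network show Public -f value -c id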
Solution:
3. Using the OpenStack CLI, create a new 1 GB Cinder volume named "ceph02" with the Ceph
RBD backend.
openstack volume create --type ceph --size 1 ceph02
4. After the volume build completes, attach the volume to "cephvm-2", the server created in the
previous lab.
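A minimal sketch of the attach step, using the standard OpenStack CLI and the names above:
openstack server add volume cephvm-2 ceph02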