
Three-Node OpenStack Juno Deployed on Public Cloud – Part 3

In part 2 of this series we made all the required preparations for our three nodes and installed the Keystone and Glance services. In this last part of the series we’ll follow the required steps for Nova (compute), Neutron (network) and Horizon (GUI/dashboard), after which our OpenStack environment will be up and running.

Nova (Compute Service)

Nova installation requires steps on both the Controller and Compute nodes. We’ll follow the steps required on the Controller node (excluding the database-related ones). Make sure you modify /etc/nova/nova.conf as required. To finalize the installation, you can save some effort by using the following command to restart all Nova services:

cd /etc/init/; for i in $(ls nova-* | cut -d \. -f 1 | xargs); do sudo service $i restart; done
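For reference, the key Controller-side nova.conf settings from the Juno install guide look roughly like this; the passwords and the management IP are placeholders for values you set earlier in the series:

[DEFAULT]
rpc_backend = rabbit
rabbit_host = controller
rabbit_password = RABBIT_PASS
auth_strategy = keystone
my_ip = Controller_Management_IP
vncserver_listen = Controller_Management_IP
vncserver_proxyclient_address = Controller_Management_IP

[keystone_authtoken]
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = nova
admin_password = NOVA_PASS

[glance]
host = controller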

Next we’ll move to the Compute node. Configure /etc/nova/nova.conf on this node as well. In the [DEFAULT] section, make sure the Controller’s public IP address is configured as follows: "novncproxy_base_url = http://Controller_Public_IP_Address:6080/vnc_auto.html". This allows VNC access to instances (VMs) from the public Internet.
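A minimal sketch of the related VNC settings on the Compute node, assuming the guide’s defaults (Compute_Management_IP is a placeholder for this node’s management address):

[DEFAULT]
vnc_enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = Compute_Management_IP
novncproxy_base_url = http://Controller_Public_IP_Address:6080/vnc_auto.html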

Then, check whether your machine supports hardware acceleration by running

egrep -c '(vmx|svm)' /proc/cpuinfo 

If the result is zero, your machine does not support hardware acceleration and you’ll need to modify /etc/nova/nova-compute.conf so that instances run on plain QEMU instead of KVM.
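Per the install guide, the change is in the [libvirt] section:

[libvirt]
virt_type = qemu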

After all these steps are completed, run the verify operation commands. If you have issues, first make sure that NTP is synchronized and that pings are working properly between the nodes. You may find my error messages post helpful for additional troubleshooting.
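The verification boils down to listing the Nova services and images from the Controller, after sourcing your admin credentials (e.g. admin-openrc.sh); nova-compute should be reported as up alongside the controller-side services:

nova service-list
nova image-list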

Neutron (Network Service)

Hang in there, you are almost done. Do take a breath now, as Neutron requires changes on all three nodes. Ready? Let’s start!

Follow the steps required to configure Neutron on the Controller node (excluding the database-related ones). Make sure you modify /etc/neutron/neutron.conf and /etc/neutron/plugins/ml2/ml2_conf.ini. I assume the changes required in nova.conf have already been made while configuring Nova.
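For orientation, the core ML2 settings on the Controller, assuming you stick with the guide’s GRE tenant networks, look roughly like this:

[ml2]
type_drivers = flat,gre
tenant_network_types = gre
mechanism_drivers = openvswitch

[ml2_type_gre]
tunnel_id_ranges = 1:1000

[securitygroup]
enable_security_group = True
enable_ipset = True
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver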

We’ll move to the most important node in Neutron configuration – the Network node. Make sure you run all the required steps for the Network node until you reach the “To configure the Open vSwitch (OVS) service” section. You’ll need to modify the following files (a sketch of the key agent settings follows the list):

/etc/neutron/neutron.conf
/etc/neutron/plugins/ml2/ml2_conf.ini
/etc/neutron/l3_agent.ini
/etc/neutron/dhcp_agent.ini
/etc/neutron/metadata_agent.ini
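As a rough sketch of what the guide has you set in the L3 and DHCP agent files (external_network_bridge points at the br-ex bridge we create below; your values may differ):

# /etc/neutron/l3_agent.ini
[DEFAULT]
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
use_namespaces = True
external_network_bridge = br-ex

# /etc/neutron/dhcp_agent.ini
[DEFAULT]
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
use_namespaces = True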

When you are done modifying all the files above you are ready to configure OVS. If you run the ifconfig command you’ll see that br-int has been added to your network interfaces. Now it is time to create our external OVS bridge. As a side note, we are using the ML2 plugin, not the OVS one, though the ML2 plugin does use OVS underneath.
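You can also confirm the integration bridge from the OVS side:

ovs-vsctl show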

Sequence of commands for the Neutron br-ex setup on the Network node:

Adding the br-ex bridge

ovs-vsctl add-br br-ex

(Optional) Backing up our interface file

cp /etc/network/interfaces  /etc/network/interfaces_bk

Editing /etc/network/interfaces

vim /etc/network/interfaces
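A hypothetical example of the result, assuming eth0 is the public interface whose IP configuration moves onto br-ex (all values below are placeholders; yours will differ):

# /etc/network/interfaces (example)
auto eth0
iface eth0 inet manual
    up ip link set dev $IFACE up
    down ip link set dev $IFACE down

auto br-ex
iface br-ex inet static
    address Network_Node_Public_IP
    netmask Network_Node_Netmask
    gateway Network_Node_Gateway_IP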

And finally (note that adding eth0 to br-ex will cut your connectivity, which is why the reboot follows immediately):

ovs-vsctl add-port br-ex eth0 && reboot

As an easy one-liner to restart all the Neutron agents we can use

cd /etc/init/; for i in $(ls -1 neutron-* | cut -d \. -f 1); do sudo service $i restart; done

Before we are done we still have to make some modifications to the Compute node. Follow the steps in the guide. You’ll need to modify /etc/neutron/neutron.conf and /etc/neutron/plugins/ml2/ml2_conf.ini. I assume the changes required in nova.conf have already been made while configuring Nova.
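On the Compute node, the ml2_conf.ini additions that differ from the Controller are the OVS agent ones, roughly as follows (Instance_Tunnels_IP is a placeholder for this node’s tunnel-network address):

[ovs]
local_ip = Instance_Tunnels_IP
enable_tunneling = True

[agent]
tunnel_types = gre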

It is time to verify what we did and to create an internal network and a floating IP network. The former enables connectivity between our tenant or project instances (VMs or containers), and the latter is needed to provide external accessibility to our instances. More about that in a future post. I’ve been granted 10.114.129.16/28 as my Portable Private Subnet and used the following command to configure my external/floating IP network (10.114.129.18-19 are reserved so I couldn’t use them):

neutron subnet-create ext-net --name ext-subnet --allocation-pool start=10.114.129.20,end=10.114.129.30 --disable-dhcp --gateway 10.114.129.17 10.114.129.16/28
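Note that the ext-net network itself must exist before the subnet is created; if you haven’t already created it as part of the guide’s steps, the command looks something like:

neutron net-create ext-net --shared --router:external True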

For the internal/tenant network I added a DNS server IP using the --dns-nameserver option:

neutron subnet-create demo-net --name demo-subnet --dns-nameserver 10.0.80.11 --gateway 192.168.1.1 192.168.1.0/24
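As with ext-net, the demo-net network itself is created first (neutron net-create demo-net). To connect the tenant network to the outside world, the guide then creates a router, attaches the tenant subnet to it, and sets ext-net as its gateway:

neutron router-create demo-router
neutron router-interface-add demo-router demo-subnet
neutron router-gateway-set demo-router ext-net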

Follow the verify connectivity step. In the above external network I sent a ping to 10.114.129.20, which is the first IP available for allocation and is assigned to the router’s gateway.
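For example, from any host that can reach the subnet:

ping -c 4 10.114.129.20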

Almost done! Now it’s the fun part!

Horizon (Dashboard Service)

This is the easiest step, so hang in there. Follow the Horizon guide steps to modify the file /etc/openstack-dashboard/local_settings.py on the Controller.
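The essential edits, per the Juno guide, point the dashboard at the Controller and allow access from any host; once done, restart Apache and memcached (service apache2 restart; service memcached restart):

# /etc/openstack-dashboard/local_settings.py (excerpt)
OPENSTACK_HOST = "controller"
ALLOWED_HOSTS = ['*']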


From here you may want to install more services as needed, or you can simply start spinning up instances. Hope you found this series helpful!
