mcwhirter.com.au
Craige McWhirter
Attaching Multiple Network Interfaces and Floating IPs to OpenStack Instances with Neutron

There are a number of use cases where you may need to connect multiple floating IPs to an existing OpenStack instance. However, this functionality is not exposed via the Horizon Dashboard. This is how I go about attaching multiple network interfaces and floating IPs to OpenStack instances with Neutron.

Assumptions:

Port Creation and Assignment

When you have your environment sourced appropriately, get a list of networks for this tenant:

% neutron net-list
+--------------------------------------+--------------+-------------------------------------------------------+
| id                                   | name         | subnets                                               |
+--------------------------------------+--------------+-------------------------------------------------------+
| 85314baa-a022-4dd1-918c-a73c83c8cad6 | ext-net      | 9248bc58-6cfe-4ff8-b33e-286a60c96c6d 999.999.999.0/23 |
| ee31dc0e-e226-423d-a7fe-f564dc17614e | DemoTutorial | 5821de82-3843-46ce-a796-c801bf40fd4c 192.168.71.0/24  |
+--------------------------------------+--------------+-------------------------------------------------------+

We're interested in the non-external network, in this case "DemoTutorial". I normally assign this to $OS_NET. Now we can create a new port on that network:

% export OS_NET=ee31dc0e-e226-423d-a7fe-f564dc17614e
% neutron port-create $OS_NET
Created a new port:
+-----------------------+---------------------------------------------------------------------------------------+
| Field                 | Value                                                                                 |
+-----------------------+---------------------------------------------------------------------------------------+
| admin_state_up        | True                                                                                  |
| allowed_address_pairs |                                                                                       |
| binding:vnic_type     | normal                                                                                |
| device_id             |                                                                                       |
| device_owner          |                                                                                       |
| fixed_ips             | {"subnet_id": "af150a1e-067a-4641-89a4-24c5b6b8fe3b", "ip_address": "192.168.71.180"} |
| id                    | fd2f78df-cf78-4394-84eb-9e37ed1e5624                                                  |
| mac_address           | fa:54:6e:f2:ce:a9                                                                     |
| name                  |                                                                                       |
| network_id            | ee31dc0e-e226-423d-a7fe-f564dc17614e                                                  |
| security_groups       | b1240686-7ad9-4d29-a679-d219f76648ca                                                  |
| status                | DOWN                                                                                  |
| tenant_id             | abcd639c50804cf3end71b92e6ced65e                                                      |
+-----------------------+---------------------------------------------------------------------------------------+

We now need to note the id or, as I do, assign it to $PORT_ID. Next we fire up nova. I'm going to assume that you know either the instance name or ID and have assigned it to $INSTANCE.

% export PORT_ID=fd2f78df-cf78-4394-84eb-9e37ed1e5624
% export INSTANCE=3c7ae1b9-8111-4f15-9945-75e0af157ead
% nova interface-attach --port-id $PORT_ID $INSTANCE

You should now have successfully added a second network interface to your OpenStack instance. Let's double-check that:

% nova show $INSTANCE | grep network
| DemoTutorial network                 | 192.168.71.180, 192.168.71.181

Great! Now the instance has two internal IP addresses, one for each port attached to it.

Assigning Floating IPs

You can now add floating IPs either via the Horizon Dashboard or via the neutron client. I'll cover how to do this via the CLI. Fire up neutron, locate the original port and assign its UUID to $PORT_ID0:

% neutron port-list | grep 192.168.71.181
fa:46:7e:21:4f:f3 | {"subnet_id": "8f987932-48ee-4262-8b44-0c910512a387", "ip_address": "192.168.71.181"} |
% export PORT_ID0=8f987932-48ee-4262-8b44-0c910512a387

Then we get a list of available floating IPs and assign those to variables too:

% neutron floatingip-list
+--------------------------------------+------------------+---------------------+---------+
| id                                   | fixed_ip_address | floating_ip_address | port_id |
+--------------------------------------+------------------+---------------------+---------+
| 390e4676-0e05-40c3-9012-e5d27eb85dbe |                  | 999.999.999.123     |         |
| 16f7ca27-1d11-4967-9f0c-04f578590b01 |                  | 999.999.999.124     |         |
| f983b10d-454c-4c19-8f65-d9b96c4d7aa6 |                  | 999.999.999.125     |         |
+--------------------------------------+------------------+---------------------+---------+
% export FIP0=16f7ca27-1d11-4967-9f0c-04f578590b01
% export FIP1=f983b10d-454c-4c19-8f65-d9b96c4d7aa6
% neutron floatingip-associate $FIP0 $PORT_ID
Associated floating IP 16f7ca27-1d11-4967-9f0c-04f578590b01
% neutron floatingip-associate $FIP1 $PORT_ID0
Associated floating IP f983b10d-454c-4c19-8f65-d9b96c4d7aa6

We can then verify this assignment:

% neutron floatingip-list
+--------------------------------------+------------------+---------------------+---------+
| id                                   | fixed_ip_address | floating_ip_address | port_id |
+--------------------------------------+------------------+---------------------+---------+
| 390e4676-0e05-40c3-9012-e5d27eb85dbe |                  | 999.999.999.123     |         |
| 16f7ca27-1d11-4967-9f0c-04f578590b01 | 192.168.71.180   | 999.999.999.124     |         |
| f983b10d-454c-4c19-8f65-d9b96c4d7aa6 | 192.168.71.181   | 999.999.999.125     |         |
+--------------------------------------+------------------+---------------------+---------+

For good measure you can double-check how Nova sees this assignment:

% nova show $INSTANCE | grep network
| DemoTutorial network                 | 192.168.71.180, 192.168.71.181, 999.999.999.124, 999.999.999.125

You're done :-)

A Little Vim Hack For Go

After LCA2015 I've started playing with Go (I blame Sven Dowideit). If you already use VIM-YouCompleteMe then you should be right for most things Go. However I tinker in a few languages and, you'll never guess, they have different rules around style and formatting of code.

Go is the odd one out for me, requiring settings unique to it among the languages I tinker in. I made the below changes to my ~/.vimrc to suit Go:

function! GoSettings()
    set tabstop=7
    set shiftwidth=7
    set noexpandtab
endfunction
autocmd BufNewFile,BufFilePre,BufRead *.go :call GoSettings()

Now when I edit a file with the .go extension, my Vim session will format the file correctly from the start.

You can also configure Vim to run gofmt but I preferred this approach.

Configuring CoreOS Toolbox to Use Debian

The toolbox command in CoreOS uses Fedora by default. If you'd rather it used Debian, you can add the following lines to ~/.toolboxrc:

TOOLBOX_DOCKER_IMAGE=debian
TOOLBOX_DOCKER_TAG=jessie

When you next run toolbox, you should see it pull down the requested image.

$ toolbox
Pulling repository debian
835c4d274060: Download complete
511136ea3c5a: Download complete
16386e29a1f4: Download complete
Status: Downloaded newer image for debian:jessie
core-debian-jessie
Spawning container core-debian-jessie on /var/lib/toolbox/core-debian-jessie.
Press ^] three times within 1s to kill container.
root@myserver:~#

It's that simple.

Managing KVM Console Logs for Nova

Update

It turns out that this DOES NOT work around bug 832507. Please do not use this thinking that it does.

There's a problem with console logs. That was a hard sentence to word. I tried a number of versions of it, but each made it sound like the problem was with OpenStack, Nova, or libvirt / qemu-kvm. I didn't particularly feel like pointing the finger, as the solution appears to need to come from a number of directions...

So the problem is that when running KVM hypervisors with OpenStack Nova, it is entirely possible for a compute node's disks to fill up when instances' console logs get a little chatty.

It's not a desirable event and the source will catch you by surprise.

There's currently no way to manage these KVM console logs via either qemu-kvm or via OpenStack / Nova so I wrote manage_console_logs.sh (github) (bitbucket) to do this.

manage_console_logs.sh operates as follows:

  • Creates a lock file using flock to ensure that the script is not already running.
  • Checks the size of each console log.
  • If a log is greater than the nominated size, it is truncated using tail.

That's it. A pretty straightforward method for ensuring your compute node disks do not fill up.
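The logic can be sketched roughly as follows. The directory layout, size threshold, number of lines kept and lock file path here are illustrative assumptions, not the real script's defaults; see the linked repositories for the actual implementation.

```shell
#!/bin/bash
# Sketch of the manage_console_logs.sh approach: truncate any console.log
# that has grown past a size threshold, under an flock-held lock.
truncate_console_logs() {
    dir=$1 max_bytes=$2 keep_lines=$3 lockfile=$4
    (
        # Take an exclusive, non-blocking lock so that if a previous
        # invocation (e.g. from cron) is still running, we simply exit.
        flock -n 9 || exit 1

        # Check the size of each console log under the instances directory.
        find "$dir" -name 'console.log' -print0 |
        while IFS= read -r -d '' log; do
            size=$(stat -c %s "$log")
            if [ "$size" -gt "$max_bytes" ]; then
                # Keep only the tail of the log, replacing it in place.
                tail -n "$keep_lines" "$log" > "$log.tmp" && mv "$log.tmp" "$log"
            fi
        done
    ) 9>"$lockfile"
}
```

For example, `truncate_console_logs /var/lib/nova/instances 10485760 1000 /var/run/console_logs.lock` would cut any console log over 10 MiB down to its last 1000 lines.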

You should schedule this via cron at your desired frequency and add a monitoring check to ensure it's doing its job as expected.
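For example, a crontab entry along these lines would run manage_console_logs.sh every ten minutes (the path and frequency are illustrative, pick whatever suits your environment):

```
# m h dom mon dow command
*/10 * * * * /usr/local/bin/manage_console_logs.sh
```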

An Unexpected Journey

Earlier this year I was braced for a hard and personally gruelling year. What I didn't expect however, was that after my return to Sydney that an old friend would reveal how she truly felt about me. It was a brave moment for her but fortunately for us both I'd harboured the same feelings toward her.

How was I to know,

That you would rise,

Like a burning angel in my eyes

As expected, this year has certainly lived up to and exceeded those difficult expectations to be undoubtedly the most challenging year of my life. However I've been fortunate to balance that by now having the most amazing woman by my side.

Fiona's love, support, advice and humour have been an unprecedented experience in my life. I've found a lover and a partner in crime with whom I've formed an indomitable team as we've had each other's backs through some rather unbelievable trials.

Which brings me to Paris. We walked to Pont des Arts, the bridge across the Seine and added our padlock at the centre of the bridge, amongst the thousands of others and made a wish.

Then we kissed.

I asked Fiona what she wished for but was politely told it was a secret.

I said I would tell her what I wished for, then dropped to one knee and paused for long enough to read the unmistakeable expression of "What are you doing? Get up you idiot!" written across Fiona's face before I produced an engagement ring and asked Fiona to marry me.

Fiona's Engagement Ring

Fiona said "yes!".

Before too long,

We'll be together and no one will tear us apart

Before too long,

The words will be spoken I know all the action by heart

Earlier in the night I'd slipped an engagement pendant into Fiona's pocket which she discovered and put around my neck before we celebrated with a meal opposite Notre Dame cathedral.

Craige's engagement pendant

I still shake my head in disbelief at how two such independent people have found themselves in a place where they cannot imagine their life without the other. Yet that's where we are.

Our life going forward is going to be complicated and challenging, however there will be an awful lot of love and we'll have each other's backs all the way.

Thank you Fiona, for bringing such love and light into my life.

I've found the one I've waited for

All this time I've loved you

And never known your face

All this time I've missed you

And searched this human race

Here is true peace

Here my heart knows calm

Safe in your soul

Bathed in your sighs

Want to stay right here

Until the end of time

Sometimes, dreams do come true.

Deleting Root Volumes Attached to Non-Existent Instances

Let's say you've got an OpenStack build you're getting ready to go live with. Assume also that you're performing some, ahem, robustness testing to see what breaks and prevent as many surprises as possible prior to going into production. OpenStack controller servers are being rebooted all over the shop and during this background chaos, punters are still trying to launch instances with varying degrees of success.

Once everything has settled down, you may find that some lucky punters have deleted the unsuccessful instances but the volumes have been left behind. This isn't initially obvious from the cinder CLI without cross-checking with nova:

$ cinder list
+--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+
|                  ID                  | Status | Display Name | Size | Volume Type | Bootable |             Attached to              |
+--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+
| 3e56985c-541c-4bdd-b437-16b3d96e9932 | in-use |              |  3   |    block    |   true   | 6e06aa0f-efa7-4730-86df-b32b47e53316 |
+--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+
$ nova show 6e06aa0f-efa7-4730-86df-b32b47e53316
ERROR (CommandError): No server with a name or ID of '6e06aa0f-efa7-4730-86df-b32b47e53316' exists.

It will manifest itself in Horizon like this:

Attached to None

Now trying to delete this volume is going to fail:

$ cinder delete 52aa706d-f17d-4599-948c-87ae46d945b2
Delete for volume 52aa706d-f17d-4599-948c-87ae46d945b2 failed: Invalid volume:
Volume status must be available or error, but current status is: creating (HTTP 400)
(Request-ID: req-f45671de-ed43-401c-b818-68e2a9e7d6cb)
ERROR: Unable to delete any of the specified volumes.

As will an attempt to detach it from the non-existent instance:

$ nova volume-detach 6e06aa0f-efa7-4730-86df-b32b47e53316 093f32f6-66ea-451b-bba6-7ea8604e02c6
ERROR (CommandError): No server with a name or ID of '6e06aa0f-efa7-4730-86df-b32b47e53316' exists.

and no, force-delete does not work either.

Here's my approach for resolving this problem:

SSH onto your MariaDB server for OpenStack and connect to the cinder database:

$ mysql cinder

Unset the attachment in the volumes table by repeating the below command for each volume that requires detaching from a non-existent instance:

MariaDB [cinder]> UPDATE volumes SET attach_status='detached', instance_uuid=NULL,
    -> attach_time=NULL, status='available' WHERE id='3e56985c-541c-4bdd-b437-16b3d96e9932';
Query OK, 1 row affected (0.01 sec)
Rows matched: 1  Changed: 1  Warnings: 0

Back on your OpenStack client workstations you should now be able to delete the offending volumes:

$ cinder delete 3e56985c-541c-4bdd-b437-16b3d96e9932

Happy housekeeping :-)

Automating Building and Synchronising Local & Remote Git Repos With Github

I've blogged about some git configurations in the past. In particular working with remote git repos.

I have a particular workflow for most git repos:

  • I have a local repo on my laptop
  • I have a remote git repo on my server
  • I have a public repo on Github that functions as a backup.

When I push to my remote server, a post-receive hook automatically pushes the updates to Github. Yay for automation.

However this wasn't enough automation, as I found myself creating git repos and running through the setup steps more often than I'd like. As a result I created gitweb_repo_build.sh (github) (bitbucket), which takes all the manual steps I go through to set up my workflow and automates them.

The script currently does the following:

  • Builds a git repo locally
  • Adds a README.mdwn and a LICENCE. Commits the changes.
  • Builds a git repo hosted via your remote git server
  • Adds to the remote server, a git hook for automatically pushing to github
  • Adds to the remote server, a git remote for github.
  • Creates a repo at GitHub via the v3 API
  • Pushes the README and LICENCE to the remote, which pushes to github.

It's currently written in bash and has no error handling.
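The local-repo portion of those steps can be sketched like this. The function name, file contents and commit message below are my own illustration, not the actual script, and the placeholder identity is only there so the commit works on an unconfigured machine:

```shell
#!/bin/bash
set -e

# Sketch of the script's first steps: build a local git repo, add a
# README.mdwn and a LICENCE, and commit them.
build_local_repo() {
    dir=$1
    mkdir -p "$dir"
    git -C "$dir" init -q
    # Seed the repo with the two standard files.
    echo "# $(basename "$dir")" > "$dir/README.mdwn"
    : > "$dir/LICENCE"
    git -C "$dir" add README.mdwn LICENCE
    # Placeholder identity so the commit succeeds without global git config.
    git -C "$dir" -c user.name="Example" -c user.email="example@example.com" \
        commit -q -m "Add README and LICENCE"
}
```

The remaining steps (creating the remote bare repo, installing the hook and calling the GitHub API) depend on your server layout and credentials, which is exactly the boilerplate the real script automates.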

I've planned a re-write in Haskell which will have error handling.

If this is of use to you, enjoy :-)

Post-Receive Git Hook to Push to Github

I self-host my own git repositories. I also use github as a backup for many of them. For a number of them the data changes can be quite large, and pushing to both my own and github's remote services doubles my bandwidth usage in my mostly bandwidth-constrained environments.

To get around those constraints, I wanted to push only to my git repository and have that service then push to github. This is how I went about it, courtesy of Matt Palmer.

Assumptions:

  • You have your own git server
  • You have a github account
  • You're fairly familiar with git

Authentication

It's likely you have not used your remote server to connect to github. To make sure everything happens smoothly, you need to:

  • Add the SSH key for your user account on your server to the authorised keys on your github account
  • SSH just once from your server to github, to accept github's host key.

Add a Remote for github

In the bare git repository on your server, you need to add the remote configuration. On a Debian server using gitweb, this file would be located as /var/cache/git/MYREPO/config. Add the below lines to it:

[remote "github"]
    url = git@github.com:MYACCOUNT/MYREPO.git
    fetch = +refs/heads/*:refs/remotes/github/*
    autopush = true

Add a post-receive Hook

Now we need to create a post-receive hook to process the push to github. Continuing with the previous example, edit /var/cache/git/MYREPO/hooks/post-receive and make sure it is executable:

#!/bin/bash

# Push to each remote whose config sets "autopush = true".
for remote in $(git remote); do
        if [ "$(git config "remote.${remote}.autopush")" = "true" ]; then
                git push "$remote"
        fi
done

Happy automated pushing to github.
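If you want to convince yourself of the hook's loop before wiring it into a live repository, it can be exercised locally with throwaway repositories. Everything below (paths, remote names, the helper function) is illustrative; the only deviation from the hook is pushing HEAD explicitly, since these fresh repos have no upstream configured:

```shell
#!/bin/bash
set -e

# Build a source repo with two remotes, only one of which has autopush
# enabled, then run the hook's loop and see that only that remote is pushed.
setup_and_push() {
    work=$1
    git init -q --bare "$work/github.git"   # stand-in for the github remote
    git init -q --bare "$work/other.git"    # a remote without autopush
    git init -q "$work/src"
    git -C "$work/src" -c user.name="Example" -c user.email="example@example.com" \
        commit -q --allow-empty -m "initial"
    git -C "$work/src" remote add github "$work/github.git"
    git -C "$work/src" remote add other "$work/other.git"
    git -C "$work/src" config remote.github.autopush true

    # The hook's loop, run from inside the repository:
    (
        cd "$work/src"
        for remote in $(git remote); do
            if [ "$(git config "remote.${remote}.autopush")" = "true" ]; then
                git push -q "$remote" HEAD
            fi
        done
    )
}
```

After running it, the github stand-in has the commit and the other remote is untouched.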

Enabling OpenStack Roles To Resize Volumes Via Policy

If you have volume-backed OpenStack instances, you may need to resize them. In most use cases you'll want unprivileged users to be able to resize the instances. This documents how you can modify the Cinder policy to allow tenant members assigned to a particular role permission to resize volumes.

Assumptions:

  • You've already created your OpenStack tenant.
  • You've already created your OpenStack user.
  • You know how to allocate roles to users in tenants.

Select the Role

You will need to create or identify a suitable role. In this example I'll use "Support".

Modify policy.json

Once the role has been created or identified, add these lines to the /etc/cinder/policy.json on the Cinder API server(s):

"context_is_support": [["role:Support"]],
"admin_or_support":  [["is_admin:True"], ["rule:context_is_support"]],

Modify "volume_extension:volume_admin_actions:reset_status" to use the new context:

"volume_extension:volume_admin_actions:reset_status": [["rule:admin_or_support"]],

Add users to the role

Add users who need privileges to resize volumes to the "Support" role in their tenant.

The users you have added to the "Support" role should now be able to resize volumes.

Resizing a Root Volume for an OpenStack Instance

This documents how to resize an OpenStack instance that has its root partition backed by a volume. In this circumstance "nova resize" will not resize the disk space as expected.

Assumptions:

Shutdown the instance you wish to resize

Check the status of the source VM and stop it if it's not already stopped:

$ nova list
+--------------------------------------+-----------+--------+------------+-------------+------------------------+
| ID                                   | Name      | Status | Task State | Power State | Networks               |
+--------------------------------------+-----------+--------+------------+-------------+------------------------+
| 4fef1b97-901e-4ab1-8e1f-191cb2f75969 | ResizeMe0 | ACTIVE | -          | Running     | Tutorial=192.168.0.107 |
+--------------------------------------+-----------+--------+------------+-------------+------------------------+
$ nova stop ResizeMe0
$ nova list
+--------------------------------------+-----------+---------+------------+-------------+------------------------+
| ID                                   | Name      | Status  | Task State | Power State | Networks               |
+--------------------------------------+-----------+---------+------------+-------------+------------------------+
| 4fef1b97-901e-4ab1-8e1f-191cb2f75969 | ResizeMe0 | SHUTOFF | -          | Shutdown    | Tutorial=192.168.0.107 |
+--------------------------------------+-----------+---------+------------+-------------+------------------------+

Identify and extend the volume

Obtain the ID of the volume attached to the instance:

$ nova show ResizeMe0 | grep volumes
| os-extended-volumes:volumes_attached | [{"id": "616dbaa6-f5a5-4f06-9855-fdf222847f3e"}]         |

Set the volume's state to "available" so we can resize it:

$ cinder reset-state --state available 616dbaa6-f5a5-4f06-9855-fdf222847f3e
$ cinder show 616dbaa6-f5a5-4f06-9855-fdf222847f3e | grep " status "
| status | available |

Extend the volume to the desired size, in GB:

$ cinder extend 616dbaa6-f5a5-4f06-9855-fdf222847f3e 4

Set the status back to being in use:

$ cinder reset-state --state in-use 616dbaa6-f5a5-4f06-9855-fdf222847f3e

Start the instance back up again

Start the instance again:

$ nova start ResizeMe0

Voila! Your old instance is now running with an increased disk size as requested.

This site is powered by ikiwiki.