Craige McWhirter
How To Delete a Cinder Snapshot with a Status of error or error_deleting With Ceph Block Storage

When deleting a volume snapshot in OpenStack you may sometimes get an error message stating that Cinder was unable to delete the snapshot.

There are a number of reasons why a snapshot may be reported by Ceph as unable to be deleted, however the most common reason in my experience has been that a Cinder client connection has not yet been closed, possibly because a client crashed.
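
If you suspect a lingering client connection, newer Ceph releases let you list an image's watchers directly. A hedged example, reusing the pool and volume names from the listings below; the exact output varies by release:

# rbd status my.pool.cinder.block/volume-3004d6e9-7934-4c95-b3ee-35a69f236e46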

If you were to look at the snapshots in Cinder, the status is usually error or error_deleting:

% cinder snapshot-list
+--------------------------------------+--------------------------------------+----------------+------------------------------------------------------------------+------+
|                  ID                  |              Volume ID               |     Status     |                           Display Name                           | Size |
+--------------------------------------+--------------------------------------+----------------+------------------------------------------------------------------+------+
| 07d75992-bf3f-4c9c-ab4e-efccdfc2fe02 | 3004d6e9-7934-4c95-b3ee-35a69f236e46 |     error      | tuttle:3004d6e9-7934-4c95-b3ee-35a69f236e46:2015-06-26T14:00:02Z |  40  |
| 2db84ec7-6e1a-41f8-9dc9-1dc14e6ecef0 | 3004d6e9-7934-4c95-b3ee-35a69f236e46 | error_deleting | tuttle:3004d6e9-7934-4c95-b3ee-35a69f236e46:2015-05-18T00:00:01Z |  40  |
| 47fbbfe8-643c-4711-a066-36f247632339 | 3004d6e9-7934-4c95-b3ee-35a69f236e46 |   available    | tuttle:3004d6e9-7934-4c95-b3ee-35a69f236e46:2015-06-29T03:00:14Z |  40  |
| 52c43ec8-e713-4f87-b329-3c681a3d31f2 | 3004d6e9-7934-4c95-b3ee-35a69f236e46 | error_deleting | tuttle:3004d6e9-7934-4c95-b3ee-35a69f236e46:2015-06-24T14:00:02Z |  40  |
| a595180f-d5c5-4c4b-a18c-ca56561f36cc | 3004d6e9-7934-4c95-b3ee-35a69f236e46 |     error      | tuttle:3004d6e9-7934-4c95-b3ee-35a69f236e46:2015-06-25T14:00:02Z |  40  |
+--------------------------------------+--------------------------------------+----------------+------------------------------------------------------------------+------+

When you check Ceph you may find the following snapshot list:

# rbd snap ls my.pool.cinder.block/volume-3004d6e9-7934-4c95-b3ee-35a69f236e46
SNAPID NAME                                              SIZE
  2069 snapshot-2db84ec7-6e1a-41f8-9dc9-1dc14e6ecef0 40960 MB
  2526 snapshot-52c43ec8-e713-4f87-b329-3c681a3d31f2 40960 MB
  2558 snapshot-47fbbfe8-643c-4711-a066-36f247632339 40960 MB

The astute will notice that there are only 3 snapshots listed in Ceph yet 5 listed in Cinder. We can immediately exclude 47fbbfe8, which is available in both Cinder and Ceph, so there are no issues there.

You will also notice that the snapshots with the status error are not in Ceph and the two with error_deleting are. My take on this is that for the status error, Cinder never received the message from Ceph stating that the snapshot had been deleted successfully, whereas for error_deleting, Cinder was unsuccessful in offloading the delete request to Ceph.

Each status will need to be handled separately. I'm going to start with the error_deleting snapshots, which are still present in both Cinder and Ceph.

In MariaDB, set the status from error_deleting to available:

MariaDB [cinder]> update snapshots set status='available' where id = '2db84ec7-6e1a-41f8-9dc9-1dc14e6ecef0';
Query OK, 1 row affected (0.00 sec)
Rows matched: 1  Changed: 1  Warnings: 0

MariaDB [cinder]> update snapshots set status='available' where id = '52c43ec8-e713-4f87-b329-3c681a3d31f2';
Query OK, 1 row affected (0.00 sec)
Rows matched: 1  Changed: 1  Warnings: 0
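
If you have more than a couple of snapshots in this state, the same change can be made in one statement; this is just the two updates above rolled together:

MariaDB [cinder]> update snapshots set status='available' where id in \
('2db84ec7-6e1a-41f8-9dc9-1dc14e6ecef0', '52c43ec8-e713-4f87-b329-3c681a3d31f2');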

Check in Cinder that the status of these snapshots has been updated successfully:

% cinder snapshot-list
+--------------------------------------+--------------------------------------+----------------+------------------------------------------------------------------+------+
|                  ID                  |              Volume ID               |     Status     |                           Display Name                           | Size |
+--------------------------------------+--------------------------------------+----------------+------------------------------------------------------------------+------+
| 07d75992-bf3f-4c9c-ab4e-efccdfc2fe02 | 3004d6e9-7934-4c95-b3ee-35a69f236e46 |     error      | tuttle:3004d6e9-7934-4c95-b3ee-35a69f236e46:2015-06-26T14:00:02Z |  40  |
| 2db84ec7-6e1a-41f8-9dc9-1dc14e6ecef0 | 3004d6e9-7934-4c95-b3ee-35a69f236e46 |   available    | tuttle:3004d6e9-7934-4c95-b3ee-35a69f236e46:2015-05-18T00:00:01Z |  40  |
| 47fbbfe8-643c-4711-a066-36f247632339 | 3004d6e9-7934-4c95-b3ee-35a69f236e46 |   available    | tuttle:3004d6e9-7934-4c95-b3ee-35a69f236e46:2015-06-29T03:00:14Z |  40  |
| 52c43ec8-e713-4f87-b329-3c681a3d31f2 | 3004d6e9-7934-4c95-b3ee-35a69f236e46 |   available    | tuttle:3004d6e9-7934-4c95-b3ee-35a69f236e46:2015-06-24T14:00:02Z |  40  |
| a595180f-d5c5-4c4b-a18c-ca56561f36cc | 3004d6e9-7934-4c95-b3ee-35a69f236e46 |     error      | tuttle:3004d6e9-7934-4c95-b3ee-35a69f236e46:2015-06-25T14:00:02Z |  40  |
+--------------------------------------+--------------------------------------+----------------+------------------------------------------------------------------+------+

Delete the newly available snapshots from Cinder:

% cinder snapshot-delete 2db84ec7-6e1a-41f8-9dc9-1dc14e6ecef0
% cinder snapshot-delete 52c43ec8-e713-4f87-b329-3c681a3d31f2

Then check the results in Cinder and Ceph:

% cinder snapshot-list
+--------------------------------------+--------------------------------------+----------------+------------------------------------------------------------------+------+
|                  ID                  |              Volume ID               |     Status     |                           Display Name                           | Size |
+--------------------------------------+--------------------------------------+----------------+------------------------------------------------------------------+------+
| 07d75992-bf3f-4c9c-ab4e-efccdfc2fe02 | 3004d6e9-7934-4c95-b3ee-35a69f236e46 |     error      | tuttle:3004d6e9-7934-4c95-b3ee-35a69f236e46:2015-06-26T14:00:02Z |  40  |
| 47fbbfe8-643c-4711-a066-36f247632339 | 3004d6e9-7934-4c95-b3ee-35a69f236e46 |   available    | tuttle:3004d6e9-7934-4c95-b3ee-35a69f236e46:2015-06-29T03:00:14Z |  40  |
| a595180f-d5c5-4c4b-a18c-ca56561f36cc | 3004d6e9-7934-4c95-b3ee-35a69f236e46 |     error      | tuttle:3004d6e9-7934-4c95-b3ee-35a69f236e46:2015-06-25T14:00:02Z |  40  |
+--------------------------------------+--------------------------------------+----------------+------------------------------------------------------------------+------+

# rbd snap ls my.pool.cinder.block/volume-3004d6e9-7934-4c95-b3ee-35a69f236e46
SNAPID NAME                                              SIZE
  2558 snapshot-47fbbfe8-643c-4711-a066-36f247632339 40960 MB

So we are done with Ceph now, as the error snapshots do not exist there. As they only exist in Cinder, we need to mark them as deleted in the Cinder database:

MariaDB [cinder]> update snapshots set status='deleted', deleted='1' where id = '07d75992-bf3f-4c9c-ab4e-efccdfc2fe02';
Query OK, 1 row affected (0.00 sec)
Rows matched: 1  Changed: 1  Warnings: 0

MariaDB [cinder]> update snapshots set status='deleted', deleted='1' where id = 'a595180f-d5c5-4c4b-a18c-ca56561f36cc';
Query OK, 1 row affected (0.00 sec)
Rows matched: 1  Changed: 1  Warnings: 0
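
If you'd like these rows to look like ordinary soft-deleted records, you can also stamp deleted_at at the same time. This assumes your snapshots table carries the usual deleted_at column, as stock Cinder schemas do:

MariaDB [cinder]> update snapshots set deleted_at=NOW() where id in \
('07d75992-bf3f-4c9c-ab4e-efccdfc2fe02', 'a595180f-d5c5-4c4b-a18c-ca56561f36cc');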

Now check the status in Cinder:

% cinder snapshot-list
+--------------------------------------+--------------------------------------+-----------+------------------------------------------------------------------+------+
|                  ID                  |              Volume ID               |   Status  |                           Display Name                           | Size |
+--------------------------------------+--------------------------------------+-----------+------------------------------------------------------------------+------+
| 47fbbfe8-643c-4711-a066-36f247632339 | 3004d6e9-7934-4c95-b3ee-35a69f236e46 | available | tuttle:3004d6e9-7934-4c95-b3ee-35a69f236e46:2015-06-29T03:00:14Z |  40  |
+--------------------------------------+--------------------------------------+-----------+------------------------------------------------------------------+------+

Now your errant Cinder snapshots have been removed.

Enjoy :-)

Craige McWhirter
How To Resolve a Volume is Busy Error on Cinder With Ceph Block Storage

When deleting a volume in OpenStack you may sometimes get an error message stating that Cinder was unable to delete the volume because the volume was busy:

2015-05-21 23:31:41.160 16911 ERROR cinder.volume.manager [req-6f77ef4d-bbff-4ff4-8a3e-4c6b264ac5ca \
04b7cb61dd3f4f2f8f80bbd9833addbd 5903e3bda1e840d492fe79fb840acacc - - -] Cannot delete volume \
f8867d43-bc82-404e-bcf5-6d345c32269e: volume is busy

There are a number of reasons why a volume may be reported by Ceph as busy, however the most common reason in my experience has been that a Cinder client connection has not yet been closed, possibly because a client crashed.

If you were to look at the volume in Cinder, the status is usually available and the record looks in order. When you check Ceph, you'll see that the volume still exists there too.

% cinder show f8867d43-bc82-404e-bcf5-6d345c32269e | grep status
|    status    |    available    |

# rbd -p my.ceph.cinder.pool ls | grep f8867d43-bc82-404e-bcf5-6d345c32269e
volume-f8867d43-bc82-404e-bcf5-6d345c32269e

Perhaps there's a lock on this volume. Let's check for locks and then remove them if we find one:

# rbd lock list my.ceph.cinder.pool/volume-f8867d43-bc82-404e-bcf5-6d345c32269e

If there are any locks on the volume, you can remove them with rbd lock remove, using the id and locker shown by the previous command:

# rbd lock remove <image-name> <id> <locker>
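
Against the volume above that might look something like this; the locker and id values here are purely illustrative, so use whatever lock list actually reports:

# rbd lock list my.ceph.cinder.pool/volume-f8867d43-bc82-404e-bcf5-6d345c32269e
There is 1 exclusive lock on this image.
Locker      ID                    Address
client.4235 auto 139872477861120  192.0.2.10:0/1029899
# rbd lock remove my.ceph.cinder.pool/volume-f8867d43-bc82-404e-bcf5-6d345c32269e "auto 139872477861120" client.4235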

What if there are no locks on the volume but you're still unable to delete it from either Cinder or Ceph? Let's check for snapshots:

# rbd -p my.ceph.cinder.pool snap ls volume-f8867d43-bc82-404e-bcf5-6d345c32269e
SNAPID NAME                                              SIZE
  2072 snapshot-33c4309a-d5f7-4ae1-946d-66ba4f5cdce3 25600 MB

When you attempt to delete that snapshot you will get the following:

# rbd snap rm my.ceph.cinder.pool/volume-f8867d43-bc82-404e-bcf5-6d345c32269e@snapshot-33c4309a-d5f7-4ae1-946d-66ba4f5cdce3
rbd: snapshot 'snapshot-33c4309a-d5f7-4ae1-946d-66ba4f5cdce3' is protected from removal.
2015-05-22 01:21:52.504966 7f864f71c880 -1 librbd: removing snapshot from header failed: (16) Device or resource busy

This reveals that it was the snapshot that was busy and locked all along.

Now we need to unprotect the snapshot:

# rbd snap unprotect my.ceph.cinder.pool/volume-f8867d43-bc82-404e-bcf5-6d345c32269e@snapshot-33c4309a-d5f7-4ae1-946d-66ba4f5cdce3

You should now be able to delete the volume and its snapshot via Cinder.
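
In other words, something along these lines should now succeed, using the snapshot and volume IDs from this example:

% cinder snapshot-delete 33c4309a-d5f7-4ae1-946d-66ba4f5cdce3
% cinder delete f8867d43-bc82-404e-bcf5-6d345c32269e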

Enjoy :-)

Craige McWhirter
Rebuilding An OpenStack Instance and Keeping the Same Fixed IP

OpenStack, and in particular its compute service Nova, has a useful rebuild function that allows you to rebuild an instance from a fresh image while maintaining the same fixed and floating IP addresses, amongst other metadata.

However if you have a shared storage back end, such as Ceph, you're out of luck as this function is not for you.

Fortunately, there is another way.

Prepare for the Rebuild:

Note the fixed IP address of the instance that you wish to rebuild and the network ID:

$ nova show demoinstance0 | grep network
| DemoTutorial network                       | 192.168.24.14, 216.58.220.133                     |
$ export FIXED_IP=192.168.24.14
$ neutron floatingip-list | grep 216.58.220.133
| ee7ecd21-bd93-4f89-a220-b00b04ef6753 |                  | 216.58.220.133      |
$ export FLOATIP_ID=ee7ecd21-bd93-4f89-a220-b00b04ef6753
$ neutron net-show DemoTutorial | grep " id "
| id              | 9068dff2-9f7e-4a72-9607-0e1421a78d0d |
$ export OS_NET=9068dff2-9f7e-4a72-9607-0e1421a78d0d
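
It's also worth recording the flavor and image before you delete anything, as you'll need both for the boot command later. A quick way to grab them, assuming the usual nova show field names:

$ nova show demoinstance0 | egrep 'flavor|image'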

You now need to delete the instance that you wish to rebuild:

$ nova delete demoinstance0
Request to delete server demoinstance0 has been accepted.

Manually Prepare the Networking:

Now you need to re-create the port and re-assign the floating IP, if it had one:

$ neutron port-create --name demoinstance0 --fixed-ip ip_address=$FIXED_IP $OS_NET
Created a new port:
+-----------------------+---------------------------------------------------------------------------------------+
| Field                 | Value                                                                                 |
+-----------------------+---------------------------------------------------------------------------------------+
| admin_state_up        | True                                                                                  |
| allowed_address_pairs |                                                                                       |
| binding:vnic_type     | normal                                                                                |
| device_id             |                                                                                       |
| device_owner          |                                                                                       |
| fixed_ips             | {"subnet_id": "eb5db27f-edad-480e-92cb-1f8fec8848a8", "ip_address": "192.168.24.14"}  |
| id                    | c1927578-451b-4682-8888-55c7163898a4                                                  |
| mac_address           | fa:16:3e:5a:39:67                                                                     |
| name                  | demoinstance0                                                                         |
| network_id            | 9068dff2-9f7e-4a72-9607-0e1421a78d0d                                                  |
| security_groups       | 5898c15a-4670-429b-a414-9f59671c4d8b                                                  |
| status                | DOWN                                                                                  |
| tenant_id             | gsu7j52c50804cf3aad71b92e6ced65e                                                      |
+-----------------------+---------------------------------------------------------------------------------------+
$ export OS_PORT=c1927578-451b-4682-8888-55c7163898a4
$ neutron floatingip-associate $FLOATIP_ID $OS_PORT
Associated floating IP ee7ecd21-bd93-4f89-a220-b00b04ef6753
$ neutron floatingip-list | grep $FIXED_IP
| ee7ecd21-bd93-4f89-a220-b00b04ef6753 | 192.168.24.14   | 216.58.220.133     | c1927578-451b-4682-8888-55c7163898a4 |

Re-build!

Now you need to boot the instance again and specify the port you created:

$ nova boot --flavor=m1.tiny --image=MyImage --nic port-id=$OS_PORT demoinstance0
$ nova show demoinstance0 | grep network
| DemoTutorial network                       | 192.168.24.14, 216.58.220.133                     |

Your rebuild is now complete: you've got your old IPs back and you're done. Enjoy :-)

Craige McWhirter
Attaching Multiple Network Interfaces and Floating IPs to OpenStack Instances with Neutron

There are a number of use cases where you may need to connect multiple floating IPs to existing OpenStack instances. However the functionality to do this is not exposed via the Horizon Dashboard. This is how I go about attaching multiple network interfaces and floating IPs to OpenStack instances with Neutron.

Assumptions:

Port Creation and Assignment

When you have your environment sourced appropriately, get a list of networks for this tenant:

% neutron net-list
+--------------------------------------+--------------+-------------------------------------------------------+
| id                                   | name         | subnets                                               |
+--------------------------------------+--------------+-------------------------------------------------------+
| 85314baa-a022-4dd1-918c-a73c83c8cad6 | ext-net      | 9248bc58-6cfe-4ff8-b33e-286a60c96c6d 999.999.999.0/23 |
| ee31dc0e-e226-423d-a7fe-f564dc17614e | DemoTutorial | 5821de82-3843-46ce-a796-c801bf40fd4c 192.168.71.0/24  |
+--------------------------------------+--------------+-------------------------------------------------------+

We're interested in the non-external network, in this case "DemoTutorial". I normally assign its ID to $OS_NET. Now we can create a new port on that network.

% export OS_NET=ee31dc0e-e226-423d-a7fe-f564dc17614e
% neutron port-create $OS_NET
Created a new port:
+-----------------------+---------------------------------------------------------------------------------------+
| Field                 | Value                                                                                 |
+-----------------------+---------------------------------------------------------------------------------------+
| admin_state_up        | True                                                                                  |
| allowed_address_pairs |                                                                                       |
| binding:vnic_type     | normal                                                                                |
| device_id             |                                                                                       |
| device_owner          |                                                                                       |
| fixed_ips             | {"subnet_id": "af150a1e-067a-4641-89a4-24c5b6b8fe3b", "ip_address": "192.168.71.180"} |
| id                    | fd2f78df-cf78-4394-84eb-9e37ed1e5624                                                  |
| mac_address           | fa:54:6e:f2:ce:a9                                                                     |
| name                  |                                                                                       |
| network_id            | ee31dc0e-e226-423d-a7fe-f564dc17614e                                                  |
| security_groups       | b1240686-7ad9-4d29-a679-d219f76648ca                                                  |
| status                | DOWN                                                                                  |
| tenant_id             | abcd639c50804cf3end71b92e6ced65e                                                      |
+-----------------------+---------------------------------------------------------------------------------------+

We now need to note the id or, as I do, assign it to $PORT_ID. Next we fire up nova. I'm going to assume that you know either the instance name or ID and have assigned it to $INSTANCE.

% export PORT_ID=fd2f78df-cf78-4394-84eb-9e37ed1e5624
% export INSTANCE=3c7ae1b9-8111-4f15-9945-75e0af157ead
% nova interface-attach --port-id $PORT_ID $INSTANCE

You should now have successfully added a second network interface to your OpenStack instance. Let's double check that:

% nova show $INSTANCE | grep network
| DemoTutorial network                 | 192.168.71.180, 192.168.71.181

Great! Now the instance has two internal IP addresses, one for each port attached to it.
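
Note that the guest OS won't necessarily configure the new interface by itself. A hedged example of bringing it up inside the instance, assuming the new NIC appears as eth1 and the subnet provides DHCP:

$ ip link show          # confirm the new interface's name; eth1 below is an assumption
$ sudo dhclient eth1    # or configure it statically via your distro's usual mechanism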

Assigning Floating IPs

You can now add floating IPs either via the Horizon Dashboard or via the neutron client. I'll cover how to do this via the CLI. Fire up neutron, locate the original port and assign its UUID to $PORT_ID0:

% neutron port-list | grep 192.168.71.181
fa:46:7e:21:4f:f3 | {"subnet_id": "8f987932-48ee-4262-8b44-0c910512a387", "ip_address": "192.168.71.181"} |
% export PORT_ID0=8f987932-48ee-4262-8b44-0c910512a387

Then we get a list of available floating IPs and assign those to variables too:

% neutron floatingip-list
+--------------------------------------+------------------+---------------------+---------+
| id                                   | fixed_ip_address | floating_ip_address | port_id |
+--------------------------------------+------------------+---------------------+---------+
| 390e4676-0e05-40c3-9012-e5d27eb85dbe |                  | 999.999.999.123     |         |
| 16f7ca27-1d11-4967-9f0c-04f578590b01 |                  | 999.999.999.124     |         |
| f983b10d-454c-4c19-8f65-d9b96c4d7aa6 |                  | 999.999.999.125     |         |
+--------------------------------------+------------------+---------------------+---------+
% export FIP0=16f7ca27-1d11-4967-9f0c-04f578590b01
% export FIP1=f983b10d-454c-4c19-8f65-d9b96c4d7aa6
% neutron floatingip-associate $FIP0 $PORT_ID
Associated floating IP 16f7ca27-1d11-4967-9f0c-04f578590b01
% neutron floatingip-associate $FIP1 $PORT_ID0
Associated floating IP f983b10d-454c-4c19-8f65-d9b96c4d7aa6

We can then verify this assignment:

% neutron floatingip-list
+--------------------------------------+------------------+---------------------+---------+
| id                                   | fixed_ip_address | floating_ip_address | port_id |
+--------------------------------------+------------------+---------------------+---------+
| 390e4676-0e05-40c3-9012-e5d27eb85dbe |                  | 999.999.999.123     |         |
| 16f7ca27-1d11-4967-9f0c-04f578590b01 | 192.168.71.180   | 999.999.999.124     |         |
| f983b10d-454c-4c19-8f65-d9b96c4d7aa6 | 192.168.71.181   | 999.999.999.125     |         |
+--------------------------------------+------------------+---------------------+---------+

For good measure you can double check how Nova sees this assignment:

% nova show $INSTANCE | grep network
| DemoTutorial network                 | 192.168.71.180, 192.168.71.181, 999.999.999.124, 999.999.999.125

You're done :-)

Craige McWhirter
A Little Vim Hack For Go

After LCA2015 I've started playing with Go (I blame Sven Dowideit). If you already use VIM-YouCompleteMe then you should be right for most things Go. However I tinker in a few languages and you'll never guess that they have different rules around style and formatting of code.

Go is the odd one out for me, requiring settings unique to it among the languages I tinker in. I made the below changes to my ~/.vimrc to suit Go:

function! GoSettings()
    set tabstop=7
    set shiftwidth=7
    set noexpandtab
endfunction
autocmd BufNewFile,BufFilePre,BufRead *.go :call GoSettings()

Now when I edit a file with the .go extension, my Vim session will format the file correctly from the start.

You can also configure Vim to run gofmt but I preferred this approach.
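
For completeness, a minimal sketch of the gofmt-on-save approach, should you prefer it. Be warned that a bare filter like this will clobber your buffer if gofmt exits with an error, which is part of why I didn't go this way:

" run the buffer through gofmt whenever a .go file is written
autocmd BufWritePre *.go :%!gofmt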

Enjoy :-)

Craige McWhirter
Configuring CoreOS Toolbox to Use Debian

The toolbox command in CoreOS uses Fedora by default. If you'd rather it used Debian by default, you can add the following lines to .toolboxrc:

TOOLBOX_DOCKER_IMAGE=debian
TOOLBOX_DOCKER_TAG=jessie
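
For example, you could create the file like so, assuming it lives in the core user's home directory, which is where toolbox looks for it:

$ cat > ~/.toolboxrc <<EOF
TOOLBOX_DOCKER_IMAGE=debian
TOOLBOX_DOCKER_TAG=jessie
EOF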

When you next run toolbox, you should see it pull down the requested image.

$ toolbox
Pulling repository debian
835c4d274060: Download complete
511136ea3c5a: Download complete
16386e29a1f4: Download complete
Status: Downloaded newer image for debian:jessie
core-debian-jessie
Spawning container core-debian-jessie on /var/lib/toolbox/core-debian-jessie.
Press ^] three times within 1s to kill container.
root@myserver:~#

It's that simple.

Craige McWhirter
Managing KVM Console Logs for Nova

Update

It turns out that this DOES NOT work around bug 832507. Please do not use this thinking that it does.

There's a problem with console logs. That was a hard sentence to word. I wanted to say it a number of ways but each made it sound like the problem was with either OpenStack, Nova or libvirt / qemu-kvm. I didn't particularly feel like pointing the finger, as the solution appears to need to come from a number of directions...

So the problem is that, when running KVM hypervisors with OpenStack Nova, it is entirely possible for the compute nodes' disks to fill up when instances' console logs get a little chatty.

It's not a desirable event and the source will catch you by surprise.

There's currently no way to manage these KVM console logs via either qemu-kvm or OpenStack / Nova, so I wrote manage_console_logs.sh (github) (bitbucket) to do this.

manage_console_logs.sh operates as follows:

  • Creates a lock file using flock to ensure that the script is not already running.
  • Checks the size of each console log.
  • If a log is greater than the nominated size, it is truncated using tail.

That's it. A pretty straightforward method for ensuring your compute node disks do not fill up.
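
To illustrate the mechanism (and only the mechanism; per the update above, don't rely on this to work around bug 832507), here's a minimal sketch. The instance path and size threshold are assumptions, not necessarily what manage_console_logs.sh uses:

#!/bin/bash
# Minimal sketch: truncate oversized Nova console logs.
# Assumes instances live under /var/lib/nova/instances and a 10MB threshold.
MAX_BYTES=$((10 * 1024 * 1024))
LOCK=/var/run/manage_console_logs.lock

(
    flock -n 9 || exit 1                  # another run already holds the lock
    for log in /var/lib/nova/instances/*/console.log; do
        [ -f "$log" ] || continue
        if [ "$(stat -c %s "$log")" -gt "$MAX_BYTES" ]; then
            # keep the tail of the log, writing back over the same inode
            tail -c "$MAX_BYTES" "$log" > "${log}.tmp"
            cat "${log}.tmp" > "$log"
            rm -f "${log}.tmp"
        fi
    done
) 9>"$LOCK"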

You should schedule this via cron at your desired frequency and add a monitoring check to ensure it's doing its job as expected.

Craige McWhirter
An Unexpected Journey

Earlier this year I was braced for a hard and personally gruelling year. What I didn't expect, however, was that after my return to Sydney an old friend would reveal how she truly felt about me. It was a brave moment for her but fortunately for us both I'd harboured the same feelings toward her.

How was I to know,

That you would rise,

Like a burning angel in my eyes

As expected, this year has certainly lived up to and exceeded those difficult expectations to be undoubtedly the most challenging year of my life. However I've been fortunate to balance that by now having the most amazing woman by my side.

Fiona's love, support, advice and humour have been an unprecedented experience in my life. I've found a lover and a partner in crime with whom I've formed an indomitable team as we've had each other's backs through some rather unbelievable trials.

Which brings me to Paris. We walked to Pont des Arts, the bridge across the Seine and added our padlock at the centre of the bridge, amongst the thousands of others and made a wish.

Then we kissed.

I asked Fiona what she wished for but was politely told it was a secret.

I said I would tell her what I wished for, then dropped to one knee and paused for long enough to read the unmistakeable expression of "What are you doing? Get up you idiot!" written across Fiona's face before I produced an engagement ring and asked Fiona to marry me.

Fiona's Engagement Ring

Fiona said "yes!".

Before too long,

We'll be together and no one will tear us apart

Before too long,

The words will be spoken I know all the action by heart

Earlier in the night I'd slipped an engagement pendant into Fiona's pocket which she discovered and put around my neck before we celebrated with a meal opposite Notre Dame cathedral.

Craige's engagement pendant

I still shake my head in disbelief at how two such independent people have found themselves in a place where they cannot imagine their life without the other. Yet that's where we are.

Our life going forward is going to be complicated and challenging, however there will be an awful lot of love and we'll have each other's backs all the way.

Thank you Fiona, for bringing such love and light into my life.

I've found the one I've waited for

All this time I've loved you

And never known your face

All this time I've missed you

And searched this human race

Here is true peace

Here my heart knows calm

Safe in your soul

Bathed in your sighs

Want to stay right here

Until the end of time

Sometimes, dreams do come true.

Craige McWhirter
Deleting Root Volumes Attached to Non-Existent Instances

Let's say you've got an OpenStack build you're getting ready to go live with. Assume also that you're performing some, ahem, robustness testing to see what breaks and prevent as many surprises as possible prior to going into production. OpenStack controller servers are being rebooted all over the shop and during this background chaos, punters are still trying to launch instances with varying degrees of success.

Once everything has settled down, you may find that some lucky punters have deleted the unsuccessful instances but the volumes have been left behind. This isn't initially obvious from the cinder CLI without cross-checking with nova:

$ cinder list
+--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+
|                  ID                  | Status | Display Name | Size | Volume Type | Bootable |             Attached to              |
+--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+
| 3e56985c-541c-4bdd-b437-16b3d96e9932 | in-use |              |  3   |    block    |   true   | 6e06aa0f-efa7-4730-86df-b32b47e53316 |
+--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+
$ nova show 6e06aa0f-efa7-4730-86df-b32b47e53316
ERROR (CommandError): No server with a name or ID of '6e06aa0f-efa7-4730-86df-b32b47e53316' exists.

It will manifest itself in Horizon like this:

Attached to None

Now trying to delete this volume is going to fail:

$ cinder delete 52aa706d-f17d-4599-948c-87ae46d945b2
Delete for volume 52aa706d-f17d-4599-948c-87ae46d945b2 failed: Invalid volume:
Volume status must be available or error, but current status is: creating (HTTP 400)
(Request-ID: req-f45671de-ed43-401c-b818-68e2a9e7d6cb)
ERROR: Unable to delete any of the specified volumes.

As will an attempt to detach it from the non-existent instance:

$ nova volume-detach 6e06aa0f-efa7-4730-86df-b32b47e53316 093f32f6-66ea-451b-bba6-7ea8604e02c6
ERROR (CommandError): No server with a name or ID of '6e06aa0f-efa7-4730-86df-b32b47e53316' exists.

and no, force-delete does not work either.

Here's my approach for resolving this problem:

SSH onto your MariaDB server for OpenStack and open MariaDB to the cinder database:

$ mysql cinder

Unset the attachment in the volumes table by repeating the below command for each volume that requires detaching from a non-existent instance:

MariaDB [cinder]> UPDATE volumes SET attach_status='detached', instance_uuid=NULL, \
attach_time=NULL, status="available" WHERE id='3e56985c-541c-4bdd-b437-16b3d96e9932';
Query OK, 1 row affected (0.01 sec)
Rows matched: 1  Changed: 1  Warnings: 0
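
You can verify the change before leaving MariaDB; the columns here are just the ones the UPDATE above touches:

MariaDB [cinder]> SELECT id, status, attach_status, instance_uuid FROM volumes \
WHERE id='3e56985c-541c-4bdd-b437-16b3d96e9932';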

Back on your OpenStack client workstations you should now be able to delete the offending volumes:

$ cinder delete 3e56985c-541c-4bdd-b437-16b3d96e9932

Happy housekeeping :-)

Craige McWhirter
Automating Building and Synchronising Local & Remote Git Repos With Github

I've blogged about some git configurations in the past, in particular working with remote git repos.

I have a particular workflow for most git repos:

  • I have a local repo on my laptop
  • I have a remote git repo on my server
  • I have a public repo on GitHub that functions as a backup.

When I push to my remote server, a post-receive hook automatically pushes the updates to GitHub. Yay for automation.

However this wasn't enough automation, as I found myself creating git repos and running through the setup steps more often than I'd like. As a result I created gitweb_repo_build.sh, which takes all the manual steps I go through to set up my workflow and automates them.

The script currently does the following:

  • Builds a git repo locally
  • Adds a README.mdwn and a LICENCE, then commits the changes
  • Builds a git repo hosted on your remote git server
  • Adds a git hook to the remote server for automatically pushing to GitHub
  • Adds a git remote for GitHub to the remote server
  • Creates a repo at GitHub via API v3
  • Pushes the README and LICENCE to the remote, which pushes to GitHub

It's currently written in bash and has no error handling.
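
If you just want the gist of the flow, here's a minimal hand-rolled sketch of the same steps. The server name, paths and GitHub user are placeholders, and the GitHub call is a plain v3 API request rather than whatever the script itself does:

#!/bin/bash
REPO=myproject
GIT_SERVER=git.example.com        # placeholder: your remote git server
GITHUB_USER=example               # placeholder: your GitHub account

# local repo with README and LICENCE
mkdir "$REPO" && cd "$REPO" && git init
touch README.mdwn LICENCE
git add README.mdwn LICENCE
git commit -m "Initial commit"

# bare repo on the remote server, plus a remote pointing at it
ssh "$GIT_SERVER" "git init --bare /srv/git/${REPO}.git"
git remote add origin "ssh://${GIT_SERVER}/srv/git/${REPO}.git"

# create the GitHub repo via the v3 API (prompts for credentials)
curl -u "$GITHUB_USER" -d "{\"name\": \"${REPO}\"}" https://api.github.com/user/repos

# push to the remote server; its post-receive hook mirrors to GitHub
git push -u origin master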

I've planned a rewrite in Haskell, which will have error handling.

If this is of use to you, enjoy :-)
