Let's say you've got an OpenStack build you're getting ready to go live with. Assume also that you're performing some, ahem, robustness testing to see what breaks, so you can head off as many surprises as possible before going into production. OpenStack controller servers are being rebooted all over the shop, and during this background chaos punters are still trying to launch instances with varying degrees of success.
Once everything has settled down, you may find that some lucky punters have deleted their unsuccessful instances but the volumes have been left behind. This isn't initially obvious from the cinder CLI without cross-checking with nova:
$ cinder list
+--------------------------------------+-----------+--------------+------+-------------+----------+--------------------------------------+
| ID                                   | Status    | Display Name | Size | Volume Type | Bootable | Attached to                          |
+--------------------------------------+-----------+--------------+------+-------------+----------+--------------------------------------+
| 3e56985c-541c-4bdd-b437-16b3d96e9932 | in-use    |              |  3   | block       | true     | 6e06aa0f-efa7-4730-86df-b32b47e53316 |
+--------------------------------------+-----------+--------------+------+-------------+----------+--------------------------------------+
$ nova show 6e06aa0f-efa7-4730-86df-b32b47e53316
ERROR (CommandError): No server with a name or ID of '6e06aa0f-efa7-4730-86df-b32b47e53316' exists.
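If more than one of these has been left behind, a quick loop can do the nova cross-check for you. This is only a rough sketch: it assumes the default cinder list column layout shown above (ID in the second pipe-delimited field, "Attached to" in the eighth) and that your OpenStack credentials are already sourced:
$ cinder list | awk -F'|' '/in-use/ {gsub(/ /,""); print $2, $8}' | while read vol srv; do
    # if nova no longer knows about the attached server, flag the volume as orphaned
    nova show "$srv" > /dev/null 2>&1 || echo "orphaned: volume $vol (instance $srv gone)"
  done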
It will also manifest itself in Horizon, where the volume still shows as in-use and attached even though the instance has gone.
Now, trying to delete one of these volumes is going to fail:
$ cinder delete 52aa706d-f17d-4599-948c-87ae46d945b2
Delete for volume 52aa706d-f17d-4599-948c-87ae46d945b2 failed: Invalid volume:
Volume status must be available or error, but current status is: creating (HTTP 400)
(Request-ID: req-f45671de-ed43-401c-b818-68e2a9e7d6cb)
ERROR: Unable to delete any of the specified volumes.
As will an attempt to detach it from the non-existent instance:
$ nova volume-detach 6e06aa0f-efa7-4730-86df-b32b47e53316 093f32f6-66ea-451b-bba6-7ea8604e02c6
ERROR (CommandError): No server with a name or ID of '6e06aa0f-efa7-4730-86df-b32b47e53316' exists.
And no, cinder force-delete does not work either.
SSH onto the MariaDB server backing your OpenStack deployment and connect to the cinder database:
$ mysql cinder
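Before updating anything, it's worth eyeballing the row you're about to change. A quick sanity check, assuming your release's volumes table still carries the status and attach_status columns used in the UPDATE below:
MariaDB [cinder]> SELECT id, status, attach_status FROM volumes
    -> WHERE id='3e56985c-541c-4bdd-b437-16b3d96e9932';
You're expecting to see the volume sitting in 'in-use'/'attached' before you touch it.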
Unset the attachment in the volumes table by repeating the command below for each volume that needs detaching from a non-existent instance:
MariaDB [cinder]> UPDATE volumes SET attach_status='detached', instance_uuid=NULL,
    -> attach_time=NULL, status='available' WHERE id='3e56985c-541c-4bdd-b437-16b3d96e9932';
Query OK, 1 row affected (0.01 sec)
Rows matched: 1 Changed: 1 Warnings: 0
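If there are several volumes in this state, the same change can be made in one pass with an IN clause; the IDs below are just the two from this example and should obviously be swapped for your own:
MariaDB [cinder]> UPDATE volumes SET attach_status='detached', instance_uuid=NULL,
    -> attach_time=NULL, status='available'
    -> WHERE id IN ('3e56985c-541c-4bdd-b437-16b3d96e9932',
    ->              '52aa706d-f17d-4599-948c-87ae46d945b2');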
Back on your OpenStack client workstation, you should now be able to delete the offending volumes:
$ cinder delete 3e56985c-541c-4bdd-b437-16b3d96e9932
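If you've freed up more than one volume, cinder delete will happily take several IDs in a single command (as the plural in that earlier "Unable to delete any of the specified volumes" error suggests):
$ cinder delete 3e56985c-541c-4bdd-b437-16b3d96e9932 52aa706d-f17d-4599-948c-87ae46d945b2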
Happy housekeeping :-)