When deleting a volume snapshot in OpenStack, you may sometimes get an error message stating that Cinder was unable to delete the snapshot.
There are a number of reasons why Ceph may report a snapshot as unable to be deleted; the most common in my experience has been that a Cinder client connection has not yet been closed, possibly because a client crashed.
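If you suspect a lingering client connection, one quick thing to look at is whether the backing RBD image still has watchers attached. On reasonably recent Ceph releases (older ones may lack the subcommand) that looks something like this, with the pool and image names as placeholders for your own:
# rbd status <pool>/volume-<volume id>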
If you were to look at the snapshots in Cinder, the status is usually error or error_deleting:
% cinder snapshot-list
+--------------------------------------+--------------------------------------+----------------+------------------------------------------------------------------+------+
| ID                                   | Volume ID                            | Status         | Display Name                                                       | Size |
+--------------------------------------+--------------------------------------+----------------+------------------------------------------------------------------+------+
| 07d75992-bf3f-4c9c-ab4e-efccdfc2fe02 | 3004d6e9-7934-4c95-b3ee-35a69f236e46 | error          | tuttle:3004d6e9-7934-4c95-b3ee-35a69f236e46:2015-06-26T14:00:02Z | 40   |
| 2db84ec7-6e1a-41f8-9dc9-1dc14e6ecef0 | 3004d6e9-7934-4c95-b3ee-35a69f236e46 | error_deleting | tuttle:3004d6e9-7934-4c95-b3ee-35a69f236e46:2015-05-18T00:00:01Z | 40   |
| 47fbbfe8-643c-4711-a066-36f247632339 | 3004d6e9-7934-4c95-b3ee-35a69f236e46 | available      | tuttle:3004d6e9-7934-4c95-b3ee-35a69f236e46:2015-06-29T03:00:14Z | 40   |
| 52c43ec8-e713-4f87-b329-3c681a3d31f2 | 3004d6e9-7934-4c95-b3ee-35a69f236e46 | error_deleting | tuttle:3004d6e9-7934-4c95-b3ee-35a69f236e46:2015-06-24T14:00:02Z | 40   |
| a595180f-d5c5-4c4b-a18c-ca56561f36cc | 3004d6e9-7934-4c95-b3ee-35a69f236e46 | error          | tuttle:3004d6e9-7934-4c95-b3ee-35a69f236e46:2015-06-25T14:00:02Z | 40   |
+--------------------------------------+--------------------------------------+----------------+------------------------------------------------------------------+------+
When you check Ceph (the backing RBD image is the volume ID prefixed with volume-, in whichever pool rbd_pool in cinder.conf points at), you may find the following snapshot list:
# rbd snap ls my.pool.cinder.block/volume-3004d6e9-7934-4c95-b3ee-35a69f236e46
SNAPID NAME SIZE
2069 snapshot-2db84ec7-6e1a-41f8-9dc9-1dc14e6ecef0 40960 MB
2526 snapshot-52c43ec8-e713-4f87-b329-3c681a3d31f2 40960 MB
2558 snapshot-47fbbfe8-643c-4711-a066-36f247632339 40960 MB
The astute will notice that there are only 3 snapshots listed in Ceph yet 5 listed in Cinder. We can immediately exclude 47fbbfe8, which is available in both Cinder and Ceph, so there are no issues there.
You will also notice that the snapshots with the status error are not in Ceph, while the two with error_deleting are. My take on this is that for the error status, Cinder never received the message from Ceph confirming that the snapshot had been deleted successfully, whereas for the error_deleting status, Cinder was unsuccessful in offloading the request to Ceph.
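If you would rather not eyeball the two listings, a rough cross-check along these lines does the comparison for you; it assumes the pool and volume names used above and is run somewhere with both the cinder and rbd clients available:
# crude match on the status column: picks up both error and error_deleting rows
for id in $(cinder snapshot-list | awk '/error/ {print $2}'); do
  if rbd snap ls my.pool.cinder.block/volume-3004d6e9-7934-4c95-b3ee-35a69f236e46 | grep -q "snapshot-$id"; then
    echo "$id: present in Ceph"
  else
    echo "$id: missing from Ceph"
  fi
done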
Each status will need to be handled separately. I'm going to start with the error_deleting snapshots, which are still present in both Cinder and Ceph.
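Before changing anything, it can be worth a quick look at the rows you are about to touch, using the volume ID from above:
MariaDB [cinder]> select id, status, deleted from snapshots where volume_id = '3004d6e9-7934-4c95-b3ee-35a69f236e46';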
In MariaDB, set the status from error_deleting to available:
MariaDB [cinder]> update snapshots set status='available' where id = '2db84ec7-6e1a-41f8-9dc9-1dc14e6ecef0';
Query OK, 1 row affected (0.00 sec)
Rows matched: 1 Changed: 1 Warnings: 0
MariaDB [cinder]> update snapshots set status='available' where id = '52c43ec8-e713-4f87-b329-3c681a3d31f2';
Query OK, 1 row affected (0.00 sec)
Rows matched: 1 Changed: 1 Warnings: 0
Check in Cinder that the status of these snapshots has been updated successfully:
% cinder snapshot-list
+--------------------------------------+--------------------------------------+----------------+------------------------------------------------------------------+------+
| ID                                   | Volume ID                            | Status         | Display Name                                                       | Size |
+--------------------------------------+--------------------------------------+----------------+------------------------------------------------------------------+------+
| 07d75992-bf3f-4c9c-ab4e-efccdfc2fe02 | 3004d6e9-7934-4c95-b3ee-35a69f236e46 | error          | tuttle:3004d6e9-7934-4c95-b3ee-35a69f236e46:2015-06-26T14:00:02Z | 40   |
| 2db84ec7-6e1a-41f8-9dc9-1dc14e6ecef0 | 3004d6e9-7934-4c95-b3ee-35a69f236e46 | available      | tuttle:3004d6e9-7934-4c95-b3ee-35a69f236e46:2015-05-18T00:00:01Z | 40   |
| 47fbbfe8-643c-4711-a066-36f247632339 | 3004d6e9-7934-4c95-b3ee-35a69f236e46 | available      | tuttle:3004d6e9-7934-4c95-b3ee-35a69f236e46:2015-06-29T03:00:14Z | 40   |
| 52c43ec8-e713-4f87-b329-3c681a3d31f2 | 3004d6e9-7934-4c95-b3ee-35a69f236e46 | available      | tuttle:3004d6e9-7934-4c95-b3ee-35a69f236e46:2015-06-24T14:00:02Z | 40   |
| a595180f-d5c5-4c4b-a18c-ca56561f36cc | 3004d6e9-7934-4c95-b3ee-35a69f236e46 | error          | tuttle:3004d6e9-7934-4c95-b3ee-35a69f236e46:2015-06-25T14:00:02Z | 40   |
+--------------------------------------+--------------------------------------+----------------+------------------------------------------------------------------+------+
Delete the newly available snapshots from Cinder:
% cinder snapshot-delete 2db84ec7-6e1a-41f8-9dc9-1dc14e6ecef0
% cinder snapshot-delete 52c43ec8-e713-4f87-b329-3c681a3d31f2
Then check the results in Cinder and Ceph:
% cinder snapshot-list
+--------------------------------------+--------------------------------------+----------------+------------------------------------------------------------------+------+
| ID                                   | Volume ID                            | Status         | Display Name                                                       | Size |
+--------------------------------------+--------------------------------------+----------------+------------------------------------------------------------------+------+
| 07d75992-bf3f-4c9c-ab4e-efccdfc2fe02 | 3004d6e9-7934-4c95-b3ee-35a69f236e46 | error          | tuttle:3004d6e9-7934-4c95-b3ee-35a69f236e46:2015-06-26T14:00:02Z | 40   |
| 47fbbfe8-643c-4711-a066-36f247632339 | 3004d6e9-7934-4c95-b3ee-35a69f236e46 | available      | tuttle:3004d6e9-7934-4c95-b3ee-35a69f236e46:2015-06-29T03:00:14Z | 40   |
| a595180f-d5c5-4c4b-a18c-ca56561f36cc | 3004d6e9-7934-4c95-b3ee-35a69f236e46 | error          | tuttle:3004d6e9-7934-4c95-b3ee-35a69f236e46:2015-06-25T14:00:02Z | 40   |
+--------------------------------------+--------------------------------------+----------------+------------------------------------------------------------------+------+
# rbd snap ls my.pool.cinder.block/volume-3004d6e9-7934-4c95-b3ee-35a69f236e46
SNAPID NAME SIZE
2558 snapshot-47fbbfe8-643c-4711-a066-36f247632339 40960 MB
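As an aside, if the mismatch had gone the other way, with a stray snapshot left in Ceph that Cinder no longer knows about, it could be removed from the RBD side by hand; the snapshot ID here is a placeholder, and the unprotect step only applies if the snapshot is protected:
# rbd snap unprotect my.pool.cinder.block/volume-3004d6e9-7934-4c95-b3ee-35a69f236e46@snapshot-<id>
# rbd snap rm my.pool.cinder.block/volume-3004d6e9-7934-4c95-b3ee-35a69f236e46@snapshot-<id>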
In any case, we are now done with Ceph, as the error snapshots do not exist there. Since they only exist in Cinder, we need to mark them as deleted in the Cinder database:
MariaDB [cinder]> update snapshots set status='deleted', deleted='1' where id = '07d75992-bf3f-4c9c-ab4e-efccdfc2fe02';
Query OK, 1 row affected (0.00 sec)
Rows matched: 1 Changed: 1 Warnings: 0
MariaDB [cinder]> update snapshots set status='deleted', deleted='1' where id = 'a595180f-d5c5-4c4b-a18c-ca56561f36cc';
Query OK, 1 row affected (0.00 sec)
Rows matched: 1 Changed: 1 Warnings: 0
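The snapshots table also has a deleted_at column as part of Cinder's soft-delete scheme; if you want these rows to look like an ordinary deletion, you can set it as well (optional, and worth checking against your release's schema first):
MariaDB [cinder]> update snapshots set deleted_at=now() where id in ('07d75992-bf3f-4c9c-ab4e-efccdfc2fe02', 'a595180f-d5c5-4c4b-a18c-ca56561f36cc');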
Now check the status in Cinder:
% cinder snapshot-list
+--------------------------------------+--------------------------------------+-----------+------------------------------------------------------------------+------+
| ID                                   | Volume ID                            | Status    | Display Name                                                       | Size |
+--------------------------------------+--------------------------------------+-----------+------------------------------------------------------------------+------+
| 47fbbfe8-643c-4711-a066-36f247632339 | 3004d6e9-7934-4c95-b3ee-35a69f236e46 | available | tuttle:3004d6e9-7934-4c95-b3ee-35a69f236e46:2015-06-29T03:00:14Z | 40   |
+--------------------------------------+--------------------------------------+-----------+------------------------------------------------------------------+------+
Now your errant Cinder snapshots have been removed.
Enjoy :-)