Restoring SQL Server Backup Files from Azure Blob Storage

If you are using Azure Blob Storage for SQL Server backups, there are a couple of important details you should know before starting any significant project – and, as you should know (in my head I keep hearing Grant Fritchey angrily declaring it), there is no backup strategy if there is no restore strategy to be found in the plan.

The access permissions required by the RESTORE FROM URL operation in SQL Server (any SQL Server right now, starting with the page blob support introduced in SQL Server 2012 and including the block blob support with SAS keys, introduced in SQL Server 2016, that even SQL Server 2019 still uses) include … [drumroll] … exclusive WRITE permissions on the underlying file(s).

Yes, you read that right (and not write :)) – to read your backup files on Azure Blob Storage, you will need to obtain write permissions as well. For those who have been working with SQL Server Backup To URL for years, this has been a kind of pain in the neck for a long time: when creating a SAS key (SQL Server 2016+), you have to grant write permissions on the files.
This is also a thing that drives me pretty much crazy, because every time you restore a file from Azure Blob Storage, its “Last Modified” property gets modified, complicating the archival and removal process for the files – and thus directly affecting, for example, Lifecycle Management policies on the Blob Storage. If you are dealing with small files, you might be pretty much OK, but if you are testing an old multi-TB archive, you might just end up paying a couple of extra hundred or even thousand dollars for that operation … unless you manually intervene afterwards.
To prove it, just look at the SAS key signature that even SQL Server Management Studio generates: …&sp=rwdl, as in read/write/delete/list on the container level, as confirmed by the Microsoft documentation.
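
By the way, once a credential is created you cannot see its secret (and hence the granted permissions) anymore, so it is worth keeping track of what you create. A quick look at sys.credentials at least shows which container URLs already have credentials defined:

-- List the credentials defined on the instance; for SAS-based access,
-- the credential name must match the container URL exactly.
SELECT name, credential_identity, create_date
FROM sys.credentials;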

Before going any further, just a word of advice on the security side – do not generate the SAS key in the portal, but use Azure Storage Explorer, which will allow you to create a SAS key focused on the specific container rather than on the whole Blob Storage account.


If I create a SAS key for a sample Azure Blob Storage account with just Read & List permissions (&sp=rl) – and note that, naturally, I did not put the real container or SAS key here, so do not try to just copy & paste, but rather create your own storage account and container with your own SAS key:

CREATE CREDENTIAL [https://imaginikotest.blob.core.windows.net/mybackupcontainer]
    WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
    SECRET = '?st=2020-01-01T07%3A11%3A00Z&se=2020-10-10T05%3A11%3A00Z&sp=rl&sv=2018-03-28&sr=c&sig=hehehedasecretkey';

My restore operation that just reads the data from the backups

RESTORE DATABASE [MaintenanceDB]
FROM URL = N'https://imaginikotest.blob.core.windows.net/mybackupcontainer/server01/MaintenanceDB/FULL/server01_MaintenanceDB_FULL_20200802_010005_1.bak',
     URL = N'https://imaginikotest.blob.core.windows.net/mybackupcontainer/server01/MaintenanceDB/FULL/server01_MaintenanceDB_FULL_20200802_010005_2.bak',
     URL = N'https://imaginikotest.blob.core.windows.net/mybackupcontainer/server01/MaintenanceDB/FULL/server01_MaintenanceDB_FULL_20200802_010005_3.bak',
     URL = N'https://imaginikotest.blob.core.windows.net/mybackupcontainer/server01/MaintenanceDB/FULL/server01_MaintenanceDB_FULL_20200802_010005_4.bak'
WITH FILE = 1, NOUNLOAD, STATS = 5;

will immediately face the following error message:

Msg 3201, Level 16, State 2, Line 2
Cannot open backup device 'https://imaginikotest.blob.core.windows.net/mybackupcontainer/server01/MaintenanceDB/FULL/server01_MaintenanceDB_FULL_20200802_010005_1.bak'. 
Operating system error 5(Access is denied.).
Msg 3013, Level 16, State 1, Line 2
RESTORE DATABASE is terminating abnormally.

which is definitely not so nice.

But recreating the credential with just the write permission added (&sp=rwl) will change the game completely:

DROP CREDENTIAL [https://imaginikotest.blob.core.windows.net/mybackupcontainer];

CREATE CREDENTIAL [https://imaginikotest.blob.core.windows.net/mybackupcontainer]
    WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
    SECRET = '?st=2020-01-01T07%3A11%3A00Z&se=2020-10-10T05%3A11%3A00Z&sp=rwl&sv=2018-03-28&sr=c&sig=hehehedasecretkey';

and restoring my database

RESTORE DATABASE [MaintenanceDB]
FROM URL = N'https://imaginikotest.blob.core.windows.net/mybackupcontainer/server01/MaintenanceDB/FULL/server01_MaintenanceDB_FULL_20200802_010005_1.bak',
     URL = N'https://imaginikotest.blob.core.windows.net/mybackupcontainer/server01/MaintenanceDB/FULL/server01_MaintenanceDB_FULL_20200802_010005_2.bak',
     URL = N'https://imaginikotest.blob.core.windows.net/mybackupcontainer/server01/MaintenanceDB/FULL/server01_MaintenanceDB_FULL_20200802_010005_3.bak',
     URL = N'https://imaginikotest.blob.core.windows.net/mybackupcontainer/server01/MaintenanceDB/FULL/server01_MaintenanceDB_FULL_20200802_010005_4.bak'
WITH FILE = 1, NOUNLOAD, STATS = 5;

will result in a much happier message and, of course, in a successfully restored database:

5 percent processed.
10 percent processed.
15 percent processed.
20 percent processed.
25 percent processed.
30 percent processed.
35 percent processed.
40 percent processed.
45 percent processed.
50 percent processed.
55 percent processed.
60 percent processed.
65 percent processed.
70 percent processed.
75 percent processed.
80 percent processed.
85 percent processed.
90 percent processed.
95 percent processed.
100 percent processed.
Processed XXXXXX pages for database 'MaintenanceDB', file 'MaintenanceDB' on file 1.
Processed X pages for database 'MaintenanceDB', file 'MaintenanceDB_log' on file 1.
RESTORE DATABASE successfully processed 468903 pages in 38.819 seconds (94.368 MB/sec).

Yes, you NEED to add a write permission for the READING part of the RESTORE command to work.
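
By the way, a cheap way of testing whether your credential and its permissions are sufficient, before kicking off a multi-TB restore, is to read just the backup header first – if this fails with the same “Operating system error 5”, the full restore will fail too:

-- Reads only the backup header – fails fast if the SAS permissions are insufficient.
RESTORE HEADERONLY
FROM URL = N'https://imaginikotest.blob.core.windows.net/mybackupcontainer/server01/MaintenanceDB/FULL/server01_MaintenanceDB_FULL_20200802_010005_1.bak';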

Disaster Recovery & Azure Blob Storage RA-GRS

A great strategy for restoring data in a different region after a failure (your own, or a failure of a whole Microsoft Azure region – something that has never been declared so far) is to use RA-GRS (Read-Access Geo-Redundant Storage), which allows you to read the data without any problem by accessing the secondary region directly, even while your primary is still online. It is a kind of Availability Group secondary replica, so to say :)

If we are doing a bubble test of our Disaster Recovery strategy, we might want to point the restores to the datacenter local to the DR servers – you know, to control the costs, because with big multi-TB databases you want to make sure those costs stay under control.
In order to access the secondary replica of the Azure Blob Storage, we need to append the suffix -secondary to the name of our primary storage account, which in my test case results in [https://imaginikotest-secondary.blob.core.windows.net/mybackupcontainer] – meaning that we shall need to set up the respective credential as well!
Yes, because that is a brand new credential – but please, do not instantly go and create it!

Because …

[slow drumroll]

YOU CAN’T READ DIRECTLY FROM THE NON-WRITEABLE REPLICA OF THE AZURE STORAGE ACCOUNT!

Yeah …
Just like in the case of Availability Groups, your secondary replica does not allow you to do any writes until you fail over (and in the case of Azure Storage, a massive failure in the primary data centre has to take place first).

Well, after some crying – how can you solve it?
By creating a different storage account and automatically copying the data (with AzCopy, for example) to the new Azure Storage account within the same region (no extra costs for the data transfer will be charged, since the operation stays within the data-centre bounds). The good news (with a real question mark, since the total number of files you will be transferring and their respective sizes will greatly affect execution times) is that you can get nice speeds in some scenarios – I can report that in one case (with almost a hundred thousand files) we managed to transfer 1.3 TB in ~35 minutes. But yeah, if you need to restore everything in under an hour, that MIGHT be a liiittle bit different :)
After copying the data to a new Azure Storage account local to the data centre, you will need to create the respective SAS key again, and then you can start restoring your databases.
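
As a minimal sketch (the DR-side storage account imaginikodr and the SAS tokens below are made up for illustration), the whole flow boils down to one AzCopy invocation plus a new credential on the SQL Server side:

-- The copy itself is a single AzCopy (v10) command between the two containers' SAS URLs, e.g.:
-- azcopy copy "https://imaginikotest-secondary.blob.core.windows.net/mybackupcontainer?<source SAS>" "https://imaginikodr.blob.core.windows.net/mybackupcontainer?<target SAS>" --recursive

CREATE CREDENTIAL [https://imaginikodr.blob.core.windows.net/mybackupcontainer]
    WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
    SECRET = '?sp=rwl&...your-own-sas-token...';

RESTORE DATABASE [MaintenanceDB]
FROM URL = N'https://imaginikodr.blob.core.windows.net/mybackupcontainer/server01/MaintenanceDB/FULL/server01_MaintenanceDB_FULL_20200802_010005_1.bak'
     -- (list the remaining stripes here, as in the examples above)
WITH FILE = 1, NOUNLOAD, STATS = 5;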

Yes, this WILL affect your RTO very significantly – you had better test and experiment, and get the client's acceptance, before implementing it for real.

Availability Groups

What if you are initializing an Availability Group and dealing with databases that are simply too big for direct seeding (or you are using SQL Server 2012/2014) – can you restore from the same file on both the primary and the secondary replicas at the same time?
The answer is of course – NO!
If you want to run restores in parallel (for example, if you have 3+ replicas), the best way is (again!) to copy the backups aside (maybe to a different storage account or folder) and, after updating the restore command with the new path, run it against the new location – see the sketch below.
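
As a sketch (the -replica2 container name is purely illustrative, and it needs its own credential, just like the original one), each secondary restores from its own copy WITH NORECOVERY, so the database can afterwards be joined to the Availability Group:

-- On a secondary replica: restore from the copied location,
-- WITH NORECOVERY so the database can then join the Availability Group.
RESTORE DATABASE [MaintenanceDB]
FROM URL = N'https://imaginikotest.blob.core.windows.net/mybackupcontainer-replica2/server01/MaintenanceDB/FULL/server01_MaintenanceDB_FULL_20200802_010005_1.bak'
WITH NORECOVERY, NOUNLOAD, STATS = 5;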

Final Thoughts

I would love to be able to run SQL Server restores without the files being exclusively locked and without write permissions, and even though this is by design, it is definitely something that will cause a lot of people to break their heads once they start working seriously with Azure Storage and SQL Server RESTORE FROM URL.
I do not expect this to change at all though … :(

6 thoughts on “Restoring SQL Server Backup Files from Azure Blob Storage”

  1. alzdba

    Also good to keep in mind: you can compress the backup files directly from your backup statement, and you can have the backup write to multiple files!

    You have to use ‘WITH CHECKSUM’ if you want to be able to restore to an Azure Managed Instance!

1. Niko Neugebauer (post author)

      Well, compression and striping have been quite the topics for me, as you can see in “SQL Server Backup Throttling in Azure VMs” and “Striping Backups to Azure Blob Storage”.
      If you are dealing with multi-TB databases, you will HAVE to use them both anyway, because of the maximum blob size supported by the old Azure Blob Storage API that even SQL Server 2019 is still using.
      Regarding ‘WITH CHECKSUM’ – I would be shocked to see any DBA not using this option … and how I wish Microsoft would simply make it the default …
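      Something along these lines (the paths are illustrative) – striping to four files with compression and checksum in one statement:

      -- Striped, compressed, checksummed backup to four block blobs (SQL Server 2016+):
      BACKUP DATABASE [MaintenanceDB]
      TO URL = N'https://imaginikotest.blob.core.windows.net/mybackupcontainer/MaintenanceDB_1.bak',
         URL = N'https://imaginikotest.blob.core.windows.net/mybackupcontainer/MaintenanceDB_2.bak',
         URL = N'https://imaginikotest.blob.core.windows.net/mybackupcontainer/MaintenanceDB_3.bak',
         URL = N'https://imaginikotest.blob.core.windows.net/mybackupcontainer/MaintenanceDB_4.bak'
      WITH COMPRESSION, CHECKSUM, STATS = 5;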

  2. chung wong

    I received split backup files from a vendor and want to restore the database in Azure SQL Managed Instance. Can I restore split backup files in Azure SQL Managed Instance?
