I have received this question a couple of times already, and I have found conflicting information about it in forums and documentation. So hopefully this post will set the record straight about using the Direct SAN transport mode in Veeam Backup & Replication with Raw Device Mappings in virtual compatibility mode (vRDMs) in VMware vSphere environments.
Does it work? Yes, it does. Read on.
In virtual infrastructures, it is possible to assign storage to virtual machines not only as virtual disks, but also by mapping one or more LUNs directly to a VM. This feature has different names depending on the hypervisor technology: in VMware vSphere it is called "Raw Device Mappings", abbreviated "RDM"; the Hyper-V equivalent would be "Pass-through disks".
HERE you can find a good description of what an RDM is.
The Great Debate
RDMs have always been a hot topic among virtualization IT pros and architects, with fiery discussions about their actual usefulness or benefits.
Historically, there were three main reasons to use RDMs:
- Disk size
- Performance
- In-guest clustering
All these reasons to use RDMs are fading away. Disk size is no longer a limit: in vSphere 5.5 you can create VMDKs up to 62 TB. And the performance of a VMFS-backed virtual disk is practically identical to that of an RDM (this has been true for years; VMware further proves the point in this blog post from last year).
In-guest clustering is a bit more complex, and it will probably remain the only reason we'll still see RDMs in the near future. It is worth noting, though, that modern applications are switching to clustering techniques that do not strictly require shared storage (Exchange's DAG or SQL Server's AlwaysOn come to mind).
Not all RDMs are created equal
One unique feature of vSphere is that it offers two types of RDMs: virtual compatibility mode (vRDM) and physical compatibility mode (pRDM).
In a pRDM, all SCSI commands are passed directly to the storage device, with the exception of the REPORT LUNs command. One of the biggest drawbacks of pRDMs is that they do not let you use features like Storage vMotion, Fault Tolerance, cloning and, most of all, VM snapshots – pretty much what every agentless VADP-based backup/replication solution relies on.
In a vRDM, only READ and WRITE commands are passed to the underlying storage device; all other characteristics are "masked" (virtualized) by the hypervisor, so you can leverage all of vSphere's storage features, snapshots included.
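To summarize the trade-off, here is a minimal sketch; the feature flags are drawn from the points above and are illustrative, not an exhaustive vSphere compatibility matrix:

```python
# Illustrative summary of the pRDM vs. vRDM trade-off discussed above.
# These flags follow this article's discussion, not a full vSphere matrix.
RDM_FEATURES = {
    "virtual":  {"vm_snapshots": True,  "storage_vmotion": True,  "cloning": True},
    "physical": {"vm_snapshots": False, "storage_vmotion": False, "cloning": False},
}

def supports_vadp_backup(compat_mode):
    """Agentless VADP-based backup and replication rely on VM snapshots."""
    return RDM_FEATURES[compat_mode]["vm_snapshots"]

print(supports_vadp_backup("virtual"))   # vRDMs can be protected via VADP
print(supports_vadp_backup("physical"))  # pRDMs cannot
```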
SAN transport mode for vRDMs in Veeam B&R
To make it work, you essentially treat the Veeam backup proxy as if it were an ESXi host: give it access to all the LUNs backing the VMFS datastores your VMs reside on, and do the same for the RDM LUNs. But remember: whenever possible, present all LUNs (VMFS and/or RDM) to the proxy as read-only.
In this example I used the well-known OpenFiler storage appliance; the steps required to present the LUNs to your Veeam backup proxy will differ depending on your SAN make and model.
Note that the LUN is (and must always remain) offline on the proxy: if Windows brings it online and writes a disk signature to it, the volume can be corrupted.
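On a Windows-based proxy you can enforce this behavior with diskpart. A minimal sketch, assuming the proxy runs Windows and you have admin rights (run against a production proxy at your own risk):

```python
import subprocess

# Keep newly presented SAN LUNs offline and stop Windows from
# auto-mounting them, so the proxy never writes a signature to a
# VMFS or RDM LUN. Both are standard diskpart commands.
DISKPART_SCRIPT = "san policy=offlineshared\nautomount disable\n"

def harden_disk_policy():
    # Feed the script to diskpart on stdin (requires elevation).
    subprocess.run(["diskpart"], input=DISKPART_SCRIPT, text=True, check=True)
```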
In my test, I configured a VM with three disks, one of which is a vRDM.
After a rescan of the backup proxy from within the B&R interface, you'll be ready to back up vRDMs using Direct SAN mode. As always, you can confirm the transport mode actually used by looking at the job statistics.
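If you prefer to check programmatically rather than by eye, a small sketch that looks for the bracketed transport tag B&R prints next to each disk in the session statistics (the sample line is illustrative, not copied verbatim from a real session log):

```python
import re

# The job statistics tag each disk with the transport used,
# e.g. "[san]", "[hotadd]" or "[nbd]".
TRANSPORT_RE = re.compile(r"\[(san|hotadd|nbd)\]\s*$")

def transport_mode(stat_line):
    """Return the transport tag from a per-disk statistics line, or None."""
    match = TRANSPORT_RE.search(stat_line.strip())
    return match.group(1) if match else None

print(transport_mode("Hard disk 3 (10.0 GB) 9.5 GB read at 120 MB/s [san]"))  # san
```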