A little over a month ago, I was installing VMware ESXi on a Cisco UCS blade and noticed something odd during the installation. I posted a tweet about the incident; here's the text of the tweet in case the link above stops working:
Interesting…this #UCS blade has local disks but all disks are showing as remote during #ESXi install. Odd…
Several people responded, indicating they’d run into similar situations. No one—at least, not that I recall—was able to tell me why this was occurring, only that they’d seen it happen before. And it wasn’t just limited to Cisco UCS blades; a few people posted that they’d seen the behavior with other hardware, too.
This morning, I think I found the answer. While reading this post about scratch partition best practices on VMware ESXi Chronicles, I clicked through to a VMware KB article referenced in the post. The KB article discussed all the various ways to set the persistent scratch location for ESXi. (Good article, by the way. Here’s a link.)
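For reference, on recent ESXi releases the persistent scratch location the KB article talks about can also be set from the command line. This is a sketch, not the only method the KB describes, and the datastore name and directory below are placeholders you'd swap for your own:

```shell
# Create a per-host directory on persistent storage (placeholder names),
# then point the ScratchConfig advanced option at it. Takes effect on reboot.
mkdir /vmfs/volumes/datastore1/.locker-esx01
esxcli system settings advanced set -o /ScratchConfig/ConfiguredScratchLocation \
    -s /vmfs/volumes/datastore1/.locker-esx01

# After the reboot, confirm the active scratch location:
esxcli system settings advanced list -o /ScratchConfig/CurrentScratchLocation
```

Use a unique directory per host; as the KB notes, scratch must not be shared between ESXi hosts.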
What really caught my attention, though, was a little blurb at the bottom of the KB article listing cases where scratch space may not be automatically defined on persistent storage. Check this out (emphasis mine):
2. ESXi deployed in a Boot from SAN configuration or to a SAS device. A Boot from SAN or SAS LUN is considered Remote, and could potentially be shared among multiple ESXi hosts. Remote devices are not used for scratch to avoid collisions between multiple ESXi hosts.
There's the answer: although these drives sit physically inside the server, the ESXi installer classifies them as remote because they are SAS devices. Mystery solved!
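If you want to see this classification for yourself on a running host, `esxcli` reports it per device. A quick check (the grep pattern is just a convenience to trim the output):

```shell
# List storage devices and their locality; SAS-attached disks typically
# report "Is Local: false" even when the drives sit inside the chassis.
esxcli storage core device list | grep -E "^(naa|mpx|t10)|Is Local"
```

Devices showing "Is Local: false" are the ones ESXi will skip when automatically placing scratch, for exactly the collision-avoidance reason the KB article gives.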