With Cisco’s recent announcement of OTV, we are starting to see the impact that server virtualization has on data and data management. It almost seems that “data virtualization” has not been able to keep up with server and I/O virtualization; at some point, the laws of physics kick in. In the recent Cisco demo, OTV let you vMotion a SQL Server across a 400 km WAN in about 30 seconds with no disruption, but you simply cannot move the associated 8 GB virtual disk (VMDK) in a relevant timeframe. There are tools to help (caching, etc.), but here’s the rub: the responsibility falls on data/storage architects and DBAs to devise architectures that allow for OTV flexibility without trying to break the laws of physics inherent in moving the associated virtual disks.
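To see why the virtual disk is the bottleneck, a quick back-of-the-envelope calculation helps. The link speeds below are illustrative assumptions, not figures from the Cisco demo:

```python
# Rough transfer times for an 8 GB (GiB) VMDK over WAN links of
# various speeds. Ideal-case math only: no protocol overhead,
# congestion, or latency effects are modeled.

GIB = 1024 ** 3  # bytes in a gibibyte

def transfer_seconds(size_bytes: float, link_mbps: float) -> float:
    """Ideal transfer time for size_bytes over a link_mbps link."""
    bits = size_bytes * 8
    return bits / (link_mbps * 1_000_000)

vmdk_bytes = 8 * GIB  # the 8 GB virtual disk from the demo

# Assumed link speeds: 100 Mb/s, OC-12 (~622 Mb/s), GigE, 10GigE
for mbps in (100, 622, 1000, 10000):
    print(f"{mbps:>6} Mb/s -> {transfer_seconds(vmdk_bytes, mbps):7.1f} s")
```

Even on a dedicated gigabit link, the ideal-case copy takes over a minute, more than twice the ~30-second vMotion window, and real WAN conditions only make it worse. Hence the appeal of deduplication and caching rather than bulk copies.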
The point here is not OTV itself. OTV is an excellent technology, and it looks like Cisco has executed it correctly. It is, though, a natural (not insignificant, but still natural) next step in the maturation of virtualization. What has to be thought through is how tools from companies such as EMC/Data Domain can manage the duplication/deduplication and physical spread of the data if OTV is a requirement for your business.
It is still too early to tell exactly who the data management winners will be, but OTV is the type of technology that will force every IT organization to recognize that it has an implicit data responsibility. That responsibility requires tools, capabilities, and talent that may or may not be in the organization’s current pool. That leaves new opportunities for data and storage architects in this nascent virtualization space; they will have some interesting job opportunities over the next 18 months!