I'm working on a large-scale data centre migration for a large retail provider in Canada: from our two existing data centres in Western Canada to a new Tier 4 data centre in Central Canada, with all-new hardware across the board.
My existing environment is as follows:

vCenter 4.0
    Data Centre 1
        3 x ESX 4.0 hosts
    Data Centre 2
        Cluster 1
            2 x ESX 3.5 hosts

vCenter 5.0
    Data Centre 1
        Cluster 1
            2 x ESXi 5.0 hosts

All existing storage is connected via Fibre Channel to an IBM SAN.
New environment:

vCenter 5.1
    Data Centre 3
        Cluster 1
            32 x ESXi 5.1 hosts
        Cluster 2
            8 x ESXi 5.1 hosts

All storage is connected via NFS over 10 Gbit Ethernet to a NetApp array.
The existing data centres are interconnected by a 10 Gbit MPLS link, and the new data centre will connect to the existing ones over a 10 Gbit MPLS link with roughly 35 ms of latency.
My question is this... migrating these VMs is going to be a royal pain, since the previous IT provider hasn't come anywhere close to best practice in this environment. What I'd like to do is:

1. Present one of the new data centre's NFS exports to the existing ESX hosts in the current data centres.
2. Storage vMotion the existing VMs onto that NFS mount.
3. Power the VMs down and remove them from the existing inventory.
4. Re-add them to inventory on the new hosts, re-IP, update DNS, and go.

A rough sketch of steps 1-3 is below.
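For what it's worth, here's roughly what I had in mind scripted against the vSphere API. This is a minimal sketch using pyVmomi (VMware's Python SDK), and every hostname, credential, and VM/datastore name in it is a made-up placeholder. pyVmomi officially targets newer vSphere releases than my 4.0/3.5 hosts, and Storage VMotion on the 3.5 boxes may only be reachable through the Remote CLI (svmotion), so treat this as illustrative only:

#!/usr/bin/env python
# Rough sketch of the source-side steps with pyVmomi.
# Every hostname, credential, and object name here is a placeholder.

from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

def find_by_name(content, vimtype, name):
    """Walk the inventory for a managed object (host, VM, ...) by name."""
    view = content.viewManager.CreateContainerView(
        content.rootFolder, vimtype, True)
    for obj in view.view:
        if obj.name == name:
            return obj
    return None

# Connect to the old vCenter (self-signed certs may need an sslContext
# argument on newer pyVmomi/Python versions).
si = SmartConnect(host='vcentre-old.example.local',
                  user='administrator', pwd='secret')
content = si.RetrieveContent()

# Step 1: mount the new data centre's NFS export on an existing host.
host = find_by_name(content, [vim.HostSystem], 'esx01.example.local')
nas_spec = vim.host.NasVolume.Specification(
    remoteHost='netapp.newdc.example.local',   # NetApp 10GbE interface
    remotePath='/vol/migration_ds',
    localPath='migration_ds',
    accessMode='readWrite')
target_ds = host.configManager.datastoreSystem.CreateNasDatastore(nas_spec)

# Step 2: Storage vMotion the VM's disks onto that datastore.
vm = find_by_name(content, [vim.VirtualMachine], 'app-server-01')
WaitForTask(vm.RelocateVM_Task(spec=vim.vm.RelocateSpec(datastore=target_ds)))

# Step 3: power down and drop the VM from the old inventory.
# UnregisterVM leaves the files on the NFS datastore untouched.
WaitForTask(vm.PowerOffVM_Task())
vm.UnregisterVM()

Disconnect(si)

On the 5.1 side, the matching call for step 4 would be Folder.RegisterVM_Task pointed at the .vmx on the same export (e.g. '[migration_ds] app-server-01/app-server-01.vmx'), followed by the re-IP and DNS updates.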
Does anyone see any issues with this or am I just crazy?