As a follow-up to the setup article I posted previously, I wanted to outline what we set out to accomplish in the CTO Advisor’s cloud migration project.
Scenario:
We defined a scenario common today (in a COVID-19 world): an IT organization needs to rapidly scale up its VDI (Horizon View) infrastructure to support a surge in requirements as its workforce becomes almost entirely work-from-home. Certainly, they could have added on-premises infrastructure to support the growth, but scaling that architecture back down once this situation finally resolves would leave quite a bit of hardware on site that is no longer necessary, not to mention that the time from equipment acquisition to implementation would not be viable given the immediacy of the business need.
So, the team set out to evaluate the challenges of migrating the existing environment to the cloud, specifically to the public cloud providers: AWS, Azure, GCP, and the newly offered OCI platform.
Goals:
Our hope was to rapidly satisfy the need for:
- Continuity
- Limited Learning Curve
- Ease of management (consistency with existing methods)
- Ease of migration to new platform
- Overall functionality
- Scalability up and down
While there were a number of other goals, these were defined as those most critical to the success of the project.
We hit a number of critical milestones, and ran into a few barriers to successful completion. The implementations were recorded and published on the YouTube BuildDay Live page here. Great work on these videos by @DemitasseNZ, @Geekazine, and @ThomTalksTech.
For example, Microsoft changed its cloud migration model just as we began our process: the newer build, which removed the CloudSimple model previously in place, was deemed too new, and testing against it was judged to be unreliable. We therefore decided not to attempt a build on Azure. That was clearly a barrier; other details proved to be simply points of differentiation. Linked mode functionality, which we had defined as a success criterion, was a sticking point as well. On Google’s Cloud Platform it proved, at this point, to be impassable, while on Amazon, though complex, HCX did allow for the needed implementation.
When we began, the above-mentioned HCX (VMware’s workload mobility and network extension toolset, which builds on NSX, its virtual networking stack) did seem critical. OCI didn’t require it at all, while AWS has a wizard for implementing it, which made it relatively easy.
Ultimately, we determined that the OCI implementation, while some pieces did take a bit longer, was the easiest to facilitate. One belief is that this is because the other providers’ wizards rolled out VMware in a “nested” architecture, while the OCI model rolled out the full SDDC (Software Defined Data Center) onto bare metal. AWS and GCP used the nested model, meaning vSphere, rather than being loaded onto bare metal, was loaded as a virtual machine within the hosted environment. OCI thus placed hardware control directly into the hands of the VMware administrator. The OCI model allows the hardware to be utilized fully, with virtual processors, NICs, and RAM added directly rather than obfuscated by an additional virtualization layer.
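To make that difference a bit more concrete, here is a minimal sketch of what the bare-metal model looks like from the provisioning side, using the OCI Python SDK’s Oracle Cloud VMware Solution (ocvp) clients to list SDDCs and the physical ESXi hosts behind them. The compartment OCID is a placeholder, and the exact client and field names are my assumptions about the SDK rather than something pulled from the build itself.

```python
# Sketch: enumerate OCVS SDDCs and their bare-metal ESXi hosts.
# Assumes the OCI Python SDK's ocvp module; names may differ by SDK version.
import oci

config = oci.config.from_file()                      # reads ~/.oci/config
compartment_id = "ocid1.compartment.oc1..example"    # hypothetical OCID

sddc_client = oci.ocvp.SddcClient(config)
esxi_client = oci.ocvp.EsxiHostClient(config)

# Each SDDC here is a full vSphere/NSX stack deployed onto bare metal.
for sddc in sddc_client.list_sddcs(compartment_id=compartment_id).data.items:
    print(f"SDDC: {sddc.display_name} ({sddc.lifecycle_state})")
    for host in esxi_client.list_esxi_hosts(sddc_id=sddc.id).data.items:
        # Physical hosts the VMware admin manages directly in vCenter.
        print(f"  ESXi host: {host.display_name}")
```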
We did feel a functional comfort on the OCI side. To the members of the CTO Advisor team, it felt as if we were managing our own datacenter, simply stretched into the cloud. Once linked mode was established, we managed the OCI environment through the same console, with little or no differentiation. Essentially, if you’re already a VMware person, you’ll need to learn very little.
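That familiarity extends to automation, too. As a rough sketch (the hostnames and credentials below are placeholders), the same pyvmomi code that inventories the on-premises vCenter can simply be pointed at the cloud-side vCenter; nothing about the vSphere API changes just because the SDDC lives in OCI.

```python
# Sketch: the same vSphere API code works against on-prem and cloud vCenters.
# Hostnames and credentials below are placeholders, not real endpoints.
import ssl
from pyVim.connect import SmartConnect, Disconnect

def list_datacenters(host, user, pwd):
    ctx = ssl._create_unverified_context()   # lab-only: skip cert validation
    si = SmartConnect(host=host, user=user, pwd=pwd, sslContext=ctx)
    try:
        content = si.RetrieveContent()
        return [dc.name for dc in content.rootFolder.childEntity]
    finally:
        Disconnect(si)

# On-prem vCenter and the OCI-hosted SDDC's vCenter, managed identically.
print(list_datacenters("vcenter.ctoadc.local", "administrator@vsphere.local", "***"))
print(list_datacenters("vcenter.oci-sddc.example", "administrator@vsphere.local", "***"))
```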
There are distinct nuances to each solution. For example, the MegaPort pipe from the CTOADC (CTO Advisor Data Center) to Oracle was dynamic: it could be turned up or down depending on the bandwidth required. Even more significant, on the Oracle side no router restart or reconfiguration was required; OCI accepted whatever feed was coming to it, at whatever bandwidth. This functionality proved quite helpful, and unique. It was particularly beneficial when moving applications and data out of the CTOADC and onto Oracle’s infrastructure, as well as when establishing the connection for linked mode datacenter management.
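As a small illustration of that hands-off behavior on the Oracle side, the sketch below uses the OCI Python SDK’s VirtualNetworkClient to check the FastConnect virtual circuit after a bandwidth change on the MegaPort side. The OCID is a placeholder and the field names are my assumptions about the SDK, not steps from our build; the point is that there is nothing to reconfigure, only status to confirm.

```python
# Sketch: confirm the OCI FastConnect virtual circuit after a Megaport-side
# bandwidth change. OCID is a placeholder; field names assume the OCI SDK.
import oci

config = oci.config.from_file()
vcn_client = oci.core.VirtualNetworkClient(config)

vc_id = "ocid1.virtualcircuit.oc1..example"   # hypothetical OCID
vc = vcn_client.get_virtual_circuit(vc_id).data

# No router restart or reconfiguration on the OCI side: the circuit simply
# keeps accepting whatever feed arrives, so we only verify it is still up.
print(f"State: {vc.lifecycle_state}, BGP: {vc.bgp_session_state}")
print(f"Provisioned shape: {vc.bandwidth_shape_name}")
```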