This blog entry was originally posted on March 18th, 2015
In many migration-to-cloud conversations, the term “lift-and-shift” comes up. But what does this term actually mean, and what is the user expecting? Ultimately what the user wants, to my understanding, is to migrate one or more applications from the existing environment to a new cloud-based environment without changing the logic or the way the applications work.
But that is exactly why the term is fuzzy. There are several ways this can be done, so lift-and-shift does not describe a clearly identifiable approach, which is why it should be used with a lot of care. We typically replace the term with the word “Re-Host”, but immediately attach to it a description of the actual migration scenario used. Let me describe this situation in a little more detail, starting with an important question.
Why migrate to the cloud?
Companies looking for “lift-and-shift” approaches typically consider it for any of the following four reasons:
· Datacenter: Companies need to close one or more datacenters quickly because of end of lease, consolidation, etc., or are looking for expansion as their datacenters run out of space or power. Often these enterprises do not want to increase their capital expenditure. Provisioning infrastructure through a cloud-based approach is seen as more practical, as it removes the burden of building out the infrastructure themselves.
· Cost Efficiencies: Companies are looking for serious cost cutting. Cloud computing is often seen as a way to reduce cost drastically. Whether it actually does depends on the utilization of the existing infrastructure and the level of virtualization already established. From a purely financial perspective, cloud computing, and in particular public cloud, does not demonstrate great savings when the infrastructure is used for a longer period of time. However, depending on the situation, a cloud-based approach may still make sense.
· Collaborative Environment: Companies want to increase their interactions with customers, partners & suppliers and build a more integrated eco-system. Making supporting applications available for all to use helps set up the collaboration processes, but this requires the applications to be accessible from outside the firewall. Setting them up in a cloud environment is an easy and cost-effective way to expose them while maintaining a certain level of security.
· Provisioning of Infrastructure: Companies aiming to be more responsive to the business and reduce “Shadow IT” feel cloud resolves one of their headaches: the provisioning of infrastructure. Indeed, in many organizations this is a long and cumbersome process. They may transfer the application as-is to the new environment and then start adapting it to the new requirements. The same applies to enterprises looking at BYOD-enabling some applications.
With all the above points considered, the first decision to make is to define the target environment in which the application needs to be hosted. This step is important as it directly affects how much “lift-and-shift” is actually feasible. People tend to forget that cloud environments are typically rather standardized and that old versions of operating systems, middleware and databases are most often not supported. Also, typical cloud environments focus on the Windows and Linux operating systems and no others. Understanding all of this is critical to ensure the appropriate decision is made. So now let’s address the next question.
How do I migrate my application to the cloud?
There are several approaches that can be used; I will highlight the most important ones here, using distinct terms rather than the generic “lift-and-shift”.
· Re-Host – Image Migration. This is the quickest way to get an application running in a new environment. A couple of tools exist for this. They take the physical or virtual environment (application, middleware & operating system) in which the application runs, encapsulate it, and host it in a virtual machine. The great advantage is that it is quick and that it works with binary code. The disadvantages are that it does not always work (this depends on the middleware used), that it requires more CPU cycles due to the layers of software included and that, once migrated, the application can no longer be updated. Indeed, you would need the old environment to rebuild the application and then redo the migration. However, such an approach is interesting as a first step in case of urgency (a datacenter closure, for example), or for applications that are close to retirement but still need migration. For applications where you do not have the source code and for which no newer version exists (old software packages, for example), this might be the only option.
· Re-Host – Re-installation. Some tend to forget it, but the Windows environment is rather good in terms of compatibility. Applications may actually work by simply re-installing them, with the appropriate version of the middleware & database, in the new environment. It is not guaranteed to work, but it is often worth trying. Obviously, if it works, it is really the easiest way: there is no overhead of encapsulation, no lock-down of the functionality and no reduction in speed.
· Re-Host – COTS upgrade. If the application is a software package, commonly known as a commercial off-the-shelf (COTS) application, it is worth finding out whether a version of the software exists for the new environment, and how, commercially, you can upgrade to that new version. In this case you will have a different type of migration to take care of: the migration of your customizations and of the data contained in the application. So, it does not mean there is no work to do, just that the work is different.
· Re-Host – Binary Migration. The last approach, which works on binaries only, consists of taking the application and integrating it with the necessary middleware and database software in a virtual application appliance (VAA). That VAA can then run on the new operating system. Once started, the application is automatically “paged” from the source to the target (the application files, registry, configuration, data and environment). To my knowledge there is only one company delivering this, AppZero. According to Ben Kepes, AppZero only works for Windows-to-Windows migrations, and moving shared databases will move all of the databases. It also does not support LDAP network services, SharePoint or Exchange, and security agents introduce some stumbling blocks into the process.
If none of the above work, you will need the source code to migrate the application. There are two additional Re-Host scenarios possible:
· Re-Host – Recompile. In some situations it may be enough to recompile the source code and install it in the new environment. That may work when the middleware and database versions do not need to be changed for the new environment. This implies the existing versions of that software do run in the new environment, which, depending on the version jump, may or may not be possible. The advantage of this approach is that it is fairly simple and leaves the application untouched and available for future evolution.
· Re-Host – Source Code Modification. If nothing else has worked yet, it’s time to roll up the sleeves. You’ll need to do a triple upgrade: upgrade the operating system, the middleware and the database, all in one go. The first element to understand is the potential incompatibilities between the software versions. Most often this will require changes in API calls, and it is these calls you will have to find in the source code and adapt. An alternative scenario where this approach is needed is when you migrate from UNIX to Linux. For example, you may have Solaris-based applications you want to move to the cloud. Most clouds focus on the delivery of Windows and Linux, so you may have to migrate away from Solaris to Linux. You will have to review the system calls and update them to run smoothly in a Linux-based cloud.
And Docker in all that?
Although Linux containers have been around for quite a while, it is really the open source Docker project that has popularized the concept. Docker automates the deployment of applications inside software containers by providing an additional layer of abstraction and automation of operating-system-level virtualization. The advantage of containers is that they are extremely portable, so once your application is containerized, it is easy to migrate it from one environment to another. The issue, however, is containerizing your application in the first place. In a blog entry called “How to migrate legacy applications into Docker containers”, Gary Paige describes the three required steps: de-structure, find reliable base images and manage configuration. This may seem easy, but many applications use configuration files, environment variables, etc. These will have to be adapted to ensure the application is self-contained, or it will not run within the container. Again, the way the application is structured and how it interacts with its environment needs to be reviewed before such an approach can be taken. Docker is great, but it is not a panacea either.
So, you said Lift-and-Shift…
I mentioned that the term lift-and-shift can be fuzzy. It covers many, quite different, migration scenarios, and it is up to us to identify which one actually makes sense. In this note I focused on migration “as-is”, without re-architecting or re-tooling the application so it can, for example, scale up or down when needed. That’s a different ballgame, requiring a more in-depth intervention in the code.