This week I have the opportunity to participate in a client workshop on DevOps and public cloud. How do we get traditional applications and agile/DevOps approaches working together? As older operating systems fall out of support and workloads are migrated to the cloud, transforming and modernizing applications so they use up-to-date middleware, database and OS versions is increasingly becoming central to the work of IT departments. For many years companies have avoided modernizing some of their environments. I regularly encounter CIOs telling me 40-60% of their applications still run on Windows 2003, for example. I even encounter Windows 2000 and very old versions of Linux. With that mostly comes outdated middleware and database versions. This urgently requires migrations to ensure appropriate enterprise security levels. Interestingly, most of those environments have no support contracts either, leaving enterprises exposed to security breaches.
Often the reason for not migrating is that enterprises do not see a way to justify the cost and effort required. Unfortunately, the older the environments become, the more costly it is to migrate them to current standards. Sure, a number of those applications may be made redundant, but I'm often told they were flagged for decommissioning years ago and are still there.
Why migrate them, and why migrate them now? Let's look at this in more detail.
The world is quickly becoming digital
First, business is quickly becoming digitally enabled, so business processes should call upon “services”, in the cloud sense of the term, to deliver the required functionality. These services are often available in the enterprise's legacy applications but not accessible from outside them. So you really need to do two things: first, migrate the application to a modern and supported environment; then, expose the required functionality.
Ask yourself whether you are ready for the digital economy. What could it bring to you?
- Would it make sense to increase your interactions with your ecosystem through digital collaboration and information exchange? Whether you’re running a supply chain or an online business, how do you increase the engagement of your customers and partners?
- Can the digitalization of your business processes help your business teams to respond faster to the market, or to change the customer experience? Can it help them to get the required information faster so they can be more productive?
- Can IT enable new businesses? In particular, thinking about the “uberized” economy, do you want to be a service provider, a service consumer or a clearing house? Do you want to start a new business or extend your existing one by providing new interaction capabilities to your customers and users?
- Last but not least, what data can you gather and how can that information help you improve your understanding of your enterprise, marketplace and competition?
As you can see, when you are looking to migrate to the digital enterprise, take a holistic view of what you could do with it. Then define the digital services you will need to support the processes.
Build or re-use
And now comes the question. To take full advantage of the cloud and build the responsiveness your business teams may feel they require, you may want to create those services from scratch. But that would mean recreating a lot of existing functionality, re-using databases and linking to the existing applications. Most of the time you are not given the time to do such work, as the business requires these new services quickly.
This leaves you with one option: exposing existing functionality and making it available to be called upon by the digital business processes. This leads to the application transformation question I have discussed quite often in this blog. Before choosing an approach, a couple of simple questions need to be answered:
- What demand will there be on the exposed service, and is that demand pattern significantly different from the current one? Also, is it predictable, or do I need to expect large variations in demand?
- If the demand is stable and predictable, will the functionality of the service vary over time? In other words, do I need to be able to adapt the service? Obviously that implies I have the source code of the associated application and am able to change it.
- Is exposing the service a temporary fix, after which you will redevelop the functionality using an aPaaS environment and proper cloud technology?
The answers to these questions will help you identify the approach to take for migrating the service and its associated application to the cloud.
- If the demand is stable and predictable and the service will not vary over time, or if the exposure is a temporary fix, you can use binary migration methods: you take the existing binary and migrate it as-is to the new environment. Depending on the age of the application, this may or may not be feasible; older applications often run middleware that is not compatible with modern operating systems. Be aware, though, that you only worked on the binary and, as such, froze the functionality of your application.
- If the demand is stable but the application needs to be changeable, you will need to work on the source code. In other words, you will need to make your source code compatible with the new environment through recompilation or by adapting incompatible APIs. This is more work, but it makes your application compatible with the target environment in which it will operate moving forward.
- If the demand is not stable, you will have to enable the application to use cloud features such as scale-up/scale-down and load balancing. This requires it to be SOA compliant, in other words to comprise modules that stand on their own and communicate with the rest of the functionality through a message bus. These modules can then be cloned in case of high demand. We call this re-factor. However, if the technology used is so old that building such modules becomes impossible, you may want to take an even more drastic approach: extract the business logic from the application and recreate a new one after having cleaned up the logic. That’s what we call re-architect.
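To make the re-factor idea concrete, here is a minimal, illustrative Python sketch. The pricing module and its 20% mark-up are hypothetical, and an in-process thread-safe queue stands in for the message bus; a real deployment would use a broker such as AMQP or Kafka. "Cloning" the module when demand spikes is simply starting more copies of it.

```python
import queue
import threading

def pricing_module(bus, results):
    """A self-contained module: it reads requests from the bus and posts
    results back. Cloning it = starting more copies when demand spikes."""
    while True:
        msg = bus.get()
        if msg is None:              # shutdown signal
            break
        order_id, amount = msg
        results.put((order_id, round(amount * 1.2, 2)))  # e.g. add 20% tax

def run(n_clones, orders):
    # A thread-safe queue stands in for the message bus in this sketch.
    bus, results = queue.Queue(), queue.Queue()
    clones = [threading.Thread(target=pricing_module, args=(bus, results))
              for _ in range(n_clones)]
    for c in clones:
        c.start()
    for order in orders:
        bus.put(order)
    for _ in clones:
        bus.put(None)                # one shutdown signal per clone
    for c in clones:
        c.join()
    return dict(results.queue)

if __name__ == "__main__":
    print(run(3, [(1, 10.0), (2, 25.0)]))
```

Because each module only ever talks to the bus, nothing else in the application needs to know how many clones are running, which is exactly what makes scale-up/scale-down possible.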
Don’t forget that, to expose the service, you will need to make APIs available that can be called to invoke the functionality. Even in the case of binary migrations, those will need to be available for the service to be accessible.
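As a minimal sketch of such an API, assuming a hypothetical `legacy_lookup` function standing in for the call into the migrated application (which might really be a stored procedure or a command-line invocation), a thin HTTP wrapper built with Python's standard library is enough to make the functionality callable by a digital business process:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def legacy_lookup(customer_id):
    """Hypothetical stand-in for a call into the migrated legacy
    application; the wrapper below exposes it as an HTTP API."""
    return {"customer_id": customer_id, "status": "active"}

class ServiceHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Route /customers/<id> to the legacy functionality.
        parts = self.path.strip("/").split("/")
        if len(parts) == 2 and parts[0] == "customers":
            body = json.dumps(legacy_lookup(parts[1])).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), ServiceHandler).serve_forever()
```

The point of the sketch is that the application itself is untouched: only the thin API layer is new, which is why this works even for binary migrations.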
Whether you re-host, re-factor or re-architect, you will need to install the application several times in the process. Why not automate that activity?
Automation, a way to DevOps enable your applications
You may alter the application source code, or you may recreate it. In both cases you will want to properly test the functionality and make sure it goes through testing, QA and staging. This means you are planning to install the application on multiple occasions, so why not automate that process using DevOps tools and technologies? Take advantage of service-enabling your applications to DevOps-enable them too. When we are asked to do the job, that’s what we do: we build the appropriate topologies for these applications, using pre-defined topologies (we call them CloudMaps) for middleware and database installation. This brings a huge benefit: through the use of these CloudMaps we standardize the installation of the platform tools, facilitating their management.
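The principle behind that automation can be sketched in a few lines. This is purely illustrative (the `echo` commands are stand-ins for real install steps, and this is not the CloudMaps tooling itself): the same scripted steps are replayed unchanged in test, QA and staging, which is what removes manual drift between environments.

```python
import subprocess

# Environments through which every release must pass.
ENVIRONMENTS = ["test", "qa", "staging"]

# Stand-ins for real install commands (middleware, application, tests).
STEPS = [
    ["echo", "provision middleware"],
    ["echo", "deploy application"],
    ["echo", "run smoke tests"],
]

def deploy(env):
    """Replay the exact same steps in the given environment."""
    log = []
    for step in STEPS:
        out = subprocess.run(step, capture_output=True, text=True, check=True)
        log.append(f"[{env}] {out.stdout.strip()}")
    return log

if __name__ == "__main__":
    for env in ENVIRONMENTS:
        print("\n".join(deploy(env)))
```

Because the steps live in one place, adding a fourth environment (or re-running staging after a failed smoke test) is a one-line change rather than a manual procedure.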
Beyond DevOps, shouldn’t we go for BusDevOps?
In a digital world, IT is underpinning every action. The boundaries between business and IT are disappearing. The arrival in the workforce of new generations makes this even more obvious. We’re starting to have development and operations teams working together to deliver functionality faster, with higher quality levels. Shouldn’t we go further? Shouldn’t we break down the barrier between business and IT, allowing business teams to participate in the definition and creation of the services in support of the new, digitally enabled, business processes? What will you do to make this a reality in your enterprise?
This article was originally published on the CloudSource blog in November 2015