Occasionally I welcome guests who bring a really good perspective on one of the subjects treated on the CloudSourceBlog. Today I’m delighted to give Luc Vogeleer and Alexis Henry the floor for a series of 3 blogs dedicated to how to address mainframe workloads. I do hope you will enjoy their perspective as much as I did.
A Cloud Application is more than microservices running online transactions with NoSQL databases
When reading articles about application development in the cloud, we see many references to cloud-native and DevOps architectural principles, with the promise that they will solve every problem we have faced since the internet was born.
While we fully agree these concepts are useful and that innovation brings solutions, we also have to recognize that deploying cloud applications is much more than microservices deployed in containers and running online transactions on top of NoSQL databases, as per the initial Cloud Native definition.
Why so? Well, we believe that the Cloud, especially when using Platform as a Service (PaaS), is a unique opportunity to promote smart architectures in enough detail to address larger issues, rather than just restating established Cloud Native and DevOps guidelines for deploying new applications.
Several articles limit themselves to drawing “gears” and icons with technology names on a whiteboard… this does not make a solution. It only opens directions to explore. Naming a car brand does not bring me to any destination, does it?
Furthermore, many existing communications focus on operations, such as the deployment of containers or the auto-healing of non-responding microservices, rather than diving inside applications, into the source code, to promote the detailed architectural principles that enable better application capabilities. For example, a single-threaded process won’t scale by magic, and a stateful process even less. Similarly, we often have wrong expectations of NoSQL performance and scaling: NoSQL does not bring magical performance improvements without giving something up, transaction safety for instance.
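To make the statefulness point concrete, here is a minimal Python sketch (our own illustration, not from any real system; all names are made up). Two replicas of a “stateful” counter service each keep their count in process memory, so a round-robin load balancer leaves neither replica seeing the full traffic, while externalizing the state restores a consistent view:

```python
# Illustrative toy: why a stateful process resists horizontal scaling.

class CounterService:
    """A 'stateful' service replica holding its state in process memory."""
    def __init__(self):
        self.count = 0

    def hit(self):
        self.count += 1
        return self.count

replicas = [CounterService(), CounterService()]

# Round-robin "load balancer": 4 requests from the same client,
# alternating between the two replicas.
results = [replicas[i % 2].hit() for i in range(4)]
# Each replica only counted half the traffic: results == [1, 1, 2, 2]

# Externalizing the state (standing in for a shared store) fixes this:
shared_store = {"count": 0}

def stateless_hit(store):
    """A stateless handler: all state lives in the shared store."""
    store["count"] += 1
    return store["count"]

stateless_results = [stateless_hit(shared_store) for _ in range(4)]
# Now every request sees the global count: [1, 2, 3, 4]
```

This is of course a toy: in practice the shared store would be a database or cache, with latency and consistency trade-offs of its own, which is exactly why the design has to be thought through rather than assumed.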
In this context, microservices are seen as one of the Cloud strategies to enable. While microservices definitely serve agility by simplifying the SOA principles, splitting an application into Docker-deployed parts is not a strategy, nor a guarantee of success. It is only a valid implementation choice for a category of applications whose original code design lends itself to being split into components. We should avoid replicating the “failure” of SOA, which confused methodology, implementation and design. Indeed, the confusion between the SOA principles and their design on one hand, and their de facto implementation (web services) on the other, led us to improper application and architecture design. Furthermore, design is not enough without governance: compliance of implementation with design must be controlled and ensured at all times, otherwise objectives will not be met and the gap will widen as application business rules evolve. Technology is about tools that serve the developer; it is not the solution. Smart design based on the right technologies is a solution.
All of this may sound harsh, but too many people promote, as a cloud application development best practice, the deployment of microservices in separate containers distributed across various network locations to address performance needs, with each microservice containing its own data storage that is not shared with other services. Applying this pattern to complex applications will result in various performance and management issues.
For example, what happens if a business rule requires a network join? Latency is the first issue, and it requires more complex coding than using a central database, which may have performance issues of its own.
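Here is a hedged Python sketch of such a network join (all names and data are hypothetical). Each order lives in one service and its customer in another, so the application must perform the join itself, paying one remote hop per distinct customer where a central database would answer the same join in a single round trip:

```python
# Illustrative toy: the cost of a "network join" when each
# microservice owns its own data.

SIMULATED_CALLS = {"count": 0}

def fetch_customer(customer_id):
    """Stand-in for a remote call to a separate customer service."""
    SIMULATED_CALLS["count"] += 1          # each call is a network hop
    return {"id": customer_id, "name": f"customer-{customer_id}"}

# Six orders spread over three customers, owned by the order service.
orders = [{"order_id": n, "customer_id": n % 3} for n in range(6)]

# Application-side join: one remote hop per distinct customer,
# even with a local cache to avoid repeat lookups.
joined = []
cache = {}
for order in orders:
    cid = order["customer_id"]
    if cid not in cache:
        cache[cid] = fetch_customer(cid)   # latency paid here
    joined.append({**order, "customer": cache[cid]["name"]})

# Even with caching we paid 3 remote calls; a central database would
# have answered the equivalent SQL join in one round trip.
```

The caching, error handling and consistency questions this raises in real systems are precisely the “more complex coding” the text refers to.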
Then comes the issue of self-deploying microservices, and the configuration management problem of bringing those microservices back into the same virtual machine to avoid network latency when we let the application deploy itself… quite an issue when the initial purpose was to avoid managing infrastructure. Our point is not to criticize good technology, but to use it properly, with enough detail.
As Charles Babcock wrote:
“At the heart of “cloud-native” lie Linux, Linux containers, and the concept of applications assembled as micro services in containers. Indeed, the Linux Foundation launched the Cloud Native Computing Foundation. But cloud-native means a lot more than implementing Linux clusters and running containers. It’s a term that recognizes that getting software to work in the cloud requires a broad set of components that work together. It also requires an architecture that departs from traditional enterprise application design.”
Or as Gartner describes it:
“A key cloud computing trend in 2015 will be the development of native cloud applications. These apps will be usable on any computer or mobile device, regardless of operating system and will overcome limitations imposed by OS-based applications. If your company is implementing a BYOD (Bring Your Own Device) strategy or planning to do so, there is every reason to consider the advantages of native cloud apps.”
We would very much like to agree with both and make it happen. Achieving this requires smart, fast deployment of lightweight containers. But without proper architecture design, your application code will remain impossible to distribute and will not support scaling, whether vertical or horizontal.
So, we firmly believe it is all about design and architecture based on new technology, which starts with knowing the technology’s capabilities, constraints and limits. Quite often, people in our industry think that the Cloud brings unlimited power, and that new databases and microservices are all it takes to scale up indefinitely. This leads to impossible expectations and fails to challenge what technology can really do, and how proper designs are needed to reach the objectives.
This is not easy, because we are building things we do not see operating. We do not see the electrons flowing in the computer, we do not see how the memory is controlled on every CPU cycle, and so on. But those are the raw materials that we, architects and software engineers, have to use properly, even though they are hidden by abstraction layers… and the cloud is no exception.
Modern programming languages and architecture design methodologies do not extend the limits of the underlying hardware. Instead, they allow us to design agile architectures and applications more easily and more efficiently, with known scalability, performance and sustainability characteristics.
While the most innovative Cloud technologies have been designed to help Amazon, Facebook, Google and others reach their scalability and performance goals, your business may be very different from theirs and so you may be running processes and transactions that behave completely differently.
Indeed, scoring a page, identifying your contacts or managing book stocks may not be your business. Typically, you may run complex financial transactions that require consistency and integrity. You may want to read about microservices and transactions. You may even be running batch jobs.
A batch job, what is that? Would it be possible to enable cloud capabilities to help revive and enhance your legacy applications? Would PaaS have all it takes to revive Legacy batch applications to make business sustainable and agile? What are the target architecture designs?
In our next posts, we will start answering such questions and discuss interesting architectural options.
Next Post: Batch to the Future – Part 2
About the authors:
Alexis Henry, Chief Technology & Innovation Officer, Netfective technology
Alexis is the Global Lead for Innovation, Research and Development of Blu Age product suites.
His primary responsibilities are to design and lead the implementation of disruptive technologies in the fields of Application Modernization, Cloud and Big Data. Alexis has over 20 years of experience in the IT industry, which helped him build a broad knowledge of the software and service industry. He has occupied various leadership positions, both in Europe and North America, leading transformation projects and engineering teams for software vendors. Furthermore, Alexis is involved in R&D projects funded by the European Commission (Horizon 2020 [DICE project], FP7 [REMICS project]).
Luc Vogeleer, Global Chief Technologist – Application Transformation, Hewlett Packard Enterprise
Luc is the Global Chief Technologist for Applications Transformation in HPE Enterprise Services. His primary responsibilities include research, development, and deployment of applications modernization and transformation strategies, technologies, methods, and tools focused on HP cloud offerings. Luc has over 34 years of experience within the IT industry. He joined Hewlett-Packard in 2000, where he has occupied various leadership positions in the service organization both at the European and worldwide levels.