A couple of days ago, one of my colleagues asked me where I believed IT would be 10 years from now. That is actually a difficult exercise, as game-changing innovations cannot easily be predicted. Here’s what I believe will happen, knowing what I know today – starting with a couple of personal observations:

  • IT creates new ideas, recycles existing ones (changing their name) and never deletes anything.
  • When a technology matures, IT people tend to evolve towards open technologies.
  • IT is becoming increasingly complex, but users want to be shielded from that complexity.
  • Never forget Geoffrey Moore’s version of the Technology Adoption Life Cycle described in “Crossing the Chasm.” New technologies are quickly adopted by innovators and early adopters, but it then takes time for the majority to take them on board. That’s the chasm. To understand where technology is going, look at what those early adopters are interested in.

That being said, where is IT going?

Sorry, but there are a couple of physical limits

Electrons. For the last five decades we have been able to evolve the Von Neumann architecture using ever more powerful CPUs. Moore’s law states that the number of transistors per square inch on integrated circuits has doubled every two years since the integrated circuit was invented – and up until 2015 this has proven correct. Chip densities have increased dramatically, with traces as small as 16 nanometres on the most powerful chips. However, there are physical limits. The electron, small as it is, has a size, and because we cannot identify exactly where an electron is at any moment in time, we need a minimum number of them to make sure a logic gate is opened or closed. If I remember correctly, at 16 nanometres the average is 8 electrons. Slowly but surely we are reaching a limit beyond which there will no longer be enough electrons for us to be sure a gate is closed or opened when a signal passes. Indeed, the smaller the traces, the fewer electrons pass through them.

Fiber and Light. From a communication perspective, the advent of fiber optics has increased capacity incredibly. At the same time, the amount of data we need to transport has grown exponentially. By laying more fiber we have been able to keep up with demand. But can we continue laying fiber as quickly as demand increases? And light has a speed that nobody has been able to exceed. So, there again, we have a limit.

It’s time to rethink the way our IT is structured.

Large, central IT pools – an approach from the past?

Cloud computing, the latest trend in IT’s history, groups massive amounts of infrastructure in one location and pushes users to consume that capacity from anywhere in the world. By making the actual hardware invisible (through virtualization, parallelization and scale-up/scale-down), the first physical limit, chip density, becomes irrelevant, but the second one, the speed of light, remains. Sure, AWS (Amazon Web Services), for example, has locations around the globe. However, 70% of their servers are located in the Eastern US. They are fundamentally recreating a gigantic mainframe. Is that the future? I don’t believe so. It at least looks like a bigger version of the past. As big data expands and the Internet of Things takes off, centralizing the processing of information forces massive movements of data. That is the Achilles heel of this approach.

Why not process data where it is generated?

An alternative would be to process data where it is generated, so that we only move the resulting information around, not the raw data. The current concept of data lakes demonstrates that other approaches are possible. Keeping the data local and distributing the analysis is an option, but it requires IT capabilities to be distributed rather than centralized. This is probably closer to tomorrow’s IT world, as it will be data driven. Such an approach reduces the amount of data that has to travel across the internet.
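To make this concrete, here is a minimal sketch of the idea, with invented site names and an invented summary format: each site analyses its own raw readings locally and ships only a small summary, so the raw data never leaves the place where it was generated.

```python
# Minimal sketch: analyse data where it is generated and move only
# the resulting information, not the raw readings. Names are illustrative.
from statistics import mean

class LocalSite:
    """A site (factory, store, sensor hub) that keeps its raw data local."""
    def __init__(self, name, readings):
        self.name = name
        self._readings = readings          # raw data never leaves the site

    def summarize(self):
        """Return a small piece of information instead of the full dataset."""
        return {
            "site": self.name,
            "count": len(self._readings),
            "average": mean(self._readings),
            "maximum": max(self._readings),
        }

def global_view(sites):
    """Answer a central question by combining local summaries only."""
    summaries = [s.summarize() for s in sites]   # a few bytes per site
    total = sum(s["count"] for s in summaries)
    weighted_avg = sum(s["average"] * s["count"] for s in summaries) / total
    return {"sites": len(summaries), "readings": total, "average": weighted_avg}

sites = [
    LocalSite("plant-brussels", [20.1, 21.4, 19.8]),
    LocalSite("plant-lyon", [22.0, 22.5]),
]
print(global_view(sites))
```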

It implies a different way of approaching IT. Rather than building large datacenters, millions of sources of information and functionality will be offered. This opens up opportunities for brand-new businesses that make information available to others and do it as a service. Innovation will have to take place in two areas: first security, and second the financial model. How is the environment secured when calling upon all those services, and how are providers funded?

From general-purpose to specialized servers

Most servers in use today are general-purpose servers. They run multiple types of workloads. But if we have servers associated with data lakes that only analyze data, maybe we should go for different architectures, specialized in data analysis. Actually, with HP Moonshot we see early glimpses of such an approach. Google and Facebook use servers that are stripped of all the components they don’t need. They are focused on doing one task, doing it well and doing it quickly. Moving from general-purpose to specialized servers reduces hardware costs and energy consumption.

But there is more. The current layered storage architecture has CPUs spending 70 to 80% of their time shifting data from one storage layer to the next. This was important when fast-access data storage was expensive. Flash memory and newer technologies, such as memristors, allow very large amounts of information to be stored at access speeds close to those of today’s memory. So, do we still need to differentiate between memory and storage? In-memory databases already remove that separation.
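As a small illustration of that last point, here is a sketch of an in-memory database – SQLite is used purely because it ships with Python’s standard library. The data is created, queried and aggregated entirely in memory, with no separate storage layer for the CPU to shuttle blocks in and out of.

```python
# Minimal illustration of an in-memory database: the data lives in RAM,
# with no separate storage layer to move blocks to and from.
# SQLite is used only because it ships with Python's standard library.
import sqlite3

conn = sqlite3.connect(":memory:")        # the whole database lives in memory
conn.execute("CREATE TABLE readings (sensor TEXT, value REAL)")
conn.executemany(
    "INSERT INTO readings VALUES (?, ?)",
    [("temp-1", 20.1), ("temp-1", 21.4), ("temp-2", 22.0)],
)
for row in conn.execute("SELECT sensor, AVG(value) FROM readings GROUP BY sensor"):
    print(row)
conn.close()
```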

This is precisely what HP’s recent announcement, called “The Machine”, addresses. By combining CPU power and simplified storage (all in memory) with photonic interconnects, HP hopes to make a quantum leap forward in the actual working power of next-generation IT systems without having to break the current physical boundaries. Combine this with special-purpose architectures and you have a completely new IT world.

Let’s complement this with micro-services

Nearly 20 years ago, the team I was part of at HP developed a system for an insurance company and a bank that separated elementary transactions from business processes. We used object-oriented technology to create independent objects that were called by a workflow management system in which the business processes were designed. Fundamentally, we did what Larry Ellison had done 20 years earlier at Oracle. He separated data and code; we separated transactions and processes. It worked, but frankly we were too early. Our computers did not have the power to run large systems like that. If we fast-forward to today and look at the virtualization and cloud technologies available, such an approach becomes perfectly viable. We create micro-services that expose themselves, and we allow our business users to design their processes in a graphical tool, consuming the micro-services at the steps where they need them.
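A toy sketch of that separation, with all names invented: each elementary transaction is an independent micro-service, and the business process is just data – an ordered list of steps that a small engine walks through, calling the right service at each step.

```python
# Toy sketch: elementary transactions as independent micro-services,
# orchestrated by a process engine. All names are illustrative.

def check_customer(ctx):
    ctx["customer_ok"] = ctx["customer"] in ("alice", "bob")
    return ctx

def create_quote(ctx):
    ctx["quote"] = 100.0 if ctx["customer_ok"] else None
    return ctx

def send_confirmation(ctx):
    ctx["confirmed"] = ctx["quote"] is not None
    return ctx

# The micro-services available to process designers.
SERVICES = {
    "check_customer": check_customer,
    "create_quote": create_quote,
    "send_confirmation": send_confirmation,
}

# A business process is just data: an ordered list of service names,
# the kind of thing a graphical process designer could produce.
NEW_POLICY_PROCESS = ["check_customer", "create_quote", "send_confirmation"]

def run_process(process, context):
    """Walk the process, calling the micro-service behind each step."""
    for step in process:
        context = SERVICES[step](context)
    return context

print(run_process(NEW_POLICY_PROCESS, {"customer": "alice"}))
```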

Micro-services may run on specialized servers that are optimized for their workload. A business process management environment calls upon them when required. Companies could make a living by offering micro-services, so maybe we move from SaaS to MSaaS (Micro-services as a Service), allowing enterprises to create their differentiation through the way they combine multiple services. Here again, a whole new industry could be created.

From an end-user perspective, the functionality offered by enterprise IT would be accessed through a portal. That portal would probably run somewhere in a cloud environment; a company might even specialize in offering enterprise IT portals. End-users would choose the business process they want to execute, putting many connections in motion. The business process management system would call upon the micro-services required, wherever they physically are. Each of them would probably have its own address and metadata, so it could be accessed transparently to the user. There would no longer be any central IT; everything would operate around a portal, a business process management system and a repository of available micro-services.
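Purely as an illustration of such a repository (the entries and addresses below are hypothetical), it could be little more than a mapping from service names to an address plus some metadata, which the process engine resolves at run time so the user never sees where a service actually runs.

```python
# Illustrative sketch of a micro-service repository: each service is
# registered with an address and metadata, and callers resolve it by name.
# All entries, URLs and fields are hypothetical.
REGISTRY = {
    "check_customer": {
        "address": "https://services.example.com/check-customer",
        "metadata": {"version": "1.2", "owner": "kyc-provider", "cost_per_call": 0.001},
    },
    "create_quote": {
        "address": "https://quotes.example.net/v1/quote",
        "metadata": {"version": "3.0", "owner": "pricing-service", "cost_per_call": 0.002},
    },
}

def resolve(service_name):
    """Look up where a service lives; the caller never hard-codes a location."""
    entry = REGISTRY[service_name]
    return entry["address"], entry["metadata"]

address, metadata = resolve("create_quote")
print(f"Calling {address} (version {metadata['version']})")
```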

For this to be feasible, standards have to be established. Those standards would define how micro-services expose themselves, how they interact, how the data they generate is stored and shared, and so on. We would want standard APIs to facilitate the interactions between the players in the ecosystem and avoid lock-in at the portal and process-designer level. And we will need a lot of transparency to ensure security is guaranteed in such an environment.

Don’t bet on virtualization

For the last 10 years we have reinvented the mainframe virtualization approach to shield applications from one another. Virtualization brings a lot of overhead and may be rather costly from a licensing perspective. But lately another technology has been revived – Linux containers. Docker, the Open Container Initiative, the OpenStack Container Team and several other initiatives demonstrate that containers are probably the future. Yes, they do not allow running multiple operating systems on the same server, but they are lightweight and standards are emerging. Security may not yet be at the level required, but things are moving fast.
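As a small taste of how lightweight this is, here is a minimal sketch using the Docker SDK for Python – assuming Docker is running locally and the `docker` package is installed; the image and command are just examples.

```python
# Minimal sketch: starting a lightweight container instead of a full VM.
# Assumes Docker is running and the Docker SDK for Python is installed
# (pip install docker); the image and command are only examples.
import docker

client = docker.from_env()

# Run a throwaway Alpine container, capture its output, then remove it.
output = client.containers.run("alpine:latest",
                               "echo hello from a container",
                               remove=True)
print(output.decode().strip())
```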

Now think about those new servers, and think about running containers on top of them. Suddenly a lot of technology battles seem irrelevant. Migration from one environment to another becomes easier. Could lock-in become a thing of the past?

Welcome to TomorrowLand?

In IT we have this wonderful capability to continuously invent new things, but we never kill anything. We still have mainframes, COBOL and RPG. So, we will have dedicated infrastructures for quite a while. Centralized cloud environments, as we know them today, will continue to exist, as their scale reduces the cost of running today’s workloads – and today’s workloads will be around for a long time. But ultimately we are headed toward an environment that is completely distributed and in which data is processed where it is generated. The IT environment will become increasingly complex, but the good news is, it will no longer be your problem. You will consume services and pay for your consumption. How they are delivered is not only no longer your problem, it is also someone else’s business opportunity.