The paradigm shift in enterprise computing 10 years from now.


The way businesses arrange their IT infrastructure rests on three pillars: compute, networking and storage. Two of these have seen a remarkable shift in the way they operate over the last decade, and the keyword here is virtualization. Both compute and networking have been torn apart and put back together in a totally different way than we were used to from the 1970s through the early 2000s. Virtual machines and overlay networks have driven a near-complete overhaul of IT infrastructure design, operations and management. Storage went through a similar shift in the late 90s and early 2000s, when the majority of direct-attached storage was consolidated into Storage Area Networks, or SANs. Companies like EMC, NetApp, HDS, IBM and HP created a huge range of equipment, each filled with features and functions that allowed businesses to think differently about their most valuable asset: data.

The massive amount of content people now generate, and the expected exponential growth of this data, will once again create a landslide of change, not only in the storage world but also in the way companies transform this data into useful information, forcing them to adjust their operations to shifting business requirements. Today it is very hard to adjust the logical and physical infrastructure as quickly as the business demands. You may think that current cloud solutions from AWS, Rackspace, Microsoft and HP already do this, but that is incorrect. They come very close when you look at them with a consumer hat on, but under the hood each territory (compute, networking and storage) still has strict boundaries and is managed separately. That makes it very hard to move shifting workloads onto the appropriate infrastructure: the boundaries are set per virtual machine instead of per business application.

The drive towards more effective utilisation and operational dynamics will lead to an ever more virtualised datacentre. Advances in technology will see future datacentres configured as grid infrastructures in which applications interact via DMA/RDMA-style protocols over direct memory interconnects. This removes the transport latency caused by relatively slow networks and by protocols with cumbersome operational and technical restrictions. Device access, storage included, will evolve towards massive parallelism with NVMe-like infrastructures, allowing hyper-dynamic workload shifts and therefore hyper-dynamic business operations. Cost-effectiveness will improve by an order of magnitude, because infrastructure optimisation can be achieved very quickly.
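To get a feel for the difference between exchanging data over a transport stack and sharing memory directly, here is a loose, single-machine analogy in Python: two parties exchange a payload through a shared buffer with no serialisation or network hop in between. The buffer size and payload are invented purely for illustration; real RDMA fabrics work at a much lower level than this.

```python
# Loose analogy only: direct memory sharing instead of a network copy.
# All names and sizes are illustrative, not a real RDMA API.
from multiprocessing import shared_memory

# "Producer" writes a payload straight into a shared buffer.
buf = shared_memory.SharedMemory(create=True, size=64)
payload = b"workload-state"
buf.buf[:len(payload)] = payload

# "Consumer" attaches to the same region by name: no transport,
# no serialisation, just direct access to the same bytes.
peer = shared_memory.SharedMemory(name=buf.name)
data = bytes(peer.buf[:len(payload)])
print(data.decode())  # workload-state

peer.close()
buf.close()
buf.unlink()
```

The point of the analogy is that the "receiver" never copies data through a protocol stack; it reads the producer's memory directly, which is the property DMA/RDMA-style interconnects bring to inter-application traffic.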

All this will be operated and managed by the business applications themselves. Containerisation (for lack of a better word) will not be restricted to applications but will extend to the requirements of business units and organisational entities. A business container will include the applications as well as the requirements and policies the physical infrastructure must adapt to. This makes it significantly easier to move such units onto other infrastructures, and the infrastructures themselves can be adapted much more quickly than is possible today, even with the fairly high level of virtualisation in current datacentres. Networks will move to NSX, Vyatta, Open vSwitch (you name it) style software layers, as will storage. Distributed object-based architectures will hook into these containers and provide the functions they need. Business containerisation will include the requirements for these objects and hook into the capabilities of existing infrastructures.
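A minimal sketch of the idea, with every name and policy key invented for illustration: a business container bundles applications with the infrastructure policies they carry, and a simple check decides whether a given infrastructure's capabilities satisfy those policies.

```python
# Hypothetical model of a "business container": the unit, its
# applications, and the infrastructure policies that travel with it.
from dataclasses import dataclass, field

@dataclass
class BusinessContainer:
    unit: str                    # owning business unit
    applications: list           # apps that move with the unit
    policies: dict = field(default_factory=dict)  # infra requirements

def can_host(container: BusinessContainer, capabilities: dict) -> bool:
    """True if the infrastructure meets every policy in the container.
    Booleans must match exactly; numbers are treated as minimums."""
    for key, required in container.policies.items():
        offered = capabilities.get(key)
        if isinstance(required, bool):
            if offered is not required:
                return False
        elif offered is None or offered < required:
            return False
    return True

finance = BusinessContainer(
    unit="finance",
    applications=["ledger", "reporting"],
    policies={"iops": 50_000, "encrypted": True},
)

site_a = {"iops": 80_000, "encrypted": True}
site_b = {"iops": 20_000, "encrypted": True}
print(can_host(finance, site_a))  # True
print(can_host(finance, site_b))  # False
```

Moving the unit then reduces to finding any infrastructure for which the check passes, which is what makes the container, rather than the individual virtual machine, the natural boundary.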

Obviously this will change infrastructures massively. In the end all hardware will become a commodity; specialised network and storage equipment will fade away, and the adaptability of software will prevail.




Companies like GE, Intel, Hitachi, Samsung and Qualcomm will play a massive role in this, as the infrastructure described above depends heavily on the advances being made by these and similar hardware vendors. Price erosion caused by commoditisation, together with development advances in CPUs, memory, interconnects and new protocols, will drive an exponential increase in adoption rates.

Software vendors who now play a large role in operating systems will see that role fade away; the OS will be commoditised in the same way as the hardware. Compute, network and storage entities will become uniform "bricks", optimised to a common denominator. This severely increases cost-effectiveness: it is much cheaper to add a few more bricks with known capabilities than to differentiate and try to optimise a few of them for a very specific task. Differentiation increases complexity, reduces operational flexibility and requires a much higher skill level, and therefore more expensive staff.
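The cost argument can be made concrete with some back-of-the-envelope arithmetic. All prices, capacities and overheads below are invented for illustration; the point is only the shape of the comparison, in which the specialised route carries a fixed tuning and skills overhead that the uniform route avoids.

```python
# Illustrative arithmetic only: every number here is made up.
# Uniform "bricks" are over-provisioned but cheap and predictable;
# specialised kit is exactly sized but carries engineering overhead.
import math

def uniform_cost(demand_units, brick_capacity=10, brick_price=1_000):
    """Buy identical bricks; round up and accept some slack capacity."""
    return math.ceil(demand_units / brick_capacity) * brick_price

def specialised_cost(demand_units, unit_price=95, engineering=25_000):
    """Pay per exactly sized unit plus a fixed tuning/skills overhead."""
    return demand_units * unit_price + engineering

demand = 250
print(uniform_cost(demand))      # 25 bricks -> 25000
print(specialised_cost(demand))  # 23750 + 25000 -> 48750
```

Under these (hypothetical) numbers the slack capacity of the uniform bricks costs far less than the fixed overhead of differentiation, which is the trade-off the paragraph above describes.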

The way future datacentres develop will be driven more by necessity and market demand than by the entrepreneurial geekness of a few bright minds. Current technologies already run into scalability limits, both up and out, and therefore lack the dynamic characteristics companies require.

The technological and commercial advances in the IT industry will lead to a very diverse workplace with lots of opportunities for employers, employees and students. Chances are very high you will end up in a job role that doesn't exist today. The next decade will be a very interesting joyride. Start learning and exploring today and be ready for what's coming.

Comments and feedback are appreciated.

Regards

Erwin

About Erwin van Londen

Master Technical Analyst at Hitachi Data Systems
