Why we are so interested in High-Performance Computing in the Enterprise, and you should be too

By , Sunday, November 5th 2017

Categories: Analyst Blogs

Tags: AI, deep learning, Frederic Van Haren, HPC, machine learning

Most recently we added Frederic Van Haren to our team of analysts to cover a very big space: HPC / AI / Deep Learning / Machine Learning, call it what you will.  We've been known as the super technical guys in information management, but we're branching out again.  HPC has been dominated by CPUs, and now GPUs, in 1U servers bought mostly by big, PhD-dominated research labs.  Not exactly the typical folks we deal with.

Well, that has changed.  AI's various disciplines are entering every phase of our lives and the life of the enterprise.  As a result, enterprise IT is now adding HPC-lite or HPC-heavy systems to its traditional environments.  This is a HUGE movement (no reference to our politics, please), but adoption is also slow and complicated.  HPC storage environments like Spectrum Scale, Lustre, Gluster, and HDFS are not easy to deploy and maintain, and integrating them into production data centers is complicated.  Yet when use cases take off, they can grow big quickly.  That is why we are seeing more inquiries and a growing appetite for knowledge from traditional IT.  They are still the ones in the data center managing transactional and customer data, now tasked with working alongside the folks in the basement building these new systems.  Oh, and did I mention their increasing responsibilities around using the public cloud for additional compute and data layers?
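To make that operational point concrete, here is a minimal, purely illustrative sketch of the kind of chore traditional IT inherits once one of these file systems lands in its data center: a scripted capacity and health check. It assumes an HDFS cluster with the stock Hadoop `hdfs` command-line tool on the PATH and permission to run `dfsadmin -report`; the parsed field names are the ones the standard report prints, and your distribution may differ.

```python
#!/usr/bin/env python3
"""Illustrative only: a quick HDFS capacity check an IT team might script.

Assumes the standard Hadoop `hdfs` CLI is installed and on the PATH, and
that the caller is allowed to run `dfsadmin -report`.
"""
import subprocess

# Report lines worth surfacing in a daily health summary.
INTERESTING = ("Configured Capacity", "DFS Used", "DFS Remaining",
               "Live datanodes", "Dead datanodes")

def hdfs_summary():
    # `hdfs dfsadmin -report` is the stock Hadoop admin command that
    # summarizes cluster capacity and datanode health.
    result = subprocess.run(["hdfs", "dfsadmin", "-report"],
                            stdout=subprocess.PIPE,
                            universal_newlines=True,
                            check=True)
    for line in result.stdout.splitlines():
        if line.strip().startswith(INTERESTING):
            print(line.strip())

if __name__ == "__main__":
    hdfs_summary()
```

Trivial as it looks, multiply it across Lustre, Gluster, and Spectrum Scale, each with its own tooling, and the integration burden becomes clear.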

I personally know enough to be very dangerous, as I date back to when an IBM mainframe was used to feed the Cray processors.  The issues include scaling, data consistency around high-performance processing, and integration into production, along with the need to deliver the analysis in real time.

Why should you care? If you are a systems or storage vendor, this is the next big thing. If you are running a traditional IT environment, this will show up on your doorstep.  Adoption will be slow as industries learn where HPC will give them a competitive edge and what the required investments in infrastructure and people look like.  It will be messy as well. Data scientists will become system architects. The LOBs will control budgets. Neither may know what is required for scale. Every CEO and CIO will be asked: what is your AI strategy?  If they don't have one, a new person will be in place very soon.

What systems technologies will they need?  GPUs, CPUs, networks (IP, InfiniBand, satellite), large-scale file systems, content repositories (object, public cloud, tape), solid state (in arrays and in the server), and edge devices, including the buzzworthy "micro-data center."  System vendors will need to know how to simplify. IT users will need strategies. We plan to bring our expertise on systems and data to the mix, adding Frederic Van Haren, with his deep background in managing large-scale HPC and Big Data systems.  We hope to illuminate how and what is needed to get to productive results, what to avoid, and what to consider.
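For readers wondering what even the first step of that list looks like in practice, here is a small, hypothetical sketch of a GPU inventory check, the opening question of any deep learning capacity plan. It assumes NVIDIA hardware with the standard `nvidia-smi` utility that ships with the driver; other accelerators would need a different query path.

```python
#!/usr/bin/env python3
"""Illustrative only: inventory the GPUs in a server for capacity planning.

Assumes NVIDIA GPUs and the stock `nvidia-smi` utility; adjust for other
vendors or accelerator types.
"""
import subprocess

def list_gpus():
    # --query-gpu and --format=csv are standard nvidia-smi options for
    # machine-readable output: one line per GPU.
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=index,name,memory.total",
         "--format=csv,noheader"],
        stdout=subprocess.PIPE, universal_newlines=True, check=True)
    return [line.strip() for line in out.stdout.splitlines() if line.strip()]

if __name__ == "__main__":
    for gpu in list_gpus():
        print(gpu)
```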
