Cloud Research Leader at Ericsson
How heterogeneous hardware and edge computing redefine Cloud
The Cloud computing model has been a key driver of digitalization over the last decade. Traditionally, Cloud is largely based on the principle of centralized pooling of commercial off-the-shelf (COTS) computing resources, shared and made accessible to multiple remote tenants in a convenient and rapid fashion. Now, however, new classes of applications are starting to pose much more stringent requirements on Cloud infrastructure, which has led to two parallel trends that challenge this classical Cloud model: (i) Cloud resources are expanding to distributed locations, e.g., to the edge of the network, thereby bridging the gap between resource-constrained devices and distant but powerful cloud datacenters in order to meet performance, cost, or legal requirements; and (ii) traditional COTS HW is joined by an increasingly affordable set of specialized HW resources (such as GPUs, FPGAs, ASICs, or non-x86 CPU chipsets) in order to cope with the strict performance requirements of certain applications and the limited energy budget of remote sites.
As a result, Cloud deployments will have to be rethought, with Cloud computing resources becoming increasingly heterogeneous and, at the same time, widely distributed across multiple smaller datacenter locations. In this talk, I want to discuss the challenges arising from this ongoing redefinition of Cloud. Examples of issues to deal with include virtualization techniques for specialized HW resources; HW-aware orchestration and optimized workload placement; and SW abstractions and intent-based programming models to take advantage of these new possibilities.
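To make the HW-aware placement challenge concrete, the following is a minimal sketch of how an orchestrator might match workloads to heterogeneous, distributed nodes. The `Node` and `Workload` attributes and the `place` function are illustrative assumptions for this abstract, not the API of any real orchestrator:

```python
# Minimal sketch of HW-aware workload placement across heterogeneous,
# distributed Cloud sites. All names and attributes are hypothetical.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    name: str
    accelerators: frozenset  # e.g. {"gpu"}, {"fpga"}; empty set = COTS x86 only
    latency_ms: float        # network latency from the workload's users
    free_cores: int          # remaining general-purpose CPU capacity

@dataclass
class Workload:
    name: str
    needs: frozenset         # required accelerator types
    max_latency_ms: float    # placement constraint, e.g. an edge requirement
    cores: int               # requested CPU cores

def place(w: Workload, nodes: list) -> Optional[Node]:
    """Return the feasible node with the lowest latency, or None if none fits."""
    feasible = [n for n in nodes
                if w.needs <= n.accelerators          # HW capability match
                and n.latency_ms <= w.max_latency_ms  # proximity constraint
                and n.free_cores >= w.cores]          # capacity constraint
    return min(feasible, key=lambda n: n.latency_ms, default=None)
```

Even this toy version shows the tension the talk addresses: a latency-sensitive workload that also needs an accelerator may have no feasible node unless specialized HW is deployed at the edge.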