It’s been a couple of months since the workshop – plenty of time to let the dust settle and reflect on the content. You can find most of the presentations from the workshop by following the links from the programme.
As I mentioned in my introduction at the meeting, I’ve noticed a transition over the past year in the adoption and application of cloud, and this is evident in the abstracts submitted for this meeting. There are signs of maturing – in the first couple of annual workshops we held, cloud usage was very much at the experimental stage, with early forays into private cloud deployment and first pilots testing out public capability. This year there were good examples of sophisticated application of cloud technology, whether cloud-native applications like Chris Woods’ use of serverless to dynamically trigger provision of clusters for batch computing, or in-depth demos of DevOps tooling from StackHPC and others.
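To give a flavour of that serverless pattern, here is a minimal sketch of one plausible realisation – not Chris Woods’ actual implementation – using AWS Lambda and AWS Batch: a function fires when new input data lands in an S3 bucket and submits a batch job, so compute is only provisioned when there is work to do. The queue and job definition names are placeholders.

```python
# Hypothetical AWS Lambda handler: on an S3 "object created" event,
# submit a job to an AWS Batch queue to process the new file.
import re
import boto3

batch = boto3.client("batch")

def handler(event, context):
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # Job names only allow letters, digits, hyphens and underscores.
        job_name = "process-" + re.sub(r"[^A-Za-z0-9_-]", "-", key)[:100]
        # AWS Batch scales its compute environment up from zero to run
        # the job, then back down when the queue drains.
        batch.submit_job(
            jobName=job_name,
            jobQueue="research-batch-queue",   # placeholder name
            jobDefinition="analysis-job:1",    # placeholder name
            containerOverrides={
                "environment": [
                    {"name": "INPUT_S3_URI", "value": f"s3://{bucket}/{key}"}
                ]
            },
        )
```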
Late last year, the Cloud WG ran a smaller technical meeting with no formal agenda, in ‘unconference’ style. This gave us an opportunity to do more of a deep dive into DevOps technologies. The positive feedback we received reflected the value of networking and learning together with peers. Something of this continued at this year’s workshop with the afternoon demo session. It was great to have this in-depth technical input alongside higher-level presentations, whether overviews of projects or talks around challenge areas such as policy. João Fernandes presented the OCRE project, which builds on the work of the GÉANT IaaS Framework – important for establishing agreements with public cloud providers for access to their resources for the research community.
On the topic of policy, the debate continued around the relative merits of public cloud versus on-premise hosting. Cliff Addison (University of Liverpool) highlighted the tensions between quantifying benefits, budgeting at scale and maintaining portability between cloud vendors. Owen Thomas (Red Oak Consulting) challenged assumptions about traditional HPC provision and made the case for assessing overall value, not just cost, when making comparisons with public cloud. Andrew Jones (NAG) argued against absolutes when considering the complexities of making hosting choices for any given application. Migration to cloud can present enormous challenges, as Tony Wildish’s presentation illustrated: he provided a walkthrough of different approaches for migrating legacy code developed for on-premise so that it operates efficiently on cloud, drawn from EMBL-EBI’s experiences. Elsewhere in the meeting, the HEPCloud and UKAEA presentations showed how hybrid models can be built up to select the required computing resources from on-premise and public cloud. HEPCloud in particular illustrated the benefit of overspilling from research infrastructure to public cloud in order to meet peaks in demand.
CRC Canada is an example of a complete public cloud solution architected from the ground up. What is interesting here is the organisational and cultural shift needed to support that model – in particular, the establishment of dedicated effort for auditing and accounting when moving to a consumption-based approach to billing. Pangeo – presented by the Met Office Informatics Lab – demonstrates another cloud-enabled solution, but what is of interest is the formation of a collaboration bringing together open-source solutions to make a platform that is cloud-ready. At its core is a virtual research environment built largely on Jupyter and Dask, together with Kubernetes and deployment glue to make it cloud-agnostic. This kind of solution fits data analytics workloads where, typically, datasets have been imported into a cloud environment and manipulated into an analysis-ready form. Use of BinderHub – shown with Pangeo and in Sarah Gibson’s demo (Turing) – allows infrastructure to be dynamically provisioned and scientific workflows to be conveniently shared via Jupyter notebooks.
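To illustrate the Jupyter-plus-Dask-on-Kubernetes pattern, here is a minimal sketch only – assuming the dask-kubernetes package and a hypothetical pod spec file `worker-spec.yml`, not the Pangeo deployment itself – of how a notebook user might scale an analysis out across a cluster:

```python
# Minimal sketch: scale a Dask computation over Kubernetes pods from
# inside a Jupyter notebook. Assumes dask-kubernetes is installed and
# "worker-spec.yml" (hypothetical) describes the worker pods.
from dask_kubernetes import KubeCluster
from dask.distributed import Client
import dask.array as da

cluster = KubeCluster.from_yaml("worker-spec.yml")
cluster.scale(10)               # ask Kubernetes for ten worker pods
client = Client(cluster)        # connect this session to the cluster

# A toy "analysis-ready" dataset: a chunked array whose mean is
# computed in parallel across the worker pods.
x = da.random.random((100_000, 10_000), chunks=(10_000, 10_000))
print(x.mean().compute())

cluster.close()                 # tear the workers down again
```

The same notebook runs unchanged on any Kubernetes cluster, which is the cloud-agnostic point of the architecture.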
In general though, examples of long-term hosting of large volumes of research data on public cloud are still absent. If there’s a pattern in the sample of submissions for the workshop, it’s one of using public cloud for compute rather than data storage: continued use of on-premise for long-term hosting of data, with some bursting to public cloud for batch computing. Cloud is utilised as a means to obtain an ephemeral computational resource: set up an environment, stage data, perform the calculation, get the results and tear down again. Even so, there appeared to be an increased awareness of the challenges of data hosting with cloud in some of the questions and discussion in the sessions. These included issues around hybrid, public cloud and multi-cloud scenarios. For example, if data is hosted in one cloud, how can it be accessed readily by a client with an existing presence on another cloud? There are definite signs of progress in the community, but clearly there are still big challenges for cloud to be more fully utilised for research workloads.
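That ephemeral life cycle can be made concrete as a sketch. Every function below is a hypothetical placeholder for whatever provisioning tooling is in use (Terraform, OpenStack, a vendor SDK); the point is the shape of the workflow, not a specific API:

```python
# Sketch of the ephemeral "burst to cloud" life cycle. All helper
# functions here are hypothetical placeholders, not real library calls.
def run_ephemeral_job(job):
    cluster = provision_cluster(nodes=job.nodes)      # set up environment
    try:
        stage_data(src=job.on_premise_uri,            # copy inputs up
                   dst=cluster.scratch_storage)
        results = cluster.run(job.command)            # perform calculation
        fetch_results(results, dst=job.local_outdir)  # pull results back
    finally:
        cluster.destroy()  # always tear down, so nothing is left billing
```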