UKRI Cloud Workshop 2022: Call for Participation in Organising Committee

The UKRI Cloud Working Group is pleased to announce that we will be hosting the 6th annual UKRI Cloud Workshop at the Francis Crick Institute in London on the 29th March 2022. 

The meeting provides an opportunity for UK researchers and representatives from industry to come together and share best practice and new insights in the application of cloud computing for academic research. Past events have attracted speakers from a range of high-profile organisations, including CERN, the UK Met Office, major UK research infrastructure providers and major public cloud providers, and the workshop typically attracts between 150 and 180 attendees.

For the event this coming year, we are extending an invitation to members of the research community and commercial companies to take part in an Organising Committee to run the workshop. Participation in this group will provide an excellent opportunity to gain insights into how cloud is being applied, from the innovative application of technologies to address research questions to the practical challenges around policy and use. We would like to encourage participation from a diverse set of backgrounds: you may have experience in aspects of cloud, you may have been involved in running events before, or you may simply have an interest and wish to get involved.

As a member of the committee you will help shape the themes, issue a call for abstracts and select submissions for presentations. The group will need to self-organise, coordinate meetings and work closely with the UKRI Cloud Working Group (see: https://cloud.ac.uk/membership/) towards the successful delivery of the workshop in March ‘22.

The meeting typically consists of one day of presentations and workshops in two tracks. A previous example can be found at: https://cloud.ac.uk/ukri-cloud-workshop-2020-call-for-participation/

We hope to host the workshop in person; however, some element of hybrid or virtual delivery, or social distancing, may be needed. Planning for multiple eventualities will be needed to ensure the event operates in line with government guidelines and provides opportunities to support remote speakers and attendees. The conference venue is set up to support these different hosting scenarios.

Thoughts from Cloud Workshop 2019

It’s a couple of months since the workshop, which is plenty of time to let the dust settle and reflect on the content. You can find most of the presentations from the workshop by following the links from the programme.

As I mentioned in my introduction at the meeting, I’ve noticed a transition over the past year in the adoption and application of cloud, and this was evident in the abstracts submitted for this meeting. There are signs of maturing: in the first couple of annual workshops we held, cloud usage was very much at the experimental stage, with early forays into private cloud deployment and first pilots testing out public cloud capability. This year there were good examples of sophisticated application of cloud technology, whether cloud-native applications like Chris Woods’ use of serverless to dynamically trigger the provision of clusters for batch computing, or in-depth demos of DevOps tooling from StackHPC and others.

Late last year, the Cloud WG ran a smaller technical meeting with no formal agenda, in ‘unconference’ style. This gave us an opportunity to do more of a deep dive into DevOps technologies. The positive feedback we received reflected the value of networking and learning together with peers. Some of that spirit continued at this year’s workshop with the afternoon demo session. It was great to have this in-depth technical input alongside higher-level presentations, whether overviews of projects or talks around challenge areas such as policy. João Fernandes presented the OCRE project, which builds on the work of the GÉANT IaaS Framework and is important for establishing agreements with public cloud providers so that the research community can access their resources.
On the topic of policy, the debate continued around the relative merits of public cloud versus on-premise hosting. Cliff Addison (University of Liverpool) highlighted the tensions between quantifying benefits, budgeting at scale and maintaining portability between cloud vendors. Owen Thomas (Red Oak Consulting) challenged assumptions about traditional HPC provision and made the case for assessing overall value, not just cost, when making comparisons with public cloud. Andrew Jones (NAG) argued against absolutes, given the complexities of choosing hosting for any given application. Migration to cloud can present enormous challenges, as Tony Wildish’s presentation illustrated: drawing on EMBL-EBI’s experience, he provided a walkthrough of different approaches for migrating legacy code developed for on-premise systems so that it operates efficiently on cloud. Elsewhere in the meeting, the HEPCloud and UKAEA presentations showed how hybrid models can be built up to select the required computing resources from on-premise and public cloud. HEPCloud in particular illustrated the benefit of overspilling from research infrastructure to public cloud in order to meet peaks in demand.

CRC Canada is an example of a complete public cloud solution architected from the ground up. What is interesting here are the organisational and cultural shifts needed to support that model, in particular the setting up of dedicated effort for auditing and accounting when moving to a consumption-based approach to billing. Pangeo, presented by the Met Office Informatics Lab, demonstrates another cloud-enabled solution, but what is of particular interest is the formation of a collaboration bringing together open-source solutions to make a platform that is cloud-ready. At its core is a virtual research environment built largely on Jupyter and Dask, together with Kubernetes and deployment glue to make it cloud-agnostic. This kind of solution fits data analytics, where typically datasets have been imported into a cloud environment and manipulated into a form that is analysis-ready. Use of BinderHub, shown with Pangeo and in Sarah Gibson’s demo (Turing), allows infrastructure to be dynamically provisioned and scientific workflows to be conveniently shared via Jupyter Notebooks.
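
To make the pattern concrete, here is a minimal sketch of the Pangeo-style workflow: analysis-ready data held in object storage is opened lazily with xarray and processed in parallel with Dask. This is an illustration of the general approach rather than code from any of the talks; the bucket path and variable name are hypothetical placeholders.

```python
# Minimal sketch of the Pangeo-style analysis pattern: cloud-hosted,
# analysis-ready data opened lazily with xarray and processed in
# parallel with Dask. The bucket path and variable name below are
# hypothetical placeholders, not taken from any of the talks.
import fsspec
import xarray as xr
from dask.distributed import Client

client = Client()  # a local cluster here; on Pangeo this would be a Dask cluster on Kubernetes

# Open a (hypothetical) Zarr store held in object storage without downloading it
store = fsspec.get_mapper("gs://example-bucket/analysis-ready/temperature.zarr")
ds = xr.open_zarr(store)

# Build the computation lazily, then trigger it across the Dask workers
monthly_mean = ds["temperature"].resample(time="1M").mean()
result = monthly_mean.compute()
print(result)
```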

In general though, examples of long-term hosting of large volumes of research data on public cloud are still absent. If there’s a pattern in the sample of submissions for the workshop, it’s one of using public cloud for compute rather than data storage: continued use of on-premise systems for long-term hosting of data, with some bursting to public cloud for batch computing. Cloud is utilised as a means to obtain an ephemeral computational resource: set up an environment, stage data, perform the calculation, get the results and tear it all down again. Even so, there appeared to be an increased awareness of the challenges of data hosting with cloud in some of the questions and discussion in the sessions. These included issues around hybrid, public and multi-cloud scenarios. For example, if data is hosted in one cloud, how can it be accessed readily by a client with an existing presence on another cloud? There are definite signs of progress in the community, but clearly there are still big challenges for cloud to be more fully utilised for research workloads.

Technical Workshop, 20 November

In addition to our main annual workshop in February next year, we’re also running a smaller pre-meeting this coming month in central London. The goal of this event is to provide a space specifically for developers, researchers and DevOps engineers to take a deep dive into technologies for cloud, share their own experience and learn from each other. We’ve deliberately avoided setting a fixed timetable so that we can source topics from attendees on the day. More details and booking information for the day are here:

http://bit.ly/register-cloud-wg-tech-2018

Make sure to bring your laptop 🙂

Save the date 12 Feb 2019 – next Cloud Workshop

We will be holding our 4th annual workshop early next year, on 12th February 2019. We’re pleased to be back at our familiar venue, the Francis Crick Institute, in central London. Please save the date!

In past years we’ve had a great range of speakers, from public cloud companies and major research institutes to individual researchers reporting on how they are exploiting cloud computing to meet their research goals. More details to follow soon.

RCUK Cloud Workshop 2018

The workshop is now just a few days away. You can see the programme for the day below. We have a broad range of contributions from across the research community and also good representation from public cloud providers. This year we are focussing on international collaborations for our plenary session. Other sessions cover a mix of topics, from applications where cloud adoption has reached a mature state to specific technical and policy-related challenges that still need to be addressed.

Programme

8th January, Francis Crick Institute, 1 Midland Road, London NW1 1AT

09:00 Arrivals, registration, refreshments (Gallery Area)
09:30 Introduction

(Auditorium 2)
Philip Kershaw, RCUK Cloud WG Chair

09:45 Session 1 – International Collaborations

(Auditorium 2)
Chair: Steven Newhouse

Future Science on Future OpenStack: developing next generation infrastructure at CERN and SKA – Stig Telfer, StackHPC
EOSC-hub: overview and cloud federation activities – Enol Fernández, EGI
Public Clouds, OpenStack and Federation – Ildikó Vancsa, OpenStack Foundation
Question time
10:45 Break (Gallery area)
11:15 Session 2a – Technical Challenges – Containers, portability of compute, data movement

(Auditorium 2)
Chair: Adam Huffman

Running a Container service with OpenStack/Magnum – Spiros Trigazis, CERN
Large scale Genomics with Nextflow and AWS Batch – Paolo Di Tommaso, Centre for Genomic Regulation; Brendan Bouffler, AWS
Best practice in porting applications to Cloud – Dario Vianello, EMBL-EBI
Demystifying Hybrid Cloud with Microsoft Azure – Mike Kiernan, Microsoft
Question time

Session 2b – Practical challenges

(Auditorium 1)
Chair: Martin Hamilton

Aerospace and Cloud – Leigh Lapworth, Rolls Royce
Processing patient identifiable data in the cloud – what you need to consider technically and process wise to keep your data safe – Peter Rossi, UKCloud
Jisc ExpressRoute Circuit Service – David Salmon and Gary Blake, Jisc
The Janet End-to-End Performance Initiative – Duncan Rand, Jisc
Question time
12:30 Lunch (Gallery area)
13:30 Session 3a – Innovative applications, usability and training

(Auditorium 2)
Chair: Steve Hindmarsh

Visualizing Urban IoT data using Cloud Supercomputing – Nick Holliman, Newcastle University
Accelerate time-to-insight with a serverless big data platform – Hatem Nawar, Google Cloud
Azure at the Turing – Martin O’Reilly, Turing Institute
HPC – There’s plenty of room at the bottom – Mike Croucher, University of Sheffield
Question time

Session 3b – Virtual Laboratories and Research Environments

(Auditorium 1)
Chair: Philip Kershaw

CLIMB – Thomas Connor, Cardiff University / Nick Loman, Birmingham University
CyVerse UK: a Cloud Cyberinfrastructure for life science – Alice Minotto, Earlham Institute
EBI Cloud Portal – Jose Dianes, EMBL-EBI
Data Labs: A Collaborative Analysis Platform for Environmental Research – Nick Cook / Josh Foster, Tessella
Question time

Breakout session

(Seminar room)

ResOps training – Erik van den Bergh, EMBL-EBI
14:45 Break (Gallery area)
15:15 Session 4a – Technical Challenges – batch compute on cloud

(Auditorium 2)
Chair: David Colling

Matching cloud technologies to Theoretical Astrophysics and Particle Physics applications – Jeremy Yates, UCL
Hybrid HPC – on-premise and cloud – Wil Mayers, Alces Flight
Running HPC Workloads on AWS using Alces Flight – Igor Kozin, ICR
OpenFOAM batch compute on AWS – James Shaw, Reading University
Question time

Session 4b – Technical Challenges – Storage

(Auditorium 1)
Chair: Simon Thompson

Semantic Storage of Climate Data on Object Store – Neil Massey, NCAS / Centre for Environmental Data Analysis, STFC
Accessing S3 from FUSE – Jacob Tomlinson, Informatics Lab
OpenStack Manila – John Garbutt, StackHPC
Providing Lustre access from OpenStack – Thomas Stewart / Francesco Gianoccaro, Public Health England
Implementing medical image processing platform using OpenStack and Lustre – Wojciek Turek, Cambridge University
Question time
16:30
16:35 Final Plenary
(Auditorium 2)
Feedback, next steps, cloud strategy for research community, sum-up
17:00 Reception (Gallery area)
18:00 Close

 

Cloud Workshop

The Francis Crick Institute

It’s been a few months since our November workshop, so there’s been some time to digest and reflect on the common themes emerging. Having attended a couple of other conferences and workshops from my own community (AGU and AMS) since then, it’s been interesting to compare.

Firstly, it was great to see such a variety of application areas represented. For this, our second annual workshop, we opened a call for abstracts and this made such a difference. There was a great response, with life sciences having the edge over other domain areas. We had 160 register and 120 attend on the day. It was fantastic to have the Crick as a venue; it worked really well.

The first session looked at applications of hybrid and public cloud.   Two really interesting use cases (Edinburgh and NCAS, NERC) looked at trying out HPC workloads on public cloud.  This raised issues around comparative performance and costs between public cloud and on-prem HPC facilities.

On AWS, Placement Groups allow instances to be placed close to one another to improve inter-node communication for MPI-based workloads. This showed comparable performance with Archer (the UK national supercomputer) for smaller workloads, but there was clearly some limit: performance tailed off as the number of nodes increased, whereas Archer continued to scale linearly. This tallies with what I saw anecdotally at the AMS conference, where there seemed to be increasing uptake of public cloud for Numerical Weather Prediction jobs (which need MPI), but typically for smaller-scale workloads that can stay within the envelope of the node affinity features available.
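
For readers less familiar with the feature, the sketch below shows, purely as an illustration and not drawn from the talks, how a cluster placement group can be requested with boto3 and MPI instances launched into it. The AMI ID, instance type and key pair name are hypothetical placeholders.

```python
# Illustrative sketch: request a 'cluster' placement group so that MPI
# instances land close together on the network, then launch nodes into it.
# The AMI ID, instance type and key pair name are hypothetical placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-2")

# The 'cluster' strategy packs instances into a low-latency group
ec2.create_placement_group(GroupName="mpi-demo-pg", Strategy="cluster")

# Launch the MPI nodes into that placement group
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # hypothetical AMI
    InstanceType="c5.18xlarge",
    MinCount=4,
    MaxCount=4,
    KeyName="my-keypair",              # hypothetical key pair
    Placement={"GroupName": "mpi-demo-pg"},
)
```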

Another theme was portability: what approaches can be used to engineer workloads so that they can be easily moved between providers? Andrew Lahiff from STFC presented a very different use case, showing how container technologies can be used for Particle Physics workloads, where the focus is high-throughput rather than HPC requirements and which are therefore much more amenable to cloud. This work has been done as part of a pilot for the Cloud Working Group to specifically investigate how containers and container orchestration technology can be used to provide an abstraction layer for cloud interoperability. A really nice slide showed Kubernetes clusters running on Azure and Google cloud managed from the same command-line console app. Dario Vianello’s talk (EMBL-EBI) showed an alternative approach, using a combination of Ansible and Terraform to deploy workloads across multiple clouds.
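
The same idea can be sketched with the Kubernetes Python client: by switching kubeconfig contexts, clusters hosted on different clouds can be driven through an identical API. This is a minimal illustration of the general approach rather than the tooling demonstrated in the talk; the context names are hypothetical.

```python
# Minimal sketch (not the setup shown in the talk) of managing Kubernetes
# clusters on different clouds from one client by switching kubeconfig
# contexts. The context names "aks-demo" and "gke-demo" are hypothetical.
from kubernetes import client, config

for context_name in ["aks-demo", "gke-demo"]:
    # Load credentials for the given context from the local kubeconfig
    config.load_kube_config(context=context_name)
    v1 = client.CoreV1Api()

    # The same API calls work regardless of which cloud hosts the cluster
    nodes = v1.list_node()
    print(context_name, [n.metadata.name for n in nodes.items])
```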

Microsoft’s Kenji Takeda presents on recent Azure developments

It was great to have talks from the hyper-scale cloud providers AWS, Azure and Google. The scale in hyper-scale is, as ever, impressive, as is the pace of change in technology: it was very interesting to see Deep Learning applications driving the development of custom hardware such as TPUs and FPGAs. Plans underway to host data centres in the UK will ease uptake. The OpenStack Foundation and Andy McNab’s talks showed examples of federation across OpenStack clouds.

In the private cloud session, Stig Telfer gave a nice illustration of network performance for VMs, showing how a number of aspects of virtualisation can be changed or removed to progressively improve network performance towards line rate. Alongside the talks on private cloud, the parallel session looked at legal, policy and regulatory issues, a critical area for adoption of public cloud. Steven Newhouse gave some practical insights from a recent cloud tender for EMBL. There is clearly a need for further work around these issues so that the community can be better informed about its choices. This is something that the working group will be taking forward.

For this workshop, we experimented with an interactive session, bringing together a group of around 20 delegates to work on some technical themes agreed ahead of time, including bulk data movement and cloud hosting of Jupyter notebooks. There was plenty of useful interaction and discussion, but we will need to look at the networking provision next time to ensure groups can get on with technical work on the day.

We discussed next steps in the final session. There is clear interest in taking particular areas forward from the meeting: focus groups on technical areas like HTC and the use of parallel file systems with cloud, or groups organised around specific domains within the research community. Training also figured, in the form of a cloud carpentry course so that researchers can more readily get up and running with cloud. Looking forward, in each of these cases we’re looking for discrete activities with an agreed set of goals and something to deliver at the end. Where possible we’re seeking to support relevant work that is already underway, and to initiate new work where there are perceived gaps. We will be looking at running smaller workshops targeted at specific themes in the coming months as a means to engage and disseminate some of this work.

Phil Kershaw, STFC & Cloud-WG chair