Annapolis Developer Summit
OpenCHAMI developers will gather in Annapolis, MD at The Westin on April 9th and 10th for our first in-person gathering specifically for developers. Remote participation will also be accommodated. Over two days, we’ll set aside time to review where our software stands today and host workshops dedicated to User Interfaces, Developer Experience, Software Quality Standards, and Testing Requirements. We encourage anyone with an interest in these subjects to fill out our free registration form to receive further logistical and agenda updates. OpenCHAMI is not charging any registration or attendance fee.
A room block will be available at the Westin Annapolis at a government per diem rate. We will share the booking link with anyone who fills out the form.
Please register by March 15th by clicking here. This will help us get a head count for in-person vs. virtual attendance and accommodate any dietary or mobility restrictions.
User Interfaces for HPC Sysadmins
The authors of Manta and several other HPC CLIs will discuss what kinds of tools are necessary to manage a cloud-like HPC system. We’ll be working toward a set of principles that will guide us as we expand the existing tools and better integrate with sysadmin workflows.
Software Quality Standards and Developer Experience for OpenCHAMI
As more developers join the project, what rules and norms should we adopt and how should we communicate them? What do we want the experience to be for a first-time committer? How can each site make progress separately while ensuring that our collection of software meets our shared expectations?
CI/CD for HPC System Management
One of the ways that software quality can be asserted is through continuous integration and testing. For a system like ours, we need to identify which areas we want to test in isolation and which use cases need to be fully tested end-to-end. HPE maintains a proprietary continuous test system for CSM. NERSC and LANL are both proceeding with an automated testing procedure rooted in the upcoming Supercomputer Institute functionality. Where should we go from here? How can we give ourselves a testing score or testing target for our modular management system?