Attention!
The content on this site is a materials pilot. It represents neither changes to existing policy nor pending new policies. THIS IS NOT OFFICIAL GUIDANCE.
Course overview
You made it to Course two! By the end of this course, we'll have covered all of the highest-priority rows of the Rubric.
| Health indicator | Topic |
| --- | --- |
| Outcomes-orientation | Measuring and metrics |
| State capacity | All the managements |
| Admin | Course two - Check in |
| Procurement flexibility | Assuring quality |
| Iterative development | Consider the user |
| Iterative development | Testing, testing, 1-2-3 |
| Admin | Course two - Report out |
What this course covers
In this second course, you will engage with more elements of the project Health Rubric. The goal is to build a deeper understanding of software projects and their successful management. By completing courses one and two, you'll have covered all of the highest-priority primary indicators of the Health Rubric.
Outcomes-orientation - Measuring and metrics
Understanding outcomes involves measuring progress, which can be notoriously difficult with software development. In this lesson, we explore what it means to assess progress, and whether a team can demonstrate progress against a set of reasonable metrics and baselines.
Ask to see how teams measure their progress against program, policy, and/or baseline metrics.
- Bad: Explanations of how the state will measure progress or program impact are incongruent or missing altogether.
- Meh: Teams consistently articulate the impact they are targeting but do not have metrics or baselines.
- Good: Teams consistently articulate their target metrics and can demonstrate how they are doing against baselines.
State capacity - All the managements
A successful software project will have many people involved, and those people will generally play different roles. In this lesson, we explore the kinds of management we might see in a long-running software project and how people in those roles can contribute to (or detract from) the success of a project.
Ask how and when the team utilizes different types of expertise.
- Bad: Teams are not able to be staffed, are only staffed with one skill set/perspective (ex. only a PM), and/or do not include program expertise.
- Meh: Teams are staffed with project and program expertise and are able to pull in other experts when needed.
- Good: Teams are staffed with project, program, technical, and other expertise.
Procurement flexibility - Assuring quality
Quality monitoring is an integral part of government contracting. For quality monitoring to be truly effective, it should speak to the behaviors that will lead to a successful delivery. This is a critical step towards acknowledging that software is not a product one buys once, but a service that must be shaped and supported on an ongoing basis.
Ask how the team implements quality monitoring through their contracts.
- Bad: The state has little or no expectations for monitoring quality in their contracts.
- Meh: The state has vague expectations around monitoring quality in their contracts and mostly leaves monitoring to development teams.
- Good: The state can demonstrate how they monitor quality in their contracts and work with teams to make sure monitoring is implemented.
Iterative development - Consider the user
Testing is a sometimes misunderstood yet critical aspect of a long-lived software project, especially one that aims to produce high-quality software in an iterative manner. This lesson speaks to the role that actual users play in the ongoing testing of a major software project.
Ask how the state incorporates the end user during the development & testing processes.
- Bad: The team only collects input from end users at the end of the process.
- Meh: The team collects feedback at the beginning of the process, but does not validate they are meeting user needs during development.
- Good: The team regularly collects feedback and tests to ensure the product improves the experience for end users.
Iterative development - Testing, testing, 1-2-3
What types of testing should a software project undergo and what does testing a project even mean? In this conversation, Princess Ojiaku, Matt Jadud, and Heather Battaglia talk about what kinds of tests are important and when they should happen. This conversation includes a guest appearance by SHOUTYBOX, the screen reader that lost its way. Bonus: a demo video of a screen reader.
Ask how the state approaches security, performance, and migration testing.
Ask how project leads interact with the testing process.
- Bad: The team cannot answer what types of testing they are doing, only that they test at the end of the process.
- Meh: The team can describe their testing approaches but testing is done by a siloed team.
- Good: The team can demonstrate their testing approaches. Testing and development are done by the same team.