Approaches to Implementation

When considering a revision or redesign of a quality rating and improvement system (QRIS), a pilot can be a prudent way to test QRIS elements before moving to full implementation. This section covers issues to consider when planning for and conducting a QRIS pilot. It also describes how some states used a phased-in approach as an alternative to full implementation when launching a QRIS.

Conducting Pilot Programs

States may implement a pilot to examine the efficacy, sustainability, and applicability of new QRIS features, a QRIS redesign, or an entirely new system (in a state without a QRIS) before launching statewide. Some possible reasons to engage in a small-scale pilot or field test include the ability to do the following:

  • Target available funding to build support. Stakeholders may feel it is more appropriate to start slowly and produce positive results on a smaller scale as a way to garner support for statewide implementation.
  • Allow time for implementation approaches to be tested and refined before large numbers of programs are involved in the process. By investing the time and effort to conduct a pilot, a state can benefit from customer and community feedback that informs revisions to the QRIS process.
  • Evaluate aspects of the system, such as rating scales or professional development supports. For example, a state may be considering different rating scales and may want to compare them in a controlled way rather than launch something on a larger scale that needs later revision.
  • Assess potential program participation and capacity for implementing a new QRIS statewide. A pilot can allow for better budget estimates and planning processes.

According to the QRIS Compendium Fact Sheet: History of QRIS Growth Over Time (2017), by the National Center on Early Childhood Quality Assurance, 4 QRIS (10 percent) were in a pilot phase in 2016, in addition to the 41 fully operational systems.

Many factors influence how and where to conduct a QRIS pilot, including the availability of funding and whether the features to be tested in the pilot are best examined in a specific area of the state or with one type of program. When piloting a new QRIS before going statewide, some states started with a limited number of program participants, a selected geographic area, or particular program types. When making decisions about how to target the pilot, it is important to consider the context and questions of interest. A state assessing the climate and overall response to a QRIS may pilot with a limited number of programs but recruit participants across program types and geographic regions. In contrast, a state interested in understanding the resources needed to implement the rating process (including observational assessments) may pilot with one program type in one or two geographic regions.

For example, if a new coaching model is being tested for a QRIS, a state may choose to pilot the model in a selected geographic area where coaches are already trained as a way to minimize start-up costs. The focus of the pilot would be on providers’ responses to the coaching and on determining its effectiveness. If, however, the state is more interested in understanding the feasibility of implementing a coaching model (learning whether coaches can be trained to deliver the model with fidelity), it may instead conduct the pilot in multiple regions statewide and focus on the process of recruiting and training coaches.

The length of time a state will maintain its QRIS pilot phase is often determined by the amount of financial resources available; stakeholder, participant, and community support; and whether the goals for the pilot have been met. Pilots of QRIS features or a redesign can grow slowly by adding new communities or additional provider types. Pilots can last from a few months (Pennsylvania) to 1 or 2 years (Delaware, Kentucky, Missouri, and Ohio) to multiple years (Indiana and Virginia).

The goals the state and its partners set for the pilot will influence what data will be collected and by whom, how it will be recorded, and how it will be analyzed and used for adjustments and refinements. QRIS standards are generally informed by and aligned with existing standards such as licensing, national accreditation, Head Start, prekindergarten, or state early learning guidelines. The pilot is often used as a way to test a major change or a redesign. The following can be tested in a pilot: procedures for program application, rating processes, documentation methods, level assignments, the provision of quality improvement supports, and ways to communicate outcomes. Equity in the QRIS, among participating programs and the children and families they serve, may also be addressed in a pilot.

The following are the types of data that can be collected in a pilot (one way to structure such records is sketched after the list):

  • Participation rates (overall rates, as well as rates by facility type, size, level, and geographic location);
  • Characteristics of children served (race, income, subsidy status, home language, special needs) in the QRIS programs;
  • Percentage of providers that are able to meet various quality criteria (such as degree requirements);
  • Usage rates for incentives and support services, such as professional development or training opportunities, technical assistance supports, or financial incentives;
  • Number and percentage of children receiving subsidies served by participating providers;
  • Program participation rates at varying levels of quality;
  • Baseline data from assessment tools;
  • Parent/consumer awareness of QRIS; and
  • Feedback from providers on clarity and ease of process and forms/documents.
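To make these categories concrete, here is a minimal sketch of how a pilot team might structure a per-program record and compute a simple summary statistic. It is only an illustration in Python; every field and function name is hypothetical, and a real pilot would align its fields with the state’s actual data dictionary and reporting requirements.

```python
from dataclasses import dataclass, field
from typing import Optional

# Hypothetical record for one program participating in a QRIS pilot.
# All field names are illustrative, not drawn from any state's system.
@dataclass
class PilotProgramRecord:
    program_id: str
    facility_type: str            # e.g., "center" or "family child care home"
    region: str                   # geographic location
    rating_level: Optional[int]   # assigned QRIS level, if rated
    enrolled_children: int
    subsidy_children: int         # children receiving subsidies
    home_languages: list[str] = field(default_factory=list)
    met_degree_requirement: bool = False        # example quality criterion
    used_technical_assistance: bool = False     # incentive/support usage
    baseline_assessment_score: Optional[float] = None  # e.g., an ERS score
    provider_feedback: str = ""   # clarity/ease of process, forms, documents

def subsidy_participation_rate(records: list[PilotProgramRecord]) -> float:
    """Share of enrolled children across the pilot who receive subsidies."""
    enrolled = sum(r.enrolled_children for r in records)
    subsidized = sum(r.subsidy_children for r in records)
    return subsidized / enrolled if enrolled else 0.0
```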

Data can be collected in a variety of ways and from a variety of sources. The centers and homes involved in the pilot can provide critical feedback through self-assessments, self-reporting, and documentation. The staff involved in managing the pilot can collect feedback through interviews, observations, and document reviews. Staff can collect information about the following: the clarity of explanatory documents, standards, and the application process; sources of evidence or documents to include or accept; the amount and complexity of paperwork; time required to complete various requirements; and availability/accessibility of appropriate training opportunities.

It is important to consider a state’s capacity to gather appropriate and sufficient data to assign accurate ratings, redesign standards, implement procedures, or develop or change providers’ supports. Gathering data that seems interesting is only a worthwhile exercise if it is used at some point to inform the system. Otherwise, the process can become costly and frustrating, and can be perceived as unresponsive. Many states have asked researchers to evaluate their QRIS pilots. Researchers can be helpful in selecting the most appropriate data elements for monitoring and implementation as well as for process and formative evaluations.

Once a state and its partners determine they are ready to move from a pilot to statewide implementation, it is important to develop a detailed plan and timeline for implementation. An analysis of available funding, along with each partner agency’s capacity to implement and manage the system, will also be critical factors in this process.

Most states subcontract the management of some QRIS components, such as technical assistance and coaching or onsite data collection. States may have existing systems in place, such as professional development systems, that can be leveraged to support the QRIS and the new features added as a result of the pilot. States may add to the scope of work in existing contracts with child care resource and referral networks and postsecondary institutions to support QRIS activities. States may also use a request for proposals process to select and engage organizations in implementation.

As a state makes changes to its QRIS based on a pilot, it is critical to consider the implications for consumer education and a QRIS website. It may be necessary to communicate changes to the system and the possibility that program ratings may change as a result of the redesign or new QRIS feature. Additional information on communicating with families is available in the Consumer Education section of the QRIS Resource Guide.

A pilot or field test is not always feasible. If a state chooses to move forward with changes to the QRIS or implementation of a new system without piloting, it is critical to engage providers and other partners and stakeholders in a strategic implementation process. Although much information can be gleaned from research and lessons learned in other states, it is important to remember that each state is unique. A state must consider its landscape, history, infrastructure, and overall early and school-age care and education environment, and adapt the information to its particular set of circumstances. Data collection and monitoring during implementation are vital activities. States can engage an evaluation partner or use internal resources to administer web surveys or to conduct focus groups with parents and programs to supplement QRIS administrative data.

QRIS Compendium Fact Sheet: History of QRIS Growth Over Time (2017), by the National Center on Early Childhood Quality Assurance, notes that, as of 2016, 12 QRIS (29 percent) were rolled out statewide without first going through a pilot phase.

Phasing In Programs

A phased-in approach to a redesign or a new QRIS may be necessary due to limited funding and staff resources or a lack of broad support. However, policymakers should be aware that anticipated changes in program quality may not occur with incremental implementation, as each element of a QRIS depends on the others. States will need to consider what resources and supports are needed to increase participant quality while also addressing gaps in existing capacity or infrastructure. A phased-in strategy requires careful consideration of which approaches to administration, monitoring, provider supports, and incentives are most likely to be cost-effective in terms of improving quality, ensuring accountability, and increasing participation.

It is also important to realize that a limited implementation strategy is only the first step toward a comprehensive, statewide QRIS. The value of expansion to a statewide QRIS is that it allows all parents and providers to benefit, provides a consistent standard of measurement, and improves opportunities for resource realignment. Planning for full, statewide implementation and the projection of total costs should be part of the process, even when a phased-in approach is necessary.

Making decisions about how and when to phase in implementation of a QRIS can be guided by the cost projection process. The Provider Cost of Quality Calculator (PCQC) described in the Cost Projections and Financing section of the QRIS Resource Guide can help with projecting costs at scale. It can also help guide decisions regarding where and when to reduce costs, if necessary. It is possible to develop multiple cost projections for a statewide program using the PCQC. Projections can be made for strategies such as the following (a back-of-envelope comparison is sketched after the list):

  • A comprehensive plan that anticipates full funding for the next 5 years for each component of a fully implemented QRIS;
  • A midrange or scaled back plan to get started and build support for future expansion (e.g., limited participation, reduced provider incentives); and
  • A basic program with fewer provider supports and incentives and fewer accountability measures.
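The PCQC is the tool for producing real projections; the sketch below is only a back-of-envelope illustration of how three such strategies might be compared over a 5-year horizon. All component names and dollar figures are invented for illustration and carry no relationship to any state’s actual costs.

```python
# Back-of-envelope comparison of three hypothetical implementation
# strategies over a 5-year horizon. Component names and dollar amounts
# are invented; real projections would come from a tool such as the
# PCQC and state-specific cost data.
YEARS = 5

strategies = {
    "comprehensive": {  # full funding for every component
        "ratings_and_monitoring": 2_000_000,
        "provider_incentives": 3_500_000,
        "coaching_and_ta": 2_500_000,
        "data_and_evaluation": 1_000_000,
    },
    "midrange": {  # limited participation, reduced provider incentives
        "ratings_and_monitoring": 1_500_000,
        "provider_incentives": 1_750_000,
        "coaching_and_ta": 1_500_000,
        "data_and_evaluation": 750_000,
    },
    "basic": {  # fewer supports, incentives, and accountability measures
        "ratings_and_monitoring": 1_000_000,
        "provider_incentives": 750_000,
        "coaching_and_ta": 500_000,
        "data_and_evaluation": 250_000,
    },
}

for name, components in strategies.items():
    annual = sum(components.values())
    print(f"{name}: ${annual:,}/year, ${annual * YEARS:,} over {YEARS} years")
```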

In addition to projecting the cost of various implementation strategies, several other factors may influence decisionmaking about when to fully implement a QRIS. These include the following:

  • Rate at which changes are made to QRIS standards or criteria. Changing them too quickly after implementation may be difficult for providers and could erode their trust in the system and their feelings of success and confidence. Generally, states revise their QRIS approximately every 3 to 5 years. Small changes can be made annually, especially changes that are responsive to participant feedback.
  • Financial incentives and supports. Making a range of financial incentives and provider supports available early on is likely to increase provider participation. Limiting or targeting incentives and supports is likely to slow participation growth.
  • Level of participation. Early and high levels of participation will affect how people view the success and value of the program and are likely to help build support for increased funding.
     
Alabama Piloted Quality STARS

Alabama worked with a group of state-level stakeholders for a little over 3 years to develop a five-star rating system, Quality STARS, for child care centers. In October 2013, Alabama launched an 8-month pilot conducted by the University of Alabama. The pilot included 50 participating centers that met the licensing requirements. Focus groups assessed the effectiveness of the new system, and ratings were given as feedback to the participating centers. The ratings were based on information submitted by each center and two onsite assessments: the first was a 3- to 4-hour assessment of the center’s leadership and management practices using the Program Administration Scale, and the second was an Environment Rating Scales assessment of randomly selected classrooms. Programs that volunteered to participate after the pilot ended and Quality STARS went live received ratings that were valid for 3 years. Alabama began piloting a model for family child care homes in 2017.

Participation Targeted in the Rollout of a QRIS in Arizona

Six hundred programs throughout the state were selected to participate in the first phase of Arizona’s Quality First: four hundred center-based programs and two hundred family child care homes, representing roughly 10 percent of the state’s centers and 5 percent of its homes. The first step in the selection process was to use the percentage of regulated settings (licensed and certified centers and homes) by region to divide the available slots equitably among regions, thus reducing competition across geographic regions and between rural and urban areas. Then the following selection criteria were applied, each with a point value reflecting First Things First and state agency priorities:

  • Percentage of children enrolled in child care subsidy (in three tiers, with the higher percentage earning higher priority points)
  • Percentage of children enrolled who qualify for free and reduced lunch
  • Whether the program was a full-year program
  • Whether the program was a full-day program
  • Whether the program served children on weekends or evenings
  • Whether the program had never been accredited, or had not been accredited in the last 3 years
  • Whether the program had never participated, or had not participated in the last 3 years, in any of the state’s quality improvement initiatives (such as a self-study program funded through Child Care and Development Fund monies or a United Way Hands on Quality initiative)
  • Whether the program served infants or toddlers

These criteria were used to rank applicants within a region from highest to lowest point value, as sketched below.
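Since the actual point weights and tier cutoffs are not given above, the following is a hypothetical sketch of the two-step process: slots are first allocated to regions in proportion to their share of regulated settings, and applicants are then ranked within each region by priority points. All function names, point values, thresholds, and region figures are illustrative assumptions, not Arizona’s actual weights.

```python
# Sketch of the two-step selection process described above:
# (1) divide available slots among regions in proportion to each
#     region's share of regulated settings, then
# (2) rank applicants within each region by priority points.
# Point values and thresholds below are hypothetical.

def allocate_slots(total_slots: int, settings_by_region: dict[str, int]) -> dict[str, int]:
    """Divide slots among regions in proportion to regulated settings."""
    total_settings = sum(settings_by_region.values())
    # Note: rounding can leave the total slightly off; a real process
    # would reconcile the remainders.
    return {
        region: round(total_slots * count / total_settings)
        for region, count in settings_by_region.items()
    }

def priority_points(applicant: dict) -> int:
    """Sum hypothetical priority points for one applicant."""
    points = 0
    # Subsidy enrollment in three tiers; higher tier earns more points.
    points += {1: 1, 2: 2, 3: 3}.get(applicant.get("subsidy_tier", 0), 0)
    if applicant.get("free_reduced_lunch_pct", 0) >= 50:  # hypothetical cutoff
        points += 1
    for flag in ("full_year", "full_day", "weekend_or_evening",
                 "serves_infants_or_toddlers"):
        if applicant.get(flag):
            points += 1
    # Priority for programs without recent accreditation or quality
    # improvement participation.
    if not applicant.get("accredited_in_last_3_years", False):
        points += 1
    if not applicant.get("quality_initiative_in_last_3_years", False):
        points += 1
    return points

def select_in_region(applicants: list[dict], slots: int) -> list[dict]:
    """Rank applicants from highest to lowest points; keep the top N."""
    return sorted(applicants, key=priority_points, reverse=True)[:slots]

# Example: 600 slots split across three hypothetical regions.
slots = allocate_slots(600, {"Maricopa": 1200, "Pima": 500, "Rural": 300})
```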

Indiana Implemented First QRIS at the Local Level

The following timeline highlights Indiana’s approach to launching its Paths to QUALITY QRIS:

  • The Paths to QUALITY initiative was launched in 2000 by the Early Childhood Alliance in Allen County, a family support organization that offers child care resource and referral services.
  • One year later, the initiative expanded to four surrounding counties served by the Alliance, with incentives secured through local community foundations.
  • In 2005, 4C [Community Coordinated Child Care] of Southern Indiana implemented Paths to QUALITY in 11 counties with the support of a local community foundation.
  • In May 2006, the Bureau of Child Care, Indiana Family and Social Services Administration convened a state child care quality rating system advisory group and began considering the feasibility of implementing a statewide QRIS.

  • In March 2007, the Bureau of Child Care and the Early Childhood Alliance signed a license agreement to adopt Paths to QUALITY as the state’s QRIS.

Montana’s QRIS Redesigned for Expansion

Montana’s Star Quality Child Care Rating System started operating in 2002; an inclusive and broad-based participatory review began in late 2007. The Stars redesign process became the state’s strategic plan for all early care and education, not just subsidized child care. The goal was to have the professional development and infrastructure support to help providers increase quality whether they were formally enrolled in Stars or not. The field test of the new system began in June 2010. In May 2012, at a STARS event for directors, participants voted unanimously to extend the field test. The Early Childhood Services Bureau (ECSB) received unexpected additional funding in early 2013, which allowed it to plan and implement phase II of the field test. Both the center matrix and the family and group matrix were updated for phase II. All changes resulted directly from provider and coach feedback, as well as data gathered by the ECSB. The original Star Quality system had three levels: licensing, one level above licensing, and national accreditation. The redesign focused on adding gradual steps and increasing supports to encourage participation. The newer Best Beginnings STARS to Quality system had five levels and included the following:

  • Research-based criteria
  • Workforce support through the Montana Early Care and Education Career Path, encouraging professional development along a continuum of training
  • Maintaining quality over time, with renewal based on validation of the level and a program improvement plan
  • Monetary incentives for continual program improvement based on level achieved
  • Resources and support to move through the levels provided by child care resource and referral agencies, the Early Childhood Project, and other state-determined resources

Programs using the Pyramid Model incorporated the following program assessment tools as part of their coaching experience: Environment Rating Scales (ERS), Program and Business Administration Scales (PAS and BAS), Center on Social and Emotional Foundations for Early Learning (CSEFEL), Teaching Pyramid Observation Tool (TPOT), and Pyramid Infant Toddler Observation Scale (TPITOS).

New York Field Test Informed Revisions

A field test of QUALITYstarsNY, coordinated by the NY Early Childhood Professional Development Institute, City University of New York, was completed in 2010. The goals of the field test were to:

  • Evaluate the ease and efficiency of the process of QUALITYstarsNY’s application, documentation, and assessment system under a variety of community conditions (high/low presence of quality improvement supports, geography, program setting types, demographics of children)
  • Validate the standards and the rating scale, i.e., determine whether the point weighting is accurate and whether the star ratings distinguish levels of quality
  • Demonstrate the value/use of community supports for quality improvement
  • Gather information about what kinds of improvements programs plan to make to move up in the system; this was done to inform content and the nature of later support efforts

An independent evaluation was conducted as part of the field test to assess the validity and reliability of the draft program standards. The evaluation data informed decisions necessary for the statewide implementation of QUALITYstarsNY. Based on the field test, the standards for center-based and family-based programs were revised to better reflect feedback from programs and providers. New York also has standards for public schools and has tested the draft version of the standards for school-age child care programs in some programs across the state.

Oklahoma Made Adjustments in Response to Feedback

The first QRIS launched in Oklahoma in 1998. Reaching for the Stars included only two star levels. One year later, the state funded a third star level for programs that met two-star standards and were nationally accredited. After two years of lagging participation, program designers recognized that the gap between one-star licensing and two-star standards was wider than most providers could bridge. They therefore created a midpoint, the one-star-plus level, which provides financial incentives and recognition for providers that need more support to progress to higher star levels.

National Center on Early Childhood Quality Assurance. (2017). QRIS compendium fact sheet: History of QRIS growth over time. Retrieved from https://childcareta.acf.hhs.gov/resource/qris-compendium-fact-sheet-history-qris-growth-over-time

Mitchell, A. W. (2005). Stair steps to quality: A guide for states and communities developing quality rating systems for early care and education. Alexandria, VA: United Way of America. Retrieved from https://www.researchconnections.org/childcare/resources/7180

Zellman, G. L., & Perlman, M. (2008). Child-care quality rating and improvement systems in five pioneer states: Implementation issues and lessons learned. Arlington, VA: RAND Corporation. Retrieved from http://www.rand.org/pubs/monographs/2008/RAND_MG795.pdf