Data Collection and Evaluation

Data collection and evaluation are central activities in a quality rating and improvement system (QRIS). Pressure to complete a system redesign or launch a new QRIS can make a focus on data and evaluation challenging. However, it is important to elevate data collection and evaluation in system planning rather than waiting until a challenge or question arises that is best addressed through evaluation. Prospective planning ensures that data collection activities are used to their fullest and take into account future evaluation questions. Yet it is never too late to engage in data collection and evaluation activities that can inform system improvement. This section poses questions and offers tools that can be used early—and later—in QRIS implementation to collect data and answer critical evaluation questions. Discussions on the use of data in planning and implementation are included in the Initial Design Process and Approaches to Implementation sections of this guide.

Quality Rating and Improvement System State Evaluations and Research (2018) from Child Care & Early Education Research Connections provides a comprehensive list of state QRIS evaluations and research in the Research Connections collection.


Collecting Data

All states have data systems that contain information on early and school-age care and education programs. In deciding what data to collect, states should first identify the questions they want to answer and how the data will be used. Some data sources that may be helpful for QRIS include the following: licensing; registries of license-exempt providers; subsidy administration; practitioner and training/trainer registries; child care resource and referral (CCR&R) databases; technical assistance tracking systems; program profiles; classroom assessments; economic impact research studies; and Head Start, prekindergarten, and other education systems. An initial step in planning for a QRIS or designing an evaluation is to compile a list and description of existing state/territory data systems, including where they are located, how to access them, who has access to them, what information is collected in them, and how they interface with other data systems.
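
For teams that want to organize such an inventory electronically, the sketch below (in Python, with hypothetical field names) is one way to capture the elements described above: where each system lives, how it is accessed, who may use it, what it contains, and how it interfaces with other systems. It is an illustration, not a prescribed format.

```python
# A minimal sketch of a data-system inventory, using hypothetical field names.
# Each record notes where a system lives, how it is accessed, who may use it,
# what it contains, and which other systems it exchanges data with.
from dataclasses import dataclass, field
from typing import List

@dataclass
class DataSystemRecord:
    name: str                      # e.g., "Child care licensing database"
    owner_agency: str              # agency or organization that maintains it
    location: str                  # hosting arrangement (state data center, vendor, etc.)
    access_method: str             # how authorized staff retrieve data
    authorized_users: List[str]    # roles or agencies with access
    data_elements: List[str]       # key fields collected
    interfaces: List[str] = field(default_factory=list)  # systems it links to

inventory = [
    DataSystemRecord(
        name="Licensing database",
        owner_agency="State licensing agency",
        location="State data center",
        access_method="Nightly extract",
        authorized_users=["Licensing staff", "QRIS administrators"],
        data_elements=["License number", "License status", "Program type", "Capacity"],
        interfaces=["Professional development registry"],
    ),
]

# A quick summary of what is already available for QRIS planning:
for record in inventory:
    print(record.name, "->", ", ".join(record.data_elements))
```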

Data Resources Analysis for Decisionmaking

Completing an inventory of the available data at the beginning of the planning and design stages is a helpful first step. The information gathered during this process can then be used to guide decisions during the implementation phase. For example, data from the licensing system or Head Start Program Information Reports may help the QRIS design team determine, at least initially, which types of programs (center, home, prekindergarten, Head Start) to include in the QRIS and which and how many programs may be able to achieve the standards. Data from workforce studies or professional development registries can provide a needs assessment of scholarships and educational offerings. This information will help estimate participation rates and predict the resources necessary to support projected participation. Looking at these data elements may reveal existing information that can help document compliance with proposed standards. Reviewing an inventory of existing data can also help determine whether it is best to begin with a pilot and, if so, which programs to include.

Child care subsidy data can also be helpful. Examining these data may lead to the conclusion that tiered subsidy reimbursement will not be sufficient to support higher program quality, for a number of reasons. For example, if only 20 percent of the enrollment of a typical program are children who receive child care subsidies, that may not be sufficient to support the cost of higher quality for the program as a whole. The balance of the cost must be passed on as tuition fees to other families. Or the enrollment may fluctuate enough that programs cannot rely on tiered subsidy reimbursement to maintain quality. Therefore, subsidy data may be a good indicator of the potential impact of tiered subsidy reimbursement, pointing out the need to explore additional provider incentives.
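
The back-of-the-envelope arithmetic behind this example can be made explicit. The sketch below uses invented figures (the enrollment, rates, and costs are placeholders, not actual values) to estimate how much of the added cost of higher quality a tiered reimbursement differential would cover when only a portion of enrolled children receive subsidies.

```python
# Illustrative sketch: how far does a tiered reimbursement differential go
# when only some enrolled children receive subsidies? All figures are
# hypothetical placeholders, not actual rates or costs.

enrollment = 60                      # total children enrolled
subsidy_share = 0.20                 # 20 percent of enrollment receives subsidies
base_weekly_rate = 200.0             # baseline subsidy reimbursement per child
tier_bonus_rate = 0.10               # 10 percent tiered reimbursement differential
added_weekly_cost_per_child = 30.0   # estimated added cost of higher quality, per child

subsidized_children = enrollment * subsidy_share
extra_revenue = subsidized_children * base_weekly_rate * tier_bonus_rate
added_cost = enrollment * added_weekly_cost_per_child

coverage = extra_revenue / added_cost
print(f"Tiered reimbursement covers about {coverage:.0%} of the added cost;")
print(f"the remaining {1 - coverage:.0%} would fall to tuition or other supports.")
```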

Data for QRIS Management

Using existing data systems can help make QRIS implementation more cost efficient and ensure consistency in data across systems. Adding reporting capacity or data elements or aligning data elements to an existing data system, such as licensing or a professional development registry, can be much less expensive than creating a new data collection and processing system specifically for QRIS. This may or may not be possible, depending on who administers the QRIS and what data systems can be tapped for the information. For example, if the existing data system is in a state agency and the QRIS will be operated outside of the state government structure, it may not be possible to use the state data system. Even when data exist in several separate systems, it may be cost-effective and ensure consistency if data can be transferred from one system to another, rather than entering all data anew for each child care program that wants to participate. For example, one QRIS requirement for participation might be a license in good standing or a license with no serious violations. It would be critical to have continuing, current information on the status of a license to produce reliable ratings. Similarly, if programs that participate in the QRIS are also rated or assessed by other entities, such as national accrediting organizations or the Head Start monitoring system, using data from those systems can make participation easier, more cost-effective, and more reliable. Linking to data in professional development registries or credentialing and certification systems is another cost-effective way to verify staff qualifications, ensure consistency, and eliminate duplicative work in the rating process.
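
As a minimal illustration of this kind of cross-system reuse, the sketch below checks a QRIS participation prerequisite against licensing records, assuming programs carry a common identifier such as a license number. The records and field names are hypothetical.

```python
# Minimal sketch of reusing licensing data in a QRIS, assuming programs
# share a common license number across systems (hypothetical records).

licensing = {
    "LIC-1001": {"status": "Good standing", "serious_violations": 0},
    "LIC-1002": {"status": "Provisional", "serious_violations": 2},
}

qris_applicants = ["LIC-1001", "LIC-1002", "LIC-1003"]

for license_id in qris_applicants:
    record = licensing.get(license_id)
    if record is None:
        print(license_id, "-> no licensing record found; verify manually")
    elif record["status"] == "Good standing" and record["serious_violations"] == 0:
        print(license_id, "-> meets the licensing prerequisite for rating")
    else:
        print(license_id, "-> does not currently meet the licensing prerequisite")
```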

In summary, an accurate inventory of existing data systems, including their accessibility, accuracy, and reliability, is helpful in determining QRIS system design. A good introduction to data elements, collection, management, and governance is found in the slides and videos of the Early Childhood Data: Building a Strong Foundation webinar series presented by the Quality Initiatives Research and Evaluation Consortium (INQUIRE) in spring 2013. An overview of the use of data to monitor and evaluate QRIS in five states may be helpful in thinking about the broad perspective of using data (Caronongan, Kirby, Malone, & Boller, 2011).

States are increasingly relying on comprehensive data systems that they either purchase or develop to help with the administration of their QRIS.[1] This section of the guide focuses on identifying the data needed and whether they can be collected from existing systems or if new data collection mechanisms need to be developed.

[1] As a resource to state agencies, specific products, vendors and systems are referenced throughout this document. However, the Office of Child Care and the National Center on Early Childhood Quality Assurance do not endorse any non-Federal organization, publication, or resource.


Looking closely at each QRIS standard and determining how compliance will be verified, what data for documentation will be needed, who will review the data, and where data will be stored are essential steps in QRIS planning. New data may be needed to assign a rating or to guide follow-up activities, such as development of an improvement plan. For example, QRIS standards may require that all teaching staff receive training in a state’s early learning guidelines for a certain rating level. If completion of the training is collected in the professional development registry, it may be possible to import information from that system for the rating process. If the information is not currently collected, it may be necessary to develop a process for collecting that data, such as requiring program staff to document their trainings by submitting successful-completion certificates, requiring rating assessors to enter information into a new QRIS database, or asking early learning guideline trainers to input their class lists into the professional development registry. A thorough review of the rating assessment and monitoring process is needed to identify data to document compliance with QRIS standards. Once a QRIS is implemented, these data will also be invaluable in informing and guiding needed modifications.


Data systems are a valuable resource for staff who manage the QRIS provider support system. Two types of data may be useful to them: (1) data on supports for individuals working in the early and school-age care and education programs, and (2) data on supports for the programs that seek QRIS ratings.

Data on supports for individuals working in the programs are helpful in projecting and managing the cost for scholarships for staff education and any type of retention incentives, such as wage supplements. These data can also help determine the effectiveness of various supports. Is the education level of the staff across the state going up? Are there any geographic areas not using scholarships? If not, why? Answering these questions requires data that are specific to QRIS participation. If, for example, a state currently has a scholarship program that is available to all early and school-age care and education providers, knowing which of these staff work in programs that participate in the QRIS is crucial. These data, coupled with broader data on staff qualifications, can help identify trends and inform decisions regarding the capacity of practitioners to meet QRIS standards and how to best support continuous improvement.

Collecting data on technical assistance and other supports for programs is usually a more complex process than collecting data on individuals working in the programs. Often programs that participate in a QRIS have access to technical assistance, including consultation and coaching supports. These supports might be available to a broad group of programs, including those that do not participate in the QRIS. Thus, it is important to create data systems that identify which supports, and how much of each, every participating program receives. It is also important to think carefully about what data on program supports need to be collected, including data on new supports that may be created and accessible only to programs participating in the QRIS.

The QRIS planning team should think carefully about how program support information will be used. Will the data identify participating programs that access supports and how often? Will it be used to determine the correlation between supports accessed and improvements in program ratings? Will it be used to manage the cost of such supports or to monitor the effectiveness of support service providers? Being clear about the projected use of data will help to define what is collected and how.
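
One of the uses named above, relating the amount of support a program receives to changes in its rating, can be sketched simply once support data and rating histories are linked by a program identifier. The records below are invented for illustration; a real analysis would use the state's linked administrative data and appropriate statistical methods.

```python
# Illustrative sketch: relating technical assistance hours to rating change.
# The records are invented; a real analysis would draw on linked QRIS and
# technical assistance tracking data.

programs = [
    {"id": "P1", "ta_hours": 40, "rating_start": 2, "rating_end": 4},
    {"id": "P2", "ta_hours": 10, "rating_start": 3, "rating_end": 3},
    {"id": "P3", "ta_hours": 25, "rating_start": 1, "rating_end": 2},
]

def pearson_r(xs, ys):
    """Simple Pearson correlation between two equal-length lists."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x ** 0.5 * var_y ** 0.5)

hours = [p["ta_hours"] for p in programs]
changes = [p["rating_end"] - p["rating_start"] for p in programs]
print(f"Correlation between TA hours and rating change: {pearson_r(hours, changes):.2f}")
```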

Collecting data on financial supports for programs that participate in QRIS, such as grants, bonus payments, tiered reimbursement, loans, or tax benefits, can help project and manage budgets. Again, it may be very useful to correlate data with the maintenance or improvement of ratings. This will help identify which supports are most critical.

In many states, the QRIS becomes an organizing framework for a wide range of program and practitioner supports designed to promote quality improvement. States have moved from providing technical assistance and financial supports that are believed to improve child care quality to using the QRIS to track whether these supports are actually associated with quality improvements.


The exploration of what data might be needed is best done early in the process and with a broad view to future needs. In the planning and design phase, considering how to verify the standards has become increasingly important to states. Assessing the impact of key interventions to assist programs in improving quality is critical to project management. Within the rating process, it is becoming crucial to coordinate assessment of the QRIS ratings across sectors (i.e., child care, prekindergarten, Head Start) in a way that reduces the duplication of multiple assessment processes. In preparation for evaluation, consider the benchmarks that are being set and how to document their achievement, including coordination of standards using data from other assessment processes, such as accreditation, Head Start performance standards, and prekindergarten standards assessment.

Implementation and Evaluation

QRIS evaluation is essential for supporting continuous system improvement. Evaluation results can inform four activities that shape how the QRIS evolves:

  1. Identifying implementation successes and challenges. At any stage of QRIS implementation, but particularly when the QRIS is newly launched or has undergone a major revision, evaluation can reveal which activities are working well and which activities need attention. Findings from focus groups or surveys with the implementation team or with providers participating in the QRIS add context and depth to administrative data. For example, administrative data can be used to track provider enrollment in the QRIS and to see how enrollment patterns differ across regions of the state. Additional data collection, such as surveys with eligible providers, can provide insights into the motivations and experiences of providers that underlie the patterns observed in the administrative data. Evaluation results can inform the development or revisions of marketing materials, implementation partner communication protocols, roles and responsibilities of staff, the content of training sessions for staff, and the development of orientation materials for providers.
  2. Examining the effectiveness of new and existing activities in the QRIS. Marketing, recruitment, distribution of financial assistance, provision of technical assistance, and assignment of program ratings are QRIS activities that require significant investments of staff and financial resources. Evaluation is a critical tool for learning about the effectiveness of QRIS activities and identifying whether and how different activities are contributing to intended outcomes.
  3. Documenting outcomes for stakeholders. Stakeholders for QRIS expand beyond state agencies and implementation partners and include providers, legislators, parents, and business and community leaders. These stakeholders are eager for information about QRIS outcomes. It is important to set clear expectations for outcomes that align with the stage of QRIS implementation. For example, early in implementation, realistic outcomes include program enrollment and engagement in quality improvement activities. Realistic outcomes at later phases of implementation include increased awareness of the QRIS among the public, greater density of program participation, increases in program quality, and provision of quality at the highest levels of the QRIS.
  4. Engaging in short- and long-term planning. Evaluation results can inform immediate adjustments to the QRIS and support development of plans for the future. For example, an implementation evaluation typically produces results that can be acted on right away to address challenges or to expand activities that are working well. Evaluation results also can be used to set long-term goals for outcomes, such as quality improvement. Results may support projection of the expected pace of improvement among programs, which can help with planning for technical assistance staffing and distribution of financial incentives to participating programs over 5 years or longer.

A QRIS logic model provides the guiding framework for evaluation efforts and the development of an evaluation plan. The Quality Rating and Improvement System (QRIS) Evaluation Toolkit (Lugo-Gil, Sattar, Ross, Boller, Tout, & Kirby, 2011) includes a chapter that serves as a workbook for logic model development. The key steps to developing a logic model include the following: (1) describing the context and environment for the QRIS and articulating the QRIS goals; (2) identifying the inputs and the resources needed to support the work; (3) outlining the implementation activities; (4) indicating the outputs that can be tracked; (5) articulating short-, mid-, and long-term outcomes; and (6) linking expected outcomes with activities to identify any gaps or unrealistic expectations about the impact of the QRIS. The “Initial Design” section of this guide includes information about the Massachusetts QRIS logic model.

When embarking on logic model development, it is important to convene a group of stakeholders to inform the process. The logic model should reflect connections to other systems (e.g., licensing, professional development) and serve as a platform for identifying and leveraging implementation resources and cross-system evaluation opportunities.

Once the logic model is complete, it can be used to develop an evaluation plan. An evaluation plan contains the following: research questions (with a designation of their priority levels); the data elements needed to address the research questions; preferred timing for each research question; whether the data are currently available or need to be collected; an estimate of the cost for each type of research question; a note about whether the evaluation can be conducted internally or whether an external evaluator should be identified; and strategies for disseminating results.
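
The elements of an evaluation plan listed above can also be kept in a simple structured form so they are easy to sort, filter, and revisit over time. The entry below is a hypothetical illustration, not a prescribed template.

```python
# Hypothetical sketch of one entry in an evaluation plan, capturing the
# elements listed above so plans can be sorted and revisited over time.

evaluation_plan = [
    {
        "research_question": "Are observed quality scores improving over time?",
        "priority": "High",
        "data_elements": ["Program ratings", "Observation scores", "TA hours"],
        "timing": "Every 3 years",
        "data_available": False,              # new observation data must be collected
        "estimated_cost": "High",
        "evaluator": "External",
        "dissemination": ["Brief for policymakers", "Provider town halls"],
    },
]

high_priority = [q["research_question"] for q in evaluation_plan if q["priority"] == "High"]
print("High-priority questions:", high_priority)
```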

It is important to designate a staff person within the QRIS implementation team to be the coordinator and facilitator of the work. Evaluation planning will be challenging to launch and manage if it is not assigned as an explicit work activity. If possible, it will also be important to engage an experienced evaluator to help guide the process of evaluation planning.

In the same way that logic model development will benefit from stakeholder participation, it is helpful to invite community stakeholders to be part of the evaluation plan development. Key stakeholders for evaluation planning include state agency partners, local or national funders, university partners, or other research partners who can bring new ideas, resources, and even evaluation capacity to the process.

The evaluation plan and the logic model can be viewed as living documents that should be revisited on a regular basis to ensure that they still reflect QRIS priorities and features of the system. Both documents can help guide planning if funding opportunities become available or if opportunities arise to evaluate other parts of the early learning system that offer a platform to add research questions related to the QRIS.

Developing an Evaluation Plan

Because a QRIS serves as a systemic structure with activities to support multiple goals related to program quality, children’s development, and provision of information to parents and caregivers, there are many possible research questions to address through QRIS evaluation. A key planning task is to identify the research questions that will be most beneficial for informing system improvement. Research questions could be developed to understand implementation and outcomes for each of the primary QRIS activities, for example: program recruitment, technical assistance, program ratings, financial incentives (including tiered reimbursement), consumer education/dissemination of ratings, and system access and equity. Research questions may be prioritized to reflect the areas of the QRIS requiring the largest investment of resources, areas of particular concern in QRIS functioning, or areas required by a funder.

A critical aspect of planning and selecting research questions is making sure they align with the QRIS stage of implementation. Exhibit 1 provides general guidance about matching topics with the QRIS stage.

Exhibit 1. Matching Evaluation Topics to the Stage of QRIS Implementation


Sample QRIS Research Questions

The following is a set of sample research questions related to QRIS participation and ratings, program quality improvement, effectiveness of financial incentives, access to high-quality programs, validity of QRIS ratings, and use of QRIS ratings by parents and the public.

  • What are the characteristics of programs enrolled in the QRIS?
  • How effective are QRIS recruiting efforts with different types of providers (for example, urban versus rural, centers versus family child care programs, and programs serving a high proportion of children who receive subsidies)?
  • Do the characteristics of programs that are not enrolled in the QRIS differ from those that are enrolled? For example, are there differences in key characteristics, including geography, program type, funding, or director qualifications?
  • What is the distribution of program sites across quality levels?
  • What are the differences in program characteristics at each rating level?
  • What are the characteristics of children who have access to high-quality programs?
  • Which providers are improving and what resources are used for improvement?
  • What is the quality of the program’s learning environment as measured by an independent measure of quality?
  • Are observed quality scores improving over time among programs in the QRIS? How is this related to quality improvement investments?
  • What are non-enrolled early care and education (ECE) programs’ perceptions of the QRIS?
  • What is the reach of technical assistance in the state?
  • What are the characteristics of teachers and family child care providers who receive onsite technical assistance?
  • Does the stability of the early care and education workforce increase over time?
  • How does the system’s training and professional development impact child outcomes of interest? Provider outcomes of interest?
  • Do parents know and use the QRIS to make ECE decisions?
  • What are parent perceptions of program services and quality?
  • How are website visitors using information to search for ECE programs (e.g., star ratings, distance, search terms)?

The Illinois Early Learning Council (Data, Research and Evaluation Committee) Research Agenda (2015) is an example of a research plan (not limited to QRIS). This group uses an overarching frame for its plan in which it asks, “What information would cause us to behave differently in policy and practice in ways that would likely lead to better outcomes for young children?”


Ideally, evaluation should be planned for as soon as a QRIS is designed so that evaluation can address each of the four purposes outlined earlier in this section. In addition, planning early can be efficient, especially when data collection and system activities are planned with the evaluation in mind. For example, a needs assessment conducted when planning the QRIS could also serve as baseline data that could be used to chart progress over time. Similarly, QRIS data collection protocols for assigning ratings and technical assistance case management data will be better suited for evaluation if data elements needed to address high-priority evaluation questions are identified in advance. Planning for evaluation at the outset does not necessarily mean that an evaluation should be launched immediately but rather that the building blocks are in place for evaluation when the timing and resources are appropriate.

The timing and focus of evaluation should be matched to the stage of QRIS implementation (see Approaches to Implementation section). Some research questions may benefit from an annual study while others may only need to be addressed every 3 to 5 years. For example, a survey to understand provider experiences in the QRIS may be useful to launch annually, particularly early in QRIS implementation, so that adjustments can be made to the QRIS in response to the findings. In contrast, an examination of children’s development in programs at different levels of quality could be planned for a 5-year cycle to allow for system changes to be more established before investing in an expensive data collection effort. When establishing different timeframes and focal points for evaluation efforts, it is important to develop messages for stakeholders that convey the value of ongoing evaluation and how it will support system improvement.

Though planning for evaluation as part of QRIS design is ideal, it is never too late to engage in QRIS evaluation. An evaluation plan can be developed at any time during implementation. There may be some limitations in availability of data that can be used for evaluation, but these challenges can be addressed. It may be useful to work with an evaluation consultant to assess needs and capacity and to support evaluation planning once a QRIS is underway.

The data used in QRIS evaluation typically come from existing administrative data—that is, data collected for the purposes of administering the QRIS—and new data collected exclusively for the purpose of research and evaluation. The process of identifying data and developing data protocols for QRIS administration described in the first part of this section can be very useful when planning for QRIS evaluation. A comprehensive data matrix that describes the available data in the early care and education system can provide evaluation planners with information about what exists already in the system and what new data would need to be collected to support evaluation.

Exhibit 2 provides an overview of the types of data elements that may be useful for QRIS evaluation.

Exhibit 2. Possible Data Elements to Support QRIS Evaluation


When using administrative data for QRIS evaluation, it is important to be aware of potential limitations. The following questions can be asked to learn about the data (a brief illustrative sketch of these checks follows the list):

  1. What is the data coverage? Does it include all ECE programs, the entire ECE workforce, all geographic areas of the state, all children, all families? Data sets are typically limited in specific ways that are relevant to evaluation. For example, it is important to know if a provider registry is voluntary and the proportion of eligible providers that are included in the data.
  2. Are there duplications in the data? It is useful to know if counts or frequencies calculated in the data are taking into account the fact that a program or provider, for example, may be included in the dataset more than once. Unique identification numbers are helpful for dealing with this challenge.
  3. What is the quality of the data? Before analyzing data, know whether procedures are in place to ensure accuracy and reliability of the data. Staff entering data, for example, should receive training and be monitored over time to ensure they are following data quality protocols.
  4. What is the availability of historical data? In some cases, only current data or data from a limited time period are available. The existence of archived data will determine whether it is possible to ask certain research questions that require the availability of data over time.
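
The following is a minimal sketch of how these four checks might be run against a small administrative extract. The records and field names are invented for illustration.

```python
# Minimal sketch of the checks described above, run against a hypothetical
# administrative extract. Field names and values are invented for illustration.
from datetime import date

records = [
    {"program_id": "P1", "county": "Adams", "rating_date": date(2016, 5, 1)},
    {"program_id": "P1", "county": "Adams", "rating_date": date(2016, 5, 1)},  # duplicate
    {"program_id": "P2", "county": "Brown", "rating_date": date(2019, 9, 15)},
]

# 1. Coverage: which counties (or program types) appear in the extract?
counties_covered = {r["county"] for r in records}
print("Counties represented:", sorted(counties_covered))

# 2. Duplication: unique identifiers make duplicate records easy to spot.
unique_ids = {r["program_id"] for r in records}
print("Records:", len(records), "| Unique programs:", len(unique_ids))

# 3. Quality: flag records missing required fields.
missing = [r for r in records if not all(r.values())]
print("Records with missing fields:", len(missing))

# 4. Historical depth: how far back do the data go?
dates = [r["rating_date"] for r in records]
print("Date range:", min(dates), "to", max(dates))
```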

Child Care & Early Education Research Connections provides Working with Administrative Data (n.d.), a web page of resources organized by topic, including managing, analyzing, and linking administrative data and issues related to data confidentiality and security.

New data collection can fill gaps in data elements not covered by administrative data. When possible, consider using or modifying existing measures or surveys to facilitate comparisons and improve data quality. It is also important to consider the samples that will be tapped for data collection and the response rates of data collection efforts.

Two broad types of data can be collected: quantitative and qualitative. Quantitative data are expressed numerically and define a construct (e.g., a quality score). These data are collected through surveys, administrative data, structured observations, and direct assessments. Comparisons are made using statistical analysis. Quantitative data are useful for estimating trends, analyzing group differences, and understanding the factors that link to changes over time. Qualitative data describe a construct. Data include responses to open-ended questions that are used to describe perceptions, experiences, concerns, and recommendations for improvement. Qualitative data are usually collected through focus groups or interviews, and key themes are coded and reported. Qualitative data are useful for understanding complexity of experiences and underlying motivations.

The costs of evaluation activities vary greatly depending on the type of activity, the scope of the research questions, and whether a third party will be contracted to do the work. It is useful to examine the relative costs of different data collection activities commonly requested for QRIS evaluation (Exhibit 3). Observations of classrooms and family child care homes and collection of child development data are relatively more expensive to collect than survey and focus group data or existing administrative data.

Exhibit 3. Comparison of Relative Cost of Different Evaluation Activities


Choosing an evaluator is an issue that states address within the constraints of their resources and state bidding and contractual requirements. Other considerations that influence the choice of evaluator should be incorporated into the request for proposals, including the following:

  • Qualifications and experience: States look for evaluation teams with qualifications that match the task, i.e., early childhood and research qualifications and experience with QRIS research. They also look for evaluators who have experience completing the research within contract requirements.
  • Credibility: Potential evaluators should be highly credible to the primary target audience. This is one of the reasons that many states use their own state universities, even though those universities may bring in national or out-of-state experts to partner on selected portions of the evaluation.
  • Stability: If plans call for conducting a series of evaluations, an organization’s longevity in the field and probability of continuing in the work will be important traits to consider.

As noted, evaluation studies serve multiple purposes, including providing evidence-based insights into the design or implementation process and informing funders and policymakers of the impact of the QRIS on child care programs and child outcomes. A strong communications strategy is needed to relay this information.

It is important to plan a communications strategy at the beginning of each evaluation activity. Stakeholders should be involved in this planning effort. The plan should include details about which types of products will be developed and how they will be disseminated to different groups. Audiences and specific considerations for communications include the following:

  • Providers: Consider multiple outreach strategies (such as videos and flyers) that use different communication techniques. Identify options for public forums, such as town hall meetings, that facilitate two-way dialogue and give providers the opportunity to ask questions about the findings. Talking points should be developed for technical assistance providers and licensors to help them communicate key messages, results, and implications for providers.
  • Policymakers: Develop factsheets that provide vital information on the background of the program or initiative. Brief documents should define the problem, the intervention/approach, the results, and recommendations.
  • Funders: In addition to the considerations for policymakers, include data to provide important context or rationale for the study or resulting recommendations, such as public opinion data or state or local population indicators.
  • Parents: Communicate clear messages with brief details about the goals and objectives of the QRIS. Ensure that key terms such as “quality” are defined using simple, plain language.

Overall, research summaries shared publicly should use plain language, simple formatting, a question-and-answer structure (or other straightforward headings), and provide links to full technical reports and contact information.

Georgia Built Its Own Data System to Manage QRIS

Georgia had a comprehensive online data system that managed the entire process for its QRIS, Quality Rated, from a child care program’s application through data collection and analysis. The system covered program information; training and technical assistance, from registration to tracking; portfolio submission, including a continuous quality improvement (CQI) plan; incentives management; resources for families, programs, and technical assistance and training professionals; reports and data; and communication. The system captured all information on a child care program and allowed the program to track its progress from application to rating. The development of the system was guided by Georgia’s work with its research partner, the Frank Porter Graham Child Development Institute, which helped develop a logic model as well as a validation and evaluation plan for the QRIS. The Institute also helped create a data dictionary and reports. The system was used by Quality Rated staff, technical assistance and training staff, child care resource and referral agencies, programs enrolled in Quality Rated, incentive partners, the research team, and, most importantly, families seeking child care and resources.

Indiana QRIS Data Systems Were Interactive

The Indiana Paths to QUALITY program used a live, interactive database that drew facility and practitioner information from the state regulatory system. Mentors from the child care resource and referral agencies and Indiana Association for the Education of Young Children helped develop facility quality improvement plans, which were submitted, along with contact notes, into this web-based system. Paths to QUALITY raters could also enter their data directly. The database was developed through a contract with TCC Software Solutions and included their subsidy and licensing data.

Maine QRIS System Linked Professional Development and Technical Assistance

Quality for ME, the QRIS in Maine, was developed in partnership with the state’s professional development project, Maine Roads to Quality. The Quality for ME automated system included shared data linkages that populated forms with data from the professional development registry, the state licensing database, and National Association of Child Care Resource & Referral Agencies software. These automated data links minimized the amount of data entry required of an applicant; because an applicant had to confirm the information, the process resulted in more accurate data across these state systems. Maine was developing an automated technical assistance tracking system that would link to the professional development registry and enable individual providers to note on their transcripts that they were receiving technical assistance on particular topics.

Michigan's Online Platform

STARS was the online platform for Michigan’s Great Start to Quality QRIS, developed by Mosaic, Inc. Licensed and registered programs and providers interacted with the platform to complete their self-assessment survey, upload evidence documents, develop a quality improvement plan, and access resources. Administrative users validated self-assessment surveys, completed observations, logged technical assistance efforts, and performed monitoring functions. STARS also offered administrative users reporting capabilities such as “at-a-glance” summary information, click-button reports, and export reports. Housing all functions on one platform helped Great Start to Quality streamline each component of the QRIS.

Nevada's Integrated Data System

Nevada’s Silver State Stars adopted an integrated Q-Star data system that allowed the following QRIS teams to communicate and track progress: 1) the administration and quality rating team, which managed the project and assigned star ratings; 2) the assessment team; 3) the coaching team, which used the system to track their activities and build quality improvement plans based on the assessments conducted; and 4) the research and evaluation team, which used the data gathered by the system to analyze the efficacy of services delivered and quality improvement over time. The Q-Star system linked to an Environment Rating Scales (ERS) data system for mobile assessment used to conduct ERS assessments and EasyFolio, which served as a portal for program applicants to manage their application process. The Q-Star system was developed for Nevada by the Branagh Information Group (BIG).

North Carolina QRIS Data Collection Guided Evidence-Based Adjustments

The North Carolina Division of Child Development and Early Education collected data for many years to monitor the Star Rated License system process and used this data to guide revisions in the system. Early on, results from Environment Rating Scales (ERS) assessments showed significantly lower scores on the infant/toddler ERS than on other classroom assessments. To address this concern, the state developed a short-term technical assistance project focused on providing child care health consultants to programs and a long-term technical assistance project that involved adding infant and toddler specialists to the child care resource and referral (CCR&R) agencies. School-age specialists and behavioral specialists were also added to the CCR&R agencies to help with program improvements. Orientation of providers to the ERS was added to the system as well. Similarly, when data indicated that the licensing compliance standard in the QRIS was not linked to statistically significant differences in quality, this rating standard was eliminated from the QRIS.


Pennsylvania Used Integrated Data Systems in Support of a QRIS

Pennsylvania Enterprise to Link Information for Children Across Networks (PELICAN) was an integrated child and early learning information management system. In addition to automating and centralizing many of the functions required to administer the subsidized child care program (Child Care Works), it expanded to automate the inspection and certification (licensing) process of child care providers, the administrative processes and data collection efforts for the PA Pre-K Counts and Pennsylvania Early Learning Keys to Quality (Keystone STARS) initiatives, and the data collection and analytics to support the Early Learning Network (ELN), which is a longitudinal database and tracking system for children in Pennsylvania early learning programs. PELICAN users included Child Care Information Service agencies, County Assistance Offices, Regional Keys (the administrators of the Keystone STARS program), PA Pre-K Counts grantees, as well as teachers and administrators for Head Start state supplemental programs, school districts that provide prekindergarten, providers of child care, and others. Families were also able to screen themselves for potential eligibility for child care subsidies, search for providers, and apply for services online. This initiative allowed the Office of Child Development and Early Learning and the Regional Keys to track providers, manage STARS, identify resources that were deployed at a program, and manage STARS grant information in the Keystone STARS rating system. A next phase of development would focus on how to integrate the existing trainer and training registry, the Pennsylvania Quality Assurance System, into PELICAN and make it accessible from the PA Key website.

An overview of the ELN is available in A Look at Pennsylvania’s Early Childhood Data System.

Rhode Island QRIS Evaluation: A Unique Partnership Focused on Informed Revision

A broadly representative community-based group developed the draft standards and quality criteria for BrightStars over several years. Researchers from the Frank Porter Graham (FPG) Child Development Institute at the University of North Carolina, who were selected for their depth and breadth of expertise and experience in evaluating program quality, conducted a pilot and random sample evaluation. The evaluation was conducted as a partnership between FPG and the community agency that manages BrightStars, Rhode Island Association for the Education of Young Children. This partnership facilitated training of BrightStars staff to collect data in a valid and reliable manner. The draft center framework included 62 criteria across 28 standards. The evaluation in the pilot revealed that using all 62 criteria resulted in small quality distinctions, and many programs had no stars or only one star. A review of the standards ensured that each criterion 1) was not already in state licensing, 2) had an actual outcome, and 3) adequately measured the differences in quality. This review pared the number of criteria down to 22, which were then grouped into nine standards. The final frameworks were an effective scaffold for quality improvement; differences between the levels were meaningful but achievable. The evaluation not only improved the BrightStars standards and measurement tool; it also provided a baseline measure of program quality in a random sample of centers, homes, and afterschool programs in Rhode Island, which will be useful for tracking progress in the future. It has also been helpful to have expert evaluators give the Steering Committee specific advice and recommendations to improve the framework.

Tennessee QRIS Data Collection System Provided Monthly Geographic Data

Tennessee used the state Regulated Adult and Child Care System (RACCS) to maintain QRIS data. The system included the provider’s Star-Quality Child Care Program rating and Child Care Report Card System component scores by program year. Users could request provider QRIS information for the entire state or by specific geographic region. The data system automatically generated monthly reports on ratings by provider type and county. The RACCS system also included various provider-specific program data, updated annually, that could be queried by accreditation, curriculum, enrollment, environment, fees, meals, Paths to QUALITY program, rates, rate policy, schedule, staff, and transportation.

Tennessee's Assessment Data System Also Supported Technical Assistance

The University of Tennessee Social Work Office of Research and Public Service (SWORPS) created an automated system to maintain statewide data on early childhood program assessments. When SWORPS received the completed observation score sheets from Department of Human Services assessors, the assessment data were entered into the Star-Quality Child Care Program database along with supplemental data (teacher and classroom or family child care home characteristics). The system generated a provider profile sheet that contained assessment information including item, subscale, and observation scores and an overall program assessment score. The system also generated a “strengths page” for the provider that detailed the indicators that the assessor scored positively. The provider received a copy of the profile sheet, the strengths page, and the assessor’s notes. Copies of these documents were also mailed to the relevant licensing unit for completion of Child Care Report Card scoring and entry into the Regulated Adult and Child Care System. A duplicate copy of the assessment results was mailed to the relevant child care resource and referral agency site. The Stars database generated monthly, quarterly, yearly, and ad hoc reports and analyzed the data in a multitude of ways.

The Virgin Islands Used Data to Inform the Development of Standards in Its QRIS

The Virgin Islands (VI) launched a pilot of their QRIS, Virgin Islands Steps to Quality (VIS2Q), in the summer of 2013. In developing their QRIS, the VI looked back to local studies and data for inclusion in the standards and considered how they graded some of the indicators. Their goal was to find a better fit for the VI context and support continuous quality improvement by helping programs view movement through the QRIS as something attainable. The literacy standards were influenced by the kindergarten entry data collected by the VI Department of Education, as this was an area where children were scoring most poorly. They also included a standard for Dual Language Learners because they knew they had increasing numbers of children for whom English is not their first language. In the area of professional development, they graded the steps knowing where most of their teachers would be when their programs entered the QRIS, based on data from a workforce study conducted for the Department of Human Services. Data were also used to inform the indicators in the Learning Environments Standard, based on a pilot study of quality in VI early childhood education settings conducted in 2009. These data helped inform cut-off scores for both the Environment Rating Scales and Classroom Assessment Scoring System (CLASS) assessment tools.

Caronongan, P., Kirby, G., Malone, L., & Boller, K. (2011). Defining and measuring quality: An in-depth study of five child care quality rating and improvement systems (OPRE Report No. 2011-29). Retrieved from http://www.acf.hhs.gov/programs/opre/cc/childcare_quality/five_childcare/five_childcare.pdf

Child Care & Early Education Research Connections. (2018). Quality rating and improvement system state evaluations and research. Retrieved from http://www.researchconnections.org/childcare/resources/30046/pdf

Child Care & Early Education Research Connections. (n.d.). Working with administrative data [Web page]. Retrieved from https://www.researchconnections.org/content/childcare/understand/administrative-data.html

Child Trends. (2013, March 20). Overview and application of the INQUIRE data tools to support high quality early care and education data. Early Childhood Data: Building a Strong Foundation Webinar Series. Retrieved from http://www.researchconnections.org/content/childcare/federal/inquire.html

Child Trends. (2013, May 6). Data management webinar: Developing data governance structures. Early Childhood Data: Building a Strong Foundation Webinar Series. Retrieved from http://www.researchconnections.org/content/childcare/federal/inquire.html

Child Trends. (2013, May 16). Data management webinar: Best practices for producing high quality data. Early Childhood Data: Building a Strong Foundation Webinar Series. Retrieved from http://www.researchconnections.org/content/childcare/federal/inquire.htm

Child Trends & Mathematica Policy Research. (2010). The child care quality rating system (QRS) assessment: Compendium of quality rating systems and evaluations. Retrieved from http://www.acf.hhs.gov/programs/opre/resource/compendium-of-quality-rating-systems-and-evaluations

Downer, J., & Yazejian, N. (2013). Measuring the quality and quantity of implementation in early childhood interventions (OPRE Research Brief 2013-12). Washington, DC: Office of Planning, Research and Evaluation (OPRE), Administration for Children and Families, U.S. Department of Health and Human Services. Retrieved from http://www.acf.hhs.gov/programs/opre/resource/measuring-the-quality-and-quantity-of-implementation-in-early-childhood

Illinois Early Learning Council Data, Research, and Evaluation Committee. (2015). Research agenda. Retrieved from https://oecd.illinois.gov/content/dam/soi/en/web/oecd/earlylearningcouncil/dre-core-documents/dre-research-agenda-working-copy-as-of-121215.pdf

Lugo-Gil, J., Sattar, S., Ross, C., Boller, K., Tout, K., & Kirby, G. (2011). The quality rating and improvement system (QRIS) evaluation toolkit (OPRE Report No. 2011-31). Washington, DC: Office of Planning, Research and Evaluation (OPRE), Administration for Children and Families, U.S. Department of Health and Human Services. Retrieved from http://www.acf.hhs.gov/programs/opre/cc/childcare_quality/qris_toolkit/qris_toolkit.pdf

Paulsell, D., Austin, A. M. B., & Lokteff, M. (2013). Measuring implementation of early childhood interventions at multiple system levels (OPRE Research Brief 2013-16). Washington, DC: Office of Planning, Research and Evaluation (OPRE), Administration for Children and Families, U.S. Department of Health and Human Services. Retrieved from http://www.acf.hhs.gov/programs/opre/resource/measuring-implementation-of-early-childhood-interventions-at-multiple

Paulsell, D., Tout, K., & Maxwell, K. (2013). Evaluating implementation of quality rating and improvement systems. In T. Halle, A. Metz, & I. Martinez-Beck (Eds.), Applying Implementation Science in Early Childhood Programs and Systems. (pp. 269–293). Baltimore, MD: Paul H. Brookes Publishing Co.

Tout, K., & Starr, R. (2013). Key elements of a QRIS validation plan: Guidance and planning template (OPRE Report No. 2013-11). Washington, DC: Office of Planning, Research and Evaluation (OPRE), Administration for Children and Families, U.S. Department of Health and Human Services. Retrieved from http://www.acf.hhs.gov/programs/opre/resource/key-elements-of-a-qris-validation-plan-guidance-and-planning-template

Tout, K., Zaslow, M., Halle, T., & Forry, N. (2009). Issues for the next decade of quality rating and improvement systems (Publication No. 2009-14, OPRE Issue Brief No. 3). Washington, DC: Prepared by Child Trends for the Office of Planning, Research and Evaluation (OPRE), Administration for Children and Families, U.S. Department of Health and Human Services. Retrieved from http://www.acf.hhs.gov/programs/opre/resource/issues-for-the-next-decade-of-quality-rating-and-improvement-systems

Wasik, B. A., Mattera, S. K., Lloyd, C. M., & Boller, K. (2013). Intervention dosage in early childhood care and education: It’s complicated (OPRE Research Brief 2013-15). Washington, DC: Office of Planning, Research and Evaluation (OPRE), Administration for Children and Families, U.S. Department of Health and Human Services. Retrieved from http://www.acf.hhs.gov/programs/opre/resource/intervention-dosage-in-early-childhood-care-and-education-its-complicated

Zellman, G. L., & Fiene, R. (2012). Validation of quality rating and improvement systems for early care and education and school-age care (Research-to-Policy, Research-to-Practice Brief OPRE 2012-29). Washington, DC: Office of Planning, Research and Evaluation (OPRE), Administration for Children and Families, U.S. Department of Health and Human Services. Retrieved from https://www.acf.hhs.gov/sites/default/files/opre/val_qual_early.pdf

Burchinal, P., Kainz, K., Cai, K., Tout, K., Zaslow, M., Martinez-Beck, I., & Rathgeb, C. (2009). Early care and education quality and child outcomes (Publication No. 2009-15, OPRE Research-to-Policy Brief No. 1). Washington, DC: Prepared by Child Trends for the Office of Planning, Research and Evaluation (OPRE), Administration for Children and Families, U.S. Department of Health and Human Services. http://www.acf.hhs.gov/programs/opre/resource/early-care-and-education-quality-and-child-outcomes

Child Trends. (2009). What we know and don’t know about measuring quality in early childhood and school-age care and education settings (Publication No. 2009-12, OPRE Issue Brief No. 1). Washington, DC: Prepared by Child Trends for the Office of Planning, Research and Evaluation (OPRE), Administration for Children and Families, U.S. Department of Health and Human Services.
http://www.acf.hhs.gov/programs/opre/resource/what-we-know-and-dont-know-about-measuring-quality-in-early-childhood-and

Elicker, J., & Thornburg, K. R. (2011). Evaluation of quality rating and improvement systems for early childhood programs and school-age care: Measuring children’s development (Research-to-Policy, Research-to-Practice Brief OPRE 2011-11c). Washington, DC: Office of Planning, Research and Evaluation (OPRE), Administration for Children and Families, U.S. Department of Health and Human Services. http://www.acf.hhs.gov/programs/opre/cc/childcare_technical/reports/improv_systems.pdf

Friese, S., King, C., & Tout, K. (2013). INQUIRE data toolkit (OPRE Report No. 2013-58). Washington, DC: Office of Planning, Research and Evaluation (OPRE), Administration for Children and Families, U.S. Department of Health and Human Services. http://www.acf.hhs.gov/programs/opre/resource/inquire-data-toolkit

Kirby, G., Boller, K., & Zaven, H. (2011). Child care quality rating and improvement systems: Approaches to integrating programs for young children in two states (OPRE Report No. 2011-28). Washington, DC: Office of Planning, Research and Evaluation (OPRE), Administration for Children and Families, U.S. Department of Health and Human Services. http://www.acf.hhs.gov/programs/opre/cc/childcare_quality/two_states/two_states.pdf

Malone, L., Kirby, G., Caronongan, P., Tout, K., & Boller, K. (2011). Measuring quality across three child care quality rating and improvement systems: Findings from secondary analyses (OPRE Report No. 2011-30). Washington, DC: U.S. Office of Planning, Research and Evaluation (OPRE), Administration for Children and Families, U.S. Department of Health and Human Services. http://www.acf.hhs.gov/programs/opre/cc/childcare_quality/measuring_three/measuring_three.pdf

Mitchell, A.W. (2005). Stair steps to quality: A guide for states and communities developing quality rating systems for early care and education. Alexandria, VA: United Way of America. https://www.researchconnections.org/childcare/resources/7180 

Smith, S., Dong, X., Stephens, S., & Tout, K. (2017). How studies of QRIS measure quality improvement activities: An analysis of measures of training and technical assistance. https://www.researchconnections.org/childcare/resources/35007/pdf

Tout, K. (2013). Look to the stars: Future directions for the evaluation of quality rating and improvement systems. Early Education & Development, 24 (1), 71–78.

Tout, K., Starr, R., Wenner, J., & Hilty, R. (2016). Measures used in quality rating and improvement systems (QRIS) validation studies. (OPRE Research Brief #2016-110). Washington, DC: Office of Planning, Research and Evaluation, Administration for Children and Families, U.S. Department of Health and Human Services. https://www.acf.hhs.gov/sites/default/files/opre/cceepra_measures_in_qris_validation_studies_1222_508.pdf

Tout, K., Zaslow, M., Halle, T., & Forry, N. (2009). Issues for the next decade of quality rating and improvement systems (Publication No. 2009-14, OPRE Issue Brief No. 3). Washington, DC: Prepared by Child Trends for the Office of Planning, Research and Evaluation (OPRE), Administration for Children and Families, U.S. Department of Health and Human Services. http://www.acf.hhs.gov/programs/opre/resource/issues-for-the-next-decade-of-quality-rating-and-improvement-systems

Zaslow, M., Tout, K., Halle, T., & Forry, N. (2009). Multiple purposes for measuring quality in early childhood settings: Implications for collecting and communicating information on quality (Publication No. 2009-13, OPRE Issue Brief No. 2). Washington, DC: Prepared by Child Trends for the Office of Planning, Research and Evaluation (OPRE), Administration for Children and Families, U.S. Department of Health and Human Services. http://www.acf.hhs.gov/programs/opre/resource/multiple-purposes-for-measuring-quality-in-early-childhood-settings

Zaslow, M., Tout, K., & Martinez-Beck, I. (2010). Measuring the quality of early care and education programs at the intersection of research, policy, and practice (Research-to-Policy, Research-to-Practice Brief OPRE 2011-10a). Washington, DC: Office of Planning, Research and Evaluation (OPRE), Administration for Children and Families, U.S. Department of Health and Human Services. http://www.acf.hhs.gov/programs/opre/cc/childcare_technical/reports/quality_measures.pdf

Zellman, G. L., Brandon, R. N., Boller, K., & Kreader, J. L. (2011). Effective evaluation of quality rating and improvement systems for early care and education and school-age care (Research-to-Policy, Research-to-Practice Brief OPRE 2011-11a). Washington, DC: Office of Planning, Research and Evaluation (OPRE), Administration for Children and Families, U.S. Department of Health and Human Services. http://www.acf.hhs.gov/programs/opre/cc/childcare_technical/reports/quality_rating.pdf 

Zellman, G., & Perlman, M. (2008). Child-care quality rating and improvement systems in five pioneer states: Implementation issues and lessons learned. Santa Monica, CA: RAND Corporation. http://www.rand.org/content/dam/rand/pubs/monographs/2008/RAND_MG795.pdf