Structure of the Instrument

The initial instrument was titled "Descriptors of Professional Learning Communities" and consisted of 17 descriptors grouped into five major areas or dimensions identified from the literature review (Hord, 1997). The five dimensions were:

  1. the collegial and facilitative participation of the principal, who shares leadership (and power and authority) and decision making with the staff (with two descriptors);
  2. a shared vision that is developed from the staff's unswerving commitment to students' learning and that is consistently articulated and referenced for the staff's work (with three descriptors);
  3. learning that is done collectively to create solutions that address students' needs (with five descriptors);
  4. the visitation and review of each teacher's classroom practices by peers as a feedback and assistance activity to support individual and community improvement (with two descriptors); and
  5. physical conditions and human capacities that support such an operation (with five descriptors).
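For readers who want a concrete picture of this layout, the following minimal Python sketch records the distribution of descriptors across the dimensions. The shorthand dimension labels are our own, not SEDL's; the descriptor counts are taken directly from the list above and sum to the instrument's 17 descriptors.

    # Illustrative sketch only: the shorthand dimension labels are ours;
    # the descriptor counts come from the list above.
    descriptors_per_dimension = {
        "shared and facilitative leadership by the principal": 2,
        "shared vision focused on student learning": 3,
        "collective learning applied to student needs": 5,
        "peer review of classroom practice": 2,
        "supportive physical conditions and human capacities": 5,
    }

    # The five dimensions together account for all 17 descriptors.
    assert sum(descriptors_per_dimension.values()) == 17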

The 17 descriptors were organized to illuminate the dimensions and, as noted above, were distributed unevenly across the five dimensions. Each descriptor was designed as a series of three statements along a continuum, ranging from the most desirable or more mature practice of the descriptor to the least desirable or less mature. For example, under the first dimension noted above, "collegial and facilitative participation of the principal, who shares leadership . . . through inviting shared decision making from the staff," one descriptor is presented as the following three statements along a continuum:

  • Administrator(s) involves the entire staff.
  • Administrator(s) involves a small committee, council, or team of staff.
  • Administrator(s) does not involve any staff.

These statements were intended to differentiate the high, middle, and low points of the descriptor along a five-point scale. The format and layout of the instrument required the respondent to read all three indicators for each of the 17 descriptors and then mark the response scale. This format demanded more mental processing than the usual selected-response, Likert-type instrument, but it contributed much to the instrument's usefulness as a screening or filtering device (see Figure 1).
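To make the three-anchor format concrete, here is a minimal Python sketch of a single descriptor, using the shared-leadership example above. The class and method names are our own illustrative choices, not part of SEDL's instrument; the anchor statements are quoted from the paper, and we assume the written anchors correspond to points 5, 3, and 1 of the five-point scale, with points 4 and 2 falling between them.

    from dataclasses import dataclass

    @dataclass
    class Descriptor:
        """One of the 17 descriptors: three anchor statements read against a
        five-point scale (an assumption here: anchors sit at points 5, 3, 1)."""
        dimension: str
        high: str    # most desirable / more mature practice
        middle: str  # intermediate practice
        low: str     # least desirable / less mature practice

        def prompt(self) -> str:
            # The respondent reads all three anchors, then marks one point 1-5;
            # points 4 and 2 fall between the written anchors.
            return "\n".join([
                f"5  {self.high}",
                f"3  {self.middle}",
                f"1  {self.low}",
            ])

    shared_leadership = Descriptor(
        dimension="Collegial and facilitative participation of the principal",
        high="Administrator(s) involves the entire staff.",
        middle="Administrator(s) involves a small committee, council, or team of staff.",
        low="Administrator(s) does not involve any staff.",
    )

    print(shared_leadership.prompt())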

Figure 1: Format of the Instrument

The figure shows two sample descriptors as they appear in the instrument, each followed by a five-point response scale:

  2. Staff shares visions for school improvement that have an undeviating focus on student learning and are consistently referenced for the staff's work. (Five-point scale indicating the staff's level of agreement about visions for improvement.)

  3. Peers review and give feedback, based on observing each other's classroom behaviors, in order to increase individual and organizational capacity. (Five-point scale indicating how often staff visit and observe their peers' classrooms.)

A Note to Readers

As noted earlier in this paper, the instrument was shared with staff at AEL. Study, conversation, and other interaction between the SEDL qualitative researcher and the AEL quantitative evaluator resulted in a working agreement: the SEDL instrument would be made available and used extensively in diverse school settings, and AEL would conduct the statistical processing to test the instrument and assess its psychometric properties.

At this point it is important to provide a note to our Issues readers. The purpose of this paper is awareness: to let our colleagues know that the instrument exists. Because of the breadth of our audience, an effort has been made:

  1. to keep the language and terminology reasonably understandable to those who are not experts in instrument design and testing, but also

  2. to present information about the instrument in such a way that it is credible to those who are instrument aficionados and quantitatively oriented. The challenge has been to serve these two purposes.

Those readers who are not interested in the psychometric testing of the instrument may skip to page 7 for the Conclusions about the statistical tests. Those who are keener on psychometrics should read on, understanding that it is not the intention of this Issues paper to present the full range of procedures and results from the field test of the instrument. Those interested in the full report may see Meehan, Orletsky, and Sattes (1997). Those concerned about the robustness of the field test, its procedures, and its reporting should review FY97 Report: External evaluation of the Appalachia Educational Laboratory (1998).

In that report, external evaluators scored the AEL field test study with two ratings of "outstanding" (on Utility and Accuracy) and two ratings of "satisfactory" (on Feasibility and Propriety), based on national evaluation standards.



Published in Issues ...about Change Volume 7, Number 1, Assessing a School Staff as a Community of Professional Learners (1999)