Pilot Test of the Instrument
A small pilot test (n = 28 students, parents, and educators participating in a summer AEL project conference) was conducted in 1996, with encouraging results. The sample resembled the membership of a typical school community, and the results indicated that the instrument can be used with people whose experience and involvement in a professional learning community vary widely.
It is important to assess the reliability, or consistency, of an instrument. Two types of reliability were examined: internal consistency (e.g., Cronbach's Alpha) and stability (test-retest). For the pilot test, the Cronbach's Alpha reliability for the total of the 17 items was .92. There is general agreement that .75 or above indicates adequate internal consistency.
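The report does not show the computation itself; as a sketch, Cronbach's Alpha can be computed from a respondents-by-items score matrix using its standard formula. The data below are illustrative only, not the pilot data:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's Alpha for a (respondents x items) score matrix.

    alpha = (k / (k - 1)) * (1 - sum of item variances / variance of totals)
    """
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of respondent totals
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Illustrative data only: 6 respondents rating 4 items on a 1-5 scale.
scores = np.array([
    [4, 5, 4, 5],
    [3, 3, 2, 3],
    [5, 5, 5, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
    [1, 2, 1, 2],
])
print(round(cronbach_alpha(scores), 2))  # prints 0.96
```

Alpha rises when items covary strongly relative to their individual variances, which is why a one-dimensional instrument such as this one can reach values above .90.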
Test-retest reliability measures stability over time; for the 15 participants who could be matched by individual ID number, the test-retest reliability was .94. The correlation of the total score of this instrument with the total score of a school climate instrument titled the "School Climate Questionnaire" (Manning, Curtis, & McMillen, 1996), deemed to assess similar characteristics, was .82.
This pilot test of the instrument in the AEL region with a small heterogeneous group suggested that the instrument possessed psychometric properties sufficient to continue its use, but a field test with a larger sample of schools was required.
Field Test of the Instrument
The field test was designed with three objectives for study:
- to assess the reliability of the professional learning community instrument,
- to assess the validity of the professional learning community instrument, and
- to draw conclusions about its use in educational improvement efforts at the school level.
The sample for the study included all the teachers in 21 schools in AEL's four-state region who completed and returned the instrument. The schools volunteered to participate in the study; no external rewards or incentives were offered. Schools were usually nominated by the building principal or another contact person familiar with the school and its staff. A total of 690 teachers completed and returned the instrument.
The field test schools were in Kentucky, Tennessee, Virginia, and West Virginia. The schools represented the elementary level (n = 6), middle/junior high (n = 6), and high school (n = 9). The schools' student enrollment ranged from a low of 205 to a high of 1,200. The percentage of students on free or reduced-price lunch in the 21 schools ranged from a low of 12% to a high of 39%, with a mean of 22.5%.
A subsample of teachers in four large high schools in Tennessee was involved in the AEL project noted above (the four high schools were among the 21 schools in the total sample). These teachers also volunteered to participate in the concurrent validity and stability (test-retest) reliability analyses by (1) completing a school climate instrument at the same time and (2) writing an individual identification number on their instruments for purposes of the retest. The numbers of teachers in the four high schools were 53, 57, 61, and 60.
The four high schools are in the same district. The district's student population is 99% Caucasian, with 13% on free or reduced lunches. It is reported that 64% of these high school students are college-bound, a figure based on the percentage of the 1996 graduating class that enrolled in two- or four-year colleges.
Finally, in addition to being used in the 21 AEL-region schools in the field test, the instrument was administered as part of the field test to the staff of a school known from previous research to be operating as a professional learning community (the school referred to in the first section of this paper). This school, a "known group" for the construct validity analysis, is an urban school of 23 teachers and about 400 students in the New Orleans school district. Nineteen copies of the instrument were returned to AEL for the "known group" analysis, but not every teacher completed every item.
Analyses of the instrument began with a file of the 690 teachers in the 21 schools, with files of data from the 4 high schools, and with the file of the "known group." The analyses of these files are presented below in paragraphs describing the descriptive statistics, the reliability analyses, and the validity analyses. All of the analyses were completed at AEL, using the SPSS statistical analysis software package.
Descriptive analysis of the 690-case file was the first step completed. All of the descriptive statistics for the 17 individual instrument descriptor items and the total score were computed. Next, those same descriptive statistics were computed by school level — elementary, middle/junior high, and high school. Then, as one measure of the usability of the instrument, these same descriptive statistics were computed for the 21 different schools in the field test.
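As a sketch of this descriptive step, total scores can be summarized overall and then by grouping variable (school level, or school). The column names and values below are hypothetical, not the field-test data:

```python
import pandas as pd

# Illustrative data only: total instrument scores tagged by school level.
df = pd.DataFrame({
    "school_level": ["elementary", "elementary", "middle", "middle",
                     "high", "high", "high"],
    "total_score": [72, 68, 61, 65, 55, 58, 60],
})

# Descriptive statistics for the full file.
overall = df["total_score"].agg(["mean", "std", "min", "max"])

# The same statistics computed by school level (and, in the study,
# again by individual school).
by_level = df.groupby("school_level")["total_score"].agg(["mean", "std"])
print(overall)
print(by_level)
```

The same `groupby` call with a school identifier column would produce the per-school means used to compare the 21 faculties.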
Based on the descriptive statistics from the instrument with 21 schools in the AEL region and using mean scores, the instrument does differentiate among all the schools. When the schools are subgrouped into three levels — elementary, middle/junior high, and high school — the instrument also differentiates the school faculties in terms of their development as professional learning communities.
Reliability analyses consisted of two types — internal consistency and stability (or test-retest).
First, the internal consistency reliability coefficient, using Cronbach's Alpha formula, was computed for the total instrument. On the main file of 690 cases (although not all teachers completed all items), the Alpha reliability coefficient was .94. Next, the instrument's Alpha reliabilities were computed for the 21 individual schools in the field test, to assess reliability at the level of intended use: the individual school. These Alphas ranged from .62 to .95, with one in the .60s, none in the .70s, seven in the .80s, and 13 in the .90s.
The instrument yielded satisfactory internal consistency (coefficient Alpha) reliabilities for the total instrument in the field test. These satisfactory Cronbach's Alpha reliabilities were evident at both the full group and the individual school level. There was no pattern in the Cronbach Alpha reliabilities by the three levels — elementary, middle/junior high, and high school.
Second, the stability (test-retest) reliability coefficient was computed with the subsample of four high school faculties in Tennessee. Because of a problem in matching unique identification numbers, the number of usable cases was low (n = 23). Even though it was computed on a smaller-than-ideal subsample, the stability coefficient for the total instrument score (.6147) was marginally satisfactory; the value could increase, or decrease, with a larger sample.
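The matching problem described above can be sketched as follows: only respondents whose self-reported ID appears in both administrations can be paired, and the stability coefficient is the correlation of their paired totals. The IDs and scores here are invented for illustration:

```python
import numpy as np

# Illustrative data only: total scores keyed by self-chosen ID numbers
# from two administrations of the instrument.
test1 = {"017": 62, "105": 48, "233": 71, "310": 55, "452": 66}
test2 = {"017": 60, "233": 69, "310": 58, "452": 64, "999": 50}

# Only IDs present in both administrations can be matched; mismatched
# or missing IDs shrink the usable n, as happened in the field test.
matched = sorted(set(test1) & set(test2))
x = np.array([test1[i] for i in matched], dtype=float)
y = np.array([test2[i] for i in matched], dtype=float)

# Test-retest (stability) coefficient: Pearson r of the paired totals.
r = np.corrcoef(x, y)[0, 1]
print(len(matched), round(r, 2))  # prints 4 0.96
```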
Validity analyses consisted of three types — content, concurrent, and construct (two methods).
First, content validity (checking that the content is appropriate) was assessed at three stages: during the development, early review, and modest reformatting of the instrument. In the first stage, the content of the five dimensions was established by the author from her review of the educational and business/corporate literature (Hord, 1997), plus her field research with southwest U.S. schools that functioned as professional learning communities. The second stage of the content validity assessment was conducted by three AEL staff as they independently reviewed the five dimensions and 17 descriptors. They modestly reformatted the instrument after reaching consensus on wording to gain additional clarity and consistency. AEL sent the reformatted instrument to the author, and the third stage of content review was completed when the author assessed the minor word changes and confirmed that the reformatting was consistent with the original intentions for the instrument. Based on the three stages of the review of the items in the instrument, the instrument was judged to possess sufficient content validity for its original intention of measuring the concept of a community of learners within the professional staff of K-12 schools.
Second, concurrent validity (comparing the instrument with another purporting to measure the same concept) was assessed by administering a school climate instrument. With respect to the concurrent validity, the instrument possesses satisfactory correlation with the school climate instrument used in the field test with a subsample (n = 114) of four high school faculties (the correlation between the 17-item field test instrument and the 10-item school climate instrument was .7489, significant at the .001 level).
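A concurrent validity check of this kind reduces to correlating two total scores from the same respondents and testing the correlation's significance. This sketch uses simulated scores (all values and the seed are illustrative assumptions, not the field-test data):

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n = 114  # size of the four-high-school subsample

# Illustrative data only: paired totals on the 17-item instrument and
# a hypothetical 10-item school climate instrument.
plc_totals = rng.normal(60, 8, size=n)
climate_totals = 0.75 * plc_totals + rng.normal(0, 5, size=n)

# Pearson r with its two-sided p-value for H0: rho = 0.
r, p = pearsonr(plc_totals, climate_totals)
print(round(r, 2), p < 0.001)
```

A substantial positive r with p below .001, as reported for the field test (.7489), indicates that the two instruments rank respondents similarly.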
Third, construct validity asks the question, Does the instrument measure the psychological construct called "professional learning community"? The "known group," noted earlier in this paper, was the first method used for construct validity analysis. The scores of the teachers in the school that was known from previous research to be functioning as a professional learning community were compared to the scores of the 690 teachers from the 21 schools in the field-test database.
The 21 AEL schools were volunteers, and no assumptions were made as to whether they were professional learning communities; no data were available to support or refute that. The purpose of this construct validity check was to compare, with a t-test, the scores of the known-group teachers with the scores of all other teachers in the main database. The higher scores of the teachers in the school known to be a professional learning community differed significantly (p < .0001) from those of the teachers in the field test. Using the known-group methodology, the instrument appears to represent the construct of a mature professional learning community.
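The known-group comparison can be sketched as an independent-samples t-test between the two files. The group means, standard deviations, and seed below are invented for illustration; only the group sizes echo the study:

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)

# Illustrative data only: total scores for the "known group" (n = 19
# returned instruments) versus the main field-test file (n = 690).
known_group = rng.normal(75, 5, size=19)
field_test = rng.normal(60, 10, size=690)

# Welch's t-test (unequal variances), appropriate for very unequal n.
t, p = ttest_ind(known_group, field_test, equal_var=False)
print(t > 0, p < 0.0001)
```

A large positive t with a very small p is what the known-group method requires: the group known to possess the construct scores reliably higher than the undifferentiated comparison group.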
Last, factor analysis, the second method of construct validity analysis, included unconstrained principal components analysis followed by both varimax and oblique rotations of the data. The final solution emerged from an iterative process of comparing pre-rotation with post-rotation results and returning to the descriptive statistics on the scores, including their distributions. Based on the factor analysis results, the 17-item instrument appears to represent a unitary construct of a professional learning community within schools.
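The unitary-construct finding can be sketched with an unconstrained principal components analysis of the item correlation matrix: one dominant first eigenvalue, with no second eigenvalue above 1, suggests a single factor. The simulated data below assume one latent factor driving all 17 items (loadings and seed are illustrative, not the study's results):

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 690, 17

# Illustrative data only: one latent factor plus item-specific noise.
latent = rng.normal(0, 1, size=(n, 1))
items = 0.8 * latent + 0.4 * rng.normal(0, 1, size=(n, k))

# Unconstrained principal components of the 17 x 17 correlation matrix.
corr = np.corrcoef(items, rowvar=False)
eigvals = np.linalg.eigvalsh(corr)[::-1]   # eigenvalues, descending
explained = eigvals / eigvals.sum()        # proportion of variance

# A dominant first component and a second eigenvalue below 1 are
# common signs of a unitary construct.
print(round(explained[0], 2), eigvals[1] < 1.0)
```

With truly one-dimensional data, varimax or oblique rotation of a one-component solution changes nothing, which is consistent with the iterative before/after-rotation comparison described above.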