Drawing on this professional knowledge base, we developed a matrix to guide the design of a prototype tool for evaluating the e dimension of the early childhood e-book. The matrix provided a framework for organizing the tool (categories and elements) and specified criteria for judging the electronic environment. We piloted Quality Rating Tool 1 with a set of internal raters, consisting of four teachers, and with external raters drawn from our research team.
Results from Quality Rating Tool 1 indicated that we were on the right path, but one piece was still missing: reliability. The tool presented challenges in establishing inter-rater reliability, which we traced to fuzziness in the directions and the technical nature of the vocabulary. So, what did we do? We took these findings and set to work further refining the tool for evaluating the e dimension of the early childhood e-book.
In Quality Rating Tool 2, we continue to focus on the three main categories from the previous tool; however, we have shifted to a branching model to clarify and further probe the elements within each category. This multi-layered tool now focuses on providing more explicit definitions of the categories and “teacher friendly” explanations for each criterion.
In the week leading up to my AERA 2011 Annual Meeting presentation, I will be sharing parts of the presentation on Raised Digital. The paper I will be presenting focuses on the development of my e-Book Quality Rating Tool and is part of a symposium titled E-Books as Instructional Tools in Preschool Classrooms: Promises and Pitfalls. The symposium will take place Saturday, April 9, 2011, from 2:15 to 3:45 pm at the Doubletree/Madewood B.