By Sara Sawtelle
The Keys to Implementing New Technology
While Providing Evidence that Technology is Successful
Proving that technology works is not as simple as proving that a new vendor for art supplies is more cost effective. Technology effectiveness requires both the right software and the right implementation. Just having the software is not enough. Proper planning, training, leadership, support, pedagogy, and software use, along with many other factors, will determine whether the technology works. When so many factors can affect the outcome, we need a proven protocol, as almost every administrator, principal, and teacher will agree. Still, many technology implementations are less than successful. As a scientist and a teacher, I believe that we can borrow from the Scientific Method and from some common-sense approaches to increase our chances of success.
Most people think of the Scientific Method as a set of five steps to use when creating a science experiment. But it has broader application to any situation where the steps of gathering observable, measurable evidence, using good objective reasoning to evaluate outcomes, and then repeating the cycle are warranted. Such a process is essential to show the strengths, weaknesses, and return on investment for any program a school chooses to use.
The Software and Information Industry Association (SIIA) has released a checklist of 10 essential concepts that underlie all effective implementations. (Editor's Note: A list of the 10 concepts begins on page 14. The list is available at http://www.siia.net/education/pubs/pp_Checklist.pdf.) These concepts embody the things I try to instill in everyone I help with an implementation. In addition, SIIA has a toolkit to help schools work through the implementation of any software, administrative or instructional (http://www.siia.net/education/foreducators/toolkit_0407.pdf). While we can list the concepts, it's easiest to see how they apply through a story. This story involves a fictitious school district, Valley District, which has four elementary schools, two middle schools, and one high school. As the story begins, two fourth grade teachers at one of the elementary schools discover a new reading software program. The teachers feel strongly that this program would be better than the one currently in use, which dates from the 1980s. They believe this new program would really reach the students, getting them further engaged in their own learning. So the teachers (Ms. Crum and Mr. Pappelwith) approach their principal about switching to it. Principal Leaven likes the idea but reminds them that the school board demands evidence that something truly benefits the students of Valley District before expenditures will be authorized. Ms. Crum and Mr. Pappelwith immediately begin talking about what they can do to meet this requirement.
Stage 1: Create the Hypothesis
Principal Leaven, Ms. Crum, and Mr. Pappelwith meet with Mrs. White, the district elementary school director of curriculum and instruction. The four decide to move forward. They know they need to develop a plan to determine if the new reading program will help the students learn more effectively than the current program.
Stage 2: Planning the Experiment
They recognize that planning is not as simple as one person sitting at a desk and mapping it all out; it will take a committee of the right people asking the right questions. They decide to solicit questions from others to make sure they haven’t missed any important considerations. Once the questions are on the table, the committee will need to come to an agreement about how to handle those questions and keep good records of the process.
Another important part of the planning stage is leadership (SIIA concept 5). The principal and teachers are aware of the need to make sure that the project's leaders have the ability and willingness to change things as they need to be changed. In a district setting, this could be several people, including a district-level leader and a school leader.
So our Valley District Reading Program committee begins asking questions. They each talk about the proposed program in various meetings. Who should we involve in this process? What does effective mean? How do we define it? What teachers/classes should be involved? How is it recommended to be used? How can we integrate it into the class? What can we use to evaluate the results?
These lead to more questions. How big do we need to start? Who are the stakeholders? What group can we use as a study group? What group can we use for a control group? Is this the right group to be involved in the planning? Who will be strong leaders for this project?
Because Ms. Crum and Mr. Pappelwith are both teachers in the same school and the same grade, they are the obvious school leaders and those who will be most committed to working through any difficulties that they run into. The district decides to implement the new software in one of the two classes, which will allow them to compare the two approaches. Although the experiment does not use strictly randomized control and study groups, it will allow for a comparison with known parameters. As long as they don't make claims beyond those parameters, the setup should allow for a good initial investigation and may provide information to move forward. It also provides an effective way to make adjustments. It is easier to adjust with a smaller number of students, figure out the best approach with one class, and then expand to others.
The district curriculum and instruction director is also a good choice for leadership at the district level. Because this is a software program, we need to get the IT department involved and make sure we communicate with the software vendor. We don't want any technological problems to interfere with the project. The people who need to stay closely informed are the other teachers in the district, the principal, the students, and the parents.
The project team agrees that the communications to the stakeholders must cover why they will be using it with one class and not the other. The students, parents, and other teachers need to see that the plan is well conceived to demonstrate the program's effectiveness by minimizing the number of things that are being changed. As a school and a district, the team will be able to use this project to broaden the education of the students about good decision making in life.
Now that we have a broad-stroke plan and have determined who the stakeholders are (SIIA concept 3), we can address some other questions, such as what effective means and how we will evaluate the program (SIIA concept 4). Regardless of what the evaluation measures are, the team has to be prepared to look objectively at what the evaluation means.
As it turns out in Valley School District, records are plentiful. The students at every grade are assessed by a district exam at the beginning and end of each year as well as by state exams. Test scores and other student data can be compared between the two classes. We can also add more subjective data, such as student and teacher surveys. For each evaluation method determined to be important, the planning committee wisely sets criteria that must be reached for the new software implementation to be considered successful. Knowing the criteria ahead of time will prevent the results from clouding judgment when evaluating the successes of the program, and will help determine which results are most important to consider, particularly because this study will only provide a comparison at one school without a strictly randomized control.
When it comes time to install the software, the team looks carefully at the IT infrastructure of the school (SIIA concept 6: environment and equipment must match the requirements). Close communication with the IT department and the software vendor will be critical to the experience. This ensures that the computers meet the design specs, that access is appropriate for the software, and that the teachers follow the software designer's guidelines. Making sure that the software is used as recommended adds a level of validity to the project and the evaluation.
Stage 3: Preparation
Now the project team is ready to begin training the stakeholders (SIIA concept 7: adequate training). The team schedules training for Ms. Crum, Mr. Pappelwith, and Mrs. White. The training enables them to learn as much as possible about the software, the project, and the expected outcomes. Ms. Crum and Mr. Pappelwith hold a meeting with the other teachers in their school and also one with the parents, making everyone aware of what they are trying to accomplish. At the training, they also address student prerequisites for the software to be successful (SIIA concept 8: prerequisite knowledge and skills). This could include computer use, but also includes what the software expects the students to know. This will help with the success and the full integration into the curriculum itself. They know that for the software to be effective it needs to integrate into what they are doing as seamlessly as possible.
At this point, Valley School District is ready to begin the implementation!
Stage 4: Testing the Predictions
Documentation is important throughout the project, and the implementation is no exception. All teachers know that it is rare for something to work according to the initial plan; making adjustments during the pilot and recording those adjustments are critical to being able to ramp it up later. Ms. Crum's class is the study class and Mr. Pappelwith's is the control, but they are not acting as islands. They will communicate frequently throughout the implementation as things get adjusted and new discoveries are made; this will help ensure the ultimate success. Ms. Crum works closely with Mrs. White to make sure that everything is going smoothly and any snags are handled expeditiously.
Stage 5: Adjustments
Now the school year is over, and Ms. Crum is very excited to see the outcome. She really thought that the number of students engaged in the material this year was higher, but will the evaluation bear witness to that effect? How are the scores? What is the comparison between the classes at all levels? Are there additional adjustments to be considered? What is next?
The project team gathers the data, and the assessment begins. The study results show a modest increase in reading scores with the new program. Ms. Crum and Mr. Pappelwith did a good job of documenting what they did. Data on behavior and attendance, as well as more subjective survey results from the students in both groups, show that switching really is worth considering. The material does seem to engage the students better, and it is more relevant to their lives. In addition, teacher documentation showed it was easier to integrate into the reading curriculum. All their evaluative measures seem to meet or beat their previously set criteria.
After looking at the results, the team decides that a larger implementation is warranted and makes the recommendation to the district. The district then decides to use the new program in two of the four elementary schools next year, with a plan to expand to the remaining schools the year after.
Each of these implementations will need to be monitored like the pilot to make sure that as the number of variables (such as students and teachers) increases, the methodology still works. The best implementation includes a continual review of effectiveness. In fact, the team needs to bear in mind that the evaluation never really ends. That includes making sure that the software continues to be used as recommended and that it remains integrated into what the teachers are doing in the classroom.
The results can now be shared with the stakeholders and used to build the next stage of implementation and the evolution of best practices. The district can proceed with confidence in rolling out the new program based on evidence within the boundaries of the investigation it performed, a clearly documented process, and the engagement of all the stakeholders. In short, the school has modeled the Scientific Method for its students in the choice and implementation of the learning programs they experience and has demonstrated that the software works for them in ways that are meaningful to everyone.
Sara M. Sawtelle, PhD, taught chemistry and served as an instructional technology administrator. As director of scientific affairs for Learning Enhancement Corporation, she conducts research and helps clinicians and schools implement BrainWare Safari, a program that develops the cognitive skills most critical for learning in an engaging video-game format. She was a contributing author for the SIIA's Software Implementation Toolkit for Educators.
June/July 2008 | Learning & Leading with Technology 15
© BrainWare Learning Company | All Rights Reserved.