J Sci Educ Technol (2014) 23:668–681 DOI 10.1007/s10956-014-9493-9

Factors that Influence Science Teachers’ Selection and Usage of Technologies in High School Science Classrooms

Noemi Waight • Ming Ming Chiu • Melinda Whitford

Published online: 19 April 2014. © Springer Science+Business Media New York 2014

Abstract This study contributed to our understanding of those factors that serve as predictors of science teachers’ selection and use of technologies and, more specifically, how selection and usage were realized among teachers of different science disciplines. Notable descriptive statistics were examined, and we tested an explanatory model of how demographics, school context, pedagogical approaches and professional development (PD) influenced the likelihood of a teacher using a tool via a multilevel cross-classification-ordered logit analysis (Goldstein 1995). The findings revealed that science teachers were more likely to use hardware than software; more specifically, this included instructional tools (i.e., SMARTboards, clickers) and laboratory tools (probeware). Differences in teachers’ use of tools were largely due to differences in tools as opposed to differences in teacher characteristics. Use of a tool was more likely by teachers who taught physics, who taught via inquiry, or who had more PD with a tool. These findings have implications for how we conceptualize selection and usage of technologies that enter the science education pipeline; which tools become sustainable in the science classroom and how technological take-up differs across science disciplines.

Keywords Technology selection and usage · Technological tools · High school · Science education · Evolution of technology

Electronic supplementary material The online version of this article (doi:10.1007/s10956-014-9493-9) contains supplementary material, which is available to authorized users.

N. Waight (corresponding author) · M. Whitford: Department of Learning and Instruction, University at Buffalo, SUNY, 513 Baldy Hall, Buffalo, NY 14260, USA; e-mail: [email protected]; [email protected]
M. M. Chiu: Department of Learning and Instruction, University at Buffalo, SUNY, 564 Baldy Hall, Buffalo, NY 14260, USA; e-mail: [email protected]

Introduction

Reform documents (NRC 2012) and science education research reflect the rapid changes and expansion of technological implementation in the science classroom. These changes have in part been spurred by changes in popular culture and reform efforts that promote technology as an application of science and as a medium to improve understanding of the scientific process (NRC 2012).

In science education, much of the research has focused on studies that document the teaching–learning reciprocity and, more broadly, the affective impact of technological implementation and use. For the former, studies have focused on the role of classroom agents and associated pedagogical and cognitive processes (e.g., Bell and Trundle 2008; Sorensen et al. 2007); conceptual understandings of content (e.g., Adadan et al. 2010); and affordances and constraints of technologies (e.g., Roth et al. 1996). In the affective domain, studies have addressed student motivation and learning in science and technology (Baram-Tsabari and Yarden 2005; Mistler-Jackson and Songer 2000); the potential of robotic telescopes to inspire interest in science (Beare 2007); teacher beliefs, attitudes and intentions of use of computer simulations (Zacharia 2003); and science teachers’ orientations and beliefs of technology-enhanced


tools in the context of professional development (PD) (Campbell et al. 2014). These studies have contributed significantly to the nuanced understandings of technology implementation and classroom practice; however, amidst these narratives, very little has been documented about how science teachers select these technologies and the extent of use of these tools. More specifically, few studies have examined which factors influence science teachers’ selection and usage of technologies, and how selection and usage is realized among different science disciplines. This exploratory study addressed this gap.

Zhao et al. (2002) highlighted that research related to technological implementation has uncovered both winners and losers. Conceptualizations of winners and losers were based on the effectiveness of technological tools—more or less effective than other tools and traditional instruction. However, those tools that emerged as winners were confronted with rapid technological advancement that “made most of the winners obsolete rendering early findings largely irrelevant to today’s research and development in educational technology” (p. 483). This discussion was relevant for this study because notions of winners and losers draw attention to those technological tools that are used and implemented versus tools that do not gain traction and instead become discarded. However, as noted, even winners are rendered obsolete with time. The implications suggest that we closely examine the life cycle of these technological tools in classrooms and identify those factors that inform how teachers cope with selection and use.

In response, this exploratory study examined those factors that influenced science teachers’ selection and usage of technological tools for science teaching and learning. In addition, we also examined the patterns of selection and usage across science disciplines at the high school level (i.e., chemistry, biology, physics and earth science).
The following questions guided this study: Which technological tools do science teachers select and use? What factors influence how science teachers select and use technological tools in science classrooms?

Basalla (1996) explained that “the emergence of innovation and its subsequent selection and replication” (p. 187) are critical stages of the life cycle of technological tools. Surely, understanding the factors that influence technological selection and usage informs the stability and sustainability of particular kinds of tools for different science disciplines at the high school level. Thus, the contributions of this study are significant for several reasons. First, technological tools are experiencing shorter life cycles, so it is important to understand whether these tools are represented in the repertoire of tools that are actually used by science teachers. What is more, understanding selection and usage provided a context to evaluate if and how reform expectations associated with


implementation are realized across various science disciplines. Second, this study uncovered that there are distinct differences with selection and use of technological tools among different science disciplines. This offered an opportunity to examine if and how teacher education and PD addressed these specific needs. Finally, the findings of this study revealed that the major emphasis of selection and use was limited to instructional (e.g., SMARTboard) and laboratory tools (e.g., probeware¹) and associated PD was rare. Surely, it would be important to investigate whether these patterns are limited to “technology-starved” locales or whether these patterns are representative of actual practice in secondary science classrooms.

Theoretical Framework

Evolution of Technology

The theoretical framework that guided this study drew from Basalla’s (1996) discussion of the evolution of technology, where processes of selection, discard and extinction determine sustainability and replication of technological tools in context. To chart the evolution of technologies, Basalla (1996) identified how the evolving nature of human-made technologies is characterized by (a) technological diversity, (b) continuity, (c) novelty and (d) selection.

Notions of technological diversity—a broad range and volume of things created by humans—have led society to believe that diverse artefacts are created to help cope with physical phenomena and are necessary for survival and the basic necessities of life. In this context, the science education community has insisted that technological tools are necessary to improve teaching and learning in the science classroom. Since people in different epochs view different technologies as necessities, necessity is a constantly moving, unstable concept that differs across people, generations, social classes and cultures. A utility for one group may function as a luxury for another group. Needs are fundamentally contextual and embedded within value systems.

Notions of continuity target widely held misconceptions that technological change is discontinuous and owing to “one-man” creations and inventions; perceptions that ultimately dismiss the complex nature of technologies and associated interactions. Using multiple case studies (e.g., stone tools, the cotton gin, the steam engine), Basalla (1996) illustrated how technological development is in fact interconnected. The relevance for science education is visible in

¹ Probeware in this context interfaced with computers, provided access to real-time data and created opportunities for inquiry-based learning (Tinker, http://concord.org/sites/default/files/pdf/probeware_history.pdf).



the shifts that promote technological tools in the science classroom. Over time, we have witnessed periods when the radio, television, overhead projector, computer-based technologies, microworlds, hypertext media, handheld devices and, more recently, smart technologies (e.g., learning tablets) offered infinite possibilities for teaching and learning. Simultaneously, it has also been documented that euphoria for the potential of these tools is short lived. As we examine shifts of selection and use, it would be very important to understand how different technologies exhibit evolutionary continuity. In other words, for each new technological tool, are users (teachers and students) merely updating existing knowledge, or are they subjected to new learning curves? The implications for understandings of continuity of technologies in science education suggest that it would be important to examine which technological tools share evolutionary similarities or divergent lineage. Teachers might be risk averse and rely on their prior knowledge when making decisions to use newer versions of older tools. Or, teachers might adopt radically new tools from different lineages to sharply increase their productivity (often requiring substantial learning to use them properly).

Novelty enables production of new artefacts, while selection determines the transition of novel artefacts into new cultures, thereby creating opportunities for further innovation. Social and cultural factors influence selection. In this respect, Basalla documented several case studies showing how cultural factors influenced assimilation of innovations into cultural structures. For example, it is well known that some of the great changes of the Renaissance period in Europe—printing, gunpowder and the magnetic compass—were initially Chinese products. However, ideological factors² associated with the cultural values of the Chinese elite stagnated scientific and technological achievements.
Thus, to expect that artefacts will have the same meaning and influence in other contexts is misleading. The relevance of this analysis for science education is to understand how social and cultural values of science teaching and learning influence teacher selection and use of technologies. Specifically, it is important to understand how selection supported novelty of innovation. Finally, it is of relevance that Basalla (1996) also introduced selection based on fads and fashion: passing fads that prop up specific artefacts over others. This process involved moving on to the next new technology without ever resolving the full impact—for better or ill—of innovation. Fads and fashion as short-lived periods of engagement were also tied to values and ideologies. For this study, it would be important to understand whether selection and use is related to fads and fashion or involves

² See Basalla’s discussion of Joseph Needham’s analysis of Chinese society and government (pp. 174–176).



tools that enjoy greater stability in the science classroom. Surely, it would be very telling to understand how factors that determine selection and use inform how science teachers’ values favor traditional structures or, alternatively, prefer more current technologies.

Science Teachers’ Uses of Technology

Few quantitative studies have examined large samples of science teachers’ experiences with technology. These studies have addressed technology use and knowledge (Odom et al. 2002); discrepant implementation of technology based on existing practice and beliefs (Yerrick and Hoving 1999); teacher beliefs about educational technology in science classrooms (Czerniak et al. 1999); science teachers’ attitudes, computer uses and student learning (e.g., Ng and Gunstone 2003); and the relationship of beliefs, attitudes and intentions in the context of computer simulations and inquiry-based experiments (Zacharia 2003).

Odom et al. (2002) documented large gaps between AETS teachers’ current and desired levels of knowledge and instructional uses of technology. These gaps focused on knowledge of distance teaching, databases, and data collection, use of peripherals and interfaces, and using computers to promote problem solving. Teachers’ beliefs also influenced their use of tools (Czerniak et al. 1999). Teachers’ attitude toward behavior, subjective norm and perceived behavioral control determined behavioral intention, which in turn influenced action or behavior. While teachers had fairly positive attitudes toward educational technology, they felt that the support structures were insufficient, so they were only slightly likely to implement these tools in their own teaching.

Most studies about teacher use of technology have targeted aspects of beliefs, attitudes and usage. What has been fundamentally absent from this scholarly work are studies that have tracked the conditions that impact selection and usage of technological tools in the science education context.
In fact, there has been an absence of studies that have addressed how science teacher preparation, science discipline, pedagogical approaches related to technological use and PD influence science teacher selection and usage of technological tools. This study addressed this gap.

Method

To determine which factors influence teachers’ selection and usage of technological tools, science teachers were asked to complete an online survey. After examining notable descriptive statistics, we tested an explanatory model of how demographics, school context, pedagogical approaches and PD influenced the likelihood of a teacher


using a tool via a multilevel cross-classification-ordered logit analysis (Goldstein 1995).

Recruitment

The target participants for this study were high school science teachers who had used or currently used technologies for science teaching and learning. Participants were recruited via two formats. First, we obtained individual teacher email addresses from 24 high school websites. Second, emails were disseminated to the Science Teachers’ Association of New York State (STANYS) listserv. Over a period of 4 weeks, 153 science teachers responded to the questionnaire.

Participants

The target participants for this study were high school (grades 9–11) science teachers. One hundred and fifty-three teachers representing a northeastern region responded to an online questionnaire. This exploratory, convenience sample was not representative of the population of science teachers. The teachers averaged 15.4 years of teaching experience, and 90 % had a master’s degree. While 83 % of the teachers were tenured, 16 % were untenured (1 % did not respond). Participants’ age ranges were 51 and older (32 %), 41–50 (22 %), 31–40 (31 %) and 30 and under (12 %). Eighty-nine percent (89 %) taught in public schools while 11 % taught in private schools; 35 % were suburban, 32 % rural and 29 % urban (Table 1). A total of 16 % reported some middle school teaching experience. A total of 97 % (n = 148) of the teachers reported their content area; 16 % identified one content area while 84 % of teachers noted that they taught more than one (up to 4) content areas. For example, 39 % of teachers identified two content areas and 29 % of teachers taught three content areas. For specific content distribution, see Table 2.

Table 1 Demographics for science teacher participants

Gender
  Female                                 96
  Male                                   52
  Blank                                   5
Highest level of education completed
  College                                 4
  Teaching certificate                    2
  Masters                               138
  Doctorate                               8
  Blank                                   1
Age (years)
  30 and under                           19
  31–40                                  47
  41–50                                  33
  51 and older                           49
  Blank                                   5
Average years of teaching experience   15.4
Tenured
  Yes                                   126
  No                                     24
  Blank                                   3
Grade levels taught
  High school                           140
  Middle school                          25
Type of school
  Public school                         136
  Charter school                          1
  Private school                         11
  Blank                                   5
School location
  Urban                                  45
  Suburban                               54
  Rural                                  49
  Blank                                   5
Procedures

Science teachers were invited to participate in this study via emails that were forwarded to individual email addresses and the STANYS listserv. Two email reminders were forwarded to the above listserv. More than 75 % of the responses were from teachers who subscribed to the above listserv. The questionnaires were completed online, and the data were organized in an online database. The data were then imported into an Excel file, organized according to the items and coded to reflect the variables identified below. Prior to dissemination, the questionnaire was piloted with a sample of ten participants that included pre-service and in-service teachers and doctoral students. More specific details are provided in the section that outlines the development of the instrument.

Table 2 Frequency of content discipline taught by science teacher participants

Courses taught            General or Regents level   Advanced placement
Biology                             70                       20
Chemistry                           61                       14
Physics                             47                       10
Earth science                       41                        4
Environmental science               19                        7
Forensics                           13
General science                     24

The Instrument

Informed by Basalla’s (1996) theoretical framework and gaps in the science education literature (e.g., Odom et al. 2002), design of the questionnaire followed Scheuren’s (2004) five stages and included both open-ended and closed items. In the first stage, the goal for data collection was identified. Here, we wanted to examine factors that influenced science teachers’ patterns of technology selection and usage in their subject matter and pedagogical contexts. In the second stage, explanatory variables were identified: demographics (e.g., years of teaching experience), teaching practices, perceptions of students (e.g., learning abilities of students) and PD. For the third stage, items aligned with the objectives were developed. For example, items elicited general classroom technology use and specific discipline use (e.g., biology, physics). In addition, these items probed the extent of use (e.g., number of uses per month), how teachers learned and/or were introduced to these tools and whether PD was associated with tool selection and use. Teachers also responded to open-ended items, which further corroborated patterns of selection and usage. For example, teachers inputted their top three tools, and for each tool, they identified length of PD or other training and how PD was disseminated (see Part D of the questionnaire). Stage 4 focused on the mode of data collection; this involved constructing Likert-scale items and follow-up open-ended items to further probe selection and usage. For final dissemination, an online format was more appropriate. Stage 5 addressed optimizing variables. For example, for tool usage, five categories of responses were identified: never used; previously used, but stopped; used less than twice a month; used twice a month or more often; and no access.

The result was a questionnaire organized into four sections: Section A addressed demographic information, while B focused on general classroom usage and associated PD. Section C focused on usage specific to each science discipline (e.g., physics, chemistry), while D was comprised of open-ended items that elicited specific information on tool usage.

Three iterations of the questionnaire were developed prior to final dissemination. In the first iteration, two science education researchers with STEM expertise developed initial questions guided by Basalla’s (1996) theoretical framework and identified gaps in the literature. After these revisions, the second iteration was administered to ten participants (e.g., in-service science teachers and science education doctoral students) in a doctoral seminar. At this stage, feedback on respondents’ understanding of constructs and readability of items was documented. For example, the question related to teacher skill with technology usage was inserted after this round of consultation. The third iteration was disseminated to science teacher participants.

Variables

The major categories of variables were organized to represent the four sections of the questionnaire. Teacher demographics were coded to represent teacher background information. For example, gender was coded as follows: 0: female, 1: male. Questions related to educational level that featured multiple options were coded as follows: 0: college, 1: teaching certificate, 2: masters, 3: doctorate. For Likert-scale items, each statement was coded to reflect the following scale: 1: strongly disagree, 2: disagree, 3: neutral, 4: agree, 5: strongly agree. See examples of the coding scheme in “Appendix 2.”

Similarly, for tool usage, five categories of responses were identified: never used; previously used, but stopped; used less than twice a month; used twice a month or more often; and no access. For example, participants were asked about their use of a SMARTboard in the classroom. A distinct variable was created for each of these options associated with SMARTboards, such as SMARTboards never used; SMARTboards previously used, but stopped; and so on. Each of these variables was then coded as 0: no or 1: yes. This pattern of coding was also applied to the questions associated with tool introduction and PD.

For section D, in one example of an open-ended question, participants manually inputted their top three tools and identified the nature of related PD. Tools were first organized to represent two broad classes: hardware and software. Next, tools were classified based on the types of hardware tools, namely laboratory, instructional and computer-related tools. Software categories were Internet websites, models and simulations, and general instructional software. In addition to the prescribed list, teachers also inputted additional tool choices, which were categorized based on the following coding scheme: 0: left blank, 1: software (internet websites), 2: software (data analysis tools), 3: software (modeling), 4: hardware (learning tools), 5: hardware (instructional tools), 6: hardware (computers). Note that each statement related to PD was coded as 0: no and 1: yes.
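As an illustration, a coding scheme of this kind can be sketched in a few lines of pandas. The column names, category strings and example rows below are hypothetical stand-ins for illustration, not the study's actual survey export.

```python
import pandas as pd

# Hypothetical survey export; names and rows are illustrative only.
raw = pd.DataFrame({
    "gender": ["female", "male", "female"],
    "education": ["masters", "doctorate", "college"],
    "smartboard_use": ["never used", "used twice a month or more often", "no access"],
})

# Demographics: map categories to the integer codes described in the text.
raw["gender_code"] = raw["gender"].map({"female": 0, "male": 1})
raw["education_code"] = raw["education"].map(
    {"college": 0, "teaching certificate": 1, "masters": 2, "doctorate": 3})

# Tool usage: one 0/1 indicator variable per response option, as in the text.
usage_options = ["never used", "previously used, but stopped",
                 "used less than twice a month",
                 "used twice a month or more often", "no access"]
for opt in usage_options:
    col = "smartboard_" + opt.replace(" ", "_").replace(",", "")
    raw[col] = (raw["smartboard_use"] == opt).astype(int)

print(raw[["gender_code", "education_code", "smartboard_never_used"]])
```

Representing each response option as its own 0/1 indicator, rather than a single categorical column, mirrors the per-option variables described above and feeds directly into a regression design matrix.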


Rationale for Explanatory Model

To address the second research question, “What factors influence how science teachers select and use technological tools in schools?”, with the above dataset, we modeled (1) missing data, (2) differences across teachers and across tools, (3) multilevel, mediation (indirect) effects, (4) many hypotheses (without false positives), (5) infrequent outcomes and (6) robustness procedures (see Table 3).

Table 3 Statistics strategies to address each analytic difficulty

Analytic difficulty                                Statistics strategy
Dataset
  Missing data                                     Markov Chain Monte Carlo multiple imputation (Peugh and Enders 2004)
Outcome variables
  Differences across teachers and across tools     Multilevel analysis (aka hierarchical linear modeling; Bryk and Raudenbush 1992; Goldstein 1995)
  Infrequent outcomes                              Logit bias estimator (King and Zeng 2001)
Explanatory variables
  Indirect, multilevel mediation effects           Multilevel M-tests (MacKinnon et al. 2004)
  False positives                                  Two-stage linear step-up procedure (Benjamini et al. 2006)
Robustness procedures                              Single outcome, multilevel models for each outcome variable; testing on subsets of the data; testing on unimputed data

First, missing questionnaire response data (2 %) can reduce estimation efficiency, complicate data analyses and bias results (Graham 2009). Markov Chain Monte Carlo multiple imputation (MCMC-MI) estimates the values of the missing data, which addresses these missing data issues more effectively than deletion, mean substitution or simple imputation according to computer simulations (Peugh and Enders 2004). We also included auxiliary (control) variables such as demographics to reduce the impact of missing data (Graham 2009).

Second, uses of two tools by the same teacher are more likely to be similar than uses of two tools by two different teachers. Thus, an ordinary least squares regression underestimates the standard errors, and a multilevel cross-classification analysis is needed (Goldstein 1995).

Third, to properly model indirect effects, the multilevel M-test corrects for the potentially non-normal distributions and determines the significance of a confidence interval based on a critical z ratio determined across multiple data simulations (MacKinnon et al. 2004). This test reduces false positives (MacKinnon et al. 2004), and it has more power than other approaches to detect smaller mediation effects (Pituch et al. 2006).

Fourth, testing several hypotheses increases the likelihood of a false positive. To control for the false discovery rate (FDR), we used the two-stage linear step-up procedure, which outperformed 13 other methods in computer simulations (Benjamini et al. 2006).

Fifth, logistic regression is biased if the outcomes are infrequent (e.g., use of a tool at least twice a month is <20 %). To remove this bias, we used King and Zeng’s (2001) logit correction to estimate the bias and remove it.

Sixth, to test the robustness of the results, two variations of the core model were used. Subsets of the data (e.g., halves) were run separately to test the consistency of the results for each subset. Also, the analyses were repeated for the original dataset without estimation of missing data.

Explanatory Model

After MCMC-MI estimation of the missing data, we modeled teachers’ use of tools with a multilevel, cross-classification-ordered logit regression model (Goldstein 1995). We entered the variables according to time constraints, expected causal relationships, and likely importance.

γ_ij^(s) = P(Tool_Use_ij ≤ s) = 1 / {1 + exp[−(β_00^(s) + f_0j)]} + e_ij = F(β_00^(s) + f_0j) + e_ij    (1)

β_00^(s) are the thresholds of the ordered values s (= 0, 1, 2: not using, less than twice a month, at least twice a month) of Tool_Use_ij, the use of each technological tool i by each teacher j. The unexplained tool-level and teacher-level components (residuals) are e_ij and f_0j, respectively. First, we entered a vector of u teachers’ demographic variables: gender, age, highest educational level completed and years of teaching experience (Demographics_0j).

γ_ij^(s) = F(β_00^(s) + f_0j + β_0u Demographics_0j + β_0v School_0j + β_0w Student_0j + β_xj ProfDev_ij + β_0z Criteria_0j) + e_ij    (2)
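To make the ordered logit concrete, a simplified, single-level version of the model in Eq. (1) can be sketched with NumPy/SciPy. This sketch drops the teacher- and tool-level random effects (f_0j, e_ij) and the cross-classification, and all data and parameter values are simulated for illustration only, not taken from the study.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(0)

# Simulated data: one standardized predictor (imagine days of PD with a tool)
# and a three-category usage outcome:
# 0 = not using, 1 = less than twice a month, 2 = at least twice a month.
n = 500
x = rng.normal(size=n)
true_tau = np.array([-0.5, 1.0])   # thresholds between the ordered categories
true_beta = 0.8
latent = true_beta * x + rng.logistic(size=n)
y = (latent > true_tau[0]).astype(int) + (latent > true_tau[1]).astype(int)

def neg_loglik(params):
    tau0, gap, beta = params
    tau = np.array([tau0, tau0 + abs(gap)])        # keeps thresholds ordered
    # Cumulative probabilities P(Y <= s) = F(tau_s - x * beta), F = logistic CDF
    cum = expit(tau[None, :] - beta * x[:, None])
    probs = np.column_stack([cum[:, 0], cum[:, 1] - cum[:, 0], 1.0 - cum[:, 1]])
    return -np.sum(np.log(probs[np.arange(n), y].clip(1e-12)))

fit = minimize(neg_loglik, x0=[0.0, 1.0, 0.0], method="Nelder-Mead")
tau0_hat, gap_hat, beta_hat = fit.x
print("estimated thresholds:", tau0_hat, tau0_hat + abs(gap_hat))
print("estimated beta:", beta_hat)
```

A positive estimated beta shifts the latent propensity upward and hence moves probability mass toward more frequent use, which is how predictors such as PD days enter the model; the study’s actual analysis additionally estimates teacher-level and tool-level variance components.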

A nested hypothesis test (χ² log likelihood) indicated whether each set of explanatory variables was significant (Kennedy 2008). Non-significant variables were removed. Then, we entered school characteristics: type of school (public, private, charter), location of school (urban, suburban, rural), level of school (high school only, middle school and high school), subjects taught (general science, earth science, applied earth science, physics, applied physics, biology, applied biology, chemistry, applied chemistry, environmental science, applied environmental science, forensics) and tenure status (School_0j). Next, we entered student characteristics: high learning ability, interested in the subject matter, well behaved in class, work well in groups, participated in inquiry lessons, have experience using technology and have access to learning technology at home (Student_0j). Next, we entered PD variables: days of training, type of PD and teachers’ rating of their skill at using technology to help students’ learning (ProfDev_ij). Lastly, we entered teachers’ criteria for using a tool: cost of the tool, step-by-step manual, technical skills, easy to use, steep learning curve, familiarity with the tool, past success using the tool, preference for using the tool, technical support at school, aligns with curriculum, students are adept with technology,



nature of students’ social experiences, students are interested in the tool and provide access to students (Criteria_0j). We used an alpha level of .05. To control for the FDR, we used the two-stage linear step-up procedure (Benjamini et al. 2006). The marginal effects of each variable’s total effect (E, direct plus indirect) were reported as the increase or decrease (+E % or −E %) in the outcome variable. With 153 teachers, statistical power was 0.99 for an effect size of 0.3 (Konstantopoulos 2008; see “Appendix 1” for details). We analyzed residuals for influential outliers (see also “Appendix 3”).
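The two-stage linear step-up procedure can be sketched directly from its definition in Benjamini, Krieger and Yekutieli (2006): a Benjamini–Hochberg pass at a deflated level, an estimate of the number of true nulls, then a second pass at an adjusted level. This is a minimal NumPy sketch; the p-values below are made up for illustration and are not the study's results.

```python
import numpy as np

def bh_stepup(pvals, alpha):
    """Benjamini-Hochberg step-up: boolean mask of rejected hypotheses."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    thresh = alpha * np.arange(1, m + 1) / m
    below = p[order] <= thresh
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])   # largest rank meeting its threshold
        reject[order[:k + 1]] = True
    return reject

def two_stage_stepup(pvals, q=0.05):
    """Two-stage linear step-up FDR control (Benjamini, Krieger & Yekutieli 2006)."""
    m = len(pvals)
    q1 = q / (1.0 + q)                      # stage 1: deflated level
    stage1 = bh_stepup(pvals, q1)
    r1 = stage1.sum()
    if r1 == 0 or r1 == m:
        return stage1                       # nothing (or everything) rejected
    m0_hat = m - r1                         # estimated number of true nulls
    return bh_stepup(pvals, q1 * m / m0_hat)  # stage 2: adjusted level

pvals = [0.001, 0.008, 0.039, 0.041, 0.20, 0.60, 0.75, 0.90]
print(two_stage_stepup(pvals, q=0.05))
```

When many coefficients are tested, as in the explanatory model above, the adaptive second stage typically recovers some of the power that a single Benjamini–Hochberg pass gives up.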

J Sci Educ Technol (2014) 23:668–681

probeware, 44 % identified the vacuum chamber and 40 % identified the spectrometer. A total of 20 % of teachers noted that \1 day of PD was received for use of the probeware and spectrometer. In contrast, 86, 90 and 83 % of teachers indicated that no PD was offered for the above tools, respectively. Sixty percent, 48 % and 48 % of earth science teachers identified that the following tools were used less than twice a month: psychrometer, anemometer and barometer, respectively. These teachers also identified that they were exposed to the latter tools at the school level. For each of the above tools, 90, 98 and 93 % of teachers, respectively, noted that no PD was offered.

Results Part II: Selection and Usage of Technological Tools The results were organized to address the research questions: (a) profiles of technological tools selected and used in the science classroom, and (b) factors that predict science teachers’ selection and usage of technological tools in science classrooms (explanatory model). Profiles of technological tools selected and used in science classrooms The results for this section were organized to first represent profiles of tools, usage and nature of PD for section B that featured a guided list of tools (part I). Next, results were presented for section D where teachers inputted their top three tools (part II). Part I: Selection and Usage of Technological Tools For general classroom usage, science teachers selected the following top three tools that were used twice a month or more: desktop (80 %), laptop (62 %) and SMARTboards (57 %). Tools that were used less than twice a month included the DVD player (52 %), digital camera (43 %), VCR player (31 %) and probeware (25 %). In comparison, teachers indicated that microfiche (77 %), cassette players (74 %) and film projectors (71 %) were never used in the classroom. In probing specific content, physics teachers indicated that multimeters (33 %), lasers (31 %) and AC/DC power supply (29 %) were used more often in their practice (more than twice a month). Teachers indicated that knowledge associated with these tools was acquired at the school level. For 53 % of biology teachers, the microscope was the most commonly used tool, and 54 % of teachers also noted that exposure occurred at the school level. In general, PD for this discipline was rather dismal and while 8 % of teachers identified less than a day of PD for the digital microscope, only 4 and 6 % identified 1–2 and 3–5 days of PD, respectively. A total of 80 % of teachers identified 0 days of PD. Chemistry teachers reported less than twice a month of technology use: 47 % of chemistry teachers identified


Part II: Selection and Usage of Technological Tools

Science teachers also responded to open-ended items where they inputted their top three technological tools and associated PD. Note that this provided an opportunity for teachers to identify whether other kinds of technological tools were being used in the science classroom. Teachers identified the following major categories of classroom technologies: three hardware categories were laboratory tools, instructional tools and computer-related tools, and three software categories were Internet websites, models and simulations, and general instructional software. Teachers often used hardware tools (88 %), but were far less likely to use software tools (12 %). Of these hardware tools, the dominant categories for all disciplines included instructional and laboratory tools. For example, SMARTboards, clickers and probeware were the most commonly identified tools across all teachers for all science disciplines. Tree diagrams were created to organize tool classification for each discipline. Figure 1 represented the range of tools identified by biology teachers. Biology teachers were most likely to use instructional tools, followed by laboratory tools and computers, respectively. SMARTboards and clickers were the most common instructional tools, while probeware (e.g., digital pH tester) was the most common laboratory tool. Chemistry, physics and earth science teachers shared a similar pattern when compared to biology teachers. The exception for physics teachers was identification of photogate timers and digital multimeters (Fig. 2). In sum, except for the use of probeware, there was limited use of inquiry-based technologies that promoted student-centered, dynamic learning scenarios (e.g., models and simulations). Furthermore, based on the overall content profiles, science teachers identified a limited set of tools.
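As a rough illustration of how open-ended tool entries can be binned into the hardware/software categories above, consider the following sketch. The keyword map and the sample entries are hypothetical and are not the study's actual coding instrument; only the category labels come from the paper.

```python
from collections import Counter

# Hypothetical keyword map built from the paper's category labels; the
# study's actual coding of teacher-inputted tools may differ.
CATEGORY = {
    "smartboard": ("hardware", "instructional tools"),
    "clickers": ("hardware", "instructional tools"),
    "probeware": ("hardware", "laboratory tools"),
    "laptop": ("hardware", "computers"),
    "google earth": ("software", "internet websites"),
    "simulation": ("software", "models and simulations"),
}

def classify(entries):
    """Tally teacher-inputted tools by top-level category (rounded %)."""
    tally = Counter(CATEGORY[e.strip().lower()][0] for e in entries)
    n = sum(tally.values())
    return {cat: round(100 * k / n) for cat, k in tally.items()}

# Hypothetical sample of seven inputted entries.
shares = classify(["Smartboard", "Clickers", "Probeware",
                   "Laptop", "Google Earth", "Smartboard", "Probeware"])
# shares -> {'hardware': 86, 'software': 14}
```

A real instrument would also need rules for unmatched or ambiguous entries (e.g., "iPad" as a computer vs. an instructional tool), which is exactly the kind of judgment the study's coders had to make.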
Fig. 1 Classification of the major groups of technological tools used in biology classrooms. [Tree diagram: hardware (instructional tools, laboratory tools, computers) and software (internet websites, models and simulations, general instructional software). Tools identified, with frequencies: Smartboard (20), Probeware (9), Clickers (7), iPad (5), Digital Microscope (4), Gel Electrophoresis (4), Motic/Document Camera (4), Laptop (3), Graphing Calculator (2), Edmodo (2), Web 2.0 (2), Lumens Lady Bug Digital Camera (1), Centrifuge (1), MOBI Tablet (1), Copier Test Scanner (1), Tablets (1), Desktop (1), Google Earth (1), Macintosh (1), Digital Software Biology Programs (1), Airliner (1), Quizdom (1), PCR (1), ARC GIS (1)]

Fig. 2 Classification of the major groups of technological tools used in physics classrooms. [Tree diagram: hardware (instructional tools, laboratory tools, computers) and software (internet websites, models and simulations, general instructional software). Tools identified, with frequencies: Clickers (9), Probeware (6), Smartboards (5), Photogate Timer (4), Digital Multimeter (3), Motic/Document Camera (2), Graphing Calculators (2), Laser (1), Arduino (1), Van de Graaff Generator (1), Flip Video (1), Tablets (1), iPad (1), Video analysis software (1)]

Professional Development Associated with Selection and Usage

For PD associated with the above tools, 27, 17, 12 and 8 % of teachers indicated that introduction to the SMARTboard involved 1–2 days, <1 day, 3–5 days, and 6 days or more, respectively. In contrast, 35 % of teachers indicated that they received no PD. For clickers, <1 day was the most common length of time allocated to PD (26 % of teachers), and 63 % of teachers noted that they received 0 days of PD for this tool. Finally, teachers indicated that they received 1–2 days (12 %) and less than 1 day (12 %) of PD for probeware; however, 67 % of teachers indicated that they received no PD for probeware.

For follow-up tool usage based on teachers' inputted information, the bulk of PD for instructional and laboratory tools involved less than 1 day or 1–2 days (Table 4). For example, 78 % of teachers indicated that PD for instructional tools involved less than 1 day (47 %) or 1–2 days (31 %). A total of 64 % of teachers reported a similar pattern for laboratory tools. Teachers indicated that the main focus of PD involved general procedures on how to use the tools and procedural approaches associated with specific lessons. A total of 93 % of teachers identified that instructional tools, and 80 % that laboratory tools, involved procedural approaches to tool use. Meanwhile, for laboratory tools, 64 % of PD focused on specific content lessons, whereas 60 % of PD for instructional tools did not include content alignment.

Despite limited technology use and limited opportunities for meaningful PD experiences, 24 % of teachers reported mid-range skills, 22 % indicated high skill levels and 7 % indicated very high skill levels, compared to 3 % of teachers who identified very low skill levels. A total of 44 % of teachers did not identify a skill level.
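The percentages in Table 4 are row percentages computed from raw tool counts. A minimal sketch, using the counts reported in Table 4 for the instructional-tools row (55 tools in total), shows how such a distribution is tabulated:

```python
# Counts taken from the "Instructional tools" row of Table 4.
counts = {
    "<1 day": 26,
    "1-2 days": 17,
    "3-5 days": 5,
    "6-10 days": 5,
    "More than 10 days": 0,
    "Blank": 2,
}

total = sum(counts.values())  # 55 instructional tools reported
percentages = {k: round(100 * v / total) for k, v in counts.items()}
# percentages -> {'<1 day': 47, '1-2 days': 31, '3-5 days': 9,
#                 '6-10 days': 9, 'More than 10 days': 0, 'Blank': 4}
```

Note that the "78 % of teachers" figure in the text corresponds to the first two cells (47 % + 31 %) of this row.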

Table 4 Professional development per tool category. Amount of professional development provided per tool category; cells show percentage of tools (count).

| Tool category | Number of tools | <1 day | 1–2 days | 3–5 days | 6–10 days | More than 10 days | Blank |
| Software: Internet websites | 9 | 22 % (2) | 11 % (1) | 22 % (2) | 11 % (1) | 22 % (2) | 11 % (1) |
| Software: General instructional software | 4 | 25 % (1) | 25 % (1) | 0 % (0) | 0 % (0) | 0 % (0) | 50 % (2) |
| Software: Models and simulations | 3 | 0 % (0) | 33 % (1) | 33 % (1) | 0 % (0) | 33 % (1) | 0 % (0) |
| Hardware: Laboratory tools | 44 | 32 % (14) | 32 % (14) | 11 % (5) | 2 % (1) | 7 % (3) | 16 % (7) |
| Hardware: Instructional tools | 55 | 47 % (26) | 31 % (17) | 9 % (5) | 9 % (5) | 0 % (0) | 4 % (2) |
| Hardware: Computers | 15 | 20 % (3) | 40 % (6) | 20 % (3) | 7 % (1) | 0 % (0) | 13 % (2) |

Explanatory Model

The results show that most of the differences in tool use were due to differences between tools (85 %) rather than between teachers (15 %). All results discussed below describe each variable's first entry into the regression, controlling for all previously included variables.

Demographics

Teachers who had finished their master's degrees were 4 % more likely to use a technological tool at least twice a month compared to other teachers (Table 5, model 1). A total of 90 % of the teachers in this sample had master's degrees. Education accounted for 4 % of the difference in the use of technological tools across teachers.

School Characteristics

Teachers of physics were 3 % more likely to use a technological tool at least twice a month compared to other teachers (Table 5, model 2). A total of 37 % of the teachers in this sample were physics teachers. After controlling for teaching of physics, educational degree was no longer significant. Subject content accounted for an extra 3 % of the difference in the use of technological tools across teachers.

Student Characteristics

If a teacher's students had participated in inquiry lessons, the teacher was 2 % more likely to use a tool at least twice a month, compared to other teachers (Table 5, model 3). A total of 48 % of teachers indicated that students had participated in inquiry lessons, and 17 % felt strongly that these were quality experiences for their students. Student characteristics accounted for an extra 15 % of the difference in the use of technological tools across teachers.

Professional Development

If a teacher participated in more days of PD training with a tool, they were more likely to use it than teachers who received no PD with it: (a) <1 day, +18 %; (b) 1–2 days, +38 %; (c) 3–5 days, +53 %; (d) 6–10 days, +66 %; and (e) over 10 days, +87 % (Table 5, model 4). The amount of PD that teachers received for a tool was (a) none (88 % of the time), (b) less than a day (7 %), (c) 1–2 days (3 %), (d) 3–5 days (1 %), (e) 6–10 days (1 %), and (f) over 10 days (0.03 %). PD differences accounted for an extra 4 % of the difference in the use of technological tools across teachers.

Table 5 Regression models predicting tool use. Standard errors in parentheses; each regression model included two ordered thresholds.

| Explanatory variable | Model 1 | Model 2 | Model 3 | Model 4 |
| Education level: master | 0.317* (0.151) | 0.284 (0.150) | 0.249 (0.144) | 0.200 (0.145) |
| Subject taught: physics | | 0.184* (0.091) | 0.210* (0.087) | 0.216* (0.088) |
| Students have participated in inquiry lessons | | | 0.163*** (0.040) | 0.160*** (0.041) |
| Days of professional development on tool (baseline: none) | | | | |
| — <1 day | | | | 1.110*** (0.103) |
| — 1–2 days | | | | 1.939*** (0.142) |
| — 3–5 days | | | | 2.548*** (0.244) |
| — 6–10 days | | | | 3.231*** (0.365) |
| — More than 10 days | | | | 17.460*** (0.378) |
| Variance explained: person (15 %) | 0.043 | 0.078 | 0.190 | 0.233 |
| Variance explained: tool (85 %) | 0.000 | 0.000 | 0.000 | 0.000 |
| Total variance explained | 0.006 | 0.012 | 0.029 | 0.035 |

* p < .05, ** p < .01, *** p < .001

Other explanatory variables were not significant; notably, there were no gender differences and no age differences. Given the small sample size, our non-significant results are vulnerable to low statistical power, but our significant effects are not. There were no significant mediation effects and no significant interaction effects. The robustness tests showed similar results.
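Two of the quantities above can be made concrete with a short sketch. The raw variance components of the cross-classified model are not reported in the paper, so the values below are illustrative ones chosen only to reproduce the reported 85 % (tool) / 15 % (teacher) split; likewise, the ordered-logit threshold `tau` is an assumed value, so the computed probability gains will not match the paper's reported percentage-point figures, only their direction and ordering.

```python
import math

# (1) Share of variance at each cross-classified level.
# ILLUSTRATIVE components chosen to yield the reported 85/15 split.
var_tool, var_teacher = 2.04, 0.36
total_var = var_tool + var_teacher
share_tool = var_tool / total_var        # 0.85
share_teacher = var_teacher / total_var  # 0.15

# (2) Converting an ordered-logit PD coefficient (Table 5, model 4)
# into a change in the probability of crossing a usage threshold.
def p_use(beta, tau=2.0):
    """P(use at least twice a month) at an ASSUMED threshold tau."""
    return 1.0 / (1.0 + math.exp(-(beta - tau)))

# Model 4 coefficients: none, <1 day, 1-2 days, 3-5 days, 6-10 days.
pd_betas = [0.0, 1.110, 1.939, 2.548, 3.231]
prob_gain = [p_use(b) - p_use(0.0) for b in pd_betas]
# prob_gain rises monotonically with days of PD, mirroring the
# reported +18 % ... +66 % pattern in direction (not in exact value).
```

The monotone increase of `prob_gain` is guaranteed because the logistic function is strictly increasing in the coefficient, which is why more PD days translate into a uniformly higher likelihood of tool use in model 4.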


Discussion

This study contributed to our understanding of those factors that serve as predictors of science teachers' selection and use of technologies and, more specifically, how selection and usage was realized among teachers of different science disciplines. Greenberg et al. (1998) emphasized that "to account for factors that encourage or discourage teachers from using technology within district wide, school wide and classroom frameworks, one must understand… what interests some teachers in using technology and others in not using it…" (p. 298). For this study, the findings revealed that science teachers were more likely to use hardware than software; specifically, this included instructional tools (i.e., SMARTboards, clickers) and laboratory tools (probeware). Differences in teachers' use of tools were largely due to differences in tools as opposed to differences in teacher characteristics. Teachers who taught physics, who taught via inquiry, or who had more PD with a tool were more likely to use a technological tool. These findings have implications for how we conceptualize selection and usage of technologies that enter the science education pipeline, which tools become sustainable in the science classroom, and how technological take-up differs across science disciplines.

Physics Teachers More Likely to Use Technologies…

Physics teachers were more likely to use technologies in their science classrooms when compared to teachers from other science disciplines. This finding was rather interesting because it did not coincide with the higher volume of literature on chemistry and biology technological innovations in science classrooms (e.g., Adadan et al. 2010; Passmore and Stewart 2002). In addition, studies that have explored factors that influence usage did not report patterns that were exclusive to specific disciplines (e.g., Greenberg et al. 1998). For example, Greenberg et al. reported that earth science, biology and AP physics teachers had high use of image processing, which was attributed to teachers' prior experiences with actual scientific research (e.g., NASA's Jet Propulsion Lab). These experiences resulted in proficient uses of computers. As teachers select tools for their context based on their value systems (Basalla 1996), the significant differences in tool use between physics teachers and other science teachers suggested that the value systems of physics teachers might differ from those of other science teachers. Rather than lumping all science teachers into one group, future research should examine the specific content and technology needs of teachers from different science disciplines.

Patterns of Usage

This study uncovered patterns of usage that included SMARTboards, clickers and probeware.³ In this respect, usage reflected a mix of both fad and fashion and more stable, continuous classroom tools. In contrast, few teachers used tools (e.g., computer-based models) intended to emphasize the multiple dimensions of science (Gallagher 2007). These patterns of use might stem from two potential explanations: (a) tool use reflected current teaching practice and/or (b) the sample of science teachers represented in this study hailed from "technologically starved" schools and/or districts.

First, teachers might have opted for instructional and laboratory tools (rather than a diverse set of tools) that best reflected their teaching practice. This result supports the claim that teachers may prefer technologies that share evolutionary rather than distinct lineage (Basalla 1996). For example, we could argue that because the SMARTboard shared direct lineage with the chalkboard, it functioned as a familiar platform for teacher instruction. A familiar platform would suggest a low learning curve, thus facilitating its selection and use. Similarly, we could argue that use of clickers supported an emphasis on singular correct answers. In contrast, other technological mediums such as computer-based models may require a new set of skills and knowledge that is not common in teachers' repertoire of practice. Basalla's discussion of the continuous nature of technological change provides an explanation: essentially, if teachers have to start with raw knowledge for every new tool or new iteration of a tool, it is possible that teachers will not select and/or use these tools. We can argue that such was the case for computer-based models. However, even though the SMARTboard shared visible linkages with the chalkboard, we remained cognizant that it was designed as an interactive platform intended to engage multifaceted and dynamic teaching and learning. Studies that examined its use reported challenges associated with passive use (e.g., to display PowerPoint presentations) (Tanner and Jones 2007), time required for lesson planning (Kennewell and Higgins 2007) and technical issues (Hall and Higgins 2005). While these challenges may suggest that SMARTboards function to reinforce didactic traditional approaches, it would be important to further investigate how this use was realized in the science classroom.

Second, since the majority of teachers (89 %) identified that they taught in public schools, it is probable that teachers worked in schools and districts that were "technologically starved." Compared to science teachers in private schools, those in public schools might have less access to innovative technologies, or perhaps these technologies were less promoted in public schools. Stylianidou et al. (2005) reported that high-quality transformations of computer modeling were visible only with teachers from the better schools and thus admitted that this sample was not representative "of what happens on average" (p. 65). Based on our findings, we were led to question whether our results are in fact "what happens on average." Naturally, this assertion requires more investigation to understand whether these were localized findings or whether they are applicable to a broader setting. It is relevant that teachers in this study were voluntary participants and that 46 % rated themselves as having medium or high technological skills. Technological skills were specifically related to teachers' perceptions of their skill level using technology for teaching and learning in the science classroom. Teachers who were less technologically skilled were less likely to spend time and effort contemplating their weaker skills; they were less likely to participate in a technology study.

³ While our study did not elicit specific uses of the SMARTboard, clickers, and probeware technologies, other studies have reported on the potential advantages of these tools. For example, Bell, Maeng and Binns (2013) reported that teachers used SMARTboard™ activities to stir student attention, engagement, and interactions and to explain their thinking and convey their understanding of the content (p. 366). While Lopez et al. (2013) advocated using clickers to counter poor metacognitive and peer learning strategies, MacArthur and Jones's (2008) review of 56 studies conducted in college-level science classrooms yielded mixed results regarding the clicker's pedagogical benefits. Lastly, Yerrick et al. (2011) showed how widely used probeware can engage students with science.
The patterns of tool usage noted above also spoke to the life cycle and potential sustainability of technological tools. Tools that reinforced practices aligned with a culture of testing and didactic modes of teaching (e.g., clickers and the SMARTboard) appeared more likely to survive and have longer life cycles in the science classroom. Indeed, survival in this context would also designate these tools as winners versus losers (Zhao et al. 2002). Understanding what factors influence teacher selection and usage, and how these actions result in the sustainability of winners, can assist our conceptualization of teacher preparation for new technologies. The findings of this study reaffirmed that the context of the science discipline, the nature of PD and opportunities for inquiry were among the primary factors that informed how winners were realized.

Finally, it is important to highlight that there was little technological diversity among those tools commonly identified (see Figs. 1, 2) by science teachers from all disciplines. Basalla (1996) explicated that technological diversity is driven by beliefs that diverse tools are necessary in order to solve problems. However, based on our findings, we can surmise that teachers may not share this belief and, instead, may be more receptive to fewer tools that target specific needs.

Patterns of Usage and Professional Development

Teachers who received longer periods of PD for a tool were more likely to use it (Gerard et al. 2011); however, teachers reported that PD associated with tool usage was rare. Furthermore, PD typically emphasized procedural approaches to tool use (how to use it during specific lessons). For this study, 88 % of teachers reported that PD associated with technological implementation was rare. Work conducted by Lawless and Pellegrino (2007) and, most recently, by Gerard et al. (2011) documented that PD was one of the weakest links in helping teachers implement technologies effectively in the science classroom. In their review of the literature, Gerard et al. (2011) noted that PD efforts involving 1 year or less saw little success because teachers needed time to overcome technical challenges and to transition their practice and their students' practice from traditional contexts to innovative approaches that encompass technological implementation. In contrast, PD programs implemented for longer than a year were more effective in helping teachers to transition their teaching approaches and better prepare students for optimal learning gains with technology. More importantly, longer periods of PD immersion showed evidence of teachers' pedagogical enhancement and change and of student learning gains. Indeed, physics teachers, who were more likely to use technological tools, also experienced longer periods of PD. In contrast, for other science disciplines, patterns of limited PD, which on average included 1–2 days or <1 day, reflected limited usage of tools.

As a final point of discussion relevant to PD, the literature cautioned that length of PD was closely associated with quality of PD. In fact, Lotter et al. (2007) confirmed that changes in practice materialized only when PD aligned with the goals and expectations of teachers and when teachers were dissatisfied with their current practice. In part, limitations of PD have been attributed to narrow views of what teachers require in order to use technologies, what Zhao et al. (2002) identified as "technical skills and a good attitude" (p. 511). Thus, it is no wonder that for this study the emphasis on procedural approaches to tool use and content integration did not bode well for increased tool usage. Based on these findings, the PD experienced by science teachers negatively influenced their take-up of a diverse range of tools.

Inquiry and Patterns of Usage

Teachers who indicated that their students engaged in inquiry were more likely to use technological tools in their practice. This finding suggested that inquiry-based approaches were more likely to involve use of technological tools. In addition, more than 50 % of teachers agreed that their students worked well in groups, exhibited high learning abilities, behaved well in class, and revealed interest in the content matter. Lotter et al. (2007) found that teachers' conceptions of their students influenced the type and amount of inquiry enacted in the classroom. So, it is possible that teachers who held positive views about their students were more likely to enact inquiry and, thus, more likely to use technologies in their classroom. In support, Zhao et al.'s (2002) findings suggest that when technology use was aligned with pedagogical approaches, opportunities for success were enhanced. So, physics teachers who engaged in inquiry probably experienced more success and, as a result, were more likely to continue usage. While teachers' conceptualization and enactment of inquiry was beyond the scope of this study, it would be important to further explore enactment among physics teachers.


Implications for Practice and Research

If the results of this exploratory study were replicated in larger studies, they would suggest that tool use differs mostly (a) by tool rather than by teacher, (b) by the specific subject matter of a science teacher, (c) by pedagogical style and (d) by length of PD with a tool. First, this study identified that differences in teachers' use of tools were due to differences in tools, not differences in teacher characteristics. Hence, studies of how tool differences affect teachers' use are needed in this field. For example, it would be important to understand the social, cultural and knowledge demands that different kinds of tools impose on science teaching and learning. Comparing and contrasting these differences could yield vital information for future technological implementation. Second, differences in tool use by science discipline suggest that more studies are needed to compare the nuances of implementation in physics, chemistry, biology and earth science classrooms. It would be important to examine how cultural values and expectations associated with different science disciplines influence the staying power of tools in the classroom. Third, this study affirmed the need for longer periods of PD associated with technology implementation in science classrooms. As a follow-up, it would also be important to document the optimal length of PD and how PD is tailored to address content differences.

Limitations

This study had several limitations, including a non-representative sample and insufficient information on tool characteristics and pedagogical approaches to explain differences in teachers' use of tools. Since this exploratory study used a convenience sample, the sample is not representative of science teachers in the nation; future studies can collect representative samples of science teachers. Furthermore, nearly all of the tool-level differences in teachers' use of tools were unexplained, pointing to the need for relevant tool characteristics in this explanatory model.

Appendix 1: Power Analysis of Sample Size

| Level | Effect size 0.1 | 0.2 | 0.3 | 0.4 |
| 2) Teacher | 0.32 | 0.84 | 0.99 | 1.00 |
| 1) Tool | 0.94 | 1.00 | 1.00 | 1.00 |

Appendix 2: Coding Scheme Used for the Identified Variables of the Study

Teacher demographics variables

| Variable | Coding key |
| Gender | 0: female, 1: male |
| Years of teaching experience | manually inputted by participant |
| Highest educational level | 0: college, 1: teaching certificate, 2: masters, 3: doctorate |
| Subjects taught | manually inputted by participant |
| Grade levels taught | manually inputted by participant |
| Tenured | 0: no, 1: yes |
| Type of school | 0: public school, 1: charter school, 2: private school, 3: alternative school |
| School location | 0: urban, 1: suburban, 2: rural |
| Age | 0: 30 years and under, 1: 31–40 years, 2: 41–50 years, 3: 51+ years |
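The coding scheme above can be applied mechanically to raw questionnaire responses. A minimal sketch follows; the helper name and the example responses are our own, not part of the study's instruments:

```python
# Numeric codes taken from the Appendix 2 coding scheme
# (teacher demographics variables).
CODES = {
    "gender": {"female": 0, "male": 1},
    "highest_educational_level": {"college": 0, "teaching certificate": 1,
                                  "masters": 2, "doctorate": 3},
    "tenured": {"no": 0, "yes": 1},
    "type_of_school": {"public school": 0, "charter school": 1,
                       "private school": 2, "alternative school": 3},
    "school_location": {"urban": 0, "suburban": 1, "rural": 2},
}

def code_response(variable, response):
    """Map a raw questionnaire response to its numeric code."""
    return CODES[variable][response.strip().lower()]

coded = code_response("type_of_school", "Public school")  # -> 0
```

Free-response variables (years of experience, subjects taught, grade levels taught) were inputted manually by participants and would bypass such a lookup.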


Teacher attitudes about their students

My students… (each item coded 1: strongly disagree, 2: disagree, 3: neutral, 4: agree, 5: strongly agree)
a) have high learning ability
b) are interested in the subject matter
c) are well behaved in class
d) work well in groups
e) have participated in inquiry lessons
f) have experience using technology
g) have access to learning technology at home

Teachers' top three tools per professional development

| Tool category | Coding key |
| Left blank | 0 |
| Software—internet websites/software such as Word, PowerPoint | 1 |
| Software—data analysis tools | 2 |
| Software—modeling/simulation | 3 |
| Hardware—learning tools (student use) | 4 |
| Hardware—instructional tools (teacher use) | 5 |
| Hardware—computers/tablets | 6 |

Appendix 3: Correlation–Variance–Covariance Matrix

Correlation–variance–covariance matrix of the outcome variable and explanatory variables for the tool-level analysis. The correlations, variances and covariances are along the lower left triangle, diagonal and upper right triangle of the matrix, respectively; diagonal values are the variances.

| Variable | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |
| 1 Technological tool use by a teacher | 0.596 | 0.008 | 0.014 | 0.057 | 0.021 | 0.024 | 0.012 | 0.009 | 0.000 |
| 2 Educational level: masters | 0.038 | 0.083 | 0.015 | 0.013 | -0.001 | 0.000 | 0.000 | 0.001 | 0.000 |
| 3 Subject taught: physics | 0.038 | 0.111 | 0.218 | -0.042 | 0.001 | 0.000 | 0.001 | 0.000 | 0.000 |
| 4 Students have participated in inquiry lessons | 0.072 | 0.045 | -0.089 | 1.035 | 0.003 | -0.002 | 0.001 | 0.003 | 0.000 |
| 5 Days of training: <1 | 0.112 | -0.007 | 0.007 | 0.011 | 0.062 | -0.002 | -0.001 | -0.001 | 0.000 |
| 6 Days of training: 1–2 | 0.171 | 0.006 | -0.001 | -0.012 | -0.049 | 0.032 | 0.000 | 0.000 | 0.000 |
| 7 Days of training: 3–5 | 0.142 | 0.015 | 0.019 | 0.005 | -0.030 | -0.021 | 0.013 | 0.000 | 0.000 |
| 8 Days of training: 6–10 | 0.131 | 0.028 | -0.011 | 0.030 | -0.024 | -0.016 | -0.010 | 0.008 | 0.000 |
| 9 Days of training: more than 10 | 0.035 | 0.006 | 0.007 | -0.002 | -0.005 | -0.003 | -0.002 | -0.002 | 0.000 |

References

Adadan E, Trundle KC, Irving KE (2010) Exploring grade 11 students' conceptual pathways of the particulate nature of matter in the context of multirepresentational instruction. J Res Sci Teach 47:1004–1035
Baram-Tsabari A, Yarden A (2005) Characterizing children's spontaneous interests in science and technology. Int J Sci Educ 27:803–826
Basalla G (1996) The evolution of technology. Cambridge University Press, New York, NY
Beare R (2007) Investigation into the potential of investigative projects involving powerful robotic telescopes to inspire interest in science. Int J Sci Educ 29:279–306
Bell RL, Trundle KC (2008) The use of a computer simulation to promote scientific conceptions of moon phases. J Res Sci Teach 45:346–372
Bell RL, Maeng JL, Binns IC (2013) Learning in context: technology integration in a teacher preparation program informed by situated learning theory. J Res Sci Teach 50:348–379
Benjamini Y, Krieger AM, Yekutieli D (2006) Adaptive linear step-up procedures that control the false discovery rate. Biometrika 93:491–507
Bryk AS, Raudenbush SW (1992) Hierarchical linear models. Sage, London
Campbell T, Zuwallack R, Longhurst M, Shelton BE, Wolf PG (2014) An examination of the changes in science teaching orientations and technology-enhanced tools for student learning in the context of professional development. Int J Sci Educ. doi:10.1080/09500693.2013.879622
Czerniak CM, Lumpe AT, Haney JJ, Beck J (1999) Teachers' beliefs about using educational technology in the science classroom. Int J Educ Technol 1:1–17
FitzPatrick KA, Finn KE, Campisi J (2011) Effect of personal response systems on student perception and academic performance in courses in a health science curriculum. Adv Physiol Educ 35:280–289
Gallagher JJ (2007) Teaching science for understanding: a practical guide for middle and high school teachers. Merrill, Upper Saddle River, NJ
Gerard LF, Varma K, Corliss SB, Linn MC (2011) Professional development for technology-enhanced inquiry science. Rev Educ Res 81:408–448
Goldstein H (1995) Multilevel statistical models. Edward Arnold, Sydney
Graham JW (2009) Missing data analysis. Annu Rev Psychol 60:549–576
Greenberg R, Raphael J, Keller JL, Tobias S (1998) Teaching high school science using image processing: a case study of implementation of computer technology. J Res Sci Teach 35:297–327
Hall I, Higgins S (2005) Primary school students' perceptions of interactive whiteboards. J Comput Assist Learn 21:102–117
Kennedy P (2008) A guide to econometrics. Blackwell, Cambridge
Kennewell S, Higgins S (2007) Introduction: special edition on interactive whiteboards. Learn Media Technol 32:207–212
King G, Zeng L (2001) Logistic regression in rare events data. Polit Anal 9:137–163
Konstantopoulos S (2008) The power of the test in three-level cluster randomized designs. J Res Educ Eff 1:66–88
Lawless KA, Pellegrino JW (2007) Professional development in integrating technology into teaching and learning: knowns, unknowns, and ways to pursue better questions and answers. Rev Educ Res 77:575–614
Little RJA, Rubin DB (2002) Statistical analysis with missing data. Wiley, New York
Lopez EJ, Nandagopal K, Shavelson RJ, Szu E, Penn J (2013) Self-regulated learning study strategies and academic performance in undergraduate organic chemistry: an investigation examining ethnically diverse students. J Res Sci Teach 50:660–676
Lotter C, Harwood WS, Bonner J (2007) The influence of core teaching conceptions on teachers' use of inquiry teaching practices. J Res Sci Teach 44:1318–1347
MacArthur JR, Jones LJ (2008) A review of literature reports of clickers applicable to college chemistry classrooms. Chem Educ Res Pract 9:187–195
MacKinnon DP, Lockwood CM, Williams J (2004) Confidence limits for the indirect effect. Multivar Behav Res 39:99–128
Mistler-Jackson M, Songer NB (2000) Student motivation and internet technology: are students empowered to learn science? J Res Sci Teach 37:459–479
National Research Council (NRC) (2012) A framework for K-12 science education: practices, crosscutting concepts, and core ideas. National Academy Press, Washington, DC
Ng W, Gunstone R (2003) Science and computer-based technologies: attitudes of secondary science teachers. Res Sci Technol Educ 21:243–264
Odom AL, Settlage J, Pedersen JE (2002) Technology knowledge and use: a survey of science educators. J Sci Educ Technol 11:391–398
Passmore C, Stewart J (2002) A modeling approach to teaching evolutionary biology in high schools. J Res Sci Teach 39:185–204
Pedersen JE, Yerrick RK (2000) Technology in science teacher education: survey of current uses and desired knowledge among science educators. J Sci Teach Educ 11:131–153
Peugh JL, Enders CK (2004) Missing data in educational research. Rev Educ Res 74:525–556
Pituch KA, Stapleton LM, Kang JY (2006) A comparison of single sample and bootstrap methods to assess mediation in cluster randomized trials. Multivar Behav Res 41:367–400
Rasbash J, Woodhouse G (1995) MLn command reference. Multilevel Models Project, Institute of Education, London
Roth WM, Woszczyna C, Smith G (1996) Affordances and constraints of computers in science education. J Res Sci Teach 33:995–1017
Scheffer J (2002) Dealing with missing data. Res Lett Inf Math Sci 3:153–160
Scheuren F (2004) What is a survey? Booklet
Slavin R (2005) Educational psychology. Allyn and Bacon, New York
Sorensen P, Twidle J, Childs A, Godwin J (2007) The use of internet in science teaching: a longitudinal study of developments in use by student-teachers in England. Int J Sci Educ 29:1605–1627
Stylianidou F, Boohan R, Ogborn J (2005) Science teachers' transformations of the use of computer modeling in the classroom: using research to inform training. Sci Educ 89:56–70
Tabachnick BG, Fidell LS (2006) Using multivariate statistics. Allyn and Bacon, Boston
Tanner H, Jones S (2007) How interactive is your whiteboard? Math Teach Inc Micromath 200:37–41
Wang Y (2002) From teacher-centredness to student-centredness: are preservice teachers making the conceptual shift when teaching in information age classrooms? Educ Media Int 39:257–265
Yerrick R, Hoving T (1999) Obstacles confronting technology initiatives as seen through the experience of science teachers: a comparative study of science teachers' beliefs, planning, and practice. J Sci Educ Technol 8:291–307
Yerrick R, Schiller J, Reisfeld J (2011) "Who are you callin' expert?": using student narratives to redefine expertise and advocacy in lower track science. J Res Sci Teach 48:13–36
Zacharia Z (2003) Beliefs, attitudes, and intentions of science teachers regarding the educational use of computer simulations and inquiry-based experiments in physics. J Res Sci Teach 40:792–823
Zhao Y, Pugh K, Sheldon S, Byers JL (2002) Conditions for classroom technology innovations. Teach Coll Rec 104:482–515
