
Developing the DELTA: Capturing Cultural Changes in Undergraduate Departments

    Published online: https://doi.org/10.1187/cbe.19-09-0180

    Abstract

    Departments are now recognized as an important locus for sustainable change on university campuses. Making sustainable changes typically requires a shift in culture, but culture is complex and difficult to measure. For this reason, cultural changes are often studied using qualitative methods that provide rich, detailed data. However, this imposes barriers to measuring culture and studying change at scale (i.e., across many departments). To address this issue, we introduce the Departmental Education and Leadership Transformation Assessment (DELTA), a new survey aimed at capturing cultural changes in undergraduate departments. We describe the survey’s development and validation and provide suggestions for its utility for researchers and practitioners.

    INTRODUCTION

    Education is constantly changing, in response to the world evolving around us and as new discoveries are made about learning. Changes made in higher education are often intentionally planned, but the efforts to adopt new methods and to “scale up” educational innovations are difficult to sustain (Henderson et al., 2012; Reinholz et al., 2019). New approaches are needed to support lasting, positive transformations.

    Growing evidence indicates that cultural change is a key component of creating broad, systemic changes (Schein, 2010; Fry, 2014; Reinholz and Apkarian, 2018). Because they have fairly consistent policies, norms, disciplinary identities, ways of interacting with students (e.g., course structures), and ways of communicating with other institutional departments and offices, departments are relatively coherent units of culture (Lee, 2007; Lee et al., 2007). For this reason, departments are now seen as key units of change in universities (e.g., American Association for the Advancement of Science [AAAS], 2011; Fry, 2014). Indeed, there are now many examples of departmentally focused change initiatives in higher education, including the Science Education Initiative (Chasteen et al., 2016), Vision and Change (AAAS, 2011), Student Engagement in Mathematics through an Institutional Network for Active Learning (Association of Public and Land-Grant Universities, 2016), and departmental action teams (DATs; Reinholz et al., 2017).

    Yet, even if one is successful in effecting change, how are such changes measured? In-depth interviews and ethnographic observations are effective ways to paint a rich picture of the local context. However, such methods are difficult to implement at scale. For this reason, there is a strong need for instruments that can provide quantitative information about cultural change at scale. We align our approach with the DAT model, which is grounded in a set of core principles for educational change (Quan et al., 2019).

    Our paper begins with an overview of culture and change. Next, we introduce the DAT project and the theoretical basis for the core principles that inform our perspective of culture. Given this background, we describe the development of our survey, which we have named the Departmental Education and Leadership Transformation Assessment (DELTA) survey. We outline the data and analysis that informed the validation of the DELTA survey responses, and finally, we close with a discussion of how the DELTA survey may be used for both research and practice.

    BACKGROUND

    Culture and the Context for Studying Change

    The sociocultural turn in education (e.g., Lave, 1996; Lerman, 2000) has drawn attention to the importance of context when designing educational reform. For instance, research on faculty learning and professional development often focuses on understanding faculty as communities of practice, embedded in particular contexts (Lave and Wenger, 1998; Cox, 2004). Similarly, grounded cognition (Barsalou, 2008), action research (Zeichner and Noffke, 2001), and design-based research (Cobb et al., 2003) all emphasize the importance of understanding how teaching and learning operate within given contexts. For complex organizations such as institutions of higher education, context cannot be “averaged out” across a number of cases; instead, enacting change requires a deep understanding of the given context (Lewis, 2015). In other words, context is not to be discarded, but needs to be considered with great care.

    To understand the impact of different contexts, we leverage the construct of culture. In the field of organizational change, culture is often defined as a constantly shifting set of beliefs, values, customs, rituals, practices, and structures used within a group and transmitted over time (Schein, 2010). This perspective emphasizes that aspects of culture are individually and socially negotiated.

    Culture manifests at three hierarchical levels: 1) artifacts, 2) espoused beliefs and values, and 3) basic underlying assumptions (Schein, 2010). Artifacts are the most outward sign of culture, consisting of the things one sees and hears in interacting with a group. While artifacts are easily identifiable, their purpose or value may be unclear to an outside observer (e.g., one sees a meeting organized a certain way, but is not sure why it is organized that way). Espoused beliefs are ideas that group members articulate, often in an attempt to influence the group’s behavior. For instance, an educator at a university may suggest to colleagues that they should increase the use of active learning in their classrooms. In this community, the educator may believe that active learning adds value to a classroom, but that belief is not necessarily shared.

    Commonly espoused beliefs may become endorsed by the group over time and become basic underlying assumptions. These are ideas that are simply taken for granted across a group. In the culture of mathematicians, the importance of proof and precision is so well accepted that there is no longer debate over it. Moreover, these assumptions guide decision making (e.g., around curricular issues) without being stated explicitly and can lead to visible artifacts such as student learning outcomes that include evidence of students’ abilities to carry out proofs. It is important to note that, while artifacts are easy to see, their meaning is opaque without an understanding of underlying thinking. Because underlying assumptions are often implicit, they will not be easily captured in a survey. Thus, the purpose of our tool is to focus at the level of espoused beliefs and values of department members, recognizing that this provides an important, albeit incomplete, picture of departmental culture.

    Guiding Principles for Culture

    The previous discussion highlights the need for cultural change. Yet the question remains, what type of culture should we strive for? Here we draw from a set of core principles for cultural change that underlie the DAT project and guide DAT work (Corbo et al., 2016; Reinholz et al., 2017). These principles were developed from a synthesis of literature in organizational change and higher education (e.g., Senge, 2006; Cooperrider et al., 2008; Fry, 2014) and describe aspects of a positive, collaborative culture for sustainable educational improvement. The principles are further elaborated elsewhere (Quan et al., 2019), but for now, we state them briefly:

    • P1. Students are partners in the educational process.

    • P2. Work focuses on achieving collective positive outcomes.

    • P3. Data collection, analysis, and interpretation drive decision making.

    • P4. Collaboration between group members is enjoyable, productive, and rewarding.

    • P5. Continuous improvement is an upheld practice.

    • P6. Work is grounded in a commitment to equity, inclusion, and social justice.

    As Quan and colleagues have argued (Quan et al., 2019), these principles are not mutually exclusive but are overlapping and reinforcing. For instance, the inclusion of students as partners (P1) will necessarily contribute to overall inclusion (P6) by bringing more voices to the table when it comes to departmental change. Similar linkages exist between other principles. We briefly describe the theory underlying each of these principles.

    Students Are Partners in the Educational Process.

    The educational research literature is unequivocal that good education must be grounded in the students themselves (e.g., Moll et al., 1992; Smith et al., 1994). As key stakeholders in the educational process and part of a population that is constantly changing and evolving, students bring perspectives that staff and faculty cannot. When students are engaged as partners in the educational process, their voices are valued by others (e.g., faculty) and they share power in the department, which results in students having a role in decision making. When department members believe that students should be treated as partners, this belief can manifest in behaviors such as getting to know students (e.g., Moll et al., 1992); building on student thinking (e.g., Smith et al., 1994); and creating a trusting, positive community (e.g., Yackel and Cobb, 1996; Boaler and Greeno, 2000). In the same way, a change effort aiming to serve students must be deeply grounded in students’ needs and must actively involve them in the change process.

    Work Focuses on Achieving Collective Positive Outcomes.

    When discussions focus on problems, individuals tend to default to their individual preferred solutions, which often results in disagreement. Further, a “problems focus” tends to address issues that are at the surface level rather than digging deeper to identify the root cause (Quan et al., 2019). In contrast, a focus on positive outcomes promotes flexibility, because any given outcome can be achieved in many ways (e.g., Cooperrider et al., 2008; Elrod and Kezar, 2015). Evidence that a focus on outcomes is valued may be observed in how groups approach making change. Groups that have an outcomes focus will often ground their desired outcomes in a shared vision, cocreated by stakeholders, which states a group’s goals and guides change efforts.

    Data Collection, Analysis, and Interpretation Drive Decision Making.

    The use of systematic evidence supports better decision making (Fry, 2014). It allows change efforts to track progress, which supports ongoing reflection on practice (Henderson et al., 2011). Moreover, systematic evidence provides a basis for decision making beyond personal anecdote. This is key, because individuals rely on heuristics when making decisions (Kahneman, 2011). When groups believe that data contribute positively to decision making, they are more likely to collect and analyze data from multiple sources (e.g., institutional research, education literature) and in multiple forms (e.g., survey responses, interview data) and to consider multiple interpretations. The use of varied data and appropriate interpretation can ameliorate biases.

    Collaboration between Group Members Is Enjoyable, Productive, and Rewarding.

    Enacting and sustaining change is a complex process that cannot be carried out by individuals or small isolated groups (Fairweather, 2008; Quan et al., 2019). Sustainable improvement is a collective enterprise; thus, building a comfortable, trusting environment is fundamental to the productive functioning of the department (Schein, 2010). Additionally, it is important to afford agency to the participants in a department to achieve outcomes that they care about personally (Deci and Ryan, 2000; Bandura, 2006). Authentic collaboration between department members is not automatic, however, and is established through activities that foster community building (Quan et al., 2019). Inherent to this principle is the belief that change is not something to be done to others but something that participants drive. This belief can manifest in many behaviors, such as putting processes into place during meetings to ensure equity for decision making, engaging in reflective practices about how the team is functioning, and participating in activities as simple as sharing exciting personal news.

    Continuous Improvement Is an Upheld Practice.

    Educational problems rarely “stay solved”; instead, they need continuous attention to prevent backsliding (Bryk et al., 2011; Fry, 2014). Thus, the “solution” to a complex problem needs to be thought of as an ongoing process, not a one-time event. A department that wishes to sustain the changes it implements needs to create mechanisms for continuous improvement, as “iterations” lead to better solutions (Gharajedaghi, 2006; Quan et al., 2019). Additionally, the concept of “early wins” relates to the political nature of a change process (Kotter, 1996). The change effort must intentionally plan ways to demonstrate progress and success along the way or identify “early wins” en route to the larger goal. Structural changes that allow for new ways of operating mean the department can keep working toward a goal and improve indefinitely.

    Work Is Grounded in a Commitment to Equity, Inclusion, and Social Justice.

    Research shows that collaborations are more productive when they involve members from a diversity of groups (Maruyama et al., 2000; Chang et al., 2003; Herring, 2009) and are more able to support a diversity of perspectives (Kezar, 2013). Groups that value diversity will ensure their efforts involve faculty members from a variety of ranks and from different backgrounds whenever possible and include staff and students. Diversity should be a key consideration for any outcome the department aims to achieve, thus playing a role in collaboration and decision making. Groups that make diversity a priority will work to be aware of systematic oppression and consider the impact decisions have on marginalized populations. Grounding work in this way ensures that enacted changes impact different populations in equitable ways (Quan et al., 2019). This addresses persistent systemic issues of equity in science, technology, engineering, and mathematics (STEM) education (e.g., Martin, 2003; Gutiérrez, 2008) and is a matter of social justice.

    Departmental Action Team Project

    Six core principles underpin the DAT model, which promotes change in undergraduate education through the efforts of a departmentally based team. A DAT typically consists of four to eight department members and one or two facilitators who are external to the department. In accordance with the principles, DAT members generally represent a diverse cross-section of the department (e.g., roles, gender, race), and the model explicitly recommends that students be invited to participate as members of the team. DATs work on addressing an educational challenge in their department (e.g., increasing the sense of belonging in the program, developing a new major) while simultaneously attending to group culture, guided by the principles. Ultimately, DATs aim to effect change not only in their undergraduate programs but also in their departmental cultures as related to the six principles.

    Motivation for Developing the DELTA Survey

    Given the aim of DATs to effect cultural change in their departments, we needed an instrument that could measure such changes at scale. Hence, we developed the DELTA survey to provide data to answer the questions:

    1. What structural and cultural changes are DATs (or other change efforts) able to create, and over what timescales?

    2. How do these changes depend on departmental and institutional contexts?

    We intend for the DELTA survey to provide insight about the departmental culture at the time of distribution; when implemented over time, the responses can be compared to characterize cultural change within a department. Given that no instrument exists to capture data about departmental culture, this paper outlines the development of the DELTA survey.

    Survey Components

    To study issues of cultural change in accordance with our core principles, we developed a survey that consists of three components. The first component includes two of the six factors (leadership and collegiality) from the Survey of Climate for Instructional Improvement (SCII; Walter et al., 2015). For example, an item related to the leadership factor asks participants whether they agree on a scale of 0 (strongly disagree) to 5 (strongly agree) with the statement “The Department Chair encourages instructors to go beyond traditional approaches to teaching.” Similarly, an item related to the collegiality factor asks participants whether they agree with the statement “Instructors in my department discuss the challenges they face in the classroom with colleagues.” While the SCII focuses on climate, it is an established instrument that can speak to instructional culture, so we include it as a reference to compare with our cultural survey. The items taken from the SCII were kept intact to maintain fidelity of the leadership and collegiality constructs.

    The second component is a brief social network analysis (SNA) survey. SNA is an assortment of graph-theoretic and statistical techniques for investigating the social structure of a group of people based on their interactions. Investigating different types of interactions (e.g., advice seeking, collaborative relationships) can reveal information about key players with regard to a variety of activities. For example, a comparison of five mathematics departments identified instructional leaders, those who have influence over others’ instructional practice, as distinct from simple popularity (Apkarian and Rasmussen, 2017). An example item from the SNA component of the DELTA survey asks participants to identify the people in the department with whom they discuss making changes to undergraduate education.
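
    As an illustration of how such name-generator data might be analyzed (the names and column labels below are hypothetical, and this is not the project’s analysis code), the following R sketch uses the igraph package to build a directed network from the responses and count how often each person is named by colleagues.

        library(igraph)

        # Hypothetical name-generator responses: each row is one directed tie from the
        # respondent to a person they discuss undergraduate education changes with.
        edges <- data.frame(
          respondent   = c("A. Rivera", "A. Rivera", "B. Chen", "C. Okafor"),
          named_person = c("B. Chen", "D. Patel", "D. Patel", "B. Chen"),
          stringsAsFactors = FALSE
        )

        g <- graph_from_data_frame(edges, directed = TRUE)

        # In-degree counts how often a person is named by colleagues; frequently named
        # members are candidates for the "instructional influencers" discussed above.
        sort(degree(g, mode = "in"), decreasing = TRUE)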

    Longitudinal analyses of the SNA data can provide insight into changes in social structure over time, which may be desirable in a change effort that aims to shift the culture of a department. For example, progress toward a more collaborative environment for instructors could be indicated by an increase in conversations about teaching challenges; openness to innovative pedagogical strategies could be indicated by more department members seeking advice from a discipline-based educator. In addition, social network analysis can be used to understand the spread of localized efforts (Atteberry and Bryk, 2010; Daly et al., 2010; Coburn et al., 2012). Thus, the inclusion of an SNA component can speak to the spread of culture among different members of the department. It also allows us to understand the relative status of DAT participants as influential in the department, which is related to their ability to act as change agents who influence undergraduate education. Educational issues are complex, however, and cannot be addressed by only considering what happens in the classroom and knowing the identity of influencers. For example, whether a department is using relevant data to inform decisions about undergraduate education (e.g., curricular structure) can have a significant impact on students’ experiences, which is why we needed to include questions on the survey that examined cultural change more broadly within the department.

    To investigate culture more broadly, we used the core principles as a framework for defining culture and as the basis for the third and final component of the survey. These questions capture participants’ perceptions of both an ideal departmental culture and the current departmental culture. Recognizing that sustained changes in undergraduate education go beyond faculty, this survey was developed with the intent to administer to many stakeholders in a department (advisors, staff, etc.). Each of these stakeholders plays a unique role in an undergraduate program and thus provides a valuable perspective that helps describe the culture around undergraduate education in a department. To deepen our understanding of a department’s culture, we found it important to ask individuals about their current perceptions as well as what they would consider to be ideal qualities of an undergraduate program. A focus on current and ideal aspects of the department allows our survey to function as a tool for formative feedback to department members, because it paints a picture of what they aspire to be, as well as where the department perceives it is falling short. These data, combined with responses from the SCII and SNA questions, provide a rich description of departmental culture that integrates a variety of perspectives.

    In Table 1, the desired data regarding departmental culture and undergraduate education are mapped to the different sections of the DELTA survey. By capturing these data, the DELTA survey can characterize the departmental context in which cultural interventions take place, both before and after those interventions. The DELTA survey in its entirety can be found in the Supplemental Material.

    TABLE 1. DELTA survey section and correlated output

    Question type | Desired information
    SCII collegiality items (question set 1 in DELTA survey) | Understand collegiality between department members regarding instruction and teaching support within the department.
    SCII leadership items (question set 1 in DELTA survey) | Assess the perception of leadership within the department regarding undergraduate education.
    SNA items (question set 2 in DELTA survey) | Determine whether certain individuals or groups are influential in the department regarding undergraduate education, and uncover relationships between influencers and others.
    Core principle items (question set 3 in DELTA survey) | Determine departmental alignment with the core principles; identify whether there is a difference between the current state of undergraduate education as perceived by department members and their desired state.

    In what follows, we describe our procedure for creating the core principle items and combining them with select SCII questions and SNA items to form an instrument intended to characterize departmental culture and cultural change. Further, we discuss the qualitative and quantitative validation of responses to the core principle questions.

    METHODS

    Overall Process

    Our survey development followed a five-step process: 1) internal development, 2) external feedback, 3) pilot testing, 4) cognitive interviews, and 5) psychometric analysis. We note that these five steps roughly correspond to the eight-step process outlined by Bostic et al. (2019) for observation tool development, but we have incorporated the first four stages into what we call internal development. While we describe our development as a linear process, there was some overlap between each area (e.g., some expert consultations in the middle of our internal development process). This process was used to establish the validity of responses to our survey instrument. Table 2 outlines the steps we followed in the development of the DELTA survey and the stakeholders involved in the work for that step.

    TABLE 2. Steps in DELTA survey development

    Step taken | Participants | No. of participants
    Step 1: Internal development | Project team, education researchers | 7
    Step 2: External feedback | Faculty, graduate student, organizational change experts | 7
    Step 3: Pilot testing | STEM faculty, STEM education researchers | 18
    Step 4: Cognitive interviews | Faculty from multiple institutions | 9
    Step 5: Psychometric analysis | Department members from multiple institutions | 124

    Steps 1–3 were carried out for all three components of the survey. Because the climate questions originated from a previously developed instrument, step 4 focused on the SNA and core principle questions, and step 5 only included the core principle questions in order to gather additional evidence of validity.

    Scales

    To measure changes in the core principles described earlier, we designed a number of items that ask respondents to evaluate both their current and ideal departments. Responses are quantified through the use of a six-point rating scale ranging from 0 (strongly disagree) to 5 (strongly agree). These items use the same scale as the SCII, thus providing a consistent rating scale for participants. The six-point scale helps to separate out participants and forces a choice, as “neutral” and “not applicable” are not included as options (Bishop, 1987; Johns, 2005). Each item is associated with a single core principle, and each core principle is embodied in three unique items. We determined that three items per principle would cover the space of cultural phenomena we were interested in, and three is a generally accepted minimum for achieving internal consistency (Hinkin, 1998). Thus, a total of 18 survey items were created for the final version of the survey.

    Process

    Step 1: Internal Development.

    Our project team worked iteratively to define the survey. This work began more than 3 years ago with the development of the core principles as a part of the original DAT project (Reinholz et al., 2017). The development of the core principles was grounded in a literature review of educational change in STEM education and conversations with experts, both on and off the project. In this manner, the constructs of interest for the DELTA survey (the core principles) were informed by the literature. Since that development, the team had a succession of meetings and email conversations that resulted in a total of 10 revisions of the survey before pilot testing began.

    Step 2: External Feedback.

    In conjunction with our internal development process, we consulted educational experts to provide content validity (Boateng et al., 2018). At the time they were consulted, each of these experts had more than a decade of experience working in the field of educational change and was a member of a doctoral-granting institution; many were also serving in advisory roles for other change projects. The experts provided feedback on whether the survey components represented the constructs we intended to measure, as well as on the presentation of the survey items. Additionally, we sought initial feedback from STEM faculty and a graduate student who did not have expertise in educational change. Because our target population includes STEM faculty, this feedback contributed to providing face validity for the survey (Haynes et al., 1995).

    Step 3: Pilot Testing.

    Pilot tests were conducted with faculty and researchers from a range of backgrounds, including education, chemistry, mathematics, nursing, and evaluation. The pilot testing provided general feedback on taking the DELTA survey when it was near its final version. In total, 16 participants completed the DELTA survey, which consisted of the subgroup section asking participants to identify with a subgroup in the department, the SCII section, the SNA section, and the core principles section. Feedback from the pilot tests concerned goals of the survey, clarity of questions and survey prompts, meanings of terms, and length of time for taking the survey. This phase also contributed toward face validity by confirming whether our target population deems the survey components to be relevant to the desired goals of the survey and whether the items themselves are clear (Haynes et al., 1995).

    Step 4: Cognitive Interviews.

    Finally, cognitive interviews were conducted in the final stages of survey revision to provide additional evidence of validity. Participants for the cognitive interviews were from a diverse set of institutions, including a public land-grant university, a private religious college, and two private liberal arts colleges. Although the majority of the participants were tenure-track faculty, one senior instructor was also interviewed, and all came from STEM departments. The participants for the cognitive interviews did not overlap with the participants from the pilot-testing step. The interviews covered the SNA (component 2) and core principle (component 3) items. These interviews were conducted by three of our team members (C.N., G.Q., and D.R.), using a think-aloud protocol with additional prompting of respondents. The cognitive interviews provided fine-grained feedback on how participants understood our scales and informed our final revisions. As recommended by others (Beatty and Willis, 2007), our interviews were conducted in rounds until diminishing returns in value added to the survey development were observed. Additionally, the sample size of nine participants falls within the accepted range of 5 to 10 participants for investigating usability of an instrument (Macefield, 2009).

    Step 5: Psychometric Analysis.

    With the nearly finalized survey developed, we broadly administered the items to participants to support psychometric analysis. We collected data from 124 participants who had a broad range of backgrounds representative of our target population for the DELTA survey. Not all participants chose to provide demographic information; of those who did, most came from doctoral-granting universities, although at least 10% reported coming from institutions classified as master’s- or baccalaureate-granting institutions. More than half of the participants reported being members of STEM departments, 10% were members of education departments, 5% reported being from centers for teaching and learning (CTLs), and another 5% identified themselves as members of non-STEM departments.

    We performed exploratory factor analysis (EFA) to examine the ways in which the core principle items related to each other. The goal of creating factors was to understand which items could be averaged together, so that a department could be provided with a few summary numbers, rather than 18 numbers in total (for 18 items). We also investigated internal consistency, using Cronbach’s alpha, to check whether the relationships between items could reliably be found. Because our sample size exceeded 100 participants, this step serves to provide reliability evidence for the DELTA survey (Guadagnoli and Velicer, 1988).

    RESULTS AND ANALYSIS

    Internal Development

    The DAT project seeks to promote a culture that is guided by core principles, so the majority of the internal development phase focused on refining the language around the core principles. The initial version of the core principles was developed by working with the organizational and educational change experts of the original DAT project (Corbo et al., 2016). The language and focus for the principles were sharpened through iterative revisions made by the project team. Three questions were produced for each core principle and correspond to behaviors, values, and artifacts related to the core principle. For example, the first core principle states: “Students are partners in the educational process.” As previously articulated, this core principle focuses on students’ active involvement in departmental decision making and their ownership over their education. The three DELTA survey questions associated with this principle are:

    1. Students actively contribute to departmental decision making around undergraduate education.

    2. Faculty and staff actively seek out student input about the department on an ongoing basis.

    3. Students see themselves as having a say in how departmental decisions are made.

    We discuss the ability of the DELTA survey to capture the complexity of these core principles when we discuss our factor analysis results.

    During this stage, we consulted other surveys that informed the content and structure of the DELTA survey. The PULSE Vision & Change Rubrics (Brancaccio-Taras et al., 2016) measure the progress of change aligned with the Vision and Change recommendations. Several sections of questions were considered for inclusion in the DELTA survey, including questions related to faculty practice and climate for change. Ultimately, these questions were removed from the DELTA survey to decrease its overall length and because these topics were addressed in other sections of the DELTA survey. As mentioned previously, the SCII (Walter et al., 2015) was also explored during this phase, and items for two of the factors from the SCII have been included in the DELTA survey.

    External Feedback

    During the development of the DELTA survey, feedback was solicited from academics. Three faculty members and one graduate student provided general comments on the survey from the perspective of department members who are asked to take a survey about undergraduate education in their departments. This feedback largely focused on clarity of the language regarding item intent (e.g., when an item asks about recruiting, does “recruiting” refer to the people doing the recruiting or those being recruited?), privacy concerns (e.g., participants might feel uncomfortable providing names of people in the department for the SNA component of the survey), and whether participants will have the knowledge to answer the items (e.g., how evaluations for tenure are conducted). Changes resulting from this feedback included revisions of the item language to be more specific and the addition of text and an example to explain how the data from the SNA items would be analyzed and protected.

    Three additional faculty whose research interests involve change in higher education were also consulted. These experts in higher education change provided more targeted feedback about the DELTA survey, including recommendations on other validated surveys to draw inspiration from, modifications to the language used in the questions, comments on how the questions might be interpreted, and overall structure of the survey. These experts have experience designing and validating surveys and influenced aspects of other steps taken during the development of the DELTA survey (e.g., questions asked during cognitive interviews, participants chosen for pilot testing). These experts in higher education were consulted at different intervals throughout the development of the DELTA survey so that they could comment on the revisions.

    One expert recommended we review the Colorado Learning Attitudes about Science Survey for Experimental Physics (Zwickl et al., 2012). This survey asks participants to respond to items in two ways: first in terms of how they themselves think about the provided statement, and then in terms of how they think physics experts would think about it. This structure inspired the format of our core principle questions; participants are given a statement related to a core principle and are asked to respond to it first in the context of their current departments and then in the context of their ideal departments. Please see the Supplemental Material to view the format of these questions.

    Pilot Testing

    Another major phase in the development of the DELTA survey involved pilot testing. Before the pilot testing, a majority of the feedback came from participants with backgrounds in education research. Because the DELTA survey is intended to be given to department members with varying backgrounds and research interests, the pilot testing deliberately included an equal mix of participants who did and did not have expertise in education research. All modifications resulting from this phase were made to the SNA or core principles section, as we wished for the questions taken from the SCII to remain identical to the original survey. Major edits included removing a section that asked participants to identify with subgroups, due to confusion over how to respond to the item; adding clarifying instructional text to certain sections; and refining the wording of the core principle items.

    Cognitive Interviews

    Cognitive interviews provided a source of fine-grained feedback about the survey. There were three rounds of cognitive interviews, with a total of nine participants, and several of the authors (C.N., G.Q., and D.R.) conducted the interviews. The participants were all faculty with a mix of backgrounds; some conducted STEM education research, while the rest worked in other content areas, and some had been a part of a DAT, while others were unaware of this project before being asked to take part in the interview. Our cognitive interviews focused on the SNA items and core principle items. The participants were asked to think aloud as they completed the survey and were prompted by interviewers for elaboration as appropriate following recommended guidelines (Beatty and Willis, 2007). The interviewers asked clarifying questions as the participants responded to each survey item, primarily regarding interpretation of the item, to ensure that the reasoning behind the participants’ answers was captured. At the completion of the survey, the interviewer asked follow-up questions regarding overall impressions of the survey, the organization of the survey, and suggested modifications to the survey.

    The first round of interviews prompted changes in the phrasing of the core principle items. These changes included adding specificity regarding the term “stakeholders” as well as replacing what may be considered educational jargon with more straightforward language (e.g., replacing “marginalized populations” with “underrepresented populations”). Many of the major revisions that occurred were put into place after the second set of cognitive interviews and included making the Likert-scale items forced response (climate questions and core principle questions), adding an additional social network analysis item (“I discuss making changes about undergraduate education with the following people outside of the department and institution”), and adding a comment box for survey participants if they wished to include more information. These changes were in response to participants thinking aloud about additional details they would like to include to accompany their responses (e.g., the original SNA items only asked about relationships within a department, and the other survey items are Likert response and did not provide any mechanism for elaboration).

    The impact of the changes was probed during the following set of interviews; thus, the third round of cognitive interviews (n = 5) served to check the revisions of the previous two rounds. Although the changes were not explicitly pointed out in interviews, the authors took note of whether the changes stood out to participants. Overall, the participants found the content and organization of the survey to be straightforward, and this round of cognitive interviews primarily provided evidence of ontological authentication of the survey.

    All of the cognitive interviews provided evidence that the survey items were being interpreted as intended and that there was no “correct” answer. This serves as a source of validation for the results of the DELTA survey. Although participants provided a wide range of responses to the survey items, their interpretations of what the items were asking them were consistent. In some cases, the authors noted that interpretations were context dependent (e.g., “department members” could mean different groups of people depending on the institution and department). Because the survey is intended to capture the complexities of departmental culture, the authors did not further define groups such as department members so that participants could interpret the item based on the contexts of their departments. Additionally, the participants in the cognitive interviews had a range of responses to the survey, so it is evident that the survey allows for a diversity of opinions to be captured and does not lead participants to respond in a particular manner.

    Factor Analysis

    After incorporating revisions from the cognitive interviews, we collected responses from a total of 124 participants to perform EFA on the core principle items (and later a confirmatory factor analysis [CFA]). The participants for this step were all self-identified department members from a multitude of departments (29 unique departments reported by participants) and institutions (43 unique institutions reported by participants). The participants represented the diversity of the targeted population for this survey (e.g., not only tenure-track faculty).

    We used the fa function (from the psych package) in R to perform the EFA. To determine whether to use an oblique or orthogonal rotation, we looked at the correlation matrix for participant responses to the factor questions. We found that a number of correlations exceeded 0.32 (approximately 10% shared variance; Brown, 2009), so an orthogonal rotation was not appropriate, because that would assume the factors were uncorrelated. Accordingly, we used an oblique rotation (oblimin).

    We used an ordinary least-squares minimum residual extraction method. While there is no standard cutoff for removing items from an EFA, a generally recognized minimum is that items with a loading of less than 0.3 are dropped (Costello and Osborne, 2005). We chose to go with a more stringent range, as suggested by Comrey and Lee (1992), who recommend the following guidelines: 0.32 (poor), 0.45 (fair), 0.55 (good), 0.63 (very good), and 0.71 (excellent). We began with a six-factor model, because we had six core principles. However, in adopting the guideline of 0.45 as a cutoff for a “fair” loading, we found that six of the 18 items needed to be discarded. For these items, the loadings were typically split across two or three factors, so there was not a strong enough loading on any single factor. Removing these items from the model, we instead created a four-factor model that had at least fair loadings for all of the items, and good or excellent loadings for most of them. In this model, all items that remained loaded onto only a single factor. While at least one item from each core principle remained in the model, principles 1, 3, 4, and 6 are more strongly represented. The model can be found in Table 3.
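
    As a rough illustration of this workflow (not the project’s actual analysis script), the sketch below uses the psych package in R to check the item correlations, run a minimum residual EFA with an oblimin rotation, and apply the 0.45 loading cutoff; the simulated responses and column names (p1a through p6c) are placeholders for real DELTA data.

        library(psych)

        # Placeholder data standing in for 124 respondents' answers to the 18 core
        # principle items; real responses would be the 0-5 ratings described above.
        set.seed(1)
        latent <- matrix(rnorm(124 * 6), ncol = 6)
        items <- as.data.frame(latent[, rep(1:6, each = 3)] +
                               matrix(rnorm(124 * 18, sd = 0.8), ncol = 18))
        names(items) <- paste0("p", rep(1:6, each = 3), c("a", "b", "c"))

        # Correlations above 0.32 (roughly 10% shared variance) argue for an oblique rotation.
        item_cors <- cor(items, use = "pairwise.complete.obs")
        any(abs(item_cors[lower.tri(item_cors)]) > 0.32)

        # Minimum residual (ordinary least-squares) extraction with an oblimin rotation
        # (the oblimin rotation relies on the GPArotation package).
        efa_fit <- fa(items, nfactors = 4, rotate = "oblimin", fm = "minres")

        # Show only loadings that meet the 0.45 ("fair") threshold.
        print(efa_fit$loadings, cutoff = 0.45)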

    TABLE 3. Factors retained in the four-factor model

    (Associated core principle) Item | Factor | Loading
    (P1) Students actively contribute to departmental decision making around undergraduate education. | 1 (A) | 0.766
    (P1) Faculty and staff actively seek out student input about the department on an ongoing basis. | 1 (B) | 0.642
    (P1) Students see themselves as having a say in how departmental decisions are made. | 1 (C) | 0.696
    (P2) The department revisits and updates its vision over time. | 2 (A) | 0.455
    (P3) The department collects multiple forms of evidence about undergraduate education on an ongoing basis. | 2 (B) | 0.781
    (P3) Data collection, analysis, and interpretation inform departmental decision making about undergraduate education. | 2 (C) | 0.830
    (P4) All department members are collaborators with equitable access to contribute to decision making. | 3 (A) | 0.513
    (P4) Department members interact with one another in functional and productive ways. | 3 (B) | 0.886
    (P5) Department members view change as an ongoing process rather than an event (e.g., they believe that complex problems require continued attention to stay solved). | 3 (C) | 0.499
    (P6) The department intentionally recruits a diverse membership (e.g., with respect to gender identity, race, ethnicity). | 4 (A) | 0.728
    (P6) Department members consider the impact of their decisions on underrepresented populations. | 4 (B) | 0.850
    (P6) Department members feel a sense of personal responsibility toward improving inclusion in the department. | 4 (C) | 0.563

    Table 4 shows the six items that were dropped from the model. These items came primarily from principles 2 and 5, which focus on building a shared vision and continuous improvement.

    TABLE 4. Items dropped from the original model

    (P2) Department members use a shared vision to guide work aimed at achieving change.
    (P2) The process of developing the department’s vision includes a diversity of relevant stakeholders.
    (P3) Department members actively and regularly identify and avoid bias (e.g., confirmation bias, relying on anecdote) when interpreting data about undergraduate education.
    (P4) The department develops community through activities such as eating together and having celebrations.
    (P5) When making changes to the department, department members explicitly attend to the long-term sustainability of those changes.
    (P5) Department members regularly reflect on how the department can be improved.

    We then tested whether the model proposed by the EFA was a good fit for our data. We found that the root-mean-square of the residuals is 0.03, which is near 0, as desired. We also found that the root-mean-square error of approximation (RMSEA) is 0.05, which indicates an acceptable fit (Hu and Bentler, 1999). Finally, the Tucker-Lewis index (TLI) of reliability is 0.973, which is acceptable, as it is greater than 0.9. In addition, after generating the model, we computed Cronbach’s alpha for each of the factors (Cronbach, 1951). All values were greater than 0.7, which is generally considered acceptable reliability (Nunnaly, 1978). Examination of the four emergent factors, along with the content of the dropped items, revealed a number of themes.
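
    Before turning to those themes, we note that the reliability check could be outlined as follows; this is a minimal sketch assuming a data frame of the 12 retained items grouped as in Table 3, with hypothetical column names and placeholder data rather than the actual DELTA responses.

        library(psych)

        # Placeholder responses for the 12 retained items (0-5 ratings in practice).
        set.seed(2)
        items <- as.data.frame(matrix(sample(0:5, 124 * 12, replace = TRUE), ncol = 12))
        names(items) <- c("p1a", "p1b", "p1c", "p2a", "p3a", "p3b",
                          "p4a", "p4b", "p5a", "p6a", "p6b", "p6c")

        # Item groupings mirroring the four factors in Table 3.
        factor_items <- list(
          students_as_partners   = c("p1a", "p1b", "p1c"),
          continuous_improvement = c("p2a", "p3a", "p3b"),
          collaboration          = c("p4a", "p4b", "p5a"),
          equity                 = c("p6a", "p6b", "p6c")
        )

        # Cronbach's alpha per factor; values above roughly 0.7 are typically acceptable.
        sapply(factor_items, function(cols) alpha(items[, cols])$total$raw_alpha)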

    Factor 1: Students as Partners.

    The first factor was composed solely of the items related to core principle 1, and thus remained intact as “students are partners in the educational process.” We understand this factor to be based on the belief that students are given decision-making power and agency in their own education and are viewed as equal partners rather than clients.

    Factor 2: The Department Is Continuously Working to Improve Undergraduate Education.

    The second factor is a partial blend of core principles 2 (work focuses on achieving collective positive outcomes) and 3 (data collection, analysis, and interpretation drive decision making). This factor considers the department’s efforts to improve education in an evidence-based manner. As data are collected to inform changes, the department’s vision for undergraduate education is subsequently updated. The items related to core principle 2 that did not load onto this factor involve the development and use of a collective vision for undergraduate education. Although many undergraduate programs have a mission statement, it is unusual for department members to explicitly participate in a collective visioning process or to spend time using a shared vision to inform decision making. We hypothesize that, because department members do not typically participate in collective visioning, they responded to these items in a different manner than when they responded to the items that loaded onto factor 2. Similarly, the final item related to principle 3 involved practices in which department members may not regularly engage (identifying and mitigating bias regarding undergraduate education data), and thus participants did not respond to this item in a similar manner as when they responded to the other two items related to core principle 3.

    Factor 3: Collaboration between Department Members Promotes Change.

    Factor 3 focuses on the ways in which department members interact in the process of making changes to undergraduate education. This factor comprises two items based on principle 4 (collaboration between group members is enjoyable, productive, and rewarding) and one item based on principle 5 (continuous improvement is an upheld practice). The components underlying this factor are that department members must work together to make change and that ongoing collaboration is necessary for change. The remaining item related to principle 4 that was dropped in the analysis focuses on how a community is created, and we infer that participants separated the concept of collaboration from community. The two items tied to principle 5 that were dropped from this factor home in on mechanisms of sustainability and reflection. These items in particular each represent unique concepts that participants may not perceive as contributing to our proposed overarching theme of continuous improvement, which may explain why these items were dropped from the model.

    Factor 4: Work Is Grounded in a Commitment to Equity, Inclusion, and Social Justice.

    This factor was composed solely of the three items related to core principle 6 and captures a perspective related to diversity and inclusion. This factor includes actions department members can take regarding equity, diversity, and inclusion, such as considering the impact of decisions on underrepresented populations and deliberately recruiting diverse membership.

    Our next step was to run a CFA of the proposed factor structure. We began by plotting histograms of all of the variables and confirmed that they were approximately normal, so they could be used in the CFA without transformation. We used the lavaan package (version 0.6-5) in R to perform the CFA, with the full information maximum likelihood estimator for missing data. We also standardized all latent factors to have a mean of 0 and variance of 1, for ease of interpretation.

    We followed the recommendations of Schreiber et al. (2006) for interpreting the fit of the CFA model. Those authors suggest target values for fit indices of: the nonnormed Tucker-Lewis index (TLI > 0.95), comparative fit index (CFI > 0.95), and RMSEA < 0.06 to 0.08. For our model, we found TLI = 0.981, CFI = 0.986, and RMSEA = 0.039 with a 90% confidence interval of [0, 0.073]. These indices indicate that the model had a good fit. The factor loadings for the model are given in Table 5, and residual covariances are given in Table 6.
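
    The following is a minimal sketch of this CFA using lavaan; the indicator names mirror Table 5, and the simulated data are placeholders standing in for the actual DELTA item responses.

        library(lavaan)

        # Placeholder data with a four-factor structure standing in for the survey responses.
        set.seed(3)
        latent <- matrix(rnorm(124 * 4), ncol = 4)
        items <- as.data.frame(latent[, rep(1:4, each = 3)] +
                               matrix(rnorm(124 * 12, sd = 0.8), ncol = 12))
        names(items) <- paste0("i", rep(1:4, each = 3), c("a", "b", "c"))

        cfa_model <- '
          students      =~ i1a + i1b + i1c
          improvement   =~ i2a + i2b + i2c
          collaboration =~ i3a + i3b + i3c
          equity        =~ i4a + i4b + i4c
        '

        # std.lv = TRUE standardizes the latent factors (variance 1); missing = "fiml"
        # applies full information maximum likelihood when responses are missing.
        cfa_fit <- cfa(cfa_model, data = items, std.lv = TRUE, missing = "fiml")

        # Fit indices reported above: TLI, CFI, and RMSEA with its confidence interval.
        fitMeasures(cfa_fit, c("tli", "cfi", "rmsea", "rmsea.ci.lower", "rmsea.ci.upper"))

        # Standardized loadings comparable to the Beta column in Table 5.
        standardizedSolution(cfa_fit)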

    TABLE 5. Factor loadings for the CFA

    Latent factor | Indicator | B | SE | Z | p value | Beta
    Students as partners | 1A | 0.795 | 0.100 | 7.987 | 0 | 0.709
     | 1B | 0.863 | 0.096 | 9.032 | 0 | 0.787
     | 1C | 0.817 | 0.107 | 7.654 | 0 | 0.681
    Continuous improvement | 2A | 0.864 | 0.114 | 7.597 | 0 | 0.662
     | 2B | 0.908 | 0.099 | 9.137 | 0 | 0.752
     | 2C | 1.052 | 0.098 | 10.702 | 0 | 0.854
    Collaboration | 3A | 0.969 | 0.121 | 8.030 | 0 | 0.676
     | 3B | 1.059 | 0.099 | 10.657 | 0 | 0.831
     | 3C | 0.973 | 0.095 | 10.267 | 0 | 0.807
    Equity | 4A | 0.850 | 0.103 | 8.253 | 0 | 0.690
     | 4B | 1.015 | 0.106 | 9.568 | 0 | 0.769
     | 4C | 1.133 | 0.100 | 11.354 | 0 | 0.869

    TABLE 6. Covariance between residuals

    Indicator | 1A | 1B | 1C | 2A | 2B | 2C | 3A | 3B | 3C | 4A | 4B | 4C
    1A |
    1B | −0.011
    1C | 0.027 | −0.005
    2A | −0.062 | 0.050 | −0.011
    2B | 0.119 | 0.001 | −0.014 | −0.052
    2C | −0.017 | 0.023 | −0.095 | −0.021 | 0.034
    3A | 0.197 | 0.000 | 0.143 | −0.021 | −0.089 | −0.176
    3B | −0.112 | 0.001 | −0.058 | 0.140 | −0.104 | 0.030 | 0.067
    3C | −0.095 | 0.085 | 0.001 | 0.150 | −0.017 | 0.039 | −0.065 | −0.004
    4A | 0.010 | 0.043 | −0.097 | 0.062 | −0.199 | −0.035 | −0.050 | −0.041 | 0.016
    4B | 0.006 | −0.053 | −0.024 | 0.187 | −0.002 | −0.089 | −0.036 | −0.065 | 0.018 | 0.106
    4C | 0.012 | 0.019 | −0.006 | 0.119 | 0.060 | −0.013 | 0.077 | −0.005 | 0.015 | −0.036 | −0.024

    Using the DELTA to Characterize a Department

    To illustrate the use of the DELTA survey, we provide results from one department, which we refer to as the Herbs department. We received responses from 12 members of the Herbs department. Figure 1 provides the distributions of each of the scales for the factors that emerged for the core principle questions along with the responses to the SCII questions about collegiality and leadership. The box plot depicts the median, range of each quartile, and outliers for the set of responses to each question.

    FIGURE 1. Principles for change and SCII responses in the Herbs department.

    Figure 1 contains the compiled responses for the Herbs department for the core principle factors and the climate questions. The responses for the core principles are separated into “actual department” and “ideal department.” The responses to the SCII questions are separated into collegiality (SCII_Inst) and leadership (SCII_Dept).

    The data in Figure 1 can be used to provide formative feedback to a department. We see that, for each of the four factors pertaining to principles, there are discrepancies in how the participants viewed the state of the actual department versus their notions of an ideal department. For example, the first construct relates to principle 1: students are partners in the educational process. On average, participants disagreed that this was the case in the department (mean = 1.69, median = 1.66), but overall they agreed that this would be the case in an ideal department (mean = median = 3.16). When looking at responses related to factor 3, the discrepancy between actual and ideal is smaller (mean difference = 0.83; median difference = 1.17); participants generally agreed that people are currently collaborating to promote change, and for the ideal department there is only slightly more agreement. Comparing participants’ responses to these factors can guide the work of change efforts in the department; for example, the large discrepancy between current and ideal departments for the students as partners construct suggests that department members find value in creating an environment where students are truly viewed as partners, but do not find that this currently exists in the department. This suggests that future work that seeks to engage students as partners would be supported by department members.
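
    The actual-versus-ideal comparison described above can be computed in a few lines of R; in this sketch, the data frame, column names, and placeholder responses are hypothetical stand-ins for a department’s factor-level scores.

        factor_cols <- c("students", "improvement", "collaboration", "equity")

        # Placeholder responses for 12 department members on the 0-5 scale; real data
        # would be each respondent's average across the items within a factor.
        set.seed(4)
        dept_responses <- as.data.frame(matrix(sample(0:5, 12 * 8, replace = TRUE), ncol = 8))
        names(dept_responses) <- c(paste0(factor_cols, "_current"), paste0(factor_cols, "_ideal"))

        current_means <- sapply(factor_cols, function(f) mean(dept_responses[[paste0(f, "_current")]]))
        ideal_means   <- sapply(factor_cols, function(f) mean(dept_responses[[paste0(f, "_ideal")]]))

        # Large positive gaps flag constructs the department values but feels it has not yet achieved.
        round(ideal_means - current_means, 2)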

    The responses to the SCII questions for the Herbs department reveal that, for the collegiality and leadership constructs, the climate for instructional practices is relatively positive. Most respondents agreed that a respectful instructional community exists in which members value one another and that the leadership’s messaging values teaching and instructional improvement. These responses can be supplemented with the SNA data, which identify the possible instructional community within the department that talks about undergraduate education, shown in Figure 2. In the case of the Herbs department, the SNA corroborates the responses to the leadership questions; in the social network, the department chair is clearly identified as someone central to conversations around undergraduate education.

    FIGURE 2. Social network of participants engaged in conversations about undergraduate education. An arrow points from the person answering the question to the person who is identified in response to the question (e.g., a gray triangle with an arrow pointing to a black square but no arrows pointing away from it means a non-DAT assistant professor identified a DAT member associate professor as someone he or she talks to about undergraduate education). If there are no arrows pointing away from a person, that person did not participate in the survey but was identified in the responses. This social network graphic was created using the program NodeXL Basic.

    Qualitatively, the social network derived for the Herbs department in response to the question, “I discuss making changes about undergraduate education in the department with the following department members (list full names)” can be used to make several inferences. First, the primary contributors to conversations about undergraduate education have been identified by participants as several full professors (diamond shape), the department chair (labeled), a senior instructor (white square), and an academic advisor (white circle). It can be inferred that these department members are likely to have influence over efforts regarding undergraduate education. Deliberately including some of these influencers in change efforts such as DATs can help increase the likelihood of positive and sustained change resulting from change initiatives.

    Second, the diversity of roles with respect to who is contributing to conversations about undergraduate education can be examined using the social network created from the responses to the first question. Although there are many central people in this network, when compared with the number of people and roles in the entire department, it is clear that there are some people who are absent from this network. For example, only one senior instructor was identified in the responses, but there are multiple instructors in the department. Additionally, it is apparent that assistant professors are on the periphery of this social network, indicating that they may not participate in conversations about undergraduate education. In both cases, many implicit factors may have an influence. Assistant professors may be encouraged to focus their attention solely on research, and senior instructors may not be invited to spaces where faculty typically come together (e.g., faculty meetings). Regardless of the causes, this suggests that there are voices in the department that are not being heard in the conversations about undergraduate education and can prompt a department to consider the ways in which department members are included in conversations and decision-making processes regarding undergraduate education.

    DISCUSSION

    Limitations

    Results from the DELTA survey serve to characterize the culture of a department. At this time, the DELTA has not been modified or validated with undergraduate students, as the content is strongly related to departmental processes of which undergraduate students may be unaware. The results can be used to gain a better understanding of departmental perceptions around undergraduate education and departmental culture, but these results must be situated in the context of the department and institution. Inferring meaning from DELTA survey responses should be limited to the constructs of the survey. The DELTA survey responses can contribute to a holistic understanding of departmental culture regarding undergraduate education, particularly in terms of beliefs, values, and artifacts, but should not be used as the sole representation of the department.

    Although the survey was carefully designed to be nonthreatening while still capturing accurate data about departmental culture, there are still several reasons why participants may choose not to respond honestly, thoroughly, or at all. Participants may feel uncomfortable listing the names of people they communicate with about education, or they may feel that their conversations with colleagues are not relevant to this survey. Some participants may not answer questions about their department chairs honestly for fear of retribution, despite the anonymity associated with the survey. Participants may respond falsely to the questions about their visions of an ideal department because they think there is a “right” answer. Pilot testing and preliminary data indicate that these issues were addressed in our revisions and did not surface during subsequent data collection, but it is possible that other department members might not respond the same way. Additionally, although there was significant testing via participant feedback and interviews to examine the ways in which participants interpret the survey questions, some of the questions might elicit specific responses due to the nature of the department (e.g., “department members” includes different groups of people depending on the department). For the purposes of our own work, we chose not to define these groups, as we prefer that the departmental perspective on how these groups are defined prevail in the survey responses; however, other researchers might choose to be more specific in their definitions.

    Finally, given our limited sample size, we performed the EFA and CFA on the same sample. The results from these analyses suggest that the factors we have uncovered are robust; however, a larger sample in the future will enable verification of the factor structure revealed during this development process.

    Interpretation of Core Principle Responses

    There are many reasons why our emergent factors did not align precisely with the theory-based core principles. The items that were dropped typically had loadings split across multiple factors, so they did not load strongly enough on any single factor to be retained. This suggests that the concepts contained within these items were not singular or clearly within the boundaries of one principle. As the authors of the paper on the principles note (Quan et al., 2019), the core principles are both complex and overlapping. Because the core principles are intended to characterize departmental culture, and culture is not easily siloed into distinct factors (or principles), the results of the factor analysis confirm that such overlap exists. Furthermore, although we developed the recommended minimum number of items per construct, it is possible that, had we developed additional items for each construct, more items would have loaded onto factors that aligned exactly with the core principles. Finally, the activities and perspectives reflected in the core principles may not be a standard part of departmental culture, and a lack of familiarity with the content of some items might lead participants to respond with greater variability. For example, if a department does not regularly use data to inform decision making, participants may view each item associated with the core principle about data as distinct. Exploration of responses to the core principle items from participants who are also DAT members might reveal closer alignment with the core principles.
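
    For readers who wish to see the item-retention logic described above in code, the following sketch flags items whose loadings split across factors during an EFA. It is an illustration only: the software (the Python factor_analyzer package), the data file, the rotation, and the cutoff values are assumptions, not the exact procedure used in this study.

```python
# A hedged illustration (not the authors' exact procedure) of flagging survey
# items whose loadings split across factors during EFA. The data file, factor
# count, and cutoffs are assumptions for the sake of the example.
import pandas as pd
from factor_analyzer import FactorAnalyzer

items = pd.read_csv("delta_core_principle_items.csv")  # hypothetical: one column per item

fa = FactorAnalyzer(n_factors=4, rotation="oblimin")   # oblique rotation allows correlated factors
fa.fit(items)

loadings = pd.DataFrame(fa.loadings_, index=items.columns)

# Flag items that fail to load cleanly: no loading >= 0.40 on any factor,
# or a secondary loading within 0.20 of the primary one (a "split" item).
for item, row in loadings.iterrows():
    sorted_abs = row.abs().sort_values(ascending=False)
    primary, secondary = sorted_abs.iloc[0], sorted_abs.iloc[1]
    if primary < 0.40 or (primary - secondary) < 0.20:
        print(f"Candidate for removal: {item} "
              f"(primary={primary:.2f}, secondary={secondary:.2f})")
```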

    Although six factors aligning with the core principles did not emerge from the analysis, the four factors that were revealed can still provide insight about a department’s culture regarding undergraduate education. These factors can be used to quantitatively represent shifts in departmental culture over time in terms of four distinct constructs. Although we recognize that departmental culture is multifaceted, the responses to the four factors can be used to characterize culture in a more defined manner. For example, comparing participants’ responses to a single factor (e.g., factor 1, “students as partners”) for the current versus ideal department can reveal discrepancies in how department members perceive the current culture of the department versus what they believe an ideal culture would look like. These comparisons can provide a natural starting point for educational change efforts in departments: if participants are noting a discrepancy between current and ideal, it is likely they would find value in efforts that work to close that gap. Responses to each construct can also be used to track changes in department members’ views of the current departmental culture over time.
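
    As a concrete illustration of the current-versus-ideal comparison described above, the short sketch below computes a gap score for a single factor. The item names, column-naming convention, and Likert coding are hypothetical.

```python
# A minimal sketch of the current-vs.-ideal comparison described above.
# Column names and the Likert coding (1-5) are hypothetical assumptions.
import pandas as pd

df = pd.read_csv("delta_responses.csv")

# Items belonging to factor 1, "students as partners", rated twice:
# once for the current department and once for the ideal department.
factor1_items = ["partners_q1", "partners_q2", "partners_q3"]

current = df[[f"current_{q}" for q in factor1_items]].mean(axis=1)
ideal = df[[f"ideal_{q}" for q in factor1_items]].mean(axis=1)

# A positive gap means respondents see the ideal department as stronger on
# this construct than their current department -- a natural target for change.
gap = ideal - current
print(f"Mean current = {current.mean():.2f}, mean ideal = {ideal.mean():.2f}, "
      f"mean gap = {gap.mean():.2f}")
```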

    The DELTA survey in its entirety, with all 18 core principle items, can provide a more fine-grained picture of departmental culture. The core principles are grounded in research on the culture of high-functioning teams, and each of the developed items contributes knowledge about an aspect of departmental culture. The results from steps 1–4 of our development process suggest that participants interpret these items in a consistent manner, which indicates the items can still be used individually to learn more about a department’s culture. For example, although the item “When making changes to the department, department members explicitly attend to the long-term sustainability of those changes” did not load onto a single factor and therefore was not retained, responses to this item by itself can be monitored over time to observe changes in participants’ views about the sustainability of change efforts. Additionally, responses to this item can prompt follow-up questions via interviews or conversations at departmental meetings when change efforts are underway to more deeply probe participants’ understandings and perspectives.

    Utility for Research

    The DELTA survey can be used by researchers to gain a new understanding of departmental and institutional culture around undergraduate education, which can then drive research decisions and uncover potential areas for further research. In a formative manner, researchers may use the results of the DELTA survey to design interventions or scaffold change in a department. Low scores for any of the components can be used as potential foci for change efforts, and discrepancies between responses to the core principle items for the current versus ideal department contexts may suggest that participants would support changes related to those principles.

    The DELTA survey can also be used deliberately as a pre- and postsurvey for departments that would like to assess the impact of a particular change or intervention on departmental culture around undergraduate education. If the survey is given at multiple points in time, the social network can be used to infer whether a change initiative or training had an impact on who is perceived as an influencer regarding undergraduate education in the department. For example, if assistant professors participate in a DAT and become more familiar with the education literature and best practices for instruction, they may become more involved in conversations about undergraduate education. Thus, the SNA can be used as one data point to infer the impact of a change effort. Responses to the core principle questions can also be used to assess change in departmental culture: if the survey is given before and after a change effort, examining the gap(s) between participants’ responses for their current versus ideal departments can reveal whether the culture has shifted in a positive direction.
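
    The sketch below illustrates, with invented data, how pre/post social networks might be compared to check whether a group such as assistant professors has become more central after a change effort (e.g., a DAT). The edge lists and role labels are assumptions for illustration only.

```python
# A hypothetical sketch of comparing pre/post social networks to see whether
# assistant professors are named more often after a change effort.
import networkx as nx

def mean_in_degree(edges, members):
    """Average number of times the given members are named by colleagues."""
    G = nx.DiGraph()
    G.add_edges_from(edges)
    return sum(G.in_degree(m) for m in members if m in G) / len(members)

asst_profs = ["AsstProf_1", "AsstProf_2"]

pre_edges = [("Chair", "Prof_A"), ("Prof_A", "Chair"), ("AsstProf_1", "Prof_A")]
post_edges = pre_edges + [("Chair", "AsstProf_1"), ("Prof_B", "AsstProf_2")]

print("Pre-DAT mean in-degree:", mean_in_degree(pre_edges, asst_profs))
print("Post-DAT mean in-degree:", mean_in_degree(post_edges, asst_profs))
```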

    Because the DELTA survey has been validated by participants from a diverse set of departments, responses can be used in a comparative manner as well. Results from the DELTA survey can be used to compare departments within an institution or to compare departments across institutions to highlight major commonalities or differences regarding culture around undergraduate education. If the DELTA survey is distributed in multiple departments at one institution, responses have the potential to generally characterize an institution. It is important to note, however, that the results of the DELTA survey must be interpreted through the lens of both departmental and institutional contexts.

    Utility for Practitioners

    The DELTA survey was developed to characterize a department’s culture regarding undergraduate education, and it has many potential uses for department members at any level and for staff in CTLs. Looking at responses to the SCII questions can provide insights into the instructional climate. Responses to the collegiality construct can indicate whether an appropriate climate exists for instructor input to be valued and used to inform change efforts. Knowing the role of the department chair in the climate for instructional practices also contributes to understanding the overall departmental culture for undergraduate education. Evidence of leadership support (or lack of support), as indicated by responses to this construct, is useful for those wishing to promote change in undergraduate education, as leadership support is sometimes essential for the success and sustainability of change initiatives.

    Participant responses during the cognitive interviews indicated that taking this survey influenced their thinking about departmental culture. One purpose of deploying this survey in the future, then, can be as a catalyst to prompt participants to consider culture and decision making around undergraduate education in the department. To build on this, responses to the SCII and core principle items can be used to inform changes a department wishes to make regarding undergraduate education. The results of the SNA section can inform the selection of a team that has the capacity and standing within the department to make changes to undergraduate education. Analyzing the differences between department members’ perceptions of the actual versus ideal department can help guide the department toward creating a unified vision for its undergraduate program. CTL staff can implement the DELTA survey before they begin working with a department to reveal aspects of departmental culture. They can also use the survey results to measure their own impact as they work with a department to improve undergraduate education.

    Conclusion and Future Work

    The DELTA survey provides a unique method for characterizing the culture of a department. This article offers insight into the development and validation of the DELTA survey, as well as suggested uses. Although the development process involved many steps to ensure rigor, future work includes collecting data from a larger sample of participants and conducting a CFA to verify whether the four-factor structure holds. Furthermore, the DELTA survey has been validated only for department members, not students; thus, a version of the DELTA survey that captures students’ perspectives on departmental culture is an area for future work.

    ACKNOWLEDGMENTS

    We would like to thank the many participants who contributed to the development of this survey with their responses and thoughtful feedback. This work was supported by the National Science Foundation (DUE-1626565). We would also like to thank Andrea Beach, Stephanie Chasteen, and Noah Finkelstein for their feedback during the development process.

    REFERENCES

  • American Association for the Advancement of Science. (2011). Vision and change in undergraduate biology education: A call to action. Washington, DC.
  • Apkarian, N., & Rasmussen, C. (2017). Mathematics instruction leadership in undergraduate departments. In Weinberg, A., Rasmussen, C., Rabin, J., Wawro, M., & Brown, S. (Eds.), Proceedings of the 20th annual conference on research in undergraduate mathematics education (pp. 485–493). San Diego, CA.
  • Association of Public and Land-Grant Universities. (2016). Student engagement in mathematics through an institutional network for active learning (SEMINAL). Retrieved January 25, 2018, from www.aplu.org/projects-and-initiatives/stem-education/seminal/index.html
  • Atteberry, A., & Bryk, A. S. (2010). Centrality, connection, and commitment: The role of social networks in school-based literacy. In Daly, A. J. (Ed.), Social network theory and educational change (pp. 51–76). Cambridge, MA: Harvard Education Press.
  • Bandura, A. (2006). Toward a psychology of human agency. Perspectives on Psychological Science, 1(2), 164–180.
  • Barsalou, L. W. (2008). Grounded cognition. Annual Review of Psychology, 59(1), 617–645.
  • Beatty, P. C., & Willis, G. B. (2007). Research synthesis: The practice of cognitive interviewing. Public Opinion Quarterly, 71, 287–311.
  • Bishop, G. (1987). Experiments with the middle response alternative in survey questions. Public Opinion Quarterly, 51, 220–232.
  • Boaler, J., & Greeno, J. G. (2000). Identity, agency, and knowing in mathematics worlds. In Boaler, J. (Ed.), Multiple perspectives on mathematics teaching and learning (pp. 171–200). Westport, CT: Ablex.
  • Boateng, G. O., Neilands, T. B., Frongillo, E. A., Melgar-Quiñonez, H. R., & Young, S. L. (2018). Best practices for developing and validating scales for health, social, and behavioral research: A primer. Frontiers in Public Health, 6, 149.
  • Bostic, J. D., Matney, G. T., & Sondergeld, T. A. (2019). A validation process for observation protocols: Using the Revised SMPs Look-for Protocol as a lens on teachers’ promotion of the standards. Investigations in Mathematics Learning, 11(1), 69–82. DOI: 10.1080/19477503.2017.1379894
  • Brancaccio-Taras, L., Pape-Lindstrom, P., Peteroy-Kelly, M., Aguirre, K., Awong-Taylor, J., Balser, T., ... & Zhao, J. (2016). The PULSE Vision & Change rubrics, version 1.0: A valid and equitable tool to measure transformation of life sciences departments at all institution types. CBE—Life Sciences Education, 15(4), ar60.
  • Brown, J. D. (2009). Choosing the right type of rotation in PCA and EFA. JALT Testing & Evaluation SIG Newsletter, 13(3), 20–25.
  • Bryk, A. S., Gomez, L. M., & Grunow, A. (2011). Getting ideas into action: Building networked improvement communities in education. In Hallinan, M. T. (Ed.), Frontiers in sociology of education (pp. 127–162). New York: Springer.
  • Chang, M. J., Witt, D., Jones, J., & Hakuta, K. (2003). Compelling interest: Examining the evidence on racial dynamics in colleges and universities. Stanford, CA: Stanford University Press.
  • Chasteen, S. V., Perkins, K. K., Code, W. J., & Wieman, C. E. (2016). The science education initiative: An experiment in scaling up educational improvements in a research university. In Weaver, G. C., Burgess, W. D., Childress, A. L., & Slakey, L. (Eds.), Transforming institutions: Undergraduate STEM education for the 21st century (pp. 125–139). West Lafayette, IN: Purdue University Press.
  • Cobb, P., Confrey, J., Disessa, A., Lehrer, R., & Schauble, L. (2003). Design experiments in educational research. Educational Researcher, 32(1), 9–13.
  • Coburn, C. E., Russell, J. L., Kaufman, J. H., & Stein, M. K. (2012). Supporting sustainability: Teachers’ advice networks and ambitious instructional reform. American Journal of Education, 119(1), 137–182.
  • Comrey, A. L., & Lee, H. B. (1992). A first course in factor analysis. Hillsdale, NJ: Erlbaum.
  • Cooperrider, D., Whitney, D., & Stavros, J. M. (2008). The appreciative inquiry handbook: For leaders of change. San Francisco, CA: Berrett-Koehler.
  • Corbo, J. C., Reinholz, D. L., Dancy, M. H., Deetz, S., & Finkelstein, N. (2016). Framework for transforming departmental culture to support educational innovation. Physical Review Physics Education Research, 12(1), 010113.
  • Costello, A. B., & Osborne, J. W. (2005). Best practices in exploratory factor analysis: Four recommendations for getting the most from your analysis. Practical Assessment, Research & Evaluation, 10(7), 1–9.
  • Cox, M. D. (2004). Introduction to faculty learning communities. In Cox, M. D., & Richlin, L. (Eds.), Building faculty learning communities (pp. 5–23). John Wiley & Sons.
  • Cronbach, L. J. (1951). Coefficient alpha and the internal structure of tests. Psychometrika, 16(3), 297–334.
  • Daly, A. J., Moolenaar, N. M., Bolivar, J. M., & Burke, P. (2010). Relationships in reform: The role of teachers’ social networks. Journal of Educational Administration, 48(3), 359–391.
  • Deci, E. L., & Ryan, R. M. (2000). The “what” and “why” of goal pursuits: Human needs and the self-determination of behavior. Psychological Inquiry, 11(4), 227–268.
  • Elrod, S., & Kezar, A. (2015). Increasing student success in STEM: A guide to systemic institutional change. A Keck/PKAL project at the Association of American Colleges & Universities. www.aacu.org/sites/default/files/files/pkalkeck/casestudies.pdf
  • Fairweather, J. (2008). Linking evidence and promising practices in science, technology, engineering, and mathematics (STEM) undergraduate education. A status report for the National Academies National Research Council Board of Science Education. www.nsf.gov/attachments/117803/public/Xc–Linking_Evidence–Fairweather.pdf
  • Fry, C. L. (Ed.). (2014). Achieving systemic change: A sourcebook for advancing and funding undergraduate STEM education. Washington, DC: Association of American Colleges and Universities. Retrieved January 25, 2018, from www.aacu.org/sites/default/files/files/publications/E-PKALSourcebook.pdf
  • Gharajedaghi, J. (2006). Systems thinking: Managing chaos and complexity (2nd ed.). Burlington, MA: Elsevier.
  • Guadagnoli, E., & Velicer, W. F. (1988). Relation of sample size to the stability of component patterns. Psychological Bulletin, 103(2), 265–275.
  • Gutiérrez, R. (2008). A “gap-gazing” fetish in mathematics education? Problematizing research on the achievement gap. Journal for Research in Mathematics Education, 39(4), 357–364.
  • Haynes, S. N., Richard, D., & Kubany, E. S. (1995). Content validity in psychological assessment: A functional approach to concepts and methods. Psychological Assessment, 7(3), 238–247.
  • Henderson, C., Beach, A., & Finkelstein, N. (2011). Facilitating change in undergraduate STEM instructional practices: An analytic review of the literature. Journal of Research in Science Teaching, 48(8), 952–984.
  • Henderson, C., Dancy, M., & Niewiadomska-Bugaj, M. (2012). Use of research-based instructional strategies in introductory physics: Where do faculty leave the innovation-decision process? Physical Review Special Topics—Physics Education Research, 8(2), 020104.
  • Herring, C. (2009). Does diversity pay? Race, gender, and the business case for diversity. American Sociological Review, 74(2), 208–224.
  • Hinkin, T. R. (1998). A brief tutorial on the development of measures for use in survey questionnaires. Organizational Research Methods, 1(1), 104–121.
  • Hu, L., & Bentler, P. M. (1999). Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modeling: A Multidisciplinary Journal, 6(1), 1–55.
  • Johns, R. (2005). One size doesn’t fit all: Selecting response scales for attitude items. Journal of Elections, Public Opinion & Parties, 15(2), 237–264.
  • Kahneman, D. (2011). Thinking, fast and slow. New York: Farrar, Straus and Giroux.
  • Kezar, A. (2013). How colleges change: Understanding, leading, and enacting change. Routledge.
  • Kotter, J. P. (1996). Leading change. Boston, MA: Harvard Business Review Press.
  • Lave, J. (1996). Teaching as learning, in practice. Mind, Culture, & Activity, 3(3), 149–164.
  • Lave, J., & Wenger, E. (1998). Communities of practice. Cambridge, UK: Cambridge University Press.
  • Lee, J. J. (2007). The shaping of the departmental culture. Journal of Higher Education Policy and Management, 29(1), 41–55.
  • Lee, V. S., Hyman, M. R., & Luginbuhl, G. (2007). The concept of readiness in the academic department: A case study of undergraduate education reform. Innovative Higher Education, 32(1), 3–18.
  • Lerman, S. (2000). The social turn in mathematics education research. In Boaler, J. (Ed.), Multiple perspectives on mathematics teaching and learning (pp. 19–44). Westport, CT: Ablex Publishing.
  • Lewis, C. (2015). What is improvement science? Do we need it in education? Educational Researcher, 44(1), 54–61.
  • Macefield, R. (2009). How to specify the participant group size for usability studies: A practitioner’s guide. Journal of Usability Studies, 5(1), 34–45.
  • Martin, D. B. (2003). Hidden assumptions and unaddressed questions in mathematics for all rhetoric. Mathematics Educator, 13(2), 7–21.
  • Maruyama, G., Moreno, J. F., Gudeman, R. H., & Marin, P. (2000). Does diversity make a difference? Three research studies on diversity in college classrooms. Washington, DC: American Council on Education and American Association of University Professors.
  • Moll, L. C., Amanti, C., Neff, D., & Gonzalez, N. (1992). Funds of knowledge for teaching: Using a qualitative approach to connect homes and classrooms. Theory Into Practice, 31(2), 132–141.
  • Nunnally, J. (1978). Psychometric theory. New York: McGraw-Hill.
  • Quan, G. M., Corbo, J. C., Falkenberg, K., Finkelstein, N., Geanious, C., Ngai, C., … & Wise, S. (2019). Designing for institutional transformation: Six principles for department-level interventions. Physical Review Physics Education Research, 15(1), 010141.
  • Reinholz, D. L., & Apkarian, N. (2018). Four frames for systemic change in STEM departments. International Journal of STEM Education, 5(1), 3.
  • Reinholz, D. L., Corbo, J. C., Dancy, M., & Finkelstein, N. (2017). Departmental action teams: Supporting faculty learning through departmental change. Learning Communities Journal, 9, 5–32.
  • Reinholz, D. L., Ngai, C., Quan, G., Pilgrim, M. E., Corbo, J. C., & Finkelstein, N. (2019). Fostering sustainable improvements in science education: An analysis through four frames. Science Education, 103(5), 1125–1150. https://doi.org/10.1002/sce.21526
  • Schein, E. H. (2010). Organizational culture and leadership. San Francisco, CA: Jossey-Bass.
  • Schreiber, J. B., Nora, A., Stage, F. K., Barlow, E. A., & King, J. (2006). Reporting structural equation modeling and confirmatory factor analysis results: A review. Journal of Educational Research, 99(6), 323–338.
  • Senge, P. M. (2006). The fifth discipline: The art and practice of the learning organization. New York: Random House.
  • Smith, J., diSessa, A. A., & Roschelle, J. (1994). Misconceptions reconceived: A constructivist analysis of knowledge in transition. Journal of the Learning Sciences, 3(2), 115–163.
  • Walter, E. M., Beach, A. L., Henderson, C., & Williams, C. T. (2015). Describing instructional practice and climate: Two new instruments. In Weaver, G. C., Burgess, W. D., Childress, A. L., & Slakey, L. (Eds.), Transforming institutions: Undergraduate STEM education for the 21st century. West Lafayette, IN: Purdue University Press.
  • Yackel, E., & Cobb, P. (1996). Sociomathematical norms, argumentation, and autonomy in mathematics. Journal for Research in Mathematics Education, 27(4), 458–477.
  • Zeichner, K. M., & Noffke, S. E. (2001). Practitioner research. In Richardson, V. (Ed.), Handbook of research on teaching (pp. 298–332). Washington, DC: American Educational Research Association.
  • Zwickl, B., Finkelstein, N., & Lewandowski, H. (2012). Development and validation of the Colorado Learning Attitudes about Science Survey for Experimental Physics. Paper presented at: Physics Education Research Conference 2012 (Philadelphia, PA). Retrieved December 9, 2019, from www.compadre.org/Repository/document/ServeFile.cfm?ID=12745&DocID=3286