Policies and Practices to Support Undergraduate Teaching Improvement

Teaching Improvement Initiatives in U.S. Higher Education

Aaron M. Pallas, Anna Neumann, and Corbin M. Campbell
Commission on the Future of Undergraduate Education

As we have noted, there is growing pressure on the U.S. postsecondary education system to improve the quality of education provided to students. But many improvement efforts, enacted at an abstract policy level, remain distant from the day-to-day teaching and learning within college classrooms. Such efforts rarely reach into teaching practice. Conversely, other efforts that originate on or are attuned to local campuses have sought to develop and assess instructional practices that college teachers use with their students. But many of these efforts emphasize the development of general pedagogical knowledge. Only a handful have seriously tackled the challenge of developing faculty members’ pedagogical content knowledge. These rare initiatives have, to date, made only preliminary strides in this direction.

We describe six examples of efforts to improve teaching practice, four internal to a campus and two external to it, varying in their emphasis on pedagogical content knowledge and general pedagogical knowledge. These are not intended to be exemplars; rather, they illustrate the challenges and potential of systematic undergraduate teaching improvement initiatives. Where appropriate, we comment on their strengths and weaknesses, using the approaches to good teaching practice that we presented above as a guide.

Internal Teaching Improvement Initiatives

Teaching improvement on the modern-day American campus is usually associated with teaching centers, faculty mentoring programs, and instructors working alone, or sometimes with others, to improve their teaching reflectively.

Teaching Centers

Teaching centers offer resources and support to faculty seeking to improve their teaching; they are typically constituted as formal (budgeted) organizational units staffed with members of the faculty development profession. Though data on the total number of teaching centers in existence today are spotty,32 many U.S. institutions of higher education currently lay claim to one and sometimes to several teaching centers. They go by various names—teaching and learning centers, faculty development centers, institutes for improving teaching and learning, teaching excellence centers, and so on—and can be found in all types of institutions, regardless of control or mission. What counts as a teaching center seems to vary greatly—from simple closets to vast and lavishly appointed mega-libraries linked electronically to resources around and beyond a campus. Although some institutions have developed specialized discipline-based teaching improvement centers,33 we focus primarily on campus-wide centers attending to the needs of faculty who teach undergraduates. The largest of these centers are well used by faculty and richly staffed with specialists and other support staff; they also serve as exemplars to the emergent field of faculty development.34 Yet even the most used and most richly stocked teaching centers are limited in their offerings, given the predefined bandwidth of what teaching improvement stands for on their campuses. We learned that the centers often adopt new resources quickly, building library-like checkout or access systems for faculty on campus, thereby broadening their offerings in response to local need and interest. Though budgeted largely through their institutions’ operating funds, teaching centers may include externally funded efforts.
Most teaching centers, and their staffs of faculty developers, affiliate with the Professional and Organizational Development (POD) Network, viewing it as a source of substantive support and professional legitimation, on and off campus.

Our review of pertinent writings and of the websites of several centers suggests that historically, they have emphasized “development” around what it means to be a faculty member and to carry out faculty work, broadly defined (e.g., interacting with students, academic leadership duties, attending to campus imperatives). As the faculty role has changed over time, so have center emphases and offerings.35 In attending to classroom-based teaching improvement, the centers have focused largely on faculty members’ development of general pedagogical knowledge: the teaching expertise that generalizes across all or most disciplines and subject matters (e.g., class management, educational technologies, use of student groups). The centers appear to have shied away from the development of faculty members’ pedagogical content knowledge. Our research also suggests that on some campuses, center staff may be exploring the possibility of redirecting center efforts toward service to “faculty collectivities” (e.g., the collective faculty of an academic program versus individuals).36

Clearly, teaching centers are significant loci of professional development and teaching support in higher education. However, rigorous studies of their effects on faculty members’ teaching appear nonexistent—or if these do exist, they are not publicly posted.37 Given the sizable institutional resources currently devoted to teaching centers, better data on their offerings, operations, and effects could usefully contribute to teaching improvement.38

Moreover, teaching centers to date have not served as sites for exploring key questions—such as what counts as good teaching in different disciplines and fields and for particular populations (in the spirit of pedagogical content knowledge and related models), or how faculty learn to teach. Yet the very existence of these centers and the massive support they have garnered, especially in larger institutions, suggest that they could orient their work toward these critical issues.

Mentoring Programs

A Google search of “faculty mentoring” yields a multitude of hits, many at the top of the list featuring campus-based programs in which senior faculty mentor early-career professors, or peers mentor one another. But what it means for faculty to mentor one another, how it bears on undergraduate teaching, and whether mentoring pays off in helping novice faculty become effective teachers are difficult to discern. Our review suggests that most faculty mentoring programs do not pointedly address teaching improvement, attending instead to the broader array of faculty work in research, teaching, and service.39

To be sure, faculty mentoring, as practiced in many institutions, is viewed as experienced and/or expert faculty guiding novices, and this can extend to teaching practice. But who is eligible for such a mentoring relationship, what its purpose is, and how mentoring proceeds are typically not well specified. In many institutions, faculty mentoring may be more admired than implemented. As institutions seek to demonstrate their legitimacy to external stakeholders, it is little wonder that faculty mentoring gets featured on institutional websites.

Beyond the ambiguities of “mentoring of what?” and “for whose benefit?” we find a relatively weak research base for faculty mentoring other faculty in higher education. A wide-ranging review of research on mentoring in business and higher education by Darlene Zellers, Valerie Howard, and Maureen Barcic prefigures our assessment. Pointing out the range of “vibrant faculty mentoring programs” identifiable on the websites of leading American university campuses (e.g., Iowa State University, University of Wisconsin, Indiana University, and Stanford University, among others), the study’s authors registered surprise that “little scholarship is being generated and/or disseminated about these model programs.”40 This is as true of less prestigious institutions as it is of these research-intensive universities.41

Mentoring programs such as these do give some attention to teaching, but they largely elide key issues such as how an instructor might go about identifying key subject matter ideas to be taught in an introductory class; how students will likely make sense initially of core disciplinary ideas; how students will experience instructors’ approaches to addressing their prior knowledge (framed either as assets or misconceptions); and how to select and deploy subject-matter representations likely to advance students’ disciplinary understanding. As we note elsewhere in this paper, campus reward structures rarely orient novice faculty or their mentors to the nature or quality of undergraduate teaching, and campuses offer few explicit opportunities for faculty to discuss such questions relative to their own discipline. It is thus disappointing but not surprising that researchers and leaders have paid so little attention to faculty mentoring as a mechanism for undergraduate teaching improvement, especially given its potential to develop general pedagogical knowledge and to cultivate pedagogical content knowledge as foundations for teaching practice.

Guided Reflection Programs

Students are not the only learners in higher education; faculty learn too. Payoffs of faculty learning include improved performance as teachers and scholars, as well as the modeling of learning as a form of professional expertise for students’ benefit. Although more needs to be understood about college instructors’ learning, research suggests that one process in particular is key: reflection, defined as instructors probing their own thoughts about teaching—before, during, or after engaging in it.42

But what kinds of things may college and university faculty learn while teaching? Research has shown that faculty learn about teaching, subject matter, and students’ learning as they teach, and as they reflect on that teaching both before and after instruction.43 Yet we have little research-based knowledge about what inspires and supports such reflection, how it is manifest, how it unfolds, and how it impacts students’ learning.

Eric Mazur and his colleagues’ development of the peer instruction model, including a protocol for guiding teachers’ analyses of students’ thinking in response to brief segments of instruction, stands as an exception. Developed from within Mazur’s and others’ own teaching, the model requires an instructor to offer brief instruction around a meaningful unit of subject matter (e.g., an aspect of a physics concept), collect data on students’ understanding of the unit, and then analyze the data to make on-the-spot decisions about optimal next steps. The cycle then repeats with new content and instruction. A single class session may include several cycles. The peer instruction model structures teachers’ reflection on their students’ subject-matter thinking and their own decision-making with regard to next instructional moves.44 Research on the impact of the peer instruction model on students’ understandings of physics concepts indicates significant positive effects.45

Mazur’s peer instruction model focuses heavily on what Donald Schon calls “reflection in action.”46 Seeking to strengthen some oft-ignored phases of teaching, especially planning and post-hoc reflection, other researchers call for the production and curating of artifacts of students’ semester-long learning in subject-matter classes. The resulting learning portfolios array data on students’ thinking in response to instruction, enabling teachers to reflect on the assembled data and to channel insights into plans for their future teaching. Still other researchers focus on instructors’ assembling of teaching portfolios featuring their instructional actions and related insights; these portfolios also are believed to stimulate instructors’ reflection on teaching.47 Vivid examples of how portfolios, and related tools, can contribute to postsecondary teaching improvement are evident in the work of scholars associated with the Carnegie Academy of the Scholarship of Teaching and Learning (CASTL), which we discuss in the next section.

Although “reflection in action” is often a solo activity, we believe that guided reflection may contribute to future undergraduate teaching improvement efforts. Subject-specific efforts such as those piloted by Eric Mazur, and by proponents of teaching portfolios, may extend beyond general pedagogical knowledge into the pedagogical content knowledge we deem so central.

The Science Education Initiative

Although the three preceding teaching improvement initiatives have focused on college teachers, with little attention to organizational context, the Science Education Initiative (SEI) addresses both. Physicist Carl Wieman has recently recounted the history of the SEI, an effort to improve the teaching of science at the University of Colorado (CU) and the University of British Columbia (UBC) over roughly the past decade.48 He oversaw a competitive grants program to six science departments in each institution, awarding approximately $1 million per department over a five- or six-year period (around $5 million at CU and $10 million at UBC).

Wieman recognized that the formal incentive system in each institution was the primary barrier to change. He believed that a competitive grants program with department-level awards of a sizeable amount could transform science teaching. The department was the key unit of change, as courses are lodged in departments. The change mechanism was what he referred to as Science Education Specialists (SESs): postdoctoral fellows with disciplinary knowledge and teaching expertise who were hired by, and embedded in, the academic departments. Working with the SESs, faculty could examine what students should be learning in a particular course; what they were actually learning; and what research-based instructional practices could promote the desired learning. Although implementation of the SEI was uneven across departments, Wieman found evidence that hundreds of thousands of credit hours in science classes each year were taught differently—i.e., using practices drawn from the learning sciences—due to the initiative.

Wieman’s book is full of insights, some specific to the SEI and others applying more broadly to teaching improvement efforts. Although the initial focus was on transforming courses, he came to understand that transforming faculty was more appropriate; some faculty would vigorously resist curricular and instructional change, seeing it as a threat to their professional identities, and it made more sense to work with those faculty who were more receptive to innovation. Wieman also concluded that he had underestimated the importance of direct incentives to faculty for engaging in course transformation, such as course releases, extra teaching assistants, or partial support for a research assistant. The incentives, he mused, needed to be substantial enough that the threat of losing them would spur compliance with the initiative’s goals.

Finally, Wieman saw the overall quality of management and organization within the institution and its departments as the primary determinant of whether the SEI was well-implemented in a department. Change was unlikely to occur unless actors at every institutional level—system heads and policy leaders, college presidents and senior administrators, and department chairs and individual faculty—came to understand good undergraduate teaching as subject-matter driven, student-knowledge driven, and research-and-assessment driven. Some departmental cultures, especially those that downplayed collective activity, reduced the likelihood of success. Conversely, some department chairs took strategic action to support the SEI, such as reassuring junior faculty that poor course evaluations in an early iteration of a transformed course would not be held against them.

Wieman remains optimistic, as do we, about the potential of research-based teaching improvement initiatives such as the SEI to improve undergraduate teaching, in the sciences and other fields. There are some warning signs, however, even beyond the significant learning curve that exists for faculty to establish learning goals, document student thinking, develop instructional materials, and assess their impact. The sustainability of course transformation at the department level in the absence of substantial external incentives is an unknown, and in several of the funded departments at CU and UBC, fewer than 50 percent of the faculty participated in the initiative. Moreover, even a well-funded initiative such as SEI was poorly aligned with campus-level policies and practices regarding faculty promotion and tenure and the level of central support for academic departments.

We see the lessons of the SEI about the importance of braiding teaching improvement at the individual instructor level with teaching improvement at the department level, both driven by learning sciences research, as especially useful for future undergraduate teaching improvement initiatives, whether in the sciences or other fields of study.

External Teaching Improvement Initiatives

Though campus-supported teaching centers and mentoring programs are conveniently positioned to guide instructional improvement on faculty members’ home campuses, external organizations and communities supporting individual faculty members in undergraduate teaching improvement have also arisen. Below, we review two of the most promising examples: the Carnegie Academy for the Scholarship of Teaching and Learning (CASTL) and the Discipline-Based Education Research (DBER) community.

The Carnegie Academy for the Scholarship of Teaching and Learning (CASTL)

In 1998, Lee Shulman, president of the Carnegie Foundation for the Advancement of Teaching, established CASTL as a route to higher education teaching improvement. One may interpret Shulman’s leadership as an effort to render usable the scholarship of teaching and learning (SOTL) that his predecessor, Ernest Boyer, had broadly conceptualized.49

Boyer’s SOTL and Shulman’s pedagogical content knowledge complemented one another. SOTL theorized college teaching as a form of scholarship rooted in faculty members’ disciplinary knowledge, whereas Shulman’s pedagogical content knowledge offered teachers ways to use their disciplinary expertise in classrooms to support student learning. Thus, SOTL spoke to faculty scholarly values and career aspirations; pedagogical content knowledge offered tools for teaching improvement. Combined, the two concepts yielded a vision of teaching as public, and thereby open to colleagues’ review and use; subject to the oversight of scholar-teachers with deep understanding of subject matter; and shared within a community of subject-matter teaching experts and novices (i.e., scholarship as “community property”).50 CASTL could bring this vision of a link between college teaching practice and its improvement to life.

CASTL involved three sets of actors: disciplinary teacher-scholars (CASTL Scholars), disciplinary and professional associations, and college and university campuses. According to Carnegie researchers Pat Hutchings, Mary Taylor Huber, and Anthony Ciccone, CASTL sought to “build a critical mass of scholars of teaching and learning whose work would show what was possible [in and through college teaching], illustrate the diverse shapes and forms that SOTL could take, and serve as models for work by others.”51 Given its roots in pedagogical content knowledge, alongside broader insights on teaching as a form of faculty work in contemporary higher education, CASTL left few instructional stones unturned.

The CASTL Scholars program was designed as a ten-day summer residency during which CASTL Scholars clarified their plans for a teaching-learning project. This was followed by a second ten-day residency a year later during which the Scholars presented project results, implications, and plans for advancing their efforts and sharing them with others. CASTL Scholars also participated in an interim winter meeting to address opportunities and challenges in their work.

A distinctive feature of CASTL was that it was well studied. A series of monographs documented its aims and concrete products—the latter, through case portrayals of classroom practice.52 A final survey of the CASTL Scholars painted a detailed picture of participants’ learning from immersion in the program.53 In providing exemplars of CASTL Scholars learning to engage in continuous teaching improvement, documenting implementation processes, and revealing changes in participants’ teaching orientations and practices, this body of research served as a proof of concept while laying the groundwork for a pedagogical content knowledge of higher education.54 CASTL’s results were promising and unfolded over a decade of intensive activity among 158 CASTL Scholars organized in six cohorts.55 What remains unclear is the extent to which CASTL Scholars shared their learning with campus colleagues, and what campus-based mechanisms facilitated such exchanges.56

Three features of CASTL bear particular attention in light of our exposition of the contributions of the learning sciences to good teaching: one, CASTL’s explicit focus on disciplinary knowledge, including how this emphasis bridges to a view of teaching as scholarship; two, the accountable consistency with which the SOTL/pedagogical content knowledge vision was threaded through CASTL Scholars’ work; and three, continuing formative assessment as a key to individual and collective learning. It is noteworthy that CASTL’s perspective on teaching and teaching improvement is closely aligned with that guiding Carl Wieman’s SEI, which also is an example of Discipline-Based Education Research (DBER), which we discuss next.

Discipline-Based Education Research (DBER) Community

According to a 2012 report by the National Research Council’s (NRC) Committee on the Status, Contributions, and Future Directions of Discipline-Based Education Research, DBER seeks to “combine the expertise of scientists and engineers with methods and theories that explain learning.”57 With support from the National Science Foundation, DBER scholars seek to understand and improve undergraduate students’ learning and instructors’ teaching of science and engineering in ways that reflect each “discipline’s priorities, worldview, knowledge, and practices.”58 DBER’s goals are to:

  • understand how people learn the concepts, practices, and ways of thinking of science and engineering;
  • understand the nature and development of expertise in a discipline;
  • help identify and measure appropriate learning objectives and instructional approaches that advance students toward those objectives;
  • contribute to the knowledge base in a way that can guide the translation of DBER findings to classroom practice; and
  • identify approaches to make science and engineering education broad and inclusive.59

As an emerging professional movement, DBER reflects some of the strengths we have previously identified in CASTL and the SEI. First, the goals of DBER align well with the view of good teaching promoted by the learning sciences, and especially practice-based research on the teaching and learning of disciplinary subjects. Second, perhaps by virtue of their scientific training, DBER scholars value assessment and research; DBER-derived findings have significant credibility. Third, DBER’s bottom-up quality, originating within the instructional experiences of teacher-researchers, suggests that these individuals’ research questions will go to the heart of their teaching practices. Fourth, DBER’s anchoring in practice also may account for its rapid spread across the country—onto campuses, into some professional associations, and, possibly, into the faculty staffing patterns of some undergraduate science programs.60 Fifth, faculty members’ voluntary and seemingly uncompensated contributions to DBER signal its sustainability.

As expected, questions and challenges remain. First, we identified no extant efforts to examine the quality of DBER products (e.g., research reports, documented pedagogical improvements, etc.), especially their grounding in current research-based conceptions of teaching and learning. Although the publications and websites of prominent agencies (e.g., NRC, National Academies) proclaim such an alignment, it will be important to assess the extent to which aspirations match up with reality. Second, we cannot discern the amount or quality of interaction between the learning sciences community and the DBER community. DBER’s impact on undergraduate teaching will be greatest if the two communities collaborate and learn from one another.


32. Our examination of key higher education databases (e.g., HERI, COACHE, IPEDS) sheds no light on the number of colleges and universities claiming at least one. While helpful, information on teaching center activity on several hundred campuses, compiled by the Professional and Organizational Development (POD) Network, does not include a full population listing (http://podnetwork.org/publications/google-custom-search-of-center-web-sites/).

33. For example, serving the faculty of a school of business, a medical school, a division of arts and sciences, etc.

34. Examples include centers at the University of Michigan, University of Texas, Vanderbilt, and Carnegie Mellon.

35. Mary Deane Sorcinelli et al., Creating the Future of Faculty Development (Bolton, MA: Anker Publishing Company, 2006); Andrea L. Beach et al., Faculty Development in the Age of Evidence (Sterling, VA: Stylus Publishing, 2016).

36. We base these claims on a limited review of the websites of selected “leading light” centers in different kinds of institutions (four major universities, two four-year colleges, one community college), informal conversations with center staff, websites of professional associations, and numerous research reports (herein cited).

37. We have learned that at least one major university, with an exemplary teaching center, reviews that center through its multi-year program review cycle. We did not have access to that university’s review report and thus could not investigate what was learned. We do not know how widespread such practices are.

38. Though listing their services and resources, most of the centers we explored do not appear to document and analyze the teaching and professional interactions at their core. We identified no large-scale center assessments. Nor are we aware of theory-driven, in-depth research on their activities and outcomes. This is surprising given the proliferation of centers, their increased holdings and services, expanding staffs, and direct costs, some running well over a million dollars a year.

39. We conducted a Web of Science search for social sciences reports (scholarly books and peer-reviewed articles) on faculty mentoring, published between 2000–2016; our search yielded 110 sources. Close examination of these sources indicated virtually no attention to teaching improvement. Instead sources focused on mentoring as general support, especially for junior faculty seeking to learn the full range of responsibilities required of them; implications for faculty commitment and retention; women in the sciences as responsive to mentoring; and mixed responses to mentoring by faculty of color and needs to adjust the process.

40. Darlene Zellers, Valerie Howard, and Maureen Barcic, “Faculty Mentoring Programs: Reenvisioning Rather Than Reinventing the Wheel,” Review of Educational Research 78 (3) (September 1, 2008): 580.

41. Credible research designs for assessing the impact of mentoring initiatives are hard to construct, given small participant samples, reliance on diffuse outcome measures, heavy reliance on self-reports, and tendencies toward self-selection into programs.

42. Donald Schon, Educating the Reflective Practitioner: Toward a New Design for Teaching and Learning in the Professions (San Francisco: Jossey-Bass, 1987); Donald Schon, The Reflective Practitioner (New York: Basic Books, 1983). We refer to initiatives focusing on faculty reflection as guided reflection programs, although the notion of “guidance” underlying such initiatives is diffuse, as reflection can be solo or occur amidst others, and learning from reflection may reside primarily in one faculty member or be shared among colleagues.

43. Deborah Ball, “With an Eye on the Mathematical Horizon: Dilemmas of Teaching Elementary School Mathematics,” The Elementary School Journal 93 (4) (1993): 373–397; Ruth Heaton and Magdalene Lampert, “Learning to Hear Voices: Inventing a New Pedagogy of Teacher Education,” in Teaching for Understanding: Challenges for Policy and Practice, ed. David Cohen, Milbrey McLaughlin, and Joan Talbert (San Francisco: Jossey-Bass, 1993), 43–83; Schon, Educating the Reflective Practitioner: Toward a New Design for Teaching and Learning in the Professions; Schon, The Reflective Practitioner; Anna Neumann, Professing to Learn (Baltimore: The Johns Hopkins University Press, 2009).

44. Eric Mazur, Peer Instruction: A User’s Manual, 1st ed. (Upper Saddle River, NJ: Prentice Hall, 1997); Julie Schell, ed., What Is Peer Instruction…in 2 Mins, https://blog.peerinstruction.net/2014/05/01/what-is-peer-instruction-in-2-mins (accessed 2017).

45. Catherine H. Crouch and Eric Mazur, “Peer Instruction: Ten Years of Experience and Results,” American Journal of Physics 69 (9) (2001): 970–977; Mercedes Lorenzo, Catherine H. Crouch, and Eric Mazur, “Reducing the Gender Gap in the Physics Classroom,” American Journal of Physics 74 (2) (2006): 118–122; Nathaniel Lasry, Eric Mazur, and Jessica Watkins, “Peer Instruction: From Harvard to the Two-Year College,” American Journal of Physics 76 (11) (2008): 1066–1069.

46. Schon, Educating the Reflective Practitioner: Toward a New Design for Teaching and Learning in the Professions; Schon, The Reflective Practitioner.

47. Nona Lyons, With Portfolio in Hand: Validating the New Teacher Professionalism (New York: Teachers College Press, 1999); Val Klenowski, Sue Askew, and Eileen Carnell, “Portfolios for Learning, Assessment and Professional Development in Higher Education,” Assessment & Evaluation in Higher Education 31 (3) (2006): 267–286.

48. Carl Wieman, Improving How Universities Teach Science: Lessons from The Science Education Initiative (Cambridge, MA: Harvard University Press, 2017).

49. Ernest L. Boyer, Scholarship Reconsidered: Priorities of the Professoriate (San Francisco: Jossey-Bass, 1990). According to Boyer, four scholarships frame faculty work and careers: discovery, integration, teaching, and application.

50. Ibid.; Lee Shulman, The Wisdom of Practice: Essays on Learning, Teaching, and Learning to Teach (San Francisco, CA: Jossey-Bass, 2004).

51. Pat Hutchings, Mary Taylor Huber, and Anthony Ciccone, The Scholarship of Teaching and Learning Reconsidered (San Francisco, CA: Jossey-Bass, 2011), 153–154.

52. Mary Taylor Huber, Balancing Acts: The Scholarship of Teaching and Learning in Academic Careers (Washington, D.C.: American Association for Higher Education, 2004); Mary Taylor Huber and Pat Hutchings, The Advancement of Learning: Building the Teaching Commons (San Francisco, CA: Jossey-Bass, 2005); Hutchings, Huber, and Ciccone, The Scholarship of Teaching and Learning Reconsidered. Through the life of CASTL, the Carnegie Foundation website featured an extensive gallery of portfolios, developed by CASTL Scholars and featuring subject-matter presentations, tools, and course materials. With thanks to Gary Otake for access on December 15, 2016, to an inactive site: https://mail.google.com/mail/u/0/#search/Carnegie/15904a40057538c4.

53. Rebecca Cox, Mary Taylor Huber, and Pat Hutchings, “Survey of CASTL Scholars,” in The Advancement of Learning: Building the Teaching Commons, ed. Mary Taylor Huber and Pat Hutchings (San Francisco, CA: Jossey-Bass, 2005), 135–149.

54. We note, though, that CASTL’s exemplary attention to subject matter may have overshadowed the role of students’ prior cultural knowledge in conceptualizing a pedagogical content knowledge of higher education.

55. Hutchings, Huber, and Ciccone, The Scholarship of Teaching and Learning Reconsidered.

56. For a similar analysis, see Steven Brint, “Focus on the Classroom: Movements to Reform College Teaching and Learning, 1980–2008,” in The American Academic Profession: Transformation in Contemporary Higher Education, ed. Joseph Hermanowicz (Baltimore, MD: The Johns Hopkins University Press, 2011).

57. Susan Singer, Natalie Nielsen, and Heidi Schweingruber, Discipline-Based Education Research: Understanding and Improving Learning in Undergraduate Science and Engineering (Washington, D.C.: National Academies Press, 2012), 1.

58. Ibid.

59. Ibid., 9.

60. The first thirty hits of a Google search on DBER yielded notices of DBER interest group meetings on several university campuses (University of Nebraska, University of Colorado, MSU, RIT, George Mason University) and two professional associations (National Association of Geosciences Teachers) along with a job description for a tenure-track assistant professor in “Earth, Ocean or Environment Discipline-Based Education Research (DBER).”