
Policies and Practices to Support Undergraduate Teaching Improvement

The Current Institutional Context

Authors
Aaron M. Pallas, Anna Neumann, and Corbin M. Campbell
Project
Commission on the Future of Undergraduate Education

The social, economic, and political forces framing contemporary higher education in the United States have largely discouraged undergraduate teaching improvement, rather than supported it. We note three trends: institutional competition for resources; the rise of public accountability systems; and changing definitions of scholarship in the academy.

First, institutions of higher education compete with one another for goods that have little bearing on the quality of teaching and learning: namely, prestige, legitimacy, dollars, and students.2 The pattern is rampant and deeply ingrained in the American higher education system. Even the neutral Carnegie institutional classifications can detract from attention to teaching, as institutions strive to position themselves with increasingly prestigious (i.e., research-intensive) peers, or to “move up” in the rankings in their own classification. Efforts to respond to the institutional rankings criteria, or to appeal to a mass public, may draw time, attention, and resources away from faculty teaching.3 When faculty reward structures and professional development opportunities emphasize securing external grants, many faculty will follow suit; and because time is finite, the more time faculty spend on research activities, the less time they are likely to spend thinking about teaching.

Further, rankings and other public measures of institutional effectiveness are sites for institutional competition and “gaming the system.” Years of research have demonstrated that college rankings (such as U.S. News & World Report) privilege the incoming characteristics of students over educational practices or student outcomes.4 Higher education scholars and foundations hypothesized that changing the bases for the rankings and/or providing additional information about teaching and learning to the public might incentivize institutions to focus more on teaching and learning than on the incoming characteristics of students. But there is little evidence to date that changes in the data made available to the public—data that are admittedly primitive and difficult to make sense of—have had this effect.

Higher education scholars and sociologists have noted the ways in which rankings and other accountability measures evoke changes in institutional behavior, often unintended, in response to being rated or evaluated.5 For example, institutions of higher education that strive to move up in the rankings have focused on recruiting more applicants each year while admitting the same number, on increasing research expenditures and spending on administration, and on hiring faculty who are experts and promoting them based on their research prowess.6 Conversely, such institutions also decrease behaviors that do not garner status or count toward the ratings to which they attend, such as admitting a broad spectrum of students, emphasizing teaching in the campus reward structure, and increasing instructional expenditures. Institutions mimic behaviors that are rewarded in the prestige hierarchy (i.e., admissions selectivity, research productivity), and dissociate from behaviors that are unrewarded (e.g., teaching quality).7 Since the current generation of college ratings does not address teaching quality or student learning outcomes, it is not surprising that the ratings do not drive institutions to attend to undergraduate teaching improvement.

Second, government has taken a more active role in developing public accountability systems for institutions receiving public funds, even in the form of student loans. But accountability focuses the attention of policy-makers and institutional leaders on outcomes as markers of institutional success, with much less attention to the educating processes that produce these outcomes.

The increase in accountability practices has largely been driven by calls for transparency, efficiency, and return on investment. One clear example of this practice, and its application to teaching and learning in higher education, was the Spellings Commission on the Future of Higher Education, so named for Margaret Spellings, Secretary of Education under President George W. Bush. The Commission’s 2006 report called for institutions of higher education to document the “value-added” to students in the form of learning outcomes in a “consumer-friendly” way.8 Coming on the heels of the No Child Left Behind Act, signed into law in 2002, which mandated virtually universal testing of students in grades three through eight in English and mathematics, the report raised concern among institutions of higher education that a broad federal mandate for parallel testing might be imposed on postsecondary institutions.9

Although no such mandate emerged from the Spellings Commission’s recommendations, the consequences of the emphasis on value and transparency rippled across higher education institutions and trickled down into faculty life. The regional accreditors recognized by the U.S. Department of Education accelerated a shift in their philosophy and standards toward what became known as “outcomes-based accreditation,” a model that obliged institutions to define desired student and organizational outcomes (such as student learning outcomes) and to demonstrate a continuous quality improvement mechanism in which measured outcomes would drive changes in institutional policies and practices.10 The accountability movement led to an increase in standardized institutional assessments of student engagement and learning, such as the National Survey of Student Engagement (NSSE), the Collegiate Learning Assessment (CLA), and the Association of American Colleges and Universities’ (AAC&U) Valid Assessment of Learning in Undergraduate Education (VALUE) rubrics.

Generalized assessments such as these appear to shift institutional attention toward student engagement and learning. However, little is known about whether student learning actually improves at the institutional level in response to these accountability efforts. Institutions that adopt such assessments report increased faculty understanding of assessment,11 but there is little evidence that student learning increases or improves as a result. Likewise, these assessments and improvement mechanisms pay little attention to college teaching itself.

Undergirding the student learning assessment movement and its counterpart, outcomes-based accountability, is the assumption that a focus on the student experience and student learning will reinforce and improve the educational practices at the institution. Assessment has been used to inform strategic planning, increase student engagement, develop databases that support institutional decision-making, enhance faculty collaboration, and align curricula. Yet outcomes-based accountability has not sought to understand teaching in classrooms or the connections between teaching and the desired learning outcomes. Rather, there is a broad but generic notion that data on student learning outcomes might be examined via a feedback process that redirects faculty and administrators’ attention to curriculum and teaching practice, but with little guidance on how, specifically, to improve teaching.

The accountability movement has, we believe, encouraged data-based decision-making in higher education, sometimes referred to as a “culture of evidence.”12 A culture of evidence is a culture “in which colleagues from varied disciplinary contexts and roles (including student affairs) share information and judgments about what is and isn’t working and commit as a community to ongoing improvement.”13 What is particularly notable about this definition is that the focus is on the process of assessing—without much regard for the content of what is being assessed (e.g., teaching, learning, etc.). In recent years, the culture of evidence has been associated with student learning assessment, but the attention is on the institutional commitment to the assessment process (collecting and using data as evidence to guide practices) rather than on undergraduate teaching and teaching improvement.

Third, over the past three decades, there has been a systematic effort to redefine scholarship in the academy, pushing it increasingly to encompass teaching. But this work has not reached deeply enough into teaching practice to make a lasting difference. Ernest Boyer’s seminal work for the Carnegie Foundation for the Advancement of Teaching, Scholarship Reconsidered: Priorities of the Professoriate, sought to expand what counts as scholarship. Recalling Aristotle, Boyer remarked, “teaching is the highest form of understanding,” as he attempted to elevate teaching from the lowest common denominator among faculty to a fundamental and revered form of scholarship. Boyer’s report caused a significant ripple in the field, with faculty and administrators seeking to integrate these ideas into academic discussions in institutions across the nation.14 Many institutions revised their tenure, promotion, and merit reward structures to include forms of scholarship beyond basic research, such as the scholarship of teaching (i.e., the study of one’s own teaching practice, and that of others).15 Five years after Boyer’s report, almost half of faculty responding to a national survey stated that there was a greater emphasis on teaching in their institutions and roles than before the report.16

In spite of Boyer’s symbolic elevation of the importance of college teaching, there is little evidence of a fundamental restructuring of faculty reward systems in the wake of the movement he initiated. Although teaching frequently is institutionalized as a regular (and measurable) part of faculty workload, the campus values and assumptions supporting college teaching are often tacit.17 Virtually all full-time faculty can describe their work in terms of their teaching “load”—a term that by its nature connotes a burden—but not in terms of teaching’s qualities, or its value to the institution and its students. In many institutions, teaching remains a “second among equals”—overshadowed by research productivity, though typically of greater importance than service.18

There is one additional feature of contemporary higher education worthy of note: technological change and its potential to transform college teaching and learning and the professional development of college teachers. Increasingly, technology can mediate the relationships among teachers, learners, and subject matter, in the form of online classes and “flipped” classrooms, to name but two salient innovations. Some observers are convinced that technological change will fundamentally disrupt and alter existing institutional arrangements;19 others, drawing on the history of technological change in K-12 schools, are more skeptical about that possibility.20 We acknowledge the potential for technological change to reconstruct the college classroom, but it is not a central focus of our analysis.

This overview summarizes evidence that the external environments of colleges and universities shape their internal cultures, norms, and practices, which in turn influence faculty work priorities, experiences, and learning.21 But attention to high-quality teaching and learning is largely absent from these external forces. Changes to institutional decision-making and reward structures can, in some cases, turn faculty attention to teaching, motivating them to teach more and altering their priorities among research, teaching, and service. What these processes cannot do, however, is alter the content and quality of undergraduate teaching. Only the faculty who are charged with teaching can do this.

We note as well that even for institutions with prominent undergraduate teaching missions, what counts as high-quality teaching is not at all clear. But we believe that these two aims—meaningful improvement in undergraduate teaching and making it an organizational goal—are attainable. We address these concerns in the next two sections of the paper, first by responding to the key question of “What is good teaching?” and then by examining six cases of attempts to improve classroom teaching.


ENDNOTES

2. Mitchell Stevens, Creating a Class: College Admissions and the Education of Elites (Cambridge, MA: Harvard University Press, 2009).

3. KerryAnn O’Meara, “Striving for What? Exploring the Pursuit of Prestige,” in Higher Education: Handbook of Theory and Research, vol. 22, ed. John C. Smart (New York: Springer International Publishing, 2007), 241–306; Christopher Morphew and Bruce Baker, “The Cost of Prestige: Do New Research Universities Incur Higher Administrative Costs?” The Review of Higher Education 27 (3) (2004): 365–384.

4. Morphew and Baker, “The Cost of Prestige: Do New Research Universities Incur Higher Administrative Costs?”; Gary Pike, “Measuring Quality: A Comparison of US News Rankings and NSSE Benchmarks,” Research in Higher Education 45 (2) (2004): 193–208; O’Meara, “Striving for What? Exploring the Pursuit of Prestige.”

5. Ibid.

6. Ronald G. Ehrenberg, “Reaching for the Brass Ring: The U.S. News and World Report Rankings and Competition,” The Review of Higher Education 26 (2) (2003): 145–162; Susan K. Gardner, “Keeping Up with the Joneses: Socialization and Culture in Doctoral Education at One Striving Institution,” The Journal of Higher Education 81 (6) (2010); Tatiana Melguizo and Myra Strober, “Faculty Salaries and the Maximization of Prestige,” Research in Higher Education 48 (6) (2007): 633–668; Marc Meredith, “Why Do Universities Compete in the Ratings Game? An Empirical Analysis of the Effects of the U.S. News and World Report College Rankings,” Research in Higher Education 45 (5) (2004): 443–461.

7. Jerome Barkow et al., “Prestige and Culture: A Biosocial Interpretation,” Current Anthropology 16 (4) (1975): 553–572; Paul DiMaggio and Walter Powell, “The Iron Cage Revisited: Collective Rationality and Institutional Isomorphism in Organizational Fields,” American Sociological Review 48 (2) (1983): 147–160.

8. “A Test of Leadership: Charting the Future of U.S. Higher Education” (Washington, D.C.: U.S. Department of Education, September 22, 2006).

9. Corbin M. Campbell, “Serving a Different Master: Assessing College Educational Quality for the Public,” in Higher Education: Handbook of Theory and Research, vol. 30, ed. Michael Paulsen (New York: Springer International Publishing, 2015), 525–579; Peter Ewell, “Assessment and Accountability in America Today: Background and Context,” in New Directions for Institutional Research (San Francisco, CA: Jossey-Bass, 2008), 7–17.

10. Balancing Competing Goods: Accreditation and Information to the Public About Quality (Washington, D.C.: Council for Higher Education Accreditation, 2004).

11. Esther Hong Delaney, “The Professoriate in an Age of Assessment and Accountability: Understanding Faculty Response to Student Learning Outcomes Assessment and the Collegiate Learning Assessment,” Ph.D. diss., Columbia University, 2015.

12. Catherine Millett et al., A Culture of Evidence: An Evidence-Centered Approach to Accountability for Student Learning Outcomes (Princeton, NJ: Educational Testing Service, 2008).

13. Pat Hutchings, Mary Taylor Huber, and Anthony Ciccone, The Scholarship of Teaching and Learning Reconsidered (San Francisco, CA: Jossey-Bass, 2011).

14. Charles E. Glassick, Mary Taylor Huber, and Gene I. Maeroff, Scholarship Assessed: Evaluation of the Professoriate (San Francisco, CA: Jossey-Bass, 1997); Adrianna Kezar, “Higher Education Research at the Millennium: Still Trees Without Fruit?” The Review of Higher Education 23 (4) (2000): 443–468; KerryAnn O’Meara, “Encouraging Multiple Forms of Scholarship in Faculty Reward Systems: Does It Make a Difference?” Research in Higher Education 46 (5) (2005): 479–510, doi:10.1007/s11162-005-3362-6.

15. Glassick, Huber, and Maeroff, Scholarship Assessed: Evaluation of the Professoriate; O’Meara, “Encouraging Multiple Forms of Scholarship in Faculty Reward Systems.”

16. Mary Taylor Huber, Balancing Acts: The Scholarship of Teaching and Learning in Academic Careers (Washington, D.C.: American Association for Higher Education, 2004).

17. John Braxton, William Luckey, and Patricia Helland, Institutionalizing a Broader View of Scholarship Through Boyer’s Four Domains (San Francisco, CA: Jossey-Bass, 2002).

18. We acknowledge that these hierarchies differ by institutional type, as a research-intensive university will have different values and reward structures than, say, an urban community college.

19. Kevin Carey, The End of College: Creating the Future of Learning and the University of Everywhere (New York: Riverhead Books, 2015); Ryan Craig, College Disrupted: The Great Unbundling of Higher Education (New York: St. Martin’s Press, 2015); Jeffrey J. Selingo, College (Un)Bound: The Future of Higher Education and What It Means for Students (Boston: New Harvest, 2013); Henry C. Lucas, Technology and the Disruption of Higher Education (Hackensack, NJ: World Scientific Publishing Company, 2016).

20. Larry Cuban, Oversold and Underused: Computers in the Classroom (Cambridge, MA: Harvard University Press, 2003); Karen J. Head, Disrupt This! MOOCs and the Promise of Technology (Lebanon, NH: University Press of New England, 2017); Susan M. Dynarski, “For Better Learning in College Lectures, Lay Down the Laptop and Pick Up a Pen” (Washington, D.C.: The Brookings Institution, August 10, 2017).

21. Adrianna Kezar, Understanding and Facilitating Organizational Change in the 21st Century (San Francisco, CA: Jossey-Bass, 2011); Judith Gappa, Ann E. Austin, and Andrea G. Trice, Rethinking Faculty Work (San Francisco, CA: Jossey-Bass, 2007); KerryAnn O’Meara and Corbin M. Campbell, “Faculty Sense of Agency in Decisions About Work and Family,” The Review of Higher Education 34 (3) (2011): 447–476, doi:10.1353/rhe.2011.0000.