
LG6: Values

Part 6 in this early draft of Leading Generously. This puts us a little over halfway through, so it seems a good moment to reiterate: what I most need in order to make this project into the thing it should become is examples. Stories of institutional transformation, both successful and failed, from a broad range of perspectives and institutions. Drop me a message at kfitz @ kfitz.info if you’d be willing to share yours with me. All such contributions can be fully anonymized or attributed as you prefer, and I’ll check my inclusion of them with you before publication.


* * *

You are what you measure.

We live and work in a world that is deeply invested in assessment. We need at all times to know how we’re doing, at both the program level and the individual level: whether we’re working adequately toward our goals, and how our work compares both with our own expectations and with the work of those around us. Whether we think of the situation in these particular businessy terms or not, we are all constantly evaluating our work with reference to a bunch of KPIs, or key performance indicators: metrics that someone, somewhere, has decided are relevant in thinking about effectiveness and productivity.

KPIs vary widely from domain to domain. In a library, the KPIs used to evaluate units and services might include numbers of patrons served, numbers of books checked out, numbers of articles retrieved, numbers of searches of the catalog, numbers of unfulfilled requests. In a college or department, the KPIs might include numbers of course sections that fill, numbers of students per section, numbers of students on waiting lists, numbers of majors, percentages of students who graduate within five years, and so on.

Individual faculty members are likewise asked to report on a range of KPIs, though they’re rarely given that label. For faculty, KPIs include numbers of publications, numbers of citations, numbers of presentations, average ratings on course evaluations, and more. And in some fields there are indexes that perform calculations on raw numbers in order to convert them into something more comparable, like the h-index.
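For those who haven’t encountered it, the h-index is the largest number h such that a scholar has h publications cited at least h times each. A minimal sketch of that calculation, with invented citation counts purely for illustration:

```python
def h_index(citations):
    # The h-index is the largest h such that h papers
    # have at least h citations each.
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Five papers cited 10, 8, 5, 4, and 3 times yield an h-index of 4:
# four papers have at least four citations each, but there are not
# five papers with at least five.
print(h_index([10, 8, 5, 4, 3]))  # -> 4
```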

KPIs, in other words, are the data on which we get assessed. These figures can be important to track, but like all metrics they boil an often complex story down into a set of numbers that can be used in comparisons that are often competitive in nature. And what those metrics often leave out is their purpose: Why are these the things we’re measuring? Why, in the larger picture of what we’re trying to accomplish, do they matter?

KPIs can be useful in that they can help us set goals: if we want to expand the reach of a community-oriented project, for instance, we might figure out how many people we’ve reached with that project and how many we’d like to reach in the coming year. Assessing our progress toward that goal can tell us something about the effectiveness of our outreach methods and, if we can drill down further into the data, we might be able to learn something about which outreach methods have been most effective.

But there are a lot of things that we can’t learn from standard, quantitative KPIs. We can’t really begin to understand why members of the communities we want to work with are engaging with us. And we certainly can’t understand why they aren’t. We can’t understand what the purpose of building engagement is, and whether we’re serving that purpose or merely growing a number. We can’t really measure the good that we’re doing based on metrics.

That is to say: Goals such as these are important, as is assessing our work relative to them. But the goals themselves are empty unless they are grounded in our deepest values, unless they speak directly to our purpose and mission. And the metrics we use to assess our progress will likewise be empty unless they include a full reckoning with those values.

It’s not a coincidence, after all, that the root of “evaluation” is “value.” Reflecting on the role that our values play in the goals we set and the ways we mark our progress toward them can help us refocus our work, and our assessment practices for that work, not on an abstracted set of KPIs but rather on the things that matter most to us. But how can we begin to develop a set of goals that are fully infused with the values that we bring to our work? How can we measure our progress toward those goals when neither the goals themselves nor the evidence of progress are numerically representable, but instead require deep reflection and narrative response?

The first step — obvious, perhaps, but not easy — is to begin by articulating the values that we bring to the work we do. Part of the challenge in this process lies in the pluralness of that “we.” We often assume, especially when we’re working in collective contexts, that our values are shared and that our terminology is as well. The process of articulating a set of shared values, however, can bring to the surface all of the different experiences and perspectives that different members of our communities bring to understanding the terms we use and the values they represent.

This was what the team behind the HuMetricsHSS initiative discovered early in their work. Their project is focused on developing a set of humane metrics for the humanities and social sciences, ways of thinking about evaluation that might allow scholars to focus in on the things that really matter to their work, rather than abstract, competitive, quantitative goals of the KPI sort. The principal investigators on the HuMetrics team had worked together for some time on developing a shared language for talking about the things that matter most to them in academic work, and at an early workshop they brought that language into the discussion — only to find that the participants wanted to discuss, and even dispute, the language itself. This move could easily have been dismissed as no more than a bunch of scholars quibbling about terms, but the team took the opportunity to refocus the workshop on those discussions, recognizing that what they were seeing was not mere resistance but rather the need that every community has to shape and describe its own values, for its own purposes.

The process of articulating those values is of necessity a recursive one, and one that will likely never reach a fully finalized state. But connecting the naming and defining of values with the development of methods of evaluation is a necessary part of building the assessment systems that can support those values rather than working at cross purposes with them. This is especially true when the object of our assessment is people rather than programs: ensuring that we’re evaluating the right things requires us to think long and hard about what we value and why, and then to develop means of focusing in on those things that we value.

No doubt this sounds obvious: of course we should evaluate our work and our colleagues’ work based on the things that matter most to achieving our collective goals. The problem is that in many cases we’re still assessing the wrong things. We strive to be as objective as we can in our evaluation processes, with all the best intentions: we want to minimize the effects of bias by restricting our attention to things for which there is empirical evidence. And somehow we’ve decided that the most neutral form of empirical evidence is numerical. After all, some numbers are bigger than others, and all numbers can be ordered and compared.

But the result of this focus on the numerical is that what counts in our evaluative processes is too often boiled down to what we can count, as if those two senses of the word were identical rather than merely parallel. We focus in on our KPIs — serving x number of patrons; publishing y number of articles; raising z dollars in external funding — as if the numbers were the matter itself, rather than a means to an end.

This question of means and ends in personnel evaluations is being investigated by my colleagues in the College of Arts & Letters at MSU, including our dean, Christopher P. Long, and our associate deans, Cara Cilano, Sonja Fritzsche, and Bill Hart-Davidson. The ends, as they frame them, are about intellectual leadership: things like sharing knowledge within our communities, expanding opportunity for those around us, and stewardship of our institutions and our fields. Those are the goals, the things we strive for as we do the work. But the things we actually measure in faculty evaluations, for instance, are numbers and venues of publications, average student course evaluation ratings, key committee and field-based service roles. Those things are our KPIs, and they’re the means to an end, the ways we share knowledge, expand opportunity, and care for our institutions. But because these are the things we assess, they have a tendency to become ends in themselves, rather than remaining means: we value the publication as if it were the goal rather than a step along the way toward the goal. And worse: we have a tendency not to acknowledge other potential means (things like public-facing writing or community-engaged research) even when they help us better reach those desired ends.

As a result, Dean Long and his team have begun implementing modes of review that highlight long-term goals, and that focus on the degree to which short-term accomplishments pave the way toward those goals. Each member of the faculty and staff, in their annual review materials, is asked to reflect on that deeper vision for themselves and their careers — the kinds of intellectual leadership that they would most like to embody — and then to think about their shorter-term projects in light of those goals. Supervisors and department chairs are asked to treat the annual review process as a moment of checking in on progress and as an opportunity for mentoring, focusing on the objectives and needs of the person under review rather than on the KPIs. This process opens up room for a faculty member to make the case that their goals would best be supported by publishing in nontraditional venues, or by participating in unusual collaborations, and it opens up room for a staff member to describe their desires to grow and develop in their work. And it encourages evaluators to explore ways that they can support that development.

This process, you might be thinking, seems to imply a highly individualized set of evaluation criteria, rather than a standard that can be applied objectively to everyone. It’s true! What this evaluation process rests on, however, is the bedrock of values that the college has collectively articulated and continues to re-articulate for itself. Objectivity is not among those values, in large part because of the ways its presumed neutrality in fact covers a range of inherent biases. Our values instead include transparency, community, and equity: ensuring that our processes are themselves open to evaluation, that we work to support one another, and that we champion a wide diversity of goals and paths toward reaching them. These goals require not only individuated attention to the actual people with whom we work, but also a determination to move away from a review system that focuses on competitive metrics and toward one that facilitates the best work that each of us can do.

A few challenges lie in this values-oriented mode of working, however. As the HuMetrics team discovered, values are not universal; they imply radically different things for different people. Surfacing those differences and figuring out how to honor them is a key component of the articulation of values. And that articulation must be a recurrent, recursive process: circumstances change, communities change, and with each change we must return to our discussions of values to ensure that they appropriately represent us.

Perhaps most importantly, articulating your community’s values and assessing that community’s work in ways that uphold them only matter if each member of the community is held accountable to those values. Breaches of those values must be taken seriously. What that means will differ from community to community, and will vary based on the nature of the breach, but at root, accountability requires an acknowledgement that the value has not been upheld and a commitment to doing better. And this requirement that we hold ourselves accountable applies to everyone in the hierarchy, but it is most important for leaders themselves: if our failures to live out the values we espouse for our communities have no consequences, the values themselves will become meaningless, and we will erode the trust required to make a values-based approach work.

But if we are able to work with our communities to articulate our deepest values, to set our goals in keeping with those values, to create forms of assessment that center those values, and to establish means of remaining accountable to one another for upholding those values — all of this has the potential to radically transform the ways we work, the reasons we work, and the collective joy we bring to that work. And not least, it has the potential to transform our assessment practices from sterile moments of bean-counting that pull us away from the work that’s most important to us, creating in their place moments of deep reflection that feed and support the work itself.
