Introduction
To "understand" means to be able to formulate
a question that is answered accurately by what one assumes that
one knows, or which at least tells us accurately what we do
not know.1)
Hence if we want to understand what it means to be
"competent" in systemic research practice, we need
first of all to ask what sort of question we are trying to answer
through such competence. As research students pursuing a Ph.D.
or a Master of Science degree here in Lincoln, most of you are,
among other things, interested in systems thinking. You believe
(or perhaps merely hope) that systems thinking is a meaningful
thing to study. You invest personal hopes, time and effort in
order to qualify as a systems researcher. So, if systems thinking
is (part of) the answer, what is the question?
Systemic thinking and research competence
I think it is indeed important for you to ask yourself this question. The
way you understand "systemic" thinking will shape
your understanding of "competent" research, and vice
versa. For instance, it seems a reasonable starting point to
assume that systemic thinking is about considering "the
whole relevant system" that matters for understanding a
problem or improving a situation. You will thus need to make
sure that your problem definitions are large enough to
include everything relevant; but what does that mean? Since
we live in a world of ever-growing complexity, it could basically
mean you need to do justice to the interdependencies
of any problem with other problems, or of whatever aspects of
the problem you focus on with other aspects of the same problem.
So systemic thinking becomes the "art of interconnected
thinking" (Vester, 2007; cf. Ulrich, 2015), and
you need to study methods for supporting such thinking. But
then, making your problems large enough could also mean first
of all to include a larger time horizon than usual, so as to
make sure that problem solutions are sustainable over time;
you would thus want to put the main focus of systemic thinking
on ideas of sustainable development, on ecological and perhaps
also evolutionary thought, and would have to acquire the corresponding
knowledge and methods of inquiry. With equal right you might
want to say that making problems large enough demands first
of all that one consider the divergent views and interests of
different parties concerned; which would associate systems thinking
with multiple perspectives thinking, stakeholder analysis,
participatory research approaches, and so on. As this short
and very incomplete enumeration makes immediately clear, a "systemic"
perspective lends itself to many different notions of what competent
inquiry and practice can and should mean.
It is accordingly important for you as a research student to ask yourself
what kind of competence you are striving for. The primary concern
is competence, not systems thinking. How can you study successfully
without a clear understanding of your goal? Of course your immediate
goal is to get a degree; but I suppose getting a degree makes
sense only if it is linked to personal learning and growth.
By acquiring some of the specific skills
that you expect from systems thinking, you may wish to deepen your competencies as a future
professional or become a better researcher than you already
are. Or you
feel a need to strengthen your capabilities in general rather
than just as a researcher. Perhaps you already feel confident
about your professional training and experience but would like
to become a more reflective professional or even a more mature
person in general. You may then want to read this essay
thinking of yourself as a "student of competence"
rather than as a "student of systems thinking" and/or
a "student of research"; for students of competence, I take it, we all remain throughout
our lives.
Towards a personal notion of competence
Whatever your individual motives and
state of preparation may be, I cannot formulate "the" relevant
question for you. All I can attempt is to help you find your own individual question,
by offering a few possible topics for reflection. Insofar as the
paper also offers some considerations as to how you might deal
with these topics, please bear in mind that I
do not mean to claim these considerations are the only relevant
or valid ones (a claim that again would presume one has found
the one, right question to be asked when it comes to competence). I offer them as examples only.
Their choice looks relevant to me at this particular moment
in my academic and personal biography; but you are different
persons and will therefore have to pursue your quest for competence
in your own unique way. Contrary to academic custom, the game
for once is not to be right but only to be true to yourself.
The
Burden of Becoming a "Researcher"
As a research student you are supposed
to do "research." Through your dissertation, you have
to prove that you are prepared to treat an agreed-upon topic
in a scholarly manner; in other words, that you are a competent
researcher. Not
surprisingly, then, you are eager to learn how to be a competent
researcher. But I suspect that few of you are quite sure what
precisely is expected from you. Hence the job of "becoming
a competent researcher" is likely to sound like a tall
order to you, one that makes you feel a bit uncomfortable, to
say the least. What do you have to do to establish yourself
as a "competent" researcher?
From
what you have been told by your professors, you probably have
gathered that being a competent researcher has something to
do with being able to choose and apply methods. Methods,
you have understood, should be appropriate to the problem you
are dealing with and should help you to produce findings and
conclusions that you can explain and justify in methodological
terms. That is to say, you should be able to demonstrate how
your findings and conclusions result from the application of
chosen methods and why methods and results are all valid.
Questions concerning method
Prior to this seminar, I have spoken to many of you individually and
I have felt that most of you worry a lot about which methods
you should apply and how to justify your choices. It really seems
to be an issue of choice rather than theory. There are so many
different methods! The choice appears to some extent arbitrary.
What does it mean to be a competent researcher in view
of this apparent arbitrariness? You may have turned to the epistemological
literature in order to find help, but what you have found is
likely to have confused you even more. The prescriptions given
there certainly seem abstract and remote from practice, apart
from the fact that the diverse prescriptions often enough appear
to conflict with one another.
As
a second difficulty, once you have chosen a methodology and start
to apply it, you will at times feel a strong sense of uncertainty
as to how to apply it correctly. Methods are supposed
to give you guidance in advancing step by step. You expect them
to give you some security as to whether you are approaching
your research task in an adequate way, so as to find interesting
and valid answers to your research questions. But instead, what
you experience is a lot of problems and doubts. There seem to
be more questions than answers, and whenever you dare to formulate
an answer, there again seems to be a surprising degree of choice
and arbitrariness. What answers you formulate seems to be as much
a matter of choice as what method you use and how exactly you
use it.
Given
this burden of personal choice and interpretation, you may wonder
how you are supposed to know whether your observations and conjectures
are the right ones. How can you develop confidence in their
quality? How can you ever make a compelling argument
concerning their validity? And if you hope that in time,
as you gradually learn to master your chosen method, you will
also learn how to judge the quality of your observations, as
well as to justify the validity of your conclusions, yet a third
intimidating issue may surface: how can you ever carry
the burden of responsibility concerning the actual consequences
that your research might have if it is taken seriously by other
people, for example, in an organization whose problems
you study and which then, based on your findings and conclusions,
may implement changes that cost jobs or affect people in other
ways?
As
a fourth and final example of such worries, your major problem may well be to
define "the problem" of your research, that is,
the issue to which you are supposed to apply methods in a competent
fashion. This is indeed a crucial issue, but here again the
epistemological and the methodological literature is rarely
of help. Its prescriptions seem so remote from your needs!
A
lot of questions to worry about, indeed. But didn't we just
say that without questions there is no understanding? So take
your questions and doubts as a good sign that you are on your
way towards understanding. Let us explore together where this
way might lead you. One thing seems certain: if you do
not try to understand where you want to go, you are not likely
to arrive there!
The Death of the Expert2)
Sometimes
it is easier to say what our goal is not, rather than what it
is. Are there aspects or implications of "competence"
that you might wish to exclude from your understanding of competence
in research? Certainly.
For
instance, in what way do you aim to be an "expert" on systems
methodologies (or any other set of methodologies), and in what
way do you not want to become an "expert"? To be competent
in some field of knowledge means to be an expert, doesn't it?
The role that experts play in our society is so prominent and
seemingly ever more important that a lot of associations immediately
come to our mind. To mention just three: experts seem
to be able to make common cause with almost any purpose; most
of the time (except when they are talking about something we,
too, happen to be experts in) experts put us in the situation of
being "lay people" or non-experts (i.e., incompetent?);
experts frequently cease to reflect on what they are doing and
claiming. So, are there roles you would rather not want to play,
causes you'd rather not serve, as a competent researcher? Are
there circumstances or situations in which you would rather not claim to
be an expert, that is, rely on, and refer to, your "expertise"? Where do you
see particular dangers of ceasing to be self-critical?
Expertise or the pitfall of claiming too much
Ceasing
to be self-critical, with the consequent risk of claiming too
much, is unfortunately very easy. There are so many aspects
of expertise or competence that need
to be handled self-critically! So much seems clear: as
competent researchers we do not want to ignore or conceal the limitations of the methods
on which our competence depends – "methods"
in the widest possible sense of any systematically considered
way to proceed. The
limitations of a method are among its most important characteristics;
for if we are not competent in respecting these limitations,
we are not using the method in a competent manner at all. Hence, one of the
first questions we should ask about every method concerns its
limitations.
Technically
speaking, the limitations of a method may be said to be contained
in the theoretical and methodological assumptions that
underpin any reliance on it. Some of these may be built into a
method we use; others may arise due to the imperfect ways in
which we use it, for example, if we don't master the method or
use it in biased ways.
Perhaps
an even more basic assumption is that experts, by virtue
of their expertise, have a proper grasp of the situations to which they apply their
expertise, so that they can properly judge
what method is appropriate and that this choice will then ensure valid
findings and conclusions. Experts often seem to take such assumptions
for granted, or else tend to hide them behind a facade
of busy routine.
Sources of deception
To the extent that we remain unaware of these assumptions, they
threaten to become sources of deception. We ourselves
may be deceived as researchers, but inadvertently we may also
deceive those who invest their confidence in our competence.
There need not be any deliberate intention to deceive others
on the part of the researcher; it may simply be routine
that stops researchers from revealing to themselves and to others concerned the specific assumptions that flow into every concrete
application of their expertise. Even so, this is probably not what you would like to understand
by "competence."
The
earlier-mentioned questions and doubts that plague many a research
student are then perhaps indeed a healthy symptom that your research
competencies have not yet reached the stage of routine where
such lack of reflection threatens. This danger is more of a
threat to established researchers who have already become recognized
as experts in their field of competence. Although some degree
of routine is certainly desirable, it should not be confused
with competence. Routine implies economy, not competence.
When
experts forget this distinction, they risk suffering the
silent death of the expert. It seems to me at times that
in our contemporary society, the death of the expert has taken
on epidemic dimensions! We are facing an illness that has remained
largely unrecognized or incorrectly diagnosed, perhaps because
it causes an almost invisible death, one that often enough is
hidden by the vigorous and impressive behavior patterns of those
who have developed the disease.
There is a second cause of the death of the expert that we must
consider. Even if a researcher remains thoroughly aware of the
methodological and theoretical underpinnings of his or her competence
and makes an appropriate effort to make them explicit, does
that mean that the research findings provide a valid ground
for practical conclusions? This is often assumed to be the case,
but repeated assumption does not make a proposition valid. A
sound theoretical and methodological grounding of research –
at
least in the usual understanding of "theory" and "methodology"
– implies at best the empirical (i.e., descriptive) but
not the normative (i.e., prescriptive) validity of the
findings. Well-grounded research may tell us what we can and
cannot do, but this is different from what we should
do on normative grounds.
The virtue of self-limitation
When
it comes to that sort of issue, the researcher has no advantage
over other people. Competence in research then gains another
meaning, namely, that of the researcher's self-restraint.
No method, no skill, no kind of expertise answers all the questions
that its application raises. One of the most important aspects
of one's research competence is therefore to understand the
questions that it does not answer.
The
number of questions that may be asked is, of course, infinite
– as is, consequently,
the number of questions that competence cannot be expected to
answer.
You have thus good reason to worry about the meaning of competence
in research. If you want to become a competent researcher, you
should indeed never stop worrying about the limitations of your
competence! As soon as you stop worrying, the deadly disease
may strike. The goal of your quest for competence is not to
be free of worries but rather to learn to make them a source
of continuous learning and self-correction. That is
the spirit of competent research. Competence in research does
not mean that research becomes a royal road to certainty. What
we learn today may (and should) always make us understand that
what we believed yesterday was an error. The more competent
we become as researchers, the more we begin to understand that
competence depends more on the questions we ask than on the
answers we find. It is better to ask the right questions
without having the answers than to have the answers without
asking the right questions. If we do not question our answers
properly, we do not understand them properly, that is, they
do not mean a lot.
This
holds true as much in the world of practice as in research,
of course. The difference may be that under the pressures of
decision making and action in the real world, the process of
questioning is often severely constrained. It usually stops
as soon as answers are found that serve the given purpose. As
a competent researcher, your focus will be more on the limitations of the answers and less on limiting the questions.
This is what a researcher's well-understood self-limitation is
all about.
A preliminary definition of competence in research
Your tentative first definition of competence in research, then,
might be something like this (modify as necessary):
Competence
in research means to me pursuing a self-reflective, self-correcting,
and self-limiting approach to inquiry. That
is, I will seek to question my inquiry in respect of all conceivable
sources of deception, for example, its (my) presuppositions,
its (my) methods and procedures, and its (my) findings and the way I translate
them into practical recommendations.
In
this tentative definition, the pronoun "its"
refers to the inherent limitations of whatever approach to inquiry
I may choose in a specific situation, limitations that are inevitable
even if I understand and apply that approach in the most competent
way. The pronoun "my," in contrast, refers to my personal
limitations in understanding and applying the chosen approach.
Accordingly, the essential underlying question is how as a researcher
you are to deal adequately with these limiting factors in the
quest for relevant, valid, and responsible research. These three
terms stand for key notions in my personal attempt
to respond to this question. Given their personal nature, I
encourage you to interpret, question and modify them according
to your own experiences, needs, and hopes. Do not allow your
thinking to be limited by them! The only reason I introduce
them here is that they inform my personal concept of research
and thus may help you in better understanding (and thus questioning)
the reflections on the nature of competent research offered
in this essay.
A
major implication of this preliminary definition is the following.
Competence in research means more – much more – than mastering some research
tools in the sense of knowing what methodology to choose
for a certain research purpose and how to apply it in
the specific situation of interest. Technical mastery, although
necessary, is not equal to competence. It becomes competence
only if it goes hand in hand with at least two additional requirements:
(a)
that we learn to cultivate a continuous (self-) critical observation of the built-in limitations
of a chosen research approach
– "observing" its limitations, that is, in the double sense of "understanding"
and "respecting" them; and, perhaps even more importantly and certainly
more radically,
(b)
that we quite generally renounce the
notion that we can ever justify the validity of our eventual
findings by referring to the proper choice and application of
methods.
The
obvious reason for (b) is that justifying findings by virtue
of methods does little to justify the selectivity of
those findings regarding both their empirical and their normative
content, that is, the circumstances taken to be relevant for
understanding a situation and the criteria considered adequate
for evaluating or improving it. Selectivity results from inherent limitations of methods
as well as from the limitations of our resources and understanding
in applying them (which is not
to say that there are no other sources of selectivity, such
as personal world views and interests or institutional, political
and economic mechanisms and pressures).
The
limited justificatory power of methods is bad news, I fear, for some of you who probably have been
taught to base your search for
competence on the idea of a theoretically based choice among
methodologies. To be sure, there is nothing wrong
with this idea – so long as you do not expect it to ensure critical
inquiry. The notion of securing critical inquiry and
practice through theoretically based methodology choice is currently
prominent in systems research and particularly in the methodological
discussions around the notion of critical systems thinking (CST);
but I invite you to adopt it
with caution. It does not carry far enough.3)
Further sources of orientation and questioning
We must
ask, then, what else can give us the necessary sense
of orientation and competence in designing and critically assessing
our research, if not (or not alone) the power of well-chosen
methods? I suggest that you consider first of all the following
three additional sources of orientation that I have found valuable
(among others), namely:
• understanding your personal quest for "improvement" in each specific inquiry;
• observing what, following Kant, I call "the primacy of practice in research";
• recognizing and using the significance of C.S. Peirce's "pragmatic maxim."
Further considerations will then concern the concepts of
• "systematic boundary critique";
• "high-quality observations";
• cogent reasoning or compelling argumentation;
• mediating between theory and practice (or science and politics); and finally,
• the "critical turn" that informs my work on critical systems heuristics.
The Quest for Improvement
One of the sources
of orientation that I find most fundamental for myself is continuously
to question my research with regard to its underlying concept
of improvement. How can I develop a clear notion of what, in
a certain situation, constitutes "competent" research,
without a clear idea of the difference it should make?
The
"difference it should make" is a pragmatic rather
than merely a semantic category, that is, it refers to the implications
of my research for some domain of practice. If I am pursuing
a purely theoretical or methodological research purpose, or
even meta-level research in the sense of "research on research,"
the practice of research itself may be the domain of practice
in which I am interested primarily; but when we do
"applied" research in the sense of inquiry into some
real-world issue, it will have implications for the world of
social practice, that is, the life-worlds of individuals and
their interactions in the pursuit of individual or collective
(organizational, political, altruistic, etc.) goals.
In
either case I will need to gain a clear idea of the specific
domain of practice that is to be improved, as well as of the
kind of improvement that is required. One way to clarify this
issue is by asking what group of people or organizations belong
to the intended "client" (beneficiary) of a research
project, and what other people or organizations might be affected, whether in a desired or undesired way. (Note that
from a critical point of view, we must not lightly rule out
the possibility of undesired side-effects; hence, when we
seek to identify the people or organizations that might be affected,
we should err on the side of caution and include all those whom
we cannot safely assume not to be affected.) Together
these groups of people or organizations constitute the domain
of practice that I will consider as relevant for understanding
the meaning of "improvement."
What makes research valuable?
Once
the client and the respective domain of practice are clear,
the next question concerns the sort of practice that my research
is supposed (or, critically speaking, likely) to promote. The
competence of a piece of research expresses itself not by its sheer beauty
but by its value to the practice it is to support. In order
to have such value, it must be relevant – answer the
right questions; and valid – give the right answers.
But
how can we, as researchers, claim to know (i.e.,
stipulate) the kind of practice to which we should contribute?
Have we not been taught long enough that competent ("scientific")
inquiry should refrain from being purpose and value driven?
The
German sociologist and philosopher of social science Max Weber
(1991, p. 145) has given this concern its most famous formulation:
"Politics is out of place in the lecture room." I
can appreciate Weber's critical intent, namely, that academic
teaching should be oriented towards theory rather than towards
ideology. But can that mean, as Weber is frequently understood,
that research is to be "value-free"?4)
A better conclusion,
in my opinion, would be that as researchers we must make it
clear to ourselves, and to all those concerned, what values our
research is to promote and whose values they are; for
whether we want it or not, we will hardly ever be able to claim
that our research serves all interests equally. We cannot gain
clarity about the "value" (relevance and validity)
of our research unless we develop a clear notion of what kind
of difference it is going to make and to whom. A clear sense
of purpose is vital in competent research.
If
you have experienced blockages in advancing your project, for
example in defining research strategies and so on, ask yourself
whether this might have to do with the lack of a sense of purpose.
When you do not know what you want to achieve, it is very
difficult indeed to develop ideas. Conversely, when your
motivation and your vision of what you want to achieve are clear,
ideas will not remain absent for long. Your personal vision
of the difference that your research should make can drive the
process of thinking about your research more effectively than
any other kind of reflection.
The Primacy of Practice
As research students
studying for a Ph.D. or M.Sc. degree, your preoccupation with
the question of "how" to do proper research is sound.
But as we have just seen, the danger is that as long as you
put this concern above all others, it will remain difficult
to be clear about what it is that you want to achieve. For it
means that you rely unquestioningly on a very questionable assumption,
namely, that good practice (P) – "practice" in
the philosophical sense of praxis rather than in the
everyday sense of "exercise" – is a function (f) of
proper research (R), whereby "proper" essentially
refers to adequate research methodology:
P = f (R)
Good
research should of course serve the purpose of assuring good
practice; but does it follow that the choice of research approaches
and methods should determine what is good practice? I do not
think so. Quite the contrary, it seems to me that good research
should be a function of the practice to be achieved:
R = f (P)
Your
primary concern, then, should not be how to do proper
research but what for. This
conjecture requires an immediate qualification, though, concerning
the source of legitimation for the "what for."
Note that in our inverted formula, practice (P) is no longer
the dependent variable but is now the independent variable.
This is precisely as it should be:
It is not up to the researcher to determine what is the right
(legitimate) "what for." Rather, it is the researcher's
obligation to make it clear to herself or himself, and to all
those concerned, what might be the practical implications of
the research, that is, what kind of practice it is
likely to promote – the factual "what for."
After
that, practice must itself be responsible for its purposes
and measures of improvement. Researchers may be able to
point out ways to "improve" practice according to
certain criteria, but they cannot delegate to themselves the
political act of legitimizing these criteria (cf. Ulrich, 1983,
p. 308). It is an error to believe that good practice can
be justified by reference to the methods employed. Methods need
to be justified by reference to their implications for practice,
not the other way round!
In
competent research, the choice of research methods and standards
is secondary, that is, a function of the practice to be achieved.
Good practice cannot be justified by referring to research competence.
Hence, let your concern for good research follow your concern
for understanding the meaning of good practice, rather than
vice versa.
The
suggested primacy of the concern for the outcome of a research
project over the usually prevailing concern for research methodology
(the "input," as it were) is somewhat analogous to
the primacy that Kant assigns to the practical over the theoretical
(or speculative) employment of reason, or to what he refers
to as the "primacy of practical reason in its union with
speculative reason" (Kant, 1788, A 215, 218; cf. 1787,
B 824f, 835f). Theoretical reason can generate valid knowledge
only within the bounds of experience; but practical reason can
conceive of ideas such as the moral idea that help us ensure
good practice and thereby create a new, different reality.
Theoretical reason can tell us what we can and can't do and
how to achieve it, but not what for (to what ends and according
to what standards) we should try to achieve it. For Kant it
is therefore practical-moral rather than theoretical-instrumental
reasoning that has to play a leading role in the way we use
reason, for it alone can lead us beyond the latter's limitations. I would therefore like to think of
our conclusion in terms of a primacy of practice in research.
But again, the point is not that it is up to the researcher to determine the right
"what for"; the point is, rather, that well-understood
reasoning involves a normative dimension for which theoretical
and methodological expertise does not provide a privileged qualification.
Towards a two-dimensional concept of research competence
Accordingly,
the concept of competent
research that I suggest here is based on Kant's two-dimensional
concept of reason. This distinguishes it from the more usual
concept of
competence in research and professional practice that is implicit in most
contemporary conceptions of knowledge and of science and which
has lost sight of
the normative dimension of rationality. I am thinking particularly
of the model of empirical-analytic science (so-called science-theory)
that has come to dominate the actual practice of science in
many domains, a model that is rooted in the logical empiricism
of the so-called Vienna Circle of the 1930s (Schlick, Carnap,
Reichenbach and others) but which has since been developed and
has found its most widely adopted expression today in the work
of Popper (1959, 1963, 1972) on "critical rationalism."
Symptomatically, Popper replaced Kant's primacy of practical
over theoretical reason with a one-sided primacy of theory,
a model that in effect reduces practical to instrumental reason
while relegating practical reasoning properly speaking, including
moral reasoning, to a merely subjective and indeed non-rational
status. For those interested, I have elsewhere explained and
discussed this prevalent but impoverished model of rationality,
for which the reach of reason is equal to that of science, in
detail (see, e.g., Ulrich, 1983 and 2006c).
To
conclude this brief discussion of the suggested primacy of practice
in research, let us consider an example of what it means in
actual research practice. Research into poverty provides a good
illustration with which I am familiar through my own engagement
in this field (Ulrich and Binder, 1998).
Poverty researchers are often expected to tell politicians "objectively"
how
much poverty there is in a certain population and what can be
done about it. But measuring poverty is not possible unless
there are clear criteria of what standards of income, well-being,
and participation
in society (both material and immaterial) are to be considered
"normal" for a decent life and accordingly should
be available to all members of that population. If poverty research
is to be done in a competent way, so that it can tell us who
and how many of us are poor and what their needs are, there
must first be a concrete vision of the kind of just society
to be achieved! This is what I mean by the primacy of practice
in research.
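To make this concrete, consider the following small sketch. It is only an illustration of the logical point, with invented income figures and hypothetical poverty lines rather than data or methods from Ulrich and Binder (1998): the "objective" poverty rate becomes computable only after a normative standard of a decent minimum has been chosen, and it changes with that choice.

```python
# Illustrative sketch only: invented incomes and hypothetical poverty
# lines, not data or methods from Ulrich and Binder (1998).

monthly_incomes = [900, 1200, 1500, 1800, 2200, 2600, 3100, 4000]

def poverty_rate(incomes, poverty_line):
    """Share of people whose income falls below the chosen poverty line."""
    return sum(1 for income in incomes if income < poverty_line) / len(incomes)

# Two defensible but different normative standards of a "decent" minimum:
for line in (1300, 2000):
    rate = poverty_rate(monthly_incomes, line)
    print(f"Poverty line {line}: {rate:.0%} of the population counts as poor")

# Output: 25% for the lower line, 50% for the higher one. The empirical
# finding follows only once the prior normative judgment has been made.
```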
The Pragmatic Maxim
The orientation provided
by a well-understood primacy of practice must not be confused
with mere "pragmatism" in the everyday sense of orientation
toward what "works" or serves a given purpose. The
point is not utilitarianism but the clarity of our thinking
that we can obtain through clarity of purpose. This idea was
first formulated by Charles S. Peirce (1878) in his pragmatic
maxim, in a now famous paper with the significant title
"How to make our ideas clear":
Consider
what effects, which might conceivably have practical bearings,
we conceive the object of our conception to have. Then, our
conception of these effects is the whole of our conception of
the object. (Peirce, 1878, para. 402)
The
pragmatic maxim requires from us a comprehensive effort to bring
to the surface and question the implications (i.e., the actual
or potential consequences) that our research may have for the
domain of practice under study. Contrary to popular pragmatism,
according to which "the true is what is useful," the
pragmatic maxim for me represents a critical concept. The true
is not just what is useful but what considers all practical
implications of a proposition, whether they support or run counter
to my purpose. Uncovering these implications thus becomes an
important virtue of competent inquiry and design in general,
and of critical systems thinking in particular.
Pragmatism calls for a critical stance
There is a crucial
critical kernel in the pragmatic maxim that we need to
uncover and move to the center of our understanding of pragmatism.
I understand it as follows. Identifying the actual or conceivable
consequences of a proposition, as Peirce requires of a pragmatic
researcher, is not a
straightforward task of observation and reasoning but raises difficult theoretical
as well as normative issues. Theoretically speaking, the question
is, what can be the empirical scope of our research? Normatively
speaking, the question is, what should we consider as relevant
practical implications? Peirce's solution is of
course to consider all conceivable implications; but
for practical research purposes that answer begs the question.
The question is, how can we limit the inquiry to a manageable
scope yet claim that its findings and conclusions are relevant
and valid? The quest for comprehensiveness is reserved for heroes and gods;
it is beyond the reach of ordinary researchers. What we ordinary
researchers recognize as relevant implications depends on boundary
judgments by which we consciously or unconsciously delimit
the situation of concern, that is, the totality of "facts"
(empirical circumstances) and "norms" (value considerations)
that determine the definition of "the problem" and
its conceivable "solutions." The response to Peirce's challenge
can thus only be that we must make it clear to ourselves, and
to all others concerned, in what way we (or they) may fail to
be comprehensive, by undertaking a systematic critical effort
to disclose those boundary judgments.
Systematic Boundary Critique
In Critical Heuristics
(Ulrich, 1983, see esp. Chapter 5), I conceived of this critical
effort as a process of systematic boundary critique,5)
that is, a methodical process of reviewing boundary judgments
so that their selectivity and changeability become visible.
Table 1 shows a list of boundary questions that
can be used for reviewing a claim's sources of selectivity;
more complete accounts of the boundary categories that inform
these questions, and of the underlying framework of critical
systems heuristics (CSH), can be found elsewhere.6)
Table 1: Sources of selectivity: The boundary questions of critical systems heuristics
(Adapted from Ulrich, 1984, pp. 338-340; 1987, p. 279; 1993, p. 597; 1996a, pp. 24-31; 2000, p. 258)
SOURCES OF MOTIVATION
(1) Who is (ought to be) the client? That is, whose interests are (should be) served?
(2) What is (ought to be) the purpose? That is, what are (should be) the consequences?
(3) What is (ought to be) the measure of improvement? That is, how can (should) we
determine whether and in what way the consequences, taken together, constitute an
improvement?
SOURCES OF POWER
(4) Who is (ought to be) the decision maker? That is, who is (should be) in a position to change
the measure of improvement?
(5) What resources are (ought to be) controlled by the decision maker? That is, what conditions
of success can (should) those involved control?
(6) What conditions are (ought to be) part of the decision-environment? That is, what conditions does
(should) the decision maker not control (e.g., from the viewpoint of those not involved)?
SOURCES OF KNOWLEDGE
(7) Who is (ought to be) involved as a professional? That is, who is (should be) involved as an
expert, e.g., as a system designer, researcher, or consultant?
(8) What expertise is (ought to be) consulted? That is, what counts (should count) as relevant
knowledge?
(9) What or who is (ought to be) assumed to be the guarantor? That is, what is (should be)
considered a source of guarantee (e.g., consensus among experts, stakeholder involvement,
support of decision-makers, etc.)?
SOURCES OF LEGITIMATION
(10) Who is (ought to be) witness to the interests of those affected but not involved? That is, who
is (should be) treated as legitimate stakeholder, and who argues (should argue) the case of
those stakeholders who cannot speak for themselves, including the handicapped, the
unborn, and non-human nature?
(11) What secures (ought to secure) the emancipation of those affected from the premises and
promises of those involved? That is, where does (should) legitimacy lie?
(12) What world view is (ought to be) assumed? That is, what different visions of
improvement are (should be) considered and somehow reconciled?
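For readers who would like to turn Table 1 into a working checklist for their own projects, the following sketch shows one possible way of doing so. It is merely my illustration, not an official tool of CSH: the question wording is condensed from the table, the example claim at the end is invented, and the checklist simply prompts for both a descriptive ("as it is") and a normative ("as it ought to be") answer to each question, since it is the gap between the two answers that brings the underlying boundary judgments into view.

```python
# Illustrative sketch only: a condensed encoding of the CSH boundary
# questions of Table 1, usable as a personal review checklist.
# The wording is abbreviated; consult the table for the full questions.

BOUNDARY_QUESTIONS = {
    "Sources of motivation": [
        "Who is the client, i.e., whose interests are served?",
        "What is the purpose, i.e., what are the consequences?",
        "What is the measure of improvement?",
    ],
    "Sources of power": [
        "Who is the decision maker?",
        "What resources does the decision maker control?",
        "What conditions belong to the decision environment (outside that control)?",
    ],
    "Sources of knowledge": [
        "Who is involved as a professional (designer, researcher, consultant)?",
        "What expertise is consulted, i.e., what counts as relevant knowledge?",
        "What or who is assumed to be the guarantor of success?",
    ],
    "Sources of legitimation": [
        "Who is witness to the interests of those affected but not involved?",
        "What secures the emancipation of those affected?",
        "What world view is assumed?",
    ],
}

def review(claim: str) -> None:
    """Print a review template for the given claim or design: each of the
    twelve questions with space for a descriptive ('as it is') and a
    normative ('as it ought to be') answer. Comparing the two answers
    makes the claim's boundary judgments, and thus its selectivity,
    visible."""
    print(f"Boundary critique of: {claim}\n")
    for category, questions in BOUNDARY_QUESTIONS.items():
        print(category.upper())
        for question in questions:
            print(f"  {question}")
            print("    ... as it is:           <your answer>")
            print("    ... as it ought to be:  <your answer>")
        print()

if __name__ == "__main__":
    # Hypothetical example claim, for illustration only.
    review("A proposed redesign of the hospital admission process")
```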
For
me this critical effort of disclosing and questioning boundary
judgments serves a purpose that is relevant both ethically and
theoretically. It is relevant theoretically because it compels
us to consider new "facts" that we might not
consider otherwise; it is relevant ethically because these new
facts are likely to affect not only our previous notion of what
is empirically true but also our view of what is morally legitimate,
that is, our "values."
To
be sure, what
I propose to you here is not as yet a widely shared concept
of competence in research, but I find it a powerful concept
indeed. Once we have recognized the critical significance of
the concept of boundary judgments, we cannot go back to our
earlier, "pre-critical" concept of competent research
in terms of empirical science only. It becomes quite impossible
to cling to a notion of competent research that works in just
one dimension. This is so because what we recognize as "facts"
and what we recognize as "values" become interdependent.
The
question of what counts as knowledge, then, is no longer one
of the quality of empirical observations and underpinning theoretical
assumptions only; it is now also a question of the proper bounding
of the domain of observation and thus of the underpinning value
judgments as to what ought to be considered the relevant
situation of concern. What counts as knowledge is, then, always
a question of what ought to count as knowledge. We can
no longer ignore the practical-normative dimension of research
or relegate it to a non-rational status.
End
of Part 1/2, continued with Part 2/2 >>
Notes
1) The British philosopher, historian,
and archaeologist R.G. Collingwood (1939/1983, 1946) was perhaps
the first author to systematically discuss the logic of
question and answer as a way to understand the meaning of everyday
or scientific propositions. As he explains in his Autobiography
(1939):
I
began by observing that you cannot find out what a man means
by studying his spoken or written statements, even though he
has spoken or written with perfect command of language and perfectly
truthful intention. In order to find out his meaning you must
also know what the question was (a question in his mind, and
presumed by him to be in yours) to which the thing he has said
or written was meant as an answer. It
must be understood that question and answer, as I conceived
them, are strictly correlative.… [But then,] if you cannot tell
what a proposition means unless you know what question it is
meant to answer, you will mistake its meaning if you make a
mistake about that question.… [And further,] If the meaning
of a proposition is correlative to the question it answers,
its truth must be relative to the same thing. Meaning,
agreement and contradiction, truth and falsehood, none of these
belonged to propositions in their own right, propositions by
themselves; they belonged only to propositions as the answers
to questions. (Collingwood, 1939/1978, pp. 31 and 33, italics
added)
While
remaining rather neglected in fields such as science theory
and propositional logic, it was in the philosophy of history
(the main focus of Collingwood, esp. 1946), along with hermeneutics
(Gadamer, 2004), and argumentation theory (Toulmin, 1978,
2003) that Collingwood's notion of the logic of question and
answer was to become influential. In hermeneutic terms, the
questions asked are an essential part of the hermeneutical horizon
that shapes what we see as possible answers and what meaning
and validity we ascribe to them. In his seminal work on hermeneutics,
Truth
and Method, Gadamer (2004) notes:
Interpretation
always involves a relation to the question that is asked of
the interpreter.… To understand a text means to understand this
question.… We understand the sense of the text only by acquiring
the horizon of the question – a horizon that, as such, necessarily
includes other possible answers. Thus the meaning of a sentence
… necessarily exceeds what is said in it. As these considerations
show, then, the logic of the human sciences is a logic of the
question. Despite Plato we are not very
ready for such a logic. Almost the only person I find a link
with here is R.G. Collingwood. In a brilliant and telling critique
of the Oxford "realist" school, he developed the idea
of a logic of question and answer, but unfortunately never elaborated
it systematically. He clearly saw that … we can understand a
text only when we have understood the question to which it is
an answer. (Gadamer, 2004, p. 363)
2)
As I found out after writing the original working paper (Ulrich,
1998a), the phrase "death of the expert" is not mine. White
and Taket (1994) had used it before. By the time I prepared
the expanded version of the essay for Systems Research and
Behavioral Science (Ulrich, 2001a), I had become aware of
their earlier use of the phrase and accordingly gave a reference
to it. My discussion here remains independent of theirs, but
I recommend that readers consult their different considerations
as well.
3)
We'll return to this issue under the heading of "methodological
pluralism" below. For a systematic account and critique of the identification
of critical practice with methodology choice in this strand
of critical systems thinking (CST), see Ulrich (2003) and the
ensuing discussions in several subsequent "Viewpoint"
sections of the journal. Readers
not familiar with CST may find Ulrich (2012e or 2013b) useful
preparatory reading.
4)
I have given an extensive critical account of Weber's notion
of "value-free" interpretive social science and his
underlying conception of rationality elsewhere, see Ulrich (2012b).
We will return to Weber's "interpretive social science"
in the section on theory and practice below.
5)
I use the term "boundary critique" as a convenient
short label for the underlying, more accurate concept of a "critical employment
of boundary judgments." The latter is more accurate in
that it explicitly covers two very different yet complementary
forms of "dealing critically with boundary judgments."
It can be read as intending both a self-reflective
handling of boundary judgments (being critical of one's own
boundary assumptions) and the use of boundary
judgments for critical purposes against arguments that do not
lay open the boundary judgments that inform them (arguing critically
against hidden or dogmatically imposed boundary assumptions).
By contrast, the term "boundary critique" suggests
active criticism of other positions and thus, as I originally
feared, might be understood only or mainly in the second
sense. While this second sense is very important to me, the
first sense is methodologically more basic and must not be lost.
I would thus like to make it very clear that I always intend
both meanings, regardless of whether I use the original full
concept or the later short term.
Terms do not matter so much
and represent no academic achievement by themselves; only the
concepts or ideas for which they stand do, and these should accordingly
be clear. The concept of a critical employment of boundary judgments
in its mentioned, double meaning embodies the methodological core principle of my work
on critical systems heuristics (CSH) and accordingly can be
found in all my writings on CSH from the outset (e.g., Ulrich,
1983, 1984, 1987, 1988, 1993, etc.). Only later, beginning in
1995, did I introduce the short label "boundary critique"
(see, e.g., Ulrich, 1995, pp. 13, 16-18,
21; 1996a, pp. 46, 50, 52; 1996b, pp. 171, 173, 175f; 1998b,
p. 7; 2000, pp. 254-266; and 2001, pp. 8, 12, 14f,
18-20, 24). Meanwhile I have increasingly come to find it a
very convenient label indeed, so long as it is clear
that both meanings are meant (and in this sense I use it as
a rule). Accordingly I am now employing it
regularly and systematically (cf., e.g., Ulrich, 2002, 2003, 2005, 2006a,
2012b, c, d; 2013b; and most recently,
2017).
6)
The boundary questions presented here are formulated so that
the second part of each question defines the boundary category
at issue. For introductions of varying depth and detail to the boundary
categories and questions of CSH, see, e.g., Ulrich, 1983, esp. pp. 240-264; 1984,
pp. 333-344; 1987, p. 279f; 1993, pp. 594-599; 1996a, pp. 19-31,
43f; 2000, pp. 250-264; 2001a, pp. 250-264; and 2001b, pp. 91-102.
On CSH in general, as well as the way it informs my two research
programs on "critical systems thinking (CST) for citizens"
and on "critical pragmatism," also consult:
Ulrich, 1988, 2000, 2003, 2006a, b, 2007a, b, 2012b, c, d, 2013b,
and 2017, and Ulrich and Reynolds, 2010.