The mechanism-model-mapping (3M) constraint on explanation

The 3M (mechanism-model-mapping) constraint says that a model of a target phenomenon explains that phenomenon to the extent that (a) the variables in the model correspond to identifiable components, activities, and organizational features of the mechanism that produces, maintains, or underlies the phenomenon, and (b) the (perhaps mathematical) dependencies posited among these variables in the model correspond to causal relations among the components of that mechanism.

This mechanism-model-mapping (3M) constraint embodies widely held commitments about the requirements on mechanistic explanation and makes those commitments more precise. Compliance with 3M is shown to have considerable utility for understanding the explanatory force of models in computational neuroscience, and for distinguishing models that explain from those that play merely descriptive or predictive roles. Conceiving of computational explanation in neuroscience as a species of mechanistic explanation also highlights and clarifies the pattern of model refinement and elaboration undertaken by computational neuroscientists.
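To see what 3M compliance looks like in practice, consider the Hodgkin-Huxley model of the action potential, a stock example of a mechanistic model in this literature. The Python sketch below is purely illustrative: the parameter values are the standard squid-giant-axon figures, and the variable names and annotations are mine, added to make the two mapping clauses explicit; none of this coding detail is part of the 3M constraint itself.

```python
# Hodgkin-Huxley membrane equation, annotated with an (assumed) 3M mapping.
# Clause (a): each model variable picks out an identifiable component of the
# spike-generating mechanism:
#   V              -> electrical potential across the axonal membrane (mV)
#   m, h           -> activation/inactivation states of Na+ channel gates
#   n              -> activation state of K+ channel gates
#   g_Na, g_K, g_L -> maximal conductances of the Na+, K+, and leak channels

C_m = 1.0                            # membrane capacitance, uF/cm^2
g_Na, g_K, g_L = 120.0, 36.0, 0.3    # maximal conductances, mS/cm^2
E_Na, E_K, E_L = 50.0, -77.0, -54.4  # reversal potentials, mV

def dV_dt(V, m, h, n, I_ext):
    """Rate of change of the membrane potential.

    Clause (b): each term corresponds to the causal contribution of one
    channel population to the total membrane current.
    """
    I_Na = g_Na * m**3 * h * (V - E_Na)  # sodium current (drives the upstroke)
    I_K = g_K * n**4 * (V - E_K)         # potassium current (repolarization)
    I_L = g_L * (V - E_L)                # passive leak current
    return (I_ext - I_Na - I_K - I_L) / C_m
```

On this reading, the model explains the action potential because the mapping holds: a curve-fitting model that merely reproduced the voltage trace, with free parameters answering to nothing in the mechanism, would satisfy neither clause and would count as descriptive or predictive at best.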

Slightly less than three decades ago, Philip Johnson-Laird (1983) expressed what was then a mainstream perspective on computational explanation in cognitive science: “The mind can be studied independently from the brain. Psychology (the study of the programs) can be pursued independently from neurophysiology (the study of the machine and the machine code)” (Johnson-Laird 1983, 9; also cited in Dawson 1998). According to this perspective, which Piccinini (2006) usefully dubs computational chauvinism, computational explanations of human cognitive capacities can be constructed and confirmed independently of details about how these capacities are implemented in the brain (Johnson-Laird 1983; Pylyshyn 1984).

Computational chauvinism can be broken down into three interrelated claims. First, computational explanation in psychology is autonomous from neuroscience (C1). Second, computational notions are uniquely appropriate or proprietary to psychological theory (C2). Fodor endorses this claim when he asserts that, in the “language” of neuroscience, “notions like computational state and representation aren’t accessible” (1998, 96; also cited in Piccinini 2006). If the theoretical vocabulary of computation is proprietary to the domain of psychology, neuroscience’s prospects for providing its own legitimate computational explanations of cognitive phenomena are dim. Third, computational explanations of cognitive capacities in psychology embody a distinct form of explanation—functional analysis or functional explanation—not to be assimilated to other types (C3). Crucially, these other types include mechanistic explanation, the kind of explanation prevalent in the neurosciences and other biological sciences (Bechtel 2008; Bechtel and Richardson 1993; Craver 2007). Clearly, if C1–C3 are correct and computational explanations are outside the reach of neuroscience, then the stated goal of this paper—to provide an analysis of computational explanation in neuroscience—is doomed from the outset. A preliminary task, therefore, is to rebut the challenge from computational chauvinism.

The viewpoint of computational chauvinism shares reinforcing connections with the philosophical doctrine of functionalism, once the dominant position in philosophy of mind (Fodor 1974, 1975; Putnam 1960). Functionalists have long maintained that psychology can pursue its explanatory goals independently of evidence from neuroscience about underlying neural mechanisms. Functionalism is supposed to justify this theoretically principled neglect of neuroscientific data by appeal to the alleged close analogy between psychological processes and software running on a digital computer (e.g., executing programs or algorithms), and to the multiple realizability of the former in the latter. According to the analogy, the brain merely provides the particular hardware on which the cognitive software happens to run; the same software could in principle be implemented on indefinitely many other hardware platforms. The brain is thus deemed a mere implementation of the software. If the goal is to explain and ultimately understand the structure of the software—what computations are performed—figuring out the hardware is irrelevant and unnecessary. On the functionalist perspective, neuroscience is at best relegated to a subordinate and relatively minor role: finding the neural implementations for computational explanations independently developed in psychology.

Sources
Dawson, M.R.W. (1998): Understanding Cognitive Science. Wiley-Blackwell.

Johnson-Laird, P.N. (1983): Mental Models: Towards a Cognitive Science of Language, Inference, and Consciousness. New York: Cambridge University Press.

Piccinini, G. (2006): Computational explanation in neuroscience. Synthese 153:343–353.

Piccinini, G. (2007): Computing mechanisms. Philosophy of Science 74:501–526.

Piccinini, G., and Craver, C.F. (forthcoming): Integrating psychology and neuroscience: functional analyses as mechanism sketches. Synthese.

Putnam, H. (1960): Minds and machines. Reprinted in Putnam, H. (1975): Mind, Language, and Reality. Cambridge: Cambridge University Press.

Pylyshyn, Z. W. (1984): Computation and Cognition. Cambridge, MA: MIT Press.