Psychodynamic Research
History and Philosophy
The first thing to understand about unconscious processes is why it took so long for systematic research to begin focusing on them. Whyte (1960) argued that this delay was due to the influence of Cartesian philosophy.
Descartes (1637, 1642) famously used thought to prove his existence (“cogito ergo sum” — I think therefore I am). He then divided the universe into two substances: mental and physical. The mental, that is thought, was defined as consciousness. One cannot say “I think, although I am unaware of doing so, therefore I am” (Weinberger & Stoycheva, 2020). What this means is that anything that is not conscious is not thought and must therefore be physical. The early pioneers of psychology, like Wundt and Titchener, in accord with this view, defined psychology as the study of consciousness. This meant that the unconscious was ruled out as a psychological phenomenon (cf. Whyte, 1960). This ban continued for decades (see e.g., Jackson, 1958; Klein, 1977). Whenever unconscious processes were posited, they were refuted by arguing that conscious processes could not be ruled out or by declaring unconscious thought to be an impossibility (e.g., Goldiamond, 1958). This kind of reasoning lasted well into the 1980s (Holender, 1986).
Psychoanalysis
The major exception to all of this was psychoanalysis. Freud (1915) attempted to demonstrate the necessity of positing an unconscious. Later schools of psychoanalysis also focused on unconscious processes. However, for the most part, they relied on case studies rather than empirical research. The few exceptions (e.g., Silverman, Lachman, & Milich, 1982; Silverman & Weinberger, 1985) were outside of the mainstream and ignored by most academic researchers (Weinberger & Stoycheva, 2020). (For the interested reader, Bornstein and Masling (1998) edited a volume on psychoanalytically inspired empirical approaches to unconscious processes. But again, this is the exception.)
Implicit Motives
The study of implicit motives, pioneered by David McClelland (1987) starting in the 1950s, bucked the zeitgeist. McClelland and his colleagues found that people have unconscious psychosocial motives. These include achievement, power, affiliation, and intimacy motivation, which can be assessed through rigorous scoring systems applied to stories people tell to TAT (Thematic Apperception Test) cards (Smith, 1992). Implicit motives were shown to predict important life outcomes like economic activity (McClelland, 1961) and the behaviors of politicians (Winter, 2005). Other work indicated that implicit motives predict spontaneous and long-term behavior whereas explicit (conscious) motives predict short-term, focused behavior (McClelland, Koestner, & Weinberger, 1989; Weinberger & McClelland, 1990). More recent work tied these motives to hormone profiles (Schultheiss, 2013) and examined how they predict mental health (Weinberger, Chassman, & Delgado, in press) as well as relationship activities (Weinberger, Purcell, & Knafo, in press). The latest compendium summarizing implicit motive research is Schultheiss and Brunstein (in press).
Heuristics
The rest of the field finally woke up to the existence of unconscious processes in the 1990s, probably as a result of the cognitive revolution (cf. Weinberger & Stoycheva, 2020). Research and theory exploded in cognitive science, social psychology, and cognitive neuroscience. Kahneman and Tversky’s work (Kahneman, 2011; Tversky & Kahneman, 1981), which garnered Kahneman a Nobel Prize in economics, is probably the most well known. They reported that people use cognitive shortcuts, termed heuristics, to solve problems. Most of the time this works, but logical flaws can be demonstrated through carefully designed experiments. Importantly for the purposes of this exposition, people were completely unaware of the nature of these strategies or even that they were using one at all.
Implicit Memory
The field of memory was revolutionized through the study of implicit memory, pioneered by Daniel Schacter (1987). In a nutshell, implicit memory refers to a person acting as if they recall an experience while denying recollection of that experience. The early work centered on brain-damaged individuals (e.g., Milner, 2005), but a great deal of current research employs “normals” (e.g., Squire, 2009). There are many competing theories on what underlies implicit memory and its place in human functioning, but no one denies its importance. For a review, see Weinberger and Stoycheva (2020).
Implicit Learning
Implicit learning refers to learning without conscious awareness. Contingencies and stimuli are presented so as to be unnoticed, but people learn them anyway. A classic set of implicit learning studies is that of Lewicki (1986), who exposed participants to stimuli that covaried lawfully with one another. Performance on a subsequent task was influenced by the previously unnoticed covariations. Participants were unaware of, let alone capable of describing, the covariations that had affected them. Another classic paradigm is the artificial grammar studies of Reber (1989, 2013). These involved presenting strings of letters that followed a set of rules. After viewing several strings, participants were asked to memorize a second set and/or judge whether the members of a second set were consistent with the first. Both tasks improved with practice. There are hundreds of implicit learning studies like these. Weinberger and Stoycheva (2020) review these studies as well as the various theories that try to explain them.
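To make the artificial grammar paradigm concrete, the following minimal sketch (in Python) generates rule-following training strings from a small finite-state grammar and then checks whether novel strings conform to it. The grammar, letters, and test strings are invented for illustration; they are not Reber's actual stimuli, and no model of the learner is implied, only the structure of the task.

```python
import random

# Finite-state grammar: state -> list of (letter, next state); None marks an exit.
# This grammar is invented for illustration; it is not Reber's original grammar.
GRAMMAR = {
    0: [("T", 1), ("P", 2)],
    1: [("S", 1), ("X", 3)],
    2: [("T", 2), ("V", 3)],
    3: [("X", 2), ("S", None), ("V", None)],
}

def generate_string(max_len=8):
    """Walk the grammar from state 0, emitting letters, until an exit is taken."""
    while True:
        state, letters = 0, []
        while state is not None:
            letter, state = random.choice(GRAMMAR[state])
            letters.append(letter)
        if len(letters) <= max_len:          # keep strings short, as in the studies
            return "".join(letters)

def is_grammatical(s):
    """True if some path through the grammar emits exactly these letters and then exits."""
    states = {0}
    for letter in s:
        states = {nxt for st in states if st is not None
                  for ltr, nxt in GRAMMAR[st] if ltr == letter}
        if not states:
            return False
    return None in states

# "Training" strings shown during the first phase, then novel test strings to classify.
training = [generate_string() for _ in range(10)]
tests = ["TSXS", "PVXV", "TPSV"]
print(training)
print({t: is_grammatical(t) for t in tests})
```

In the experiments themselves, participants come to classify such novel strings above chance while remaining unable to state the rules that generated them.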
Attribution Theory
Attribution theory refers to the fact that people understand the behaviors of others and themselves by attributing causes to them (Heider, 1958). Originally, these attributions were thought to be conscious. A classic paper by Nisbett and Wilson (1977) showed that they were not, which is now the accepted view (Strack & Deutsch, 2015; Weiner, 1986).
A main finding of attribution theory, termed correspondence bias, refers to people’s tendency to attribute their own behavior to the environment and the behavior of others to their personality (Gilbert & Malone, 1995; Jones & Harris, 1967). One exception is that personal success tends to be credited to stable and internal factors whereas failure is likely to be blamed on the environment (Campbell & Sedikides, 1999; Sedikides, Campbell, Reeder, & Elliot, 1998). Another bias, identified by Baumeister, Bratslavsky, Finkenauer, and Vohs (2001), is that people tend to weigh negative information more heavily than positive information when judging others’ motivation. Schyns and Hansbrough (2008) reported that this effect is particularly prominent in organizational settings; people tend to interpret errors as the direct result of leadership incompetence, regardless of the error’s origin.
Cusimano and Goodwin (2019, 2020) identified another attributional bias. They (2019) demonstrated that people tend to believe that others have control over their inner psychological states (emotions, desires, beliefs, and attitudes), with attitudes and beliefs judged most controllable, desires less, and emotions least. Moreover, level of control was related to ratings of responsibility and blame and, to a lesser extent, to stable personality characteristics. Focusing on beliefs, these authors (2020) compared the level of control attributed to self and other. Like Cusimano and Goodwin (2019), they found that whereas others are seen as being able to control and therefore change their beliefs, this did not hold true when people evaluated their own beliefs, which were seen as uncontrollable and therefore not subject to change. Two implications flow from this: People are likely to blame others for holding beliefs contrary to theirs while holding themselves blameless for their own beliefs.
To explain these kinds of biases, Bar-Anan, Wilson, and Hassin (2010) argued that people attribute actions to accessible, plausible, and/or self-promoting causes. In five experiments, they primed a goal that influenced behavior but offered an incorrect, plausible, and accessible explanation. Subjects chose this incorrect explanation every time. Bar-Anan et al. saw these results as having two stages. The person was primed unconsciously, automatically activating a goal. Since the subjects were unaware of this activation, they did not know why they behaved as they did. They attributed their behavior to whatever was accessible (not the real reason), plausible (depending upon what they were told), and self-serving (what fit with their self-image).
Automaticity
John Bargh (e.g., 1994) defines automaticity as an action or cognition that goes off without thought (automatically). Once mental processes (either conscious or unconscious) have been sufficiently rehearsed, they begin operating in an automatic fashion (Bargh, Schwader, Hailey, Dyer, & Boothby, 2012). Automaticity can be invoked to account for relationship patterns, implicit bias, persistence of psychopathology, and a host of other phenomena. Weinberger & Stoycheva (2020) review the research and theory.
Embodied Cognition
Embodied cognition avers that thought parallels and is based on the body, largely sensory and motor functioning. Our cognitions tend to be tied to our orientation in time and space, as well as to our sensory experiences. Importantly, these embodied cognitions mostly take place unconsciously (Winkielman, Niedenthal, Wielgosz, Eelen, & Kavanagh, 2015).
Williams and Bargh (2008a) demonstrated that the experience of physical warmth promoted feelings of interpersonal warmth. They asked participants to hold a hot or cold beverage. Those who held a hot beverage, as opposed to a cold one, rated a target person as higher in characteristics related to warmth. In another experiment, they found that holding something warm promoted pro-social behavior. The participants were not aware of the impact that the physical experiences of warmth or coldness had on their evaluations and behavior. Zhong and Leonardelli (2008) asked participants in two groups to recall an experience where they felt socially included or excluded. Those who recalled an inclusion experience estimated the room temperature to be higher than those who recalled an exclusion experience. In a second experiment, the authors induced feelings of social inclusion or exclusion through a virtual interaction. The dependent variable was the likelihood of seeking warm food or drink (hot coffee or soup) vs. cold food or drink (apple/crackers, or a cold Coke). Consistent with Williams and Bargh’s (2008a) findings, participants in the social exclusion condition sought out warmth, in the form of hot coffee or soup, more than those in the social inclusion condition did.
The properties of the space we occupy and how we position ourselves in space have been found to impact appraisals of self, others, and the world. Meier and Robinson (2006) found that people who scored relatively high on neuroticism and depressive symptomatology tended to spot targets more quickly when they were in the lower visual field. Distance is another property of space that has been related to perception of social relationships. Williams and Bargh (2008b) reported that simply asking participants to place dots closer or farther apart (thus priming the concept of physical closeness/distance) led them to evaluate their emotional bond with family as stronger or weaker, respectively. Other researchers demonstrated links between the concepts of physical expansion and greater self-actualization (Landau, Vess, Arndt, Rothschild, Sullivan, & Atchley, 2010).
A number of studies have connected taste sensations with appraisals of self and others. Meier, Moeller, Riemer-Peltz, and Robinson (2012) reported that sweetness was associated with more positive self- and other-ratings. They found that tasting a sweet treat (but not a non-sweet treat or no treat at all) increased participants’ self-reported agreeableness and helpful behaviors. The connection between sweet taste and positive feelings towards others has been expanded to include feelings of love and romantic attraction. Chan, Tong, Tan, and Koh (2013) induced feelings of love in their participants by having them write about such experiences, and then asked them to rate a variety of foods (e.g., bitter-sweet chocolate or sweet-sour candy). The induction of feelings of love increased ratings of sweetness. Similarly, Ren, Tan, Arriaga, and Chan (2015) found that participants exposed to a sweet taste evaluated a hypothetical romantic relationship more favorably than participants not exposed to that taste. These authors also demonstrated that, compared to controls, participants who drank a sweet drink (Sprite or 7-Up) reported a significantly higher interest in initiating a romantic relationship. This may also relate to the terms people use to address those they feel intimate with (are sweet on): honey, sugar, sweetheart, sweetie, etc. Bitterness, on the other hand, has been associated with hostility. Sagioglou and Greitemeyer (2014) compared participants who consumed a bitter beverage with a control group who consumed a non-bitter drink. The experimental group had higher self-reports of hostility, aggressive affect, and aggressive behavior than the control group. This exploding field is reviewed by Weinberger and Stoycheva (2020).
How Does the Mind Work?: Massive Modularity, Connectionism (PDP), and Neural Reuse
A great deal of exciting, even revolutionary, work relating to unconscious processing is taking place in cognitive neuroscience modeling of the architecture of the mind/brain. There are three main models: Massive Modularity, Connectionism (the best-known is parallel distributed processing – PDP), and Neural Reuse. What they have in common is that they are all modeled on the brain rather than a computer, assume parallel processing, and refer to associative networks in the mind/brain. Beyond that, they differ.
Massive Modularity
This is the model most people associate with the mind/brain. Massive modularity avers that the brain/mind is mostly, if not entirely, modular (e.g., Carruthers, 2006; Kurzban, 2010; Pinker, 2005). The brain/mind is a collection of independent units, composed of specific neurological networks that evolved to solve specific adaptational problems our ancestors faced during the Pleistocene epoch (about two million to about 12,000 years ago—the period of the environment of evolutionary adaptedness for humans). Steven Pinker (1997) wrote a bestselling and very well received book entitled “How the Mind Works” that identified three main themes underlying massive modularity. The first is that thinking is a form of computation. Computation refers to unique ways of processing information that follow their own rules and procedures (cf. Pinker, 2005). The second is that the brain/mind is specialized. The third is that the brain/mind evolved during the Pleistocene epoch. Massive modularity posits a multitude of independent and/or quasi-independent operating subsystems or modules. Like all current models of the brain/mind, massive modularity posits parallel rather than serial processing, so that many modules operate simultaneously. Supporting this model are fMRI studies that seem to show localized functioning when a cognitive process is occurring (Carruthers, 2006). Additionally, there are data from brain damage studies that seem to show that injury to a particular part of the brain affects specific functions without affecting others (cf. Carruthers, 2006; Laws, Adlington, Moreno-Martinez, & Gale, 2010; Pinker, 1998). There are also simulation models of a modular brain that seem to work. Probably the most well-known and sophisticated is ACT-R (Adaptive Control of Thought-Rational). There have been several iterations of this model, the most recent I am aware of being ACT-R 6.0 (J. R. Anderson, 2007).
Since each module processes information in its own unique way, coming to its own conclusions, it is not only possible but inevitable that we can hold two inconsistent, even contradictory, beliefs or feelings about an issue. We can and do behave inconsistently. We can and do behave one way and profess, honestly, to feel and believe another. And, since most functioning is necessarily unconscious, we have no awareness, let alone insight, into these contradictions. This is one of the main points about massive modularity made by Kurzban (2010) in his book “Why Everyone (Else) is a Hypocrite.”
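The structural point can be illustrated with a toy sketch. In the Python fragment below, all module names and decision rules are invented for illustration (they are not drawn from ACT-R or any published model); the only point is that independent modules, each applying its own rule to the same situation in parallel, can return conflicting verdicts, and nothing in the architecture reconciles them or reports them to awareness.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical modules: each applies its own evolved rule to the same situation.
def cheater_detection_module(situation):
    # Flags possible exploitation when the other party benefits more.
    if situation.get("benefit_to_other", 0) > situation.get("benefit_to_self", 0):
        return "distrust"
    return "neutral"

def affiliation_module(situation):
    # Favors maintaining bonds with familiar people.
    return "approach" if situation.get("familiar_person") else "neutral"

modules = [cheater_detection_module, affiliation_module]
situation = {"familiar_person": True, "benefit_to_other": 3, "benefit_to_self": 1}

# Modules run in parallel and never consult one another, so the verdicts can
# conflict ("distrust" vs. "approach") without the system ever noticing.
with ThreadPoolExecutor() as pool:
    verdicts = list(pool.map(lambda m: m(situation), modules))
print(dict(zip((m.__name__ for m in modules), verdicts)))
```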
Parallel Distributed Processing (PDP)
Rumelhart, McClelland, and the PDP Research Group (1986, Vols. 1 & 2) created a model of information processing based on the functioning and interconnections of the basic units of the brain, neurons. They termed their model parallel distributed processing (PDP) because they posited a myriad of processes occurring simultaneously throughout the brain/mind. Whereas massive modularity posits many innate and specialized structures localized within the mind/brain, PDP posits more generalized and smaller-unit processing, distributed throughout the brain. Work on the model has exploded in the literature (see Cognitive Science, 2014, Volume 38, Issue 6) and it is very influential (Rogers & McClelland, 2014).
Rumelhart et al. (1986, Chapter 2) described the basic PDP model. There must be a large set of processing units patterned after the neuron but not necessarily identical to it. It would take too much computation to build all PDP models from individual neurons, so functional units, at a somewhat higher level than neurons, are posited instead. These can be considered groups of neurons that inevitably fire together. No unit is tied exclusively to any particular representation. Each unit can be involved in representing many different entities. Similarly, the system can employ many different units in each representation it creates. Specificity is gained through the pattern of units activated rather than by any specialized unit like a module. This allows for a great deal of flexibility.
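A small sketch may help fix the idea of distributed representation. In the Python fragment below, the activation patterns are made up for illustration; the point is only that each concept is a pattern over the same pool of units, no single unit stands for any concept, and overlap between patterns captures similarity.

```python
import numpy as np

units = ["unit_%d" % i for i in range(6)]

# Each concept is a pattern of activation over the SAME pool of units (values invented).
patterns = {
    "dog":   np.array([1, 1, 0, 1, 0, 0]),
    "cat":   np.array([1, 1, 0, 0, 1, 0]),
    "chair": np.array([0, 0, 1, 0, 1, 1]),
}

# A given unit participates in several representations; specificity lies in the pattern.
for name, p in patterns.items():
    print(name, [u for u, a in zip(units, p) if a])

# Pattern overlap (here a simple dot product) captures similarity:
# dog and cat share more active units than dog and chair.
print(patterns["dog"] @ patterns["cat"], patterns["dog"] @ patterns["chair"])
```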
When two (or however many) units are simultaneously activated, the activation of one is more likely to lead to the activation of the other on subsequent occasions; i.e., their activations become linked (McClelland et al., 1986, Chapter 1). The probability of one unit activating another increases every time they are excited roughly contemporaneously. In the parlance of PDP models, there is an increase in their activation weight: the probability of one unit being activated, given the activation of another unit. This is sometimes referred to as the strength of the connection. The activation weight or connection strength changes minimally with each co-activation but builds up as co-activation recurs. If such activations occur frequently enough, the activation weight or connection strength can increase all the way up to certainty (a probability of 1.0). This is the PDP understanding of learning. It flows naturally into implicit learning (cf. Cleeremans, 2014; Cleeremans & Dienes, 2008; Rogers & McClelland, 2014). The strength of the connection is analogous to the degree of the learning. The greater the connection strength, the better the learning. Moreover, this is an automatic, unconscious neurophysiological phenomenon.
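A minimal sketch of this account of learning, assuming a simple Hebbian-style update, appears below. The rule, rates, and numbers are illustrative only; they are not the specific learning rules developed in the PDP volumes.

```python
import numpy as np

n_units = 5
weights = np.zeros((n_units, n_units))    # connection strengths between units
learning_rate = 0.05                       # each co-activation changes weights only slightly

def co_activate(pattern, weights, lr=learning_rate):
    """Strengthen connections between units that are active at the same time."""
    a = np.asarray(pattern, dtype=float)
    weights += lr * np.outer(a, a)               # co-active pairs gain weight
    np.fill_diagonal(weights, 0.0)               # no self-connections
    np.clip(weights, -1.0, 1.0, out=weights)     # strength saturates at certainty
    return weights

# Units 0 and 1 repeatedly fire together; their connection strength builds up
# gradually, which is the PDP picture of (implicit) learning.
for _ in range(30):
    co_activate([1, 1, 0, 0, 0], weights)
print(weights[0, 1])    # approaches 1.0 with repeated co-activation
```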
PDP posits that certain units are unlikely to activate in tandem because they inhibit one another. Other units are almost certain to co-activate because their connection weights are highly positive. And there is everything in between. Mutually inhibitory and excitatory connections limit the possible patterns of activation. In the parlance of PDP, they constrain the possible conclusions a PDP system can reach or the solutions it can arrive at. They are therefore termed constraints (cf. Westen & Gabbard, 2002). External stimulation and internal states of the system provide further constraints by encouraging the activation of some units and inhibiting the activation of others. Each unit in the system may be conceptualized as a minipremise that, if true, increases the likelihood of some other minipremises also being true while decreasing the likelihood of other minipremises being true. Minipremises likely to covary have positive connection weights; those likely to be false if the original minipremise is true have negative connection weights.
An example may make this clearer. Assume that an external stimulus has impinged upon someone’s visual sense receptors and activated some PDP units. Activation of these units will then change the probability that other units will be activated. These units, in turn, will affect other units and so on throughout the system. Other environmental events, recent history, and contemporaneous internal states will add to this mix by also activating and inhibiting some units. The probability of each of these units becoming activated or inhibited will be a function of their various connection weights. The PDP system will sort it all out by iteratively seeking to satisfy all of the constraints it encounters (Rumelhart & McClelland, 1986, Chapter 4). A coherent pattern of activation emerges; the system is said to “settle” into a solution.
The solution the system settles into can be thought of as the intersection of the units that remain active when the excitatory and inhibitory activity has run its course (Hinton et al., 1986, Chapter 3). The overall pattern of stably active units captures what the “settled” system is representing. This can be a perception, a memory, a wish, a behavior, etc. (Rumelhart et al., 1986, Chapter 2). In order to reach this steady state of activation in one second or less, which is the time period required for most human operations, the system can only cycle about 100 times (the 100-step rule). This means that many of the constraints the system encounters must be satisfied simultaneously (Rumelhart et al., 1986).
Constraints in the human environment do not complement one another neatly and perfectly so as to allow for a single unambiguous solution. The system must therefore settle on a solution that satisfies as many constraints as possible. It must try to achieve the best match possible at a particular point in time (McClelland et al., 1986, Chapter 1). The better the match, the more stable the equilibrium reached by the system when it settles (Norman, 1986, Chapter 26). The match is never perfect. The best match is the one that violates the microinferences represented by the constraints less than alternative matches do (Hinton et al., 1986, Chapter 3). It is a kind of compromise.
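The settling process can be illustrated with a small constraint satisfaction network. The sketch below assumes a Hopfield-style update and invented weights that encode two mutually inhibitory interpretations; it is meant only to show how iterative updating relaxes into the best available compromise, not to reproduce any published PDP model.

```python
import numpy as np

# Positive weights link units that support one another; negative weights link units
# that inhibit one another (constraints). Units 0-1 form one coherent pattern,
# units 2-3 a competing one. All values are invented for illustration.
W = np.array([[ 0.0,  1.0, -1.0, -1.0],
              [ 1.0,  0.0, -1.0, -1.0],
              [-1.0, -1.0,  0.0,  1.0],
              [-1.0, -1.0,  1.0,  0.0]])

external_input = np.array([0.6, 0.0, 0.5, 0.0])   # ambiguous evidence, slightly favoring unit 0

def settle(W, ext, steps=100):
    """Iteratively update activations until the network relaxes into a stable pattern."""
    a = np.zeros(len(ext))
    for _ in range(steps):                          # cf. the roughly 100-cycle limit
        net = W @ a + ext                           # each unit sums its weighted inputs
        a = np.clip(a + 0.2 * (net - a), 0.0, 1.0)  # small step toward satisfying the constraints
    return a

print(settle(W, external_input).round(2))   # the better-supported pattern wins; the other is suppressed
```

The final pattern satisfies most, but not all, of the constraints: the less-supported interpretation is driven to zero even though one of its units had some external evidence behind it, which is the sense in which the settled state is a compromise.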
The lack of perfect matches in the settling of a PDP system suggests other implications rarely addressed by PDP theorists. These may be more far-reaching and systemic than those offered above (cf. Weinberger & Stoycheva, 2020). Constraints may not simply fail to fit perfectly; they can, in principle, be orthogonal, contradictory, and even oppositional. This means that conflict is an inherent part of PDP. PDP will try to satisfy as many constraints as possible, blending them together as best the system can (Cleeremans, 2014; Maia & Cleeremans, 2005). Sometimes this will work well, but sometimes no solution will fit very well. Compromises may be minor, but sometimes major compromises may be required. If some constraints are simply ignored or seriously bent to fit the compromise, then distortion of the event can result.
A stable equilibrium is never fully reached by the system because we do not live in a static world. The system is constantly in flux and only relatively stable for brief periods. This means that the system must be able to tolerate chronic ambiguity. All of these events occur unconsciously. We are unaware of conflicting constraints, relative goodness of compromise, or the instability and ambiguity of equilibrium.
Most of the work in PDP has, so far, concerned perception, memory, and language. Mild distortions and compromises in such processes are theoretically meaningful but practically trivial. In real-world functioning, however, people have goals, concerns, affects, moods, and motives. They also have relationships with other people. The situations they encounter are complex, often ambiguous, and personally meaningful. In PDP terms, all of these should function as constraints that influence the solution the system settles into. Smith (1996) made a similar point but did not follow it with a discussion of the implications. Westen and Gabbard (2002) did, however, and in doing so showed that PDP can be considered a psychodynamic model of the mind. It has conflict and compromise built in and is never static. Moreover, virtually all of its operations occur unconsciously. Westen and Gabbard (2002) also made a connection between PDP and psychoanalysis. If we assume unconscious affective and motivational constraints, and if we also assume that such constraints are at least as powerful as perceptual and environmental constraints, many psychoanalytic tenets flow naturally from PDP principles. These include defense, symptomatology, transference, and the ameliorative effects of psychotherapy. Defense and symptomatology are simply compromises between affective and environmental constraints. As discussed above, constraints may not allow for good fits, so that the compromise the system settles into may not be optimally adaptive.
Neural Reuse
The building blocks in neural reuse are neural circuits that evolved to perform a specific function. Although they initially evolved to perform that one function, they are exapted by the same evolution or by normal development and put to additional uses. Thus, the same neural circuits can serve multiple roles (neural reuse). These multiple-use circuits are termed “workings.” Workings are set, immutable, and have a fixed anatomical location, as in massive modularity. Unlike massive modularity but similarly to PDP, a working is not limited to its original function. It can be involved in (used for) other functions too. The term “use” refers to a high-level operation to which the working can make a contribution. A use can be localized or spread out throughout the brain/mind depending upon which workings contribute to it. Many workings can be and are combined to create new uses over evolutionary or developmental time (M. L. Anderson, 2010, 2014).
The neural reuse model occupies an intermediate position between massive modularity and PDP. Neural reuse agrees with PDP that the brain/mind is a distributed network but posits more a priori organization to that network than does PDP. The fact that the operations of the workings do not change means that there are limits to the uses they can be put to and to the operation of said uses. So the system is not as plastic as a connectionist (PDP) mind/brain. Neural reuse agrees with massive modularity that there is an a priori organization to the mind/brain. But the functioning of the neural reuse mind/brain is more flexible than is hypothesized by massive modularity because its a priori units are lower level and because of the many ways that they can be reused and combined.
Neural reuse conceptions of exaptation, workings, and uses lead to several predictions. First, high-level operations should be largely composed of combinations of low-level neural circuitry, rather than independently evolved modules. Next, a typical brain region should support (be exapted for) many brain/mind functions, across many task categories. The individual workings would perform a similar operation in each of the different functions (uses) of which they are a part. What would differentiate these higher-level functions from one another would be the combination of workings that make up each use.
Support for Neural Reuse
Prinz (2006) questioned the localization assumed by massive modularity by citing evidence showing that functions are distributed throughout the brain and that areas often cited as performing a unique function actually perform many. In addition, M. L. Anderson (2008) looked at co-activation patterns in the brain (which brain regions were likely to act together under which task conditions) and found that different tasks were characterized by different patterns of co-activation among the regions of the brain. Additionally, there is cognitive interference between language and motor control (Glenberg & Kaschak, 2002) and between memory and audition (Baddeley & Hitch, 1974), suggesting that these apparently unrelated functions have some neural components in common such that when a working or set of workings is activated in one of them, it is harder to do the other because that working is already “occupied.” There is also supportive evidence showing facilitation. Glenberg, Brown, and Levin (2007) reported that manipulating objects can aid in reading comprehension, suggesting some underlying neuronal connection between these two apparently independent operations. M. L. Anderson, Kinnison, and Pessoa (2013) provide further supportive data. There are also data that support neural reuse over PDP. PDP would predict experience based cross-cultural and individual differences in the neurophysiological locations of many acquired operations. Instead there is cross-cultural and person to person invariance, as neural reuse would predict.
The most impressive evidence favoring neural reuse is contained in the Neuro-Image based Co-Activation Matrix (NICAM) database (http://www.agcognition.org/projects.html). This is a project organized and maintained by M.L. Anderson and Chaovalitwongse that compiles fMRI studies and applies data mining and graph theory to investigate functional cooperation between brain regions (Anderson, Brumbaugh, & Suben, 2010). As of 2010, it contained 2,603 studies from 824 journal articles (Anderson, 2010). The analytic strategy is to subtract whole brain activity generated by an experimental task from whole brain activity assessed during a control task so that whatever the two tasks have in common gets subtracted out. What is left represents the brain region(s) that uniquely underlie(s) the task. Results indicate that regions of the brain tend to be reused across tasks, as predicted by neural reuse.
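A toy sketch of the subtraction-and-tally logic described above may be useful. In the Python fragment below, the region names, activity values, and threshold are invented for illustration, and the real NICAM analyses involve data mining and graph-theoretic methods well beyond this; the sketch only shows the idea of isolating task-specific activity and then counting how often each region is reused across tasks.

```python
import numpy as np

regions = ["IFG", "IPS", "ACC", "V1"]   # hypothetical regions, for illustration only

# Rows are tasks; columns are mean activity per region (invented numbers).
experimental = np.array([[2.1, 0.4, 1.8, 3.0],    # task A
                         [1.9, 2.2, 0.3, 3.1],    # task B
                         [2.0, 2.1, 1.7, 2.9]])   # task C
control      = np.array([[1.0, 0.3, 0.5, 2.9],
                         [1.0, 0.4, 0.2, 3.0],
                         [1.1, 0.3, 0.4, 3.0]])

# Subtraction removes what the experimental and control tasks share;
# what remains is activity specific to each task.
task_specific = experimental - control
used = task_specific > 0.5            # arbitrary threshold for "this task uses this region"

# Neural reuse predicts that most regions are used by many different tasks.
reuse_counts = used.sum(axis=0)
print(dict(zip(regions, reuse_counts.tolist())))
```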
Other neural reuse models, specifically the neuronal recycling model (Dehaene & Cohen, 2007) and the neural exploitation hypothesis (Gallese, 2007; Gallese & Lakoff, 2005), add to the viability of neural reuse by filling in gaps in M. L. Anderson’s version of neural reuse. Neuronal recycling adds the role of development and experience by studying processes that allow people to acquire capacities that could not have been the direct result of evolution because they emerged too recently for evolution to have generated neural circuits specialized for them (e.g., reading and writing). Therefore, the brain structures that support them must be assigned and/or shaped during individual development. These cultural inventions make use of evolutionarily older brain circuits to support new skills but necessarily inherit many of their constraints. This has epistemological implications. Our knowledge and ability to obtain knowledge depend upon the tractability (or lack thereof) of these reused circuits.
The neural exploitation model (Gallese, 2007; Gallese & Lakoff, 2005) emphasizes and provides an explanation for metaphor and embodied cognition. This model avers that our thoughts are largely tied to sensory and motor circuits (these are the workings). That is, higher-order cognitive processes are not disembodied arbitrary symbols but are dependent upon bodily functions and so are literally embodied. We evolved from sensing and moving to thinking, planning, and communicating through language (cf. Kiverstein, 2010). A great deal of data support this idea of exaptation of sensory and motor neurons to higher-level cognitive functioning. Damasio and his colleagues showed that when participants were asked to think about verbs, motor circuits in the brain were activated, whereas when they thought about nouns, neural circuits dedicated to visual processing fired (Damasio & Tranel, 1993; Damasio et al., 1996; Martin et al., 1995, 1996, 2000). Ekman et al. (1983) reported that anger results in raised skin temperature, elevated blood pressure, and interference with visual perception and fine motor control. Lakoff (1987) related this to emotional metaphors like boiling mad and blind with rage, thereby demonstrating that emotional metaphors are related to the physiology of emotions. (Also see Kövecses, 2000, 2002.) Damasio (1996) found that the emotional bodily experiences reported by Ekman et al. can be connected to somatosensory neural circuits. This suggests that emotions may actually consist of the bodily effects reported by Ekman et al. Lakoff (2014) cites and summarizes further supportive studies. These data seem to show that sensorimotor embodiment plays a role in abstract concepts. In this neural reuse model, the neural circuitry for the concrete (physical) and the abstract (mental) is, to a large degree, one and the same. The division between concrete and abstract is not ontological. Instead, it is based on whether the object of study is inside or outside of the organism. Physical objects, their properties, and actions in the world are outside the person and are therefore seen as concrete. Emotions, metaphors, needs, ideas, and complex cognitions are seen as abstract because they are inside the person. But both are processed the same way. From the point of view of brain processes, there is no difference between inside and outside, between abstract and concrete. There is no difference in how these different experiences are processed. All are embodied in the brain.
Although neural reuse allows for some flexibility in functioning, it also posits some limits. We cannot transcend our workings. Even when we combine them in new ways, they still retain their original function. Thus, the model is more flexible than is massive modularity but not as plastic as is PDP. The concept of workings also supports the idea that much of our cognition, much of our mind, is literally (physically) based on (connected to) workings that originally evolved for physical (e.g., sensory and motor) purposes. Thus, the model accounts for, in fact requires, embodied cognition.
As in PDP, as the person’s experiences grow, certain commonalities between situations get abstracted out. The connections they have in common become strengthened across different situations. This results in generalizations and schemas. The conditions that differentiate disparate experiences are not lost however. They result in discrimination and context effects as some units are strengthened and others not depending upon unique aspects of experience. Finally, since no two situations can be exactly identical, what is abstracted out as general and what is differentiated as unique or contextual are approximations. So concepts and context effects would have what are often called fuzzy boundaries. Since many processes occur at once (parallel processing), not all will be in sync. Some simultaneous processes would be coordinate, some would be orthogonal or irrelevant, and some would be in opposition to one another. So conflict is inevitable (as the massive modularity aspect of the model avers). How would it be resolved? Through settling on the best solution, as the PDP aspect of this model would argue. Such a solution can never be ideal. Thus, compromise is built into the model as a natural outgrowth of the way it functions. Some of the factors that would have to be integrated into any solution would include emotional and motivational processes (Westen & Gabbard, 2002). Additionally, the solution would have to involve implicit learning.
These cognitive neuroscience models all support a psychodynamic model of the mind related to but far from identical with that promoted by psychoanalytic thinking. In their final chapter, Weinberger and Stoycheva (2020) provide a more detailed review of these models, say what they have to teach us about the mind/brain and unconscious processing, as well as relate them to the theory and practice of psychotherapy.
References
Anderson, J. R. (2007). How can the human mind occur in the physical universe? Oxford,
England: Oxford University Press.
Anderson, M. L. (2008). Circuit sharing and the implementation of intelligent systems.
Connection Science, 20, 239-251.
Anderson, M. L. (2010). Neural reuse: A fundamental organizational principle of the brain.
Behavioral and Brain Sciences, 33, 245-313.
Anderson, M. L. (2014). After phrenology: Neural reuse and the interactive brain.
Cambridge, MA: MIT Press.
Anderson, M. L., Brumbaugh, J., & Suben, A. (2010). Investigating functional cooperation in
the human brain using simple graph-theoretic methods. In Computational Neuroscience. New York: Springer.
Anderson M. L., Kinnison J, & Pessoa L. (2013). Describing functional diversity of brain regions and brain networks. Neuroimage, 73, 50-8.
Bar-Anan, Y., Wilson, T. D., & Hassin, R. R. (2010). Inaccurate self-knowledge formation as a result of automatic behavior. Journal of Experimental Social Psychology, 46, 884-894.
Bargh, J. A. (1994). The Four Horsemen of automaticity: Awareness, efficiency,
intention, and control in social cognition. In R. S. Wyer, Jr., & T. K. Srull (Eds.), Handbook of social cognition (2nd ed., pp. 1-40). Hillsdale, NJ: Erlbaum.
Bargh, J., Schwader K., Hailey S., Dyer R., & Boothby E. (2012). Automaticity in social-
cognitive processes. Trends in Cognitive Science, 16, 593-605.
Baumeister, R., Bratslavsky, E., Finkenauer, C., & Vohs, K. (2001). Bad is stronger than
good. Review of General Psychology, 5, 323-370.
Bornstein, R. F. & Masling, J. M. (Eds.). (1998). Empirical perspectives on the psychoanalytic unconscious. Washington, DC: American Psychological Association.
Campbell, K. & Sedikides, C. (1999). Self-threat magnifies the self-serving bias: A meta-
analytic integration. Review of General Psychology, 3, 23-43.
Carruthers, P. (2006). The architecture of the mind: Massive modularity and the flexibility of
thought. New York: Oxford University Press.
Chan, K. Q., Tong, E. M., Tan, D. H., & Koh, A. H. Q. (2013). What do love and jealousy taste
like? Emotion, 13, 1142-1149.
Cleeremans, A. (2014). Connecting conscious and unconscious processing. Cognitive Science, 38, 1286-1315.
Cleeremans, A. & Dienes, Z. (2008). Computational models of implicit learning. In R. Sun (Ed.), The Cambridge handbook of computational modeling. Cambridge, England: Cambridge University Press. (pp. 396-421).
Cusimano, C., & Goodwin, G. P. (2019). Lay beliefs about the controllability of everyday mental states. Journal of Experimental Psychology: General. 148, 1701-1732.
Cusimano, C., & Goodwin, G. P. (2020). People judge others to have more voluntary control over beliefs than they themselves do. Journal of Personality and Social Psychology, Advance online publication. http://dx.doi.org/10.1037/pspa0000198.
Damasio, A. R., Everitt, B., & Bishop, D. (1996). The somatic marker hypothesis and the possible functions of the prefrontal cortex [and discussion]. Philosophical Transactions: Biological Sciences, 351(1346), 1413-1420.
Damasio, A. R. & Tranel, D. (1993). Nouns and verbs are retrieved with differently distributed neural systems. Proceedings of the National Academy of Sciences of the United States of America, 90, 4957–4960.
Dehaene, S. & Cohen, L. (2007). Cultural recycling of cortical maps. Neuron, 56, 384-398.
Descartes, R. (1642). Meditations on First Philosophy. Broadview Press.
Ekman, P., Levenson, R., & Friesen, W. (1983). Autonomic nervous system activity distinguishes among emotions. Science, 221(4616), 1208-1210.
Freud, S. (1915). The unconscious. In The Standard Edition of the Complete Psychological Works of Sigmund Freud, Volume XIV. (1914-1916): On the History of the Psycho-Analytic Movement, Papers on Metapsychology and Other Works, (pp. 159-215).
Gallese, V. (2007). Before and below “theory of mind”: embodied simulation and the neural correlates of social cognition. Philosophical Transactions of the Royal Society of London Series B: Biological Science, 362, 359-369.
Gallese, V. & Lakoff, G. (2005). The brain’s concepts: the role of the sensory-motor system in conceptual knowledge. Cognitive Neuropsychology, 22, 455-479.
Gilbert, D. & Malone, P. (1995). The correspondence bias. Psychological Bulletin, 117, 21-38.
Glenberg, A. M., Brown, M., & Levin, J. (2007). Enhancing comprehension in small reading groups using a manipulation strategy. Contemporary Educational Psychology, 32, 389-399.
Goldiamond, I. (1958). Indicators of perception: I. Subliminal perception, subception, unconscious perception: An analysis in terms of psychophysical indicator methodology. Psychological Bulletin, 55, 373-411.
Hinton, G. E., McClelland, J. L., & Rumelhart, D. E. (1986). Distributed representations. In D. E. Rumelhart, J. L. McClelland, & the PDP Research Group (Eds.), Parallel distributed processing: Explorations in the microstructure of cognition (Vol. I). Cambridge, MA: MIT Press. (pp. 77-109).
Holender, D. (1986). Semantic activation without conscious identification in dichotic listening, parafoveal vision, and visual masking: A survey and appraisal. Behavioral and Brain Sciences, 9, 1-23.
Jones, E. & Harris, V. (1967). The attribution of attitudes. Journal of Experimental Social Psychology, 3, 1-24.
Kahneman, D. (2011). Thinking, fast and slow. New York: Farrar, Straus, and Giroux.
Kiverstein, J. (2010). No bootstrapping without semantic inheritance. Behavioral and Brain
Sciences, 33, 279-280.
Klein, D. B. (1977). The unconscious: Invention or discovery? A historico-critical inquiry. Oxford, England: Goodyear Publishing.
Kövecses, Z. (2000). Metaphor and emotion: Language, culture, and body in human feeling. Cambridge: Cambridge University Press.
Kövecses, Z. (2002). Metaphor: A practical introduction. New York: Oxford University Press.
Kurzban, R. (2010). Why Everyone (Else) Is a Hypocrite: Evolution and the Modular Mind. Princeton, NJ: Princeton University Press.
Lakoff, G. (1987). Women, fire, and dangerous things: What categories reveal about the mind. Chicago: University of Chicago Press.
Lakoff, G. (2014). The all new don’t think of an elephant! Know your values and frame the
debate. White River Junction, VT: Chelsea Green Publishing.
Landau, M. J., Vess, M., Arndt, J., Rothschild, Z. K., Sullivan, D., & Atchley, R. A. (2010).
Embodied metaphor and the “true” self: Priming entity expansion and protection influences intrinsic self expressions in self-perceptions and interpersonal behavior. Journal of Experimental Social Psychology, 47(1), 79-87.
Laws, K. R., Adlington, R. L., Moreno-Martinez, F. J., & Gale, T. M. (2010). Category-specificity: Evidence for modularity of mind. Hauppauge, NY: Nova Science Publishers.
Lewicki, P. (1986). Nonconscious social information processing. New York: Academic Press.
McClelland, D. C. (1961). The achieving society. New York: Van Nostrand & Company.
McClelland, D. C. (1987). Human motivation. New York: Cambridge University Press.
McClelland, D. C., Koestner, R. F., & Weinberger, J. (1989). How do self-attributed and implicit motives differ? Psychological Review, 96, 690-702.
McClelland, J. L., Rumelhart, D. E., & Hinton, G. E. (1986). The appeal of parallel distributed
processing. In D. E. Rumelhart, J. L. McClelland, & the PDP Research Group (Eds.), Parallel distributed processing: Explorations in the microstructure of cognition (Vol. I). Cambridge, MA: MIT Press. (pp. 3-44).
Maia, T. V. & Cleeremans, A. (2005). Consciousness: Converging insights from connectionist modeling and neuroscience. Trends in Cognitive Sciences, 9, 397-404.
Martin, A., Haxby, J., Lalonde, F., Wiggs, C., & Ungerleider, L. (1995). Discrete Cortical Regions Associated with Knowledge of Color and Knowledge of Action. Science, 270, 102-105.
Martin, A., Ungerleider, L., & Haxby, J. (2000). Category specificity and the brain: The sensory/motor model of semantic representation of objects. The New Cognitive Neurosciences, 2, 1023-36.
Martin, A., Wiggs, C., Ungerleider, L., & Haxby, J. (1996). Neural correlates of category-specific knowledge. Nature, 379, 649-652.
Meier, B. P., Moeller, S. K., Riemer-Peltz, M., & Robinson, M. D. (2012). Sweet taste
preferences and experiences predict prosocial inferences, personalities, and behaviors. Journal of Personality and Social Psychology, 102, 163-174.
Meier, B. P. & Robinson, M. D. (2006). Does “feeling down” mean seeing down? Depressive symptoms and vertical selective attention. Journal of Research in Personality, 40, 451-461.
Milner, B. (2005). The medial temporal-lobe amnesiac syndrome. Psychiatric Clinics of North America, 28, 599-611.
Nisbett, R. E. & Wilson, T. D. (1977). Telling more than we can know: Verbal reports on
mental processes. Psychological Review, 84, 231-259.
Norman, D. A. (1986). Reflections on cognition and parallel distributed processing. In D. E. Rumelhart, J. L. McClelland, & the PDP Research Group (Eds.), Parallel distributed processing: Explorations in the microstructure of cognition (Vol. II). Cambridge, MA: MIT Press. (Chapter 26, pp. 531-546).
Pinker, S. (1997/2009). How the mind works (2009 ed.). New York, NY: W. W. Norton & Company.
Pinker, S. (1998). Words and rules. Lingua, 106, 219-242.
Pinker, S. (2005). So how does the mind work? Mind & Language, 20, 1-24.
Reber, A. S. (1989). Implicit learning and tacit knowledge. Journal of Experimental Psychology: General, 118, 219-235.
Reber, P. J. (2013). The neural basis of implicit learning and memory: A review of neuropsychological and neuroimaging research. Neuropsychologia, 51, 2026-2042.
Ren, D., Tan, K., Arriaga, X. B., & Chan, K. Q. (2015). Sweet love: The effects of sweet taste experience on romantic perceptions. Journal of Social and Personal Relationships, 32, 905-921.
Rogers, T. T. & McClelland, J. L. (2014). Parallel distribute processing at 25: Further
explorations in the microstructure of cognition. Cognitive Science, 38, 1024-1077.
Rumelhart, D. E., Hinton, G. E., & McClelland, J. L. (1986). A general framework for parallel distributed processing. In D. E. Rumelhart, J. L. McClelland, & the PDP Research Group (Eds.), Parallel distributed processing: Explorations in the microstructure of cognition (Vol. I). Cambridge, MA: MIT Press. (pp. 45-76).
Rumelhart, D. E. & McClelland, J. L. (1986). PDP models and general issues in cognitive science. In D. E. Rumelhart, J. L. McClelland, & the PDP Research Group (Eds.), Parallel distributed processing: Explorations in the microstructure of cognition (Vol. I). Cambridge, MA: MIT Press. (pp. 110-146).
Rumelhart, D. E., McClelland, J. L., & the PDP Research Group. (1986). Parallel distributed
processing: Explorations in the microstructure of cognition, Volume I: Foundations & Volume II: Psychological and biological models. Cambridge, MA: MIT Press.
Sagioglou, C. & Greitemeyer, T. (2014). Bitter taste causes hostility. Personality and Social Psychology Bulletin, 40, 1589-1597.
Schacter, D. L. (1987). Implicit memory: History and current status. Journal of Experimental Psychology: Learning, Memory, and Cognition, 13, 501-518.
Schultheiss, O. C. (2013). The hormonal correlates of implicit motives. Social and Personality Psychology Compass, 7, 52-65.
Schyns, B. & Hansbrough, T. (2008). Why the brewery ran out of beer. Social
Psychology, 39, 197-203.
Sedikides, C., Campbell, K., Reeder, G., & Elliot, A. (1998). The self-serving bias in relational context. Journal of Personality and Social Psychology, 74, 378-386.
Silverman, L. H., Lachman, F., & Milich, R. (1982). The Search for Oneness. New York: International Universities Press.
Silverman, L. H. & Weinberger, J. (1985). Mommy and I are one: Implications for psychotherapy. American Psychologist, 40, 1296-1308.
Smith, C. E. (1992). Motivation and personality: Handbook of thematic content analysis. New York: Cambridge University Press.
Smith, E. R. (1996). What do connectionism and social psychology offer each other?. Journal of Personality and Social Psychology, 70, 893-912.
Squire, L. R. (2009). Memory and brain systems: 1969-2009. The Journal of Neuroscience,
29, 12711-12716.
Strack, F. & Deutsch, R. (2015). The duality of everyday life: Dual-process and dual-system models in social psychology. In M. Mikulincer & P. Shaver (Eds.) APA handbook of personality and social psychology, Vol. 1: Attitudes and social cognition. American Psychological Association. (pp. 891-927).
Tversky, A. & Kahneman, D. (1981). The framing of decisions and the psychology of choice. Science, 211, 453-458.
Weinberger, J., Chassman, E., & Delgado, B. (in press). Clinical implications of implicit and explicit motives. In O. Schultheiss & J. Brunstein (Eds.), Implicit Motives. Oxford University Press.
Weinberger, J., Purcell, A., & Knafo, G. (in press). The affiliation motive is complicated: So what else is new? In O. Schultheiss & J. Brunstein (Eds.), Implicit Motives. Oxford University Press.
Weinberger, J., & Stoycheva, V. (2020). The unconscious: Theory, research, and clinical implications. New York: Guilford.
Weiner, B. (1986). An attributional theory of motivation and emotion. New York: Springer Verlag.
Westen, D. & Gabbard, G. (2002). Developments in cognitive neuroscience: II. Implications for theories of transference. Journal of the American Psychoanalytic Association, 50, 648-655.
Whyte, L. (1960). The unconscious before Freud. Basic Books.
Williams, L. & Bargh, J. (2008a). Experiencing physical warmth promotes interpersonal warmth. Science, 322, 606-607.
Williams, L. & Bargh, J. (2008b). Keeping one’s distance: The influence of spatial distance cues on affect and evaluation. Psychological Science, 19, 302-308.
Winkielman, P., Niedenthal, P., Wielgosz, J., Eelen, J., & Kavanagh, L. C. (2015). Embodiment of cognition and emotion. In M. Mikulincer & P. Shaver (Eds.), APA handbook of personality and social psychology, Vol. 1: Attitudes and social cognition (pp. 151-175). American Psychological Association.
Winter, D. G. (2005). Things I’ve learned about personality from studying political leaders at a distance. Journal of Personality, 73, 557–584.
Zhong, C. B. & Leonardelli, G. (2008). Cold and lonely: Does social exclusion literally feel cold? Psychological Science, 19, 838-842.
