Social Cognition: Part 1

“People don’t just receive external information; they also process it and become architects of their own social environment” – Markus and Zajonc (1985)
Imagine yourself in a new environment full of people you’ve never met. For most, this is a stressful situation. Interacting with people without knowing how they see and interpret the world carries a fair degree of uncertainty, and therefore demands vigilance.

When we encounter new information, be it people or otherwise, we do a few things. We interpret the information, construct theories around it, and eventually make judgments based on prior experience and our own concepts of how others are likely to think.

Most of this happens unconsciously and automatically, and is the foundation of social cognition. When thinking about the world around us, we are highly influenced by our heuristics and the way in which we are socialised.

The looking-glass self, popular in psychology since the early 1900s, is the most basic way of looking at this process. The theory is that each of us derives our identity from how we perceive others to see us. This occurs due to theory of mind, a cognitive capacity that is especially well developed in humans.

Simply speaking: we change how we behave based on the people and environment around us.


Yeung and Martin present the theory in three stages: thinking about how we appear to others, reacting to their judgment of our appearance, and modifying our behaviour in response to those judgments. The second stage is the most interesting, as we can never be certain of how others are actually judging us (if at all).

We just make best guesses based on the context.

However, these guesses are often less than accurate. Humans are notorious cognitive misers, and are biased towards choosing the path of least resistance when it comes to forming judgments and theories.

In classical psychology, the model of the naïve scientist was one way of examining these cognitive shortcomings. The naïve scientist, in thinking about the world, seeks to create balance and control the environment around them.

Any new situation is interpreted by an individual through the lens of a scientist (albeit a naïve one) based on observations and common sense.

But ‘common sense’ is actually pretty subjective. If a situation does not fit our internal model of the world, it creates dissonance, and as recent research has shown, we engage in all manner of cognitive leapfrogging in order to reduce it.

Research into the naïve scientist theory has revealed a wealth of biases and heuristics that we employ daily, and often unconsciously, to circumvent unfamiliar situations.

The argument over how many of these biases are evolutionary is ongoing, but there is solid evidence that many developed to reduce cognitive load and generally make our lives easier. While these may have been very effective in our hunter-gatherer days, some are less than beneficial in the modern world.

Think of pattern-seeking (or categorisation heuristics). We have evolved over millennia to seek out patterns, as recognising them has proved far more beneficial than not over our evolutionary lifespan.


Even today, it does less harm to see a threat when there is none, than to not see a threat when it is lurking just around the corner.

For instance, if you hear a loud sound go off behind you, you will turn quickly to see what it is. The more often it goes off, however, the more you will habituate to it, as you have categorised it as non-threatening. If we didn’t have this heuristic, we would constantly be on edge even when the environment is free of threats.
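The habituation described above can be sketched as a simple decay curve. This is only a toy illustration, not a model from the literature; the exponential form and the decay rate are assumptions chosen for clarity.

```python
# Toy habituation sketch: the startle response to a repeated, non-threatening
# sound weakens with each exposure. The exponential decay and the rate of 0.5
# are illustrative assumptions, not empirical values.
def startle_response(prior_exposures: int, decay: float = 0.5) -> float:
    """Strength of the startle response (1.0 = full) after n prior exposures."""
    return decay ** prior_exposures

for n in range(4):
    print(f"exposure {n}: response = {startle_response(n):.2f}")
```

Each repetition halves the response here, mirroring how the sound gets recategorised from “possible threat” to “background noise”.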

But categorisation heuristics also mean we see patterns that are not there, or group things into narrow categories even when broad categories are more logical. We do this for ease of recall: the more categories we attach something to, the easier it becomes to retrieve. This is the nodal network model of memory, a popular theory to this day.
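The claim that more category links mean easier recall can be made concrete with a toy network. The concepts, categories, and the idea of counting “retrieval routes” below are all illustrative assumptions, a rough sketch of the nodal network idea rather than a faithful implementation of any memory model.

```python
from collections import defaultdict

# Toy semantic network: each concept is linked to the categories it belongs to.
# The entries are made up for illustration.
links = {
    "penguin": {"bird", "animal", "antarctic", "flightless"},
    "sparrow": {"bird", "animal"},
}

# Invert the map: each category becomes a cue that can activate concepts.
cues = defaultdict(set)
for concept, categories in links.items():
    for category in categories:
        cues[category].add(concept)

def recall_routes(concept: str) -> int:
    """Number of distinct cues that can lead back to the concept."""
    return sum(1 for category in cues if concept in cues[category])

print(recall_routes("penguin"))  # four cues can trigger recall
print(recall_routes("sparrow"))  # only two
```

On this sketch, “penguin” is reachable from four different cues while “sparrow” has only two, which is the intuition behind attaching things to many categories.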

The conjunction fallacy is a great example of a categorisation heuristic. Consider the following put forth by Tversky and Kahneman:

Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice. Which is more probable?

1. Linda is a bank teller.
2. Linda is a bank teller and is active in the feminist movement.
Because of our need to categorise, option 2 seems to make sense. Linda is interested in social justice, so why wouldn’t she be a feminist?  

Option 1 is the correct answer simply because there are more bank tellers in the world than there are bank tellers who are also feminists. Here, our preconceptions of how ‘representative’ Linda is of each option influence our ability to categorise in a rational manner.
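The underlying rule is that a conjunction can never be more probable than either of its parts: P(A and B) ≤ P(A). A quick simulation shows this; the population size and the base rates below are made-up numbers chosen purely for illustration.

```python
import random

random.seed(42)

# Hypothetical population: each person either is or is not a bank teller,
# and either is or is not a feminist. The 5% and 30% base rates are
# illustrative assumptions, not real statistics.
population = [
    (random.random() < 0.05, random.random() < 0.30)
    for _ in range(100_000)
]

tellers = sum(1 for teller, _ in population if teller)
teller_feminists = sum(1 for teller, feminist in population if teller and feminist)

p_teller = tellers / len(population)
p_both = teller_feminists / len(population)

# Conjunction rule: every "teller AND feminist" is also counted as a
# "teller", so the joint probability can never exceed the single one.
print(f"P(bank teller)              = {p_teller:.3f}")
print(f"P(bank teller AND feminist) = {p_both:.3f}")
assert p_both <= p_teller
```

However representative Linda seems of option 2, the set of feminist bank tellers is always a subset of the set of bank tellers, so option 1 can never be the less probable one.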

All of these ways of thinking – the looking-glass self, the naïve scientist, categorisation heuristics – boil down to motivated reasoning.

Every interpretation we make is done so to achieve the same goal: to interpret the world in a way that makes sense to us, reaffirming what we already believe. With motivated reasoning, we actively reduce the dissonance that occurs when things don’t make sense.

And as previously stated, what ‘makes sense’ to one person does not necessarily make sense to the next. For some, even objective truths don’t make much sense, which is why clinging to false beliefs is such an effective comfort strategy for those whose beliefs don’t match reality.

Next time: Intergroup relations and latitudes of judgment.

a.ce 
