Social Evolution Forum

In his target article Whitehouse describes a fascinating and extremely worthwhile program of research. We understand that this research is in its early stages, and so we are not too concerned that at the moment, his exposition of it raises many more questions for us than it answers. We offer up these questions, not really as criticisms, but more to help him communicate the value of his project by attempting to answer them in the future.

1. How prevalent is identity fusion?
The concept of identity fusion is introduced without any data (either here or – less forgivably – in the fuller treatment of the concept by Swann, Jetten, Gómez, Whitehouse, & Bastian, 2012) on how common a phenomenon it is, whether it takes place equally in men and women, the age at which it first takes place, etc. Without such data it is impossible to draw any conclusions on…

View original post 1,444 more words

Posted in Uncategorized | Leave a comment

Cultural Group Selection in Phase Transition

I made a comment on Peter Turchin’s blog post about cultural group selection. I think he has done some fascinating work, and I am quite envious of the people at the Frankfurt meeting (some of whom are my friends) who got to meet him!

Social Evolution Forum

I am writing this in Frankfurt, where we have just concluded a week-long meeting on cultural evolution. I was hoping to write about it earlier, but this meeting has been so intense that I literally could not find a couple of hours to put my impressions on paper (or computer screen). The meeting was organized by Strüngmann Forum. There are no talks. Instead some participants write position papers that serve as a basis for discussions (mine was on the evolutionary transition from small-scale to large-scale societies, naturally). During a typical conference there are always talks that are less interesting, and that gives one the opportunity to write something, but not in this one.

Most discussions were within four subgroups, meeting separately, although there also was plenty of opportunity to attend other groups. My group focused on the evolution of small-scale and large-scale societies in humans. We had a developmental psychologist…

View original post 622 more words

Posted in Uncategorized | 2 Comments

The extended evolutionary synthesis and the role of soft inheritance in evolution

Last month, Tom Dickins and Qazi Rahman published a provocative review article with the above title in Proc R Soc B (hat tip Emma Cohen):

http://rspb.royalsocietypublishing.org/content/early/2012/05/10/rspb.2012.0273.abstract?papetoc

I made a rather longwinded comment about the article on Emma’s Facebook page, which I thought it might be better to post here…

From a quick skim of the paper I find it a bit polemical. The argument is persuasive as far as it goes. I like the idea of “developmental calibration” (neat term). But the abstract and introduction seem much more ambitious than what they actually show in the paper. I just don’t see how demonstrating that epigenetic effects and learning biases can be adaptive in terms of an individual’s inclusive fitness invalidates the whole idea of “soft inheritance”.

Two observations:

(i) The article is really about epigenetics rather than soft inheritance more generally. They highlight niche construction in the abstract and introduction, but only use the term once after that, in their discussion of Bolhuis et al. in Section 4. Unless I missed it, nowhere do they directly address the key insight of niche constructionism, which is that genes can causally influence the environment as well as vice versa. The implication of this insight is that natural selection is a dynamic system: the idea that information flows only downstream, from environment to genotype to phenotype, is a useful fiction that (like Newtonian physics) may approximate 99% of reality, but breaks down under extreme circumstances – i.e. when animals evolve complex enough behaviour to change their environment radically. In these circumstances, does it make sense any more to speak of “maximising fitness”, since fitness is always relative to a given environment? Is the ability to change one’s environment part of one’s fitness? I would think it makes more sense to see things in terms of a fitness landscape with attractors dotted around it. I have no idea how to flesh that out theoretically, but I get the sense that the modern synthesis does break down a bit there.

(An aside about fitness: To be honest, the phrase “maximising fitness” gets my hackles up at the best of times. I tend to think of evolution as a deeply historical, stochastic process, and this phrase just seems like an abstraction too far to me. I don’t think I’m the only one… How does one know what the maximum fitness is in a given environment and for given developmental constraints? And as Emma pointed out, the maximum will vary wildly as the environment varies anyway. So maybe the phrase makes as much sense with niche construction as it ever does. I guess my intuition is just that allowing genes to feed back into the environment makes the system truly dynamic. And there is an important difference between viewing behaviour as an extended phenotype – as I suppose Dickins & Rahman do – and viewing it as niche construction, which is that niche construction affects all members of a community, whereas conceptually, an extended phenotype is likely to be seen as affecting only the bearers of a particular genotype. Maybe that distinction needs to be highlighted more, I don’t know: I’ve never seen it made before, but then I feel a bit out of my depth here … Sometimes I wish I had a biology degree!)

(ii) Section 5 mainly talks about rats. Despite the fact that they criticise Bolhuis et al.’s argument about human brain evolution, nowhere do they acknowledge the point that a human mind is qualitatively different from a rat mind. As Dawkins noted back in the 70s (and probably others before him) humans are different because we have symbolic cultural systems (e.g. language) that are replicated by exact imitation. This is a parallel evolutionary system, a form of “soft inheritance” that must interact with “hard inheritance” in conceptually very problematic ways. Yes, niche construction might be a footnote if we looked only at the rest of the animal kingdom, but one of the most exciting things about the idea – for me at least! – is that it provides a really nice way to think about how human culture may on the one hand be rooted in non-human social learning – since other animals practise ‘primitive’ forms of niche construction – while on the other hand taking things to a whole new level in terms of our ability to impact our environment. Dickins & Rahman’s focus on epigenetics, in this article, leaves that appeal completely intact, in my mind.

I’m really interested in getting a proper discussion going about this, particularly from people who have the biological training that I lack!

Posted in recent studies, Uncategorized | 4 Comments

From Tattling to Gossip: The Evolution and Development of Indirect Aggression

Just had an abstract proposal accepted for a special issue of Evolutionary Psychology on “Evolutionary Developmental Psychology” – see http://www.epjournal.net/special/call-for-papers-evolutionary-developmental-psychology/. It still has to go through peer review (submission deadline is 1st September) but I’m very excited about this because (a) it combines my main teaching interests next year (I’m doing a module on dev psych and an optional one on evol psych), and (b) it means I can integrate my PhD results on preschoolers’ tattling with my postdoctoral work on preadolescents’ conflicts, using my own theoretical framework.

From Tattling to Gossip: The Evolution and Development of Indirect Aggression

Adult humans are characterized by remarkably low rates of intra-group physical aggression, relative to other primates and social carnivores, contributing to our ability to live in large cooperative groups. A key ultimate mechanism supporting this adaptation is indirect reciprocity, and a key proximate mechanism relating to this is indirect aggression: the diversion of aggressive impulses into verbal attacks on someone’s reputation, made to a third party. In this article I trace the developmental processes by which aggressive impulses are trained into increasingly indirect pathways. Two major transitions are postulated: firstly during early childhood, when early forms of indirect aggression appear and direct aggression becomes increasingly inhibited; and secondly during early adolescence, when conceptions of social identity change and overt reporting of offences to authority figures outside the peer group becomes less desirable.
From the age of 2–3, children show a tendency to tattle: they overtly report normative transgressions by puppets in experimental settings, by siblings at home, and by peers at preschool. Tattling correlates with standard measures of indirect aggression, and it is noteworthy that measures of ‘indirect’ aggression among preschoolers focus on overt verbalizations, such as threatening not to invite another child to one’s birthday party. As children grow older, they gradually tattle less, and eventually judge it as appropriate only for serious transgressions: in adolescence, those who tattle may be socially derogated, just as adult whistleblowers are ostracized by their in-groups. Measures of indirect aggression among adolescents focus on more covert behaviour such as negative gossip. I argue that this is because adolescence is associated with a realignment of social identity, caused ultimately by new ontogenetic adaptations for mate selection. As children grow older, building reputation within the peer group becomes more important, and relying on adult intervention is no longer an adaptive strategy.

Posted in publications | Leave a comment

EHBEA abstract

It turns out I can’t make it to EHBEA 2012, so I thought I’d post the abstract of the talk I was going to give. Please contact me if you’re interested in finding out more about this study 🙂

Explaining gender differences in children’s accounts of interpersonal conflict: Evidence from three European countries

Gordon P. D. Ingram, Interactions Lab, University of Bath

Objectives

Evolutionary theory predicts several gender differences in interpersonal conflict, which should be present at least from early adolescence. It was investigated whether these were present in preadolescent children’s own accounts of conflicts that they had experienced. Gender differences were predicted in the frequency of reports of physically aggressive responses, anger, sadness, relationship-based competition, skill-based competition, and reconciliation. Since these differences were predicted to be culturally invariant, children from three different countries were assessed.

Methods

132 children (aged 9–12) in the UK, Portugal and Greece were interviewed about conflicts that they had personally experienced. Interviews were fully transcribed and coded for the six dependent variables listed above. Aggressiveness, victimization frequency and vocabulary level were controlled for using standard questionnaires, and entered as covariates in a logistic regression.

Results

Chi-square analyses indicated that for children from all countries, as predicted, girls were less likely to respond to conflict with physical aggression, more likely to engage in conflicts over friendship alliances, and less likely to engage in conflict over formal sports or games. Predicted differences in anger, sadness and reconciliation were supported only for UK children. Logistic regression showed that the gender effect on physically aggressive responses was mediated by general aggressiveness. Other effects were present even when including control variables.

Conclusions

The gender effect on physical aggression has a strong evolutionary rationale, since remaining free from injury is important for rearing offspring. Differences in skill-based and relationship-based competition were weaker and more variable, suggesting that they included components of culture and/or personality. Differences in the emotions aroused by conflict and in the ability to resolve conflicts effectively seemed highly culturally specific. 


Posted in conference-talks | Leave a comment

Infants’ sense of inequality

Inequality has been much in the news recently with the Occupy protests, with which I have a great deal of sympathy.

It has often been argued that people (and even capuchin monkeys!) have an innate sense of fairness, in that they prefer resources to be distributed equally rather than unequally in experimental situations. Jonah Lehrer at Wired Science goes over some of the science behind this in a good recent blog post. But he doesn’t include any developmental evidence. Two new studies reveal that by 15 months, infants are already sensitive to how resources are distributed.

In the first study, published in Developmental Science by Alessandra Geraci and Luca Surian from the University of Trento, 12-18-month-old infants watched two animated characters taking turns to distribute two tokens to two other characters: either one token each, or both to one character. Subsequently, infants looked longer at animations where a chicken (which had observed the distribution of the tokens) approached the “fair” distributor than at animations where it approached the “unfair” one. The authors interpret this as implying that the infants preferred this scenario (approaching the fair distributor) – though it has to be pointed out that if the reverse pattern had been found, they might have interpreted this as showing surprise that the chicken would approach the unfair distributor (a common failing of this sort of “preferential looking” paradigm!). A more solid finding (in my view) was that when given pictures of the two distributors’ characters to play with, infants preferred to take the picture of the fair distributor.

A similar experimental design with similarly aged infants, recently published in PLoS One, obtained rather different results. Marco Schmidt (of the Max Planck Institute for Evolutionary Anthropology) and Jessica Sommerville (of the University of Washington) also showed 15-month-old infants a video of a character doling out equal or unequal amounts of milk or cookies to two other characters (see diagram below):

This time, the infants looked longer (on average) at the “unfair” outcome, which (the authors argue) implies that they were surprised by it (see what I mean about the selective interpretation of preferential looking results). Geraci and Surian, by contrast, had found no difference in looking times between the fair and unfair outcomes – just in looking times for the chicken’s approach to the other characters. Unfortunately, Schmidt and Sommerville did not test whether children preferred to play with the fair or the unfair distributor. But they did look at the children’s own sharing behaviour, in terms of whether they gave a play partner a preferred or a less preferred toy. They found that those children who looked longer at the unfair outcome (i.e., were more surprised by it) were more likely to give their partner their preferred toy; while those who looked longer at the fair outcome were more likely to hand over the less appealing toy. This is a nice little addition to the experiment, since it hints at how personality differences affect attitudes to fairness, even so early in life (I complained in a previous post about how psychologists too often interpret quite small mean differences between groups/trials, driven by a minority of participants, as indicating universal psychological principles).

So what can we say from all this about infants’ sense of fairness? Both pairs of authors argue that their results show that a concern with equality starts very early – much earlier than in the classical theories of Piaget and Kohlberg (who thought that it only appeared with the onset of middle childhood, around age 6-7). But taken together, their results are less supportive of this argument than when considered separately. The problem is that if infants looked longer at the unfair outcome in Schmidt & Sommerville’s experiment because they were surprised by it, then surely in Geraci & Surian’s experiment their longer looking times when the chicken approached the fair distributor should also indicate surprise.

Moreover, Schmidt & Sommerville undermine the first part of their experiment when they highlight, in the second part, that almost half of their participants looked longer at the fair outcome! Such a pattern of results hardly supports a simplistic interpretation in terms of a general preference for fairness. The strongest finding from these two experiments is perhaps that 14 of the 20 16-month-old infants in Geraci & Surian’s study preferred to play with the fair distributor (3 preferred the unfair distributor and 3 failed to choose). As I say, it was a pity that this was not replicated in the other experiment, as manual choice seems a more reliable way of assessing preference than looking times.
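As a rough sanity check on that manual-choice result, the 14-versus-3 split can be run through an exact binomial test. This is just a sketch in Python, using the counts reported above and treating the three non-choosers as missing data:

```python
from math import comb

# Geraci & Surian's manual-choice data: of 20 infants, 14 chose the
# fair distributor, 3 the unfair one, and 3 made no choice.
choosers = 17          # infants who made a choice
fair_choices = 14

# Exact two-sided binomial test against chance (p = 0.5): the probability
# of a split at least this extreme if infants chose at random.
one_sided = sum(comb(choosers, k)
                for k in range(fair_choices, choosers + 1)) / 2 ** choosers
two_sided = min(1.0, 2 * one_sided)

print(f"two-sided p = {two_sided:.4f}")  # ~0.013, well below 0.05
```

So on its own terms, the manual-choice finding does look statistically solid, which is partly why it is a shame the other study did not include a comparable measure.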

So while I would love to say that infants have an innate preference for fair distribution of resources, the evidence is just not conclusive at this stage. The most that we can say is that infants are sensitive to the proportions in which resources are distributed (at 15 months but not at 10 months). The theories of Piaget and Kohlberg that true norms of equality do not arise until a few years later may be more intact than the authors imply in their Discussion sections.

Posted in recent studies, Uncategorized | Leave a comment

Does gossip really make people less likeable?

Gossip
Photograph by kamshots

For anyone interested in gossip, there was a little study published by Sally Farley a few days ago in the European Journal of Social Psychology (here are links to the original article and the blog post where I found it). She divided her student participants into groups, and asked each group to think about someone they knew who talked about absent people either frequently or infrequently, and who said either positive or negative things about them (the word “gossip” was not used directly, because of its negative connotations). She then gave the participants questionnaires to indicate how much they liked that person, and how much social power they thought that person had.

The headline finding was that people thought that those who gossip frequently are less likeable, and have less social power, than those who gossip infrequently. Actually, though, there was a bit of spin here (sorry to rant on about this sort of thing again after my last post, but it is annoying how often academics will spin some rather unexciting results into something that looks more interesting).

While it is true that an ANOVA revealed a main effect of gossip frequency, if you look at Tables 1 and 2 in the article (Table 2 is reproduced below) it is clear that this effect was driven solely by one of the four conditions – the one in which the imagined acquaintance produced a high frequency of negative gossip. The high-frequency, positive gossip condition was almost identical, on both power and likeability scores, to the low-frequency, positive gossip condition, and even slightly (though non-significantly) higher than the low-frequency, negative gossip condition:

                    Gossip frequency
                    High             Low
Gossip valence      M       SD       M       SD
Positive            48.44   8.37     48.63   8.27
Negative            37.09   10.59    46.03   10.08

Table 2 (from Farley, 2011). Mean liking ratings as a function of gossip valence and gossip frequency
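To make the arithmetic behind that claim explicit, here is a quick back-of-the-envelope calculation from the cell means in Table 2. It is only a sketch: I am assuming equal cell sizes, so the unweighted marginal means stand in for the ANOVA’s main effect of frequency.

```python
# Cell means (liking ratings) from Farley's Table 2:
means = {
    ("positive", "high"): 48.44, ("positive", "low"): 48.63,
    ("negative", "high"): 37.09, ("negative", "low"): 46.03,
}

# Marginal "main effect" of frequency, collapsing over valence...
high = (means[("positive", "high")] + means[("negative", "high")]) / 2
low = (means[("positive", "low")] + means[("negative", "low")]) / 2
print(f"high vs low frequency (marginal): {high:.2f} vs {low:.2f}")

# ...but the simple effects tell a different story: the frequency gap
# is negligible for positive gossip and large for negative gossip.
pos_gap = means[("positive", "low")] - means[("positive", "high")]
neg_gap = means[("negative", "low")] - means[("negative", "high")]
print(f"frequency gap, positive gossip: {pos_gap:.2f}")  # ~0.19
print(f"frequency gap, negative gossip: {neg_gap:.2f}")  # ~8.94
```

The marginal difference of about 4.5 points is real enough, but it is almost entirely contributed by the negative-gossip cells, which is exactly the problem with the headline interpretation.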

The problem for Farley is that reading that “people who say nasty things behind others’ backs are disliked” is not nearly as exciting as reading that “people who gossip are disliked” – so she naturally puts the emphasis on the main effect rather than on a more fine-grained, post hoc analysis. The headline claim would be interesting because it seems to fly in the face of popular theories such as those of my former colleague Robin Dunbar (whose book Grooming, Gossip, and the Evolution of Language (2004) is well worth a read, for those who have not yet come across it), which hold that gossip is a kind of “relational glue” that helps to hold human societies together.

To be fair, she had also hypothesised in this study that people who frequently spread positive gossip would be more liked than people who infrequently did so, so she is legitimately able to claim that this hypothesis has been refuted by her data. This prediction was based on observational studies that showed that people who occupy a strong position in social networks tend to do a lot of gossip (in line with Dunbar’s theory). The trouble is, I am not sure that her methodology is as well placed as observational methods to refute this idea. Apart from the obvious possible influence of norms against gossip in an interview scenario (which I don’t think simply not mentioning the word gossip would entirely obviate), this is mainly because she asked participants to imagine “people who spent a lot of time talking about other people when they (were) not around”. Pragmatically, this seems to indicate people who spend an abnormal amount of time talking about other people, and abnormal tendencies rarely attract positive judgements. It also perhaps indicates people who talk about people behind their backs relatively more than they talk about them to their face – hardly a quality associated with social power or likeability.

It would have been nice, therefore, to include some sort of control conditions: perhaps considering people who talk “a lot” about something asocial, like a hobby (are they seen as boring, and therefore unlikeable?); or people who criticise other people to their face as well as behind their backs (this may be what a lot of socially powerful people are really like); or people who spread a lot of neutral, objective gossip (those who are central to social networks may spend a lot of time talking about social facts, e.g. who is friends with whom, who has just got married or had a baby, or who is earning lots of money, without necessarily praising or criticising people all the time). Farley would have actually had enough participants to add more conditions, since she originally planned a 2x2x2 between-groups ANOVA (i.e. 8 different conditions), varying target gender (the gender of the imagined person) as well as the frequency and valence of gossip, only to discard target gender because it had no effect. Target gender was a bit of a silly choice for a between-groups factor, though, because she could have just asked participants to imagine both a man and a woman who gossiped a lot/little.

Neither was I convinced by the author’s explanation for why negative gossips were so disliked. She linked this result to the transfer of attitudes recursively (TAR) effect, by which making positive/negative statements about a third party tends to lead an audience to make identically valenced judgements about the speaker. The problem is that her data does not show this, because the frequent positive gossipers do not seem to have had any positive attitudes transferred to them. It seems rather as if participants are (not unreasonably) singling out frequent negative gossip as a reliable indicator of low power/likeability, while not making any particular judgements about the other conditions.

So, on the whole I was not terribly impressed by this little article, I’m afraid; but it is certainly an interesting area and it opened up some empirical ideas for me in terms of testing similar hypotheses in online social networks. Does making positive or negative FB posts make you more or less liked? What about making such posts about public figures who are themselves liked/disliked?

Posted in recent studies | Leave a comment

Imitation and reliability in infants

A recent article by Diane Poulin-Dubois and colleagues at Concordia University is interesting both because it reports on a fascinating area of study (imitation in infants) and because it illustrates several common flaws in experimental psychology. The original article is here and you can read about it in this blog post.

Briefly, Poulin-Dubois et al. primed 14-month-old toddlers with the actions of either a “reliable” or an “unreliable” adult. The reliable adult would look inside a container, which the infants had previously been led to expect might contain a toy. The adult would put on a happy face as if seeing something fun, then hand over the container (in which there was, indeed, a fun toy) to the infant. In the “unreliable” condition, everything was the same except that the container did not contain a toy.

In the second part of the experiment, infants observed the same adult turning on a light switch with their forehead (as in the well-known experiments of Gergely et al., 2002). They were then encouraged to imitate the adult with the words “Now it’s your turn.” Significantly fewer infants imitated the adult model exactly (i.e., using their forehead to turn on the light rather than, more naturally, their hands) in the unreliable condition than in the reliable condition.

First of all, kudos to the authors for a very elegant experimental design, neatly combining the two paradigms of selective learning (as in Paul Harris’s work) and imitation (as in Gergely & Csibra’s work). My issue – as so often in experimental psychology – is not with the design but with the interpretation, which is wildly overblown. I initially thought the title of the blog report I just linked to (“Toddlers Won’t Bother Learning from You if You’re Daft”) might be misrepresenting the authors’ argument, only to find that they make similar claims in the original article (e.g., in both the title, “Infants Prefer to Imitate a Reliable Person”, and the discussion, ” … the same behavior performed by a previously unreliable adult is interpreted as irrational or inefficient, thus not worthy of imitating”).

There are three main flaws with this argument, all of which are common flaws in experimental psychology. First, “reliability” may be too narrow an interpretation of whatever property of the adult’s behaviour is influencing the infant’s behaviour. Put yourself in the toddler’s bootees. In one condition you have an adult who makes nice smiley faces and keeps showing you a fun toy; in another, an adult who also makes nice smiley faces but who keeps showing you an empty container. Which one is more fun, and therefore more worthy of attention? In order to isolate “reliability” as the relevant property, one would need two additional control conditions in which neutral faces were used. (If it’s all about reliability, an adult who makes a neutral face and shows the infant a toy should be less worthy of imitation than one who makes a neutral face and shows them an empty container. I’ll leave it for the reader to judge the plausibility of that prediction.)

To their credit, Poulin-Dubois et al. do acknowledge this possibility – and the need for follow-up studies along the lines I just mentioned – in their discussion. A second flaw is more serious: the over-ascription to an entire population of a property that has been demonstrated in a sub-group. (Again, this is all too common in psychology: I am guilty of it myself, in an article where I discussed the implications of children’s generic tendency to tattle on peers, even though I had observed that several children never tattled at all.) If we look at the actual data for this study, we find that 61% of children imitated the model in the reliable condition, and 34% imitated in the unreliable condition. Assuming that individual performances would be consistent across trials, this suggests that about a third of 14-month-olds do not imitate strangers, about a third do imitate strangers, and about a third are sensitive to the stranger’s “reliability” (or whatever the relevant property is). This is not at all what the authors imply in the quotations above – namely, that all infants are sensitive to a model’s reliability.


Fig. 2. Percentage of children who use their forehead or hand to imitate in each reliability condition. (from http://www.sciencedirect.com/science/article/pii/S0163638311000221#bib0065)

I think these two criticisms are particularly strong when put together. Really there is a whole package of differences between the two conditions. Some individuals are likely to be sensitive to some of the differences (e.g. the difference in reliability), others to other differences (e.g. whether they actually get shown a toy). So the main conclusions that we can draw from this study are that imitation varies according to the social context, and that different individuals are (already, at 14 months) sensitive to different aspects of the social context. Reliability may be one relevant aspect of the social context, but from this study alone, it’s hard to be sure. (This is not really a direct contradiction of what the authors are saying, but semantics is important, as it shapes how we think about what we are studying.)

Actually, though, even this conclusion may be going too far, because my third criticism calls into question whether the authors have even shown a reliable difference in imitation per se. The third, and perhaps the most nefarious, common flaw in experimental psychology is to engineer an analysis that suits one’s conclusions. I didn’t notice this in the current study at first, but became troubled when I realised that they had completely excluded those individuals who did not touch the light switch at all. This might have been fine if more infants had failed to touch the switch in the unreliable condition; but in fact, 10 infants failed to touch it in the reliable condition, compared to only 3 in the unreliable condition!

This is a bit weird. “Fussy” infants (those who do not behave themselves during the experiment) had already been excluded, so I don’t think the problem here is a lack of attention paid to the model. Are we supposed to believe that a complete failure to emulate the goal of the adult (turning on the light) is irrelevant to the analysis? Given the three action possibilities of imitating exactly, emulating the goal, and completely ignoring the model, I can think of three ways of analysing the data:

(1) Define imitation as exact imitation, and compare its frequency with emulating + ignoring

(2) Define imitation as exact imitation + goal emulation, and compare their combined frequency with ignoring.

(3) (The most impartial option): Compare the frequencies of all three types of action across the two conditions.

Ignoring the ignorers is not really a sensible option, because if we reverse-engineer the frequencies of each action (they only give percentages) we get the following:

                        Exact imitation   Goal emulation   Ignoring
Reliable condition      14.5              9.5              10
Unreliable condition    10                19               3

It would be interesting to get hold of the raw data to see which differences are statistically significant, but it is already notable that in both conditions, exact imitation only took place in a minority of cases – not really in keeping with the authors’ message. Furthermore, although the sample size is small, it looks like the higher frequency of ignoring in the “reliable” condition is comparable to the lower frequency of emulation. My suspicion is that if all three options were included in the analysis, the impact of condition would be insignificant in the context of the overall error variance.
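To see how much excluding the ignorers matters, here is a quick sketch using the reverse-engineered counts above. The fractional values come from converting the reported percentages back into frequencies, so treat the exact numbers as approximate:

```python
# Reconstructed counts from Poulin-Dubois et al. (reverse-engineered
# from the reported percentages, hence the fractional values):
counts = {
    "reliable":   {"exact": 14.5, "emulate": 9.5, "ignore": 10},
    "unreliable": {"exact": 10,   "emulate": 19,  "ignore": 3},
}

for condition, c in counts.items():
    touchers = c["exact"] + c["emulate"]   # the infants the authors analysed
    everyone = touchers + c["ignore"]      # option (3): include the ignorers
    print(f"{condition}: exact imitation = "
          f"{c['exact'] / touchers:.0%} of touchers, "
          f"{c['exact'] / everyone:.0%} of all infants")
```

On these numbers, the between-condition gap in exact imitation shrinks from roughly 26 percentage points (ignorers excluded, as in the published analysis) to roughly 11 points (ignorers included) – which is exactly why the choice of denominator deserves more scrutiny than it gets.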

This does make me a little wary of the original experiments by Gergely and colleagues, and I will have a look at whether they included “ignoring” in the analysis: it seems a little arbitrary to exclude it. Another revelation for me is that imitation is actively encouraged in the child by the exhortation “Now it’s your turn.” Presumably Gergely did that too, yet his experiments are often discussed (and compared to similar experiments with chimpanzees) as if they are examples of spontaneous imitation. Children’s propensity to take part in imitation games – an activity which it is obviously harder to encourage in chimps – has quite different theoretical implications, it seems to me …

Posted in recent studies, useful for book | Tagged , , , , | 2 Comments

The meaning of “meant to”

Having neglected my blog for a few months, I’m going to be devoting some time to it every Friday – basically, taking Fridays as a “work-for-myself” kind of day.

This next post has been a toughie to get out, as well. In contrast to the first two, which were very applied, this is a bit of a philosophical one. I’m not really sure where I’m going with this, but bear with me: any reactions of any form are welcome!

Some years ago, when I first got into cognitive science, I became interested in the work of the philosopher of language, H. P. Grice. Grice was hugely influential on the development of pragmatics – the study of the little rules and procedures (both linguistic and non-linguistic) that help embed language in a social context and allow speakers to engage in meaningful discourse. Perhaps his biggest contribution was his theory of conversational implicature: a set of unspoken maxims that govern our participation in discourse. Before developing that theory, though – and in some ways in preparation for it – he wrote a famous article called “Meaning”, in which he distinguished between two types of meaning: natural and non-natural.

An example of natural meaning would be the sentence Those spots mean that he has measles. An example of non-natural meaning, That red light means “stop”. In other words, with natural meaning one thing necessarily follows from the thing that means or implies it – it is a fact of life; whereas with non-natural meaning there is a conventional linkage between them – though Grice denies that this linkage is always conventional “in any ordinary sense”, citing “certain gestures” as counter-examples (without stating which ones! – a rather odd omission, since many gestures are pretty clearly conventional, and many others are equally clearly examples of natural meaning).

Noting the inadequacy of the behaviourist approach of modelling “timeless” non-natural meaning – of the form “(a word) x means (a definition) y” – in terms of a general tendency to produce a certain attitude in an audience, and to be produced by a certain attitude in a speaker, Grice proposes that one promising way to elucidate such timeless meanings is to analyse the meaning of statements like “x meant something (on a particular occasion)”. He ends up with the famous formulation that “‘A meantNN something by x’ is (roughly) equivalent to ‘A intended the utterance of x to produce some effect in an audience by means of the recognition of this intention’.”

I found this article very influential when I first read it at the age of 22, and I especially liked (and still like) his parting shot that “to show that the criteria for judging linguistic intentions are very like the criteria for judging nonlinguistic intentions is to show that linguistic intentions are very like nonlinguistic intentions.” But I am now much more sceptical about whether modelling meaning in terms of intention actually gets us any closer to an understanding of “timeless” meaning. For a developmental psychologist like myself, one obvious problem is that children start using language (and therefore meaning) correctly well before they are capable of doing something like modelling (at least explicitly) whether an audience has recognised their intention. Grice himself recognises that people cannot be doing this kind of computation of intentions (at least explicitly) every time they hear a word. He tries to square this circle by arguing that “an utterer is held to intend to convey what is normally conveyed (or normally intended to be conveyed), and we require a good reason for accepting that a particular use diverges from the general usage”. But since he also writes that the timeless meaning of x (presumably = “the general usage”) might “as a first shot” be equated with what people in general intend to mean by x, this line of argument is not merely circular so much as completely evasive – it just does not seem to get near the nature of timeless meaning at all. Surely the normativeness of “what is normally conveyed” is what is actually at the heart of meaning.

This rather long preamble brings me to the real point of my post, which is to record how I was struck, a couple of months ago, by the point that there is another sense of “meaning” which is not covered by either pole of Grice’s natural/non-natural dichotomy. Namely, the expression “meant to”, as in “He was meant to be here by five o’clock”, “You are meant to put it in neutral before you pull the handbrake”, “That window was meant to go in the east wall of the house, not the west wall”, etc. This is not an example of natural meaning, because as Grice himself points out, natural meanings cannot be put in the passive: you can’t say, “Measles was meant by those spots”, or “Fire was meant by that smoke”. But nor is it an example of non-natural meaning, despite the fact that it has a very close relationship with intentionality (indeed the active form, e.g. “I meant him to be here by five o’clock”, is almost exactly synonymous with “intended”, though for some reason “intended” cannot easily be put in the passive either). One symptom of this is that the meaning cannot be put in scare quotes, as with “A red light means ‘stop’”. Nor is it possible to say something like “That window was meant to be in the top window, but actually it’s this window”.

When I noticed this usage of meant, I was struck by how it chimed with some of the ideas of my former PhD supervisor, Jesse Bering, about how humans are predisposed to think of their lives as having meaning and purpose. This may be a special case of teleological reasoning: the idea (similar to Intelligent Design) that various features of the universe are there for a purpose. Deborah Kelemen and others have argued that teleological reasoning is a natural feature of children’s cognition, and that they have to basically unlearn this way of thinking and realise that some things just happen, without any apparent purpose. I was thinking along these lines because I had in mind the expression You’re meant to …, as in, “You’re meant to wear black tie to this sort of occasion.” (Who is doing the meaning here – society?) It is certainly reminiscent of expressions like “It was meant to be.” (Who is doing the meaning there – God?) But then I started thinking of more tightly focused expressions like “He was meant to be here by now,” and that’s when the comparison with Grice’s argument really hit home.

What hit me was that Grice’s distinction between “personal” meaning (He meant x by this utterance) and “timeless” meaning (this utterance means x) seems to be exactly paralleled by the distinction between “I meant him to be here by five o’clock” and “He was meant to be here by five o’clock”. Who is doing the meaning in the latter case – me? Not necessarily: it could be me, me and you, me and him, me and you and him, or me and any number of other people. Arguably it may not even include me (if other people were entirely responsible for getting him here, and I had no control over them, in what sense did I really “intend” him to be here?) The reference of the agent seems to be always left very vague. And what is really intriguing is that one can’t even clarify it by saying “He was meant to be here by someone” – this is actually ungrammatical. I don’t know why, but it is. (Nor can one say, as I mentioned earlier, “He was intended to be here”.) No, this little usage of meant to is actually a micro-example of timeless meaning – of a convention shared by a social group.

This casts some doubt, I think, on Grice’s wisdom in attempting to analyse timeless meaning in terms of personal meaning. If even a small case of two people being meant to be somewhere at the same time does not reduce to the intentions of the individuals involved, then why should we think that the meaning of a linguistic sign reduces to the intentions of speaker and listener? It is far more fruitful, it seems to me, to think of the meaning of any sign as containing a huge element of common ground: an intuitive, unstated understanding of what the sign is referring to. The speaker’s intention interacts with this, sure, but it functions much like a point: this aspect of the thing is what I am talking about. The thing itself is presupposed.

What I am saying probably fits in with the work of more recent philosophers of collective intentionality (or we-intentionality), such as John Searle, who has to some extent followed in Grice’s footsteps. Unfortunately, I don’t know enough about their work to be sure, but I will check them out soon.

One last point to finish off with … coming back to teleological reasoning. We can use meant to with an artefact, as in, “This part is meant to go there.” In such cases, it might be natural to assume that the agent behind the meaning is the designer of the artefact. But if we don’t know the agent for the cases of people being meant to do something, why do we need to know it for artefacts? It seems more plausible to me that, again, this is a statement about how some group of people generally use (or ought to use) the artefact, just as a statement about people being meant to do something is a statement about how some group of people generally behave (or ought to behave). The designer’s original intentions are irrelevant. Hence, the prevalence of teleological reasoning is no argument for the naturalness of belief in some sort of Creator (though it may tempt people into inventing such a figure).

Posted in philosophical ramblings, work in progress | Tagged , , , | Leave a comment

Is aggression adaptive?

Sticking with the aggression theme from my last post on bullying, I’ve just read an early-view article in Aggressive Behavior by some evolutionary psychologists at Binghamton University which is theoretically very interesting. Gallup, O’Brien and Wilson show that aggression in adolescents appears to be positively correlated with dating success. Based on interviewing college students about both their dating history and their histories of aggressing against or being victimised by same-sex peers, their key findings are as follows:

  • Indirectly aggressive females started dating earlier
  • Aggressive males had more dating partners in total
  • Victimised females started dating later, had more partners in total, and engaged in less flirtation with males

I think this is a really good example of both a great strength and a great weakness of evolutionary psychological studies. The strength is to relate areas of human behaviour that had previously been kept in separate research compartments – in this case, intrasexual aggression and dating behaviour – and provide a sound theoretical rationale for why they are related. The idea here is that aggression against same-sex peers is linked to dating success because adolescents are engaged in a form of reputational competition to determine access to mates. Young men use direct aggressive strategies to display dominance, thus making themselves more sexually attractive to women, whereas young women use indirect aggression to derogate rivals, thus making the latter less likely to form lasting, successful partnerships with men. If this idea is true, it obviously has far-reaching implications for many areas of social behaviour.

Furthermore, if this research project linking aggression and dating strategies had not been done, we would not have seen the intriguing result that victimised females tend to lose out in the dating game, whereas victimised males do not (the authors had predicted that neither sex would do well if victimised). This does look like the sort of result that might vary according to the social context (as many gender effects do), but it would certainly be worth testing whether it applies to other populations – it would get us thinking about when victimisation has serious reputational consequences, and when the consequences may be less severe.

But the big weakness here, I think, is that Gallup and colleagues move a little too easily from observations of aggression to the implications for mating success, without enough consideration of the mediating variables. This is a common flaw of this kind of evolutionary study, which tends to get people tearing their hair out about how evolutionary psychologists obsess about sex and ignore the complexities of human behaviour. It’s also a well-known problem with correlational studies: if you find that A correlates with B, it really tells you very little about causality; even if one thinks that it is plausible for A to cause B but not for B to cause A, one can never really be sure that both A and B are not caused by some unknown variable C.
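The confounding problem is easy to make concrete with a toy simulation (purely illustrative: the variable names are hypothetical and nothing here models the actual study). If A and B are both driven by an unmeasured C, they will correlate substantially even though neither causes the other:

```python
import random

random.seed(1)

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

n = 1000
c = [random.gauss(0, 1) for _ in range(n)]  # unmeasured confound, e.g. social competence
a = [ci + random.gauss(0, 1) for ci in c]   # "aggression": influenced by C, not by B
b = [ci + random.gauss(0, 1) for ci in c]   # "dating success": influenced by C, not by A

# A and B correlate substantially despite having no causal link to each other.
print(round(pearson(a, b), 2))
```

With these parameters the expected correlation between A and B is about 0.5, entirely generated by C – which is exactly why unmeasured mediators and confounds like popularity or social competence matter for interpreting the Gallup et al. results.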

To be fair, the authors do mention some possible mediating variables in the Discussion section. For example, victimisation may decrease dating success in females by decreasing self-esteem as well as by diminishing reputations directly. They also point out that at a lower level, high levels of testosterone in men may increase their aggressiveness at the same time as making them more attractive to women. But why not include such mediating variables in the research design? Testosterone might be tricky, but there must be a multitude of instruments for measuring self-esteem. And there is one really obvious mediating variable which the authors don’t make enough of: popularity. I would bet money that popularity among same-sex peers correlates strongly with dating success – and surprisingly, perhaps, research has shown that aggressive children can actually be quite popular; Gallup and co. cite an article by Patricia Hawley arguing that the most popular kids are those who can be either aggressive or prosocial, as the situation demands.

Like self-esteem, popularity could have been measured very easily, even in retrospect – e.g. by asking participants how many friends they had at school. Popularity in turn would correlate with social competence: and since both dating and facing up to bullies are social activities, it is not surprising that people who are low in social competence might have problems with both. Perhaps the authors’ defence for ignoring such variables in the research design is that evolutionary psychology is concerned with the ultimate level of causation (the effects on selection of particular behaviours) rather than the proximate level (the mechanisms which generate particular behaviours in the individual organism). But if evolutionary psychology is to convince the mainstream, it needs to find some way of relating these two levels.

So how would I do this sort of research differently? Well, above all I would argue that a developmental perspective is essential. Relating peer aggression directly to reproductive outcomes is short-sighted, because humans aggress against peers long before they reach reproductive age: as I know from my own research on tattling, preschool classrooms are filled with conflict and structured by well-defined dominance hierarchies. The really interesting thing about aggression in adolescence is the way that it changes from earlier patterns, due to hormonal and social changes (in particular, a greater identification with the peer group and a withdrawal from adult authority) – presumably in response to the pressure of having to develop potential sexual relationships with members of the other sex. There is plenty of scope for evolutionary hypotheses in this area, but they should start from a basis of what is known about aggression in childhood, and how this is then transformed by the changes that kids undergo in adolescence.

On the whole though, a fascinating article, and more subtle than many evolutionary approaches.

Posted in aggression | Leave a comment