The Petrie Multiplier

One of my friends on Facebook pointed out a blog entry on the Petrie Multiplier. The basic idea is this. If men and women are equally sexist, we might expect them to encounter equal amounts of sexism. That is not the case, however, if the populations are unequal. There are more men making sexist remarks, and fewer women to encounter them, so women actually encounter far more sexism than men. In fact, the ratio of encountered sexism is the square of the ratio between the sexes.
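To see where the square comes from, here is a minimal sketch of that original fixed-quota argument; the specific numbers are my own, purely for illustration.

# The original argument: each sexist makes a fixed quota of remarks
# against the opposite sex, spread over its members.
men, women = 80, 20
sexist_fraction = 0.2   # assumed equal for both sexes
quota = 5               # remarks per sexist person

per_woman = sexist_fraction * men * quota / women   # 4.0
per_man = sexist_fraction * women * quota / men     # 0.25

print(per_woman / per_man)   # 16.0, i.e. (men / women) ** 2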

The basic idea here seems sound. However, the assumption that people have a fixed quota of sexist remarks to make is unrealistic: it implies that sexists will go looking for women if there are none around to hear their remarks.

I got interested, so I wrote a Python script to simulate something more realistic. The conditions are as follows.

Men and women have the same probabilities of making a sexist remark in a conversation. 50% of both sexes never do. 10% have a 20% chance of making a sexist remark, 10% have a 40% chance, and so on, up to the 10% who make a sexist remark in every conversation. In keeping with the original, 80% of the population are men, and 20% are women.

Every conversation includes a random sample of people from the whole population. (The population contains 50 people: one woman at each of the ten levels of sexism, and four men at each level.) 30% of conversations involve 2 people, 20% involve 3, and 10% each involve 4, 5, 6, 7, and 8.

There is one other condition. People only make sexist remarks if they are not outnumbered, in that conversation, by members of the opposite sex. In a one-on-one conversation, either side may be sexist.
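Written as code, the intended rule is a one-line predicate. This is my own sketch, not the script's phrasing, and, as the comments below establish, the script's if/elif implementation does not quite achieve it in evenly balanced groups:

def may_make_remark(own_count, other_count):
    # A person may make a sexist remark only if their own sex is not
    # outnumbered in this conversation.
    return own_count >= other_count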

The script then counts, for each member of the population, the number of sexist remarks against their own sex that they encounter over a total of 500 meetings. (Note that each member only participates in a few of those meetings.)

The results of one run, in increasing order of sexist remarks encountered, look like this:

Men who encountered 0 sexist remarks: 34 (85%)
Men who encountered 1 sexist remark: 4 (10%)
Men who encountered 3 sexist remarks: 2 (5%)
Women who encountered 29 sexist remarks: 1
Women who encountered 36 sexist remarks: 1
Women who encountered 39 sexist remarks: 1
Women who encountered 40 sexist remarks: 1
Women who encountered 41 sexist remarks: 1
Women who encountered 45 sexist remarks: 1
Women who encountered 47 sexist remarks: 1
Women who encountered 49 sexist remarks: 2
Women who encountered 50 sexist remarks: 1

The results are broadly similar if I re-run the script, although the precise numbers obviously change.

It is important to note that men and women are equally sexist in this model. Nevertheless, women suffer from overwhelmingly more sexism.

What happens if we drop the probability of sexism, so that only 10% of men and 10% of women make sexist remarks, and then only do it 20% of the time?
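In terms of the script below, this presumably just means replacing the probabilities list, so that one level in ten has a 20% chance and the rest have none:

probabilities = [0.2, 0, 0, 0, 0, 0, 0, 0, 0, 0]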

The results of one 500-encounter run look like this:

Men who encountered 0 sexist remarks: 40 (100%)
Women who encountered 1 sexist remark: 2
Women who encountered 2 sexist remarks: 3
Women who encountered 3 sexist remarks: 2
Women who encountered 4 sexist remarks: 1
Women who encountered 5 sexist remarks: 1
Women who encountered 8 sexist remarks: 1

So, even in a situation in which sexism has been almost completely eliminated, women still encounter a substantial amount of sexism. Indeed, because the remarks each sex encounters depend only on the sexism of the other sex, we can produce representative results for a situation in which women are far, far more sexist than men: women keep the original chances, so that half of them make sexist remarks at least sometimes, while only 10% of men ever make sexist remarks, and they only do it 20% of the time. We just paste together the results for men from the first run and the results for women from the second. They look like this:

Men who encountered 0 sexist remarks: 34 (85%)
Men who encountered 1 sexist remark: 4 (10%)
Men who encountered 3 sexist remarks: 2 (5%)
Women who encountered 1 sexist remark: 2
Women who encountered 2 sexist remarks: 3
Women who encountered 3 sexist remarks: 2
Women who encountered 4 sexist remarks: 1
Women who encountered 5 sexist remarks: 1
Women who encountered 8 sexist remarks: 1

In other words, given the gender imbalance, women will experience far more sexism than men even if women are far more sexist than men.

The assumptions here are only borderline realistic, but the results should give both sides in the debate pause. They make it overwhelmingly likely that, at the community level, there is a serious problem with sexism against women in tech, and no problem with sexism against men. However, that fact is no evidence that men in tech are, individually, more sexist than women in tech.

Here is the original script (Python 3.3, and I have absolutely no idea whether that matters), which may contain glaring errors, as it is the first Python program I ever wrote. Yes, the above results might be drivel. The logic looks OK to me, and the probabilities must be the right way round, because reducing them reduced the amount of sexism. Still, approach with caution.

Edit 2014/02/09: I’ve added some more comments to the code.

Edit 2014/12/10: Thanks to Kim, I’ve formatted this to preserve the indentation. Pre tags!

import random

# Establish the list of sexism probabilities. Each entry is one level of
# sexism: the chance that a person at that level makes a sexist remark
# in any given conversation. Half of the levels never do.

probabilities = [1, 0.8, 0.6, 0.4, 0.2, 0, 0, 0, 0, 0]

# Four men for every woman: an 80/20 split.

sex = ['male', 'male', 'male', 'male', 'female']

population = []

x = 0

# This section sets up the population. Each element is one person:
# w is their sex, v the probability that they make a sexist remark,
# x their index in the population, and the final element counts the
# sexist remarks against their own sex that they have encountered.

for v in probabilities:
    for w in sex:
        population.append([w, v, x, 0])
        x = x + 1

# Debug output: show the initial population.

print(population)

# Possible group sizes: 2 has a 30% chance, 3 a 20% chance, and 4 to 8
# a 10% chance each.

group = [2, 2, 2, 3, 3, 4, 5, 6, 7, 8]

msexist = 0
fsexist = 0

# The for loop does the 500 meetings.

for count in range(500):

#   Choose the group size.

    size = random.choice(group)

#   Choose the appropriate number of people randomly from the population.

    meeting = random.sample(population, size)

#   Debug output: show who is at this meeting.

    print(meeting)

#   Initialise the number of men, women, and sexist remarks.

    men = 0
    women = 0
    msexist = 0
    fsexist = 0

#   Count the number of men and women in the group.

    for v in meeting:
        if v[0] == 'male':
            men = men + 1
        else:
            women = women + 1

#   Debug output: the gender balance of this meeting.

    print(men)
    print(women)

#   Check for sexism.
#   First, if there are at least as many men as women, check whether each
#   man makes a sexist remark. If he does, increase the count of sexist
#   remarks made by men by one.

    if men >= women:
        for v in meeting:
            if v[0] == 'male':
                if v[1] >= random.random():
                    msexist = msexist + 1

#   Next, do the same for the women. This branch was meant to run whenever
#   the women are not outnumbered, but because it is an elif it is skipped
#   whenever the branch above runs, i.e. whenever men >= women. So in
#   evenly balanced groups (including one-on-ones) only the men get the
#   chance to be sexist: a bug. Changing the elif to a separate if fixes
#   it; since evenly balanced groups are fairly rare, the bug should not
#   affect the results too much.

    elif women >= men:
        for v in meeting:
            if v[0] == 'female':
                if v[1] >= random.random():
                    fsexist = fsexist + 1

#   For every man in the group, add the number of sexist remarks made by
#   women to the number of sexist remarks he has encountered, then copy
#   him back into the population. (The copy is in fact unnecessary:
#   random.sample returns references to the same list objects, so
#   modifying v already modifies the population entry. It is harmless,
#   though.)

    for v in meeting:
        if v[0] == 'male':
            v[3] = v[3] + fsexist
            population[v[2]] = v

#   For every woman in the group, add the number of sexist remarks made by men.

        else:
            v[3] = v[3] + msexist
            population[v[2]] = v

# Sort the population into order by number of sexist remarks, because the final analysis is done by hand.

population.sort(key=lambda person: person[3])

print(population)
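As a check on the point made in the comments within the script: random.sample really does return references to the elements of the list, not copies, so the copy back into the population is redundant. A minimal demonstration:

import random

population = [['female', 1, 0, 0]]
meeting = random.sample(population, 1)
meeting[0][3] = 99           # modify the sampled "person"
print(population[0][3])      # prints 99: both names refer to the same list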

Comments

32 responses to “The Petrie Multiplier”

  1. Ian Gent

    Thanks David, that’s very interesting. The comment about quotas of sexist remarks is valid and has been made by a number of people.

    So it’s interesting to see this model where the effect remains even with that taken away.

    Also your point about it being true even when women are more sexist than men is a good one. One of the things I think about is that if a man says “so you’re saying I have to walk on eggshells around women”, we can say “YES!” and point to this to explain why: you have to be less sexist than women. (Assuming there are more men than women in the given environment.)

    I’ve put a link to this from the original Petrie Multiplier blog post.

  2. David Chart

    Thanks for the comment, Ian. I’m glad you found the analysis useful.

    I agree with you about men needing to walk on eggshells. If men want more women to enter a group, they have to work really hard to create a welcoming environment. The same, naturally, applies to women when they are the numerically dominant group.

    At the same time, the problem arises even when the overwhelming majority of men are never sexist, and the few who are, are not sexist the overwhelming majority of the time. That means that ever-more-vigorous exhortations to men to avoid sexist comments are unlikely to be effective in solving the problem; you don’t need many defectors to sustain it.

    Thus, I suspect that once it’s made clear that men need to walk on eggshells to avoid contributing to the problem, and be much less sexist than women, countermeasures will have to focus on things like mentoring programs, and providing men with positive suggestions on how to help women to enter the field. That has the further advantage of not sounding like it is attacking men. But, being neither female nor in the tech field, I don’t have any concrete suggestions to offer.

  3. Ian Gent

    I think your model actually amplifies the effect of the Petrie Multiplier: your first set of numbers are more dramatic than mine, not less so.

    At a guess this is because you (reasonably) propose that sexist remarks are not made when the speaker’s gender is outnumbered. Which basically means women will almost never make sexist remarks except in one-on-ones.

  4. Shauna

    Interesting post!

    A couple of suggestions that might increase the clarity of the post:

    – Indenting the python code
    – You write, at the beginning, “There is one other condition. People only make sexist remarks if they are not outnumbered, in that conversation, by members of the opposite sex. In a one-on-one conversation, either side may be sexist.” I would change the last line to “In conversations with equal numbers of each sex, either side may be sexist.” Specifying one-on-one implies to me that in two-on-two, three-on-three, or four-on-four conversations no one is sexist.
    – Some in-code commenting might be useful. I mostly understood your decisions but I’m not sure what the last block (ending with “population[v[2]] = v”) is for.

  5. Liz

    I find that the first set of conditions produces results most like those that I have personally experienced in my field, physics. I like the fact that you evaluated a model that assumes equal sexism in both genders. Although I do not believe this assumption represents reality, I think it is the most appealing for a thought experiment.

    I am interested in the implications for sexual harassment and assault. It is well documented that, in society at large (i.e., with the percentages of men and women being equal), women experience more sexual harassment and assault than men (US Department of Justice, 2003). If we assume, however, that women and men are equally likely to sexually harass someone of the opposite gender and take the results of your model into account, we can project that women will experience a significantly higher incidence of sexual assault in fields where they are a minority.

    I am also interested in the applications of this model to racism.

  6. […] the opposite sex if there’s not one handy, just to be mean to them. A gent named David Chart ran the numbers correcting for that, reducing the chance that sexist people would aggress in the first place. He […]

  7. David Chart

    @Shauna: Thanks for the comment. I’ve added comments to the code. The source for the page has the python code indented; I thought code tags would have preserved that, but apparently not.

    @Liz: I’m glad you found it interesting. The structure applies directly to racism, of course. You’d just need to modify the percentages. Any member of a minority will experience prejudice against that minority as being more prevalent than the majority will.

  8. Shauna

    Code tags don’t preserve indentation because HTML itself (well, browsers, technically) ignores more than one standard space character in a row. To get the code indented properly, you need to convert your spaces to &nbsp; (the non-breaking space HTML entity) or indent using other means, such as CSS or your content editor’s “indent” button (if it’s a WYSIWYG editor).

  9. David Chart

    Oh well. I don’t have enough time to mess with that right now, so people will have to cope for the moment. I knew about spaces being ignored; that was what I thought the code tag would override.

  10. Vrimj

    @Liz
    I wonder whether the model would still work if you factor in the idea that people generally only assault people they think they can overpower; then the assault ratio would reflect the disparity in both physical and social power that generally exists between genders….

  11. […] this works, please see Ian Gent’s post about the Petrie Multiplier and David Chart’s elaboration. You’ll dig both of them, promise. They involve math and Python, […]

  12. […] also led me to the blog posts of Ian Gent and David Chart, who talk about gender disparity in Computer Science (and implicitly, in other fields as well) and […]

  13. Kim

    If you use pre instead of code, spaces will be preserved.

  14. David Chart

    Thank you! I was sure there must be a tag for that, but I didn’t have time to track it down. Now the code will be easier to read, and people with scary coding resumes will be able to find all the mistakes…

    Maybe I should leave it as it is.

  15. denis

    Your results look over-impressive, even compared to Ian Gent’s original model. On one hand, your correction about a “quota” of sexist acts is certainly welcome: indeed, men cannot always be sexist, since there is not always a “victim” at hand. This _should_ reduce the imbalance in the results. However, it obviously does not; see especially your first set of outcomes. I guess your condition that no one makes sexist remarks in a conversation where their own sex is outnumbered may well more than counter-balance your first correction: everything else being equal, women can only rarely be sexist, while men nearly always can. denis

  16. David Chart

    Thanks for the comment. I think it’s a plausible assumption, though, and one point of the analysis is that, under these assumptions, women will experience more sexism even if women are, on average, much more sexist than men.

  17. Nick Johnson

    Slight problem: In a conversation filled entirely with one gender, people can still make sexist remarks. Which is certainly true, but doesn’t reflect the impact of sexism on minorities.

  18. David Chart

    Thanks for the comment. That’s true, but it shouldn’t affect the results at all. In groups filled with only one gender, the remarks may be made, but they are not heard by a member of the other gender, so they are not added to the totals given in the results. The script was set up to measure not the total number of sexist remarks made, but the total number experienced by each member of the group.

  19. Nick Johnson

    Right, but that’s not what it does: You check if there are more men than women, then check if each man makes a sexist comment, and if so, you increment the count of sexist comments made – even if there were no women in the conversation. You do the same again for women. So, you’ll count sexist comments even in conversations containing only people of one gender.

  20. David Chart

    That’s true, but then the last bit of the code only adds the number of sexist comments from men (msexist) to the comments heard by a member of the group if that member is female. msexist is reset to zero at the beginning of the loop, so although the comments are counted in an all-male group, they are not recorded.

    It’s not necessarily the most efficient way to do it (the code could check whether the group is mixed first, and not bother checking comments if it is not), but it does work. The women in the sample experience different numbers of sexist comments, which shows that the code does not just gather all the sexist comments.

  21. Nick Johnson

    You’re quite right, I missed that bit.

    I do think this is far more complicated than it has to be. All you really need to do is observe that if you have 100 people in a group, 80 of whom are male, then 20% of a woman’s conversations will be with men, and only 20% of a man’s conversations will be with a woman. If you assume both genders are equally sexist, women will have four times as many opportunities to encounter sexist comments as men in 1:1 conversations.

    It seems to me that this demonstrates the effect much more simply than either simulation, doesn’t make assumptions about seeking people out or about comments in ‘outnumbered’ conversations, and doesn’t require Monte Carlo simulations to boot. 🙂

  22. Nick Johnson

    Sorry, I meant to say “80% of a woman’s conversations will be with men and only 20% of a man’s conversations will be with a woman”.

  23. David Chart

    The problem with that simple analysis is that it suggests that women will encounter about four times as many sexist comments as men. As the Monte Carlo simulation shows, however, it’s closer to 40 times. Women have more chance of being outnumbered, and are very unlikely to be in an all-female group, while men are quite likely to be in all-male groups, and that pushes the ratio right up.

    That sort of thing is very hard to eyeball unless you have experience of running simulations and know what tends to come up. Hence the Monte Carlo simulation.

  24. Nick Johnson

    But the Monte Carlo simulation makes a number of possibly unwarranted assumptions, including:
    – People won’t make sexist comments when outnumbered
    – People are equally likely to be in a conversation with 500 people as with two
    – In a conversation with equal numbers of men and women, the men will be sexist but the women won’t(!)

    Given all that, I don’t think there’s any reason to assume this simulation is more accurate than the much simpler one I proposed.

    If your purpose is to point out that a minority’s experience of discrimination doesn’t need to bear any relationship to relative sexism levels of the parties – which is both entirely valid and a very useful point to make – I think it makes a lot more sense to propose the simplest possible demonstration, because it’s both more easily grasped and harder to dispute, even if it makes a less dramatic demonstration than an “ideal” simulation.

  25. David Chart

    — “People won’t make sexist comments when outnumbered” is possibly incorrect, but, psychologically, not unwarranted, I would say.
    — The largest group in the simulation is 8, not 500, and has one third the probability of a group of 2.
    — Women were supposed to be sexist in a balanced group. That’s a bug, and you can fix the code and try it if you like.

    In any case, this is the first simulation I ran, so I have no idea what the results would look like from the alternatives. Allowing people to be sexist when outnumbered should certainly balance things up a bit, but it doesn’t strike me as that plausible. A man talking to two or three women is not likely to say that women can’t code, in my opinion. Women, in a field in which they are vastly outnumbered, are even less likely to do so. Still, it would be easy to take the test out and run the script; that’s why I provided the code.

  26. Nick Johnson

    You’re right again about the group size; my apologies.

    The >= check in the women branch is in an elif, so it won’t run if the ‘if’ ran. If women==men, the first branch will run, and check if men are sexist, but then the elif branch won’t run – thus, women can only be sexist when they strictly outnumber men.

    Bugs aside, I’m not saying your simulation is invalid – I just think there’s a significant advantage in showing the effect in the simplest way, and with the fewest complications possible. Especially when you can demonstrate it exhaustively numerically, rather than requiring a simulation at all!

  27. Chris

    I just tried running this with an if instead of the elif. It’s still a vast difference, but the men are a lot more likely to have encountered 1 or 2 or as many as 7 sexist remarks. None of them encountered 8 or more and certainly none encountered 50, but it does make a significant difference. I’m guessing because of all the 2 person conversations.

  28. David Chart

    @Nick: I’m not sure that the simplest case is actually realistic either. It assumes that the context of a conversation makes no difference to whether someone is likely to make a sexist remark, for example, and that strikes me as completely implausible.

    @Chris: Thanks. That sounds reasonable.

    Another possibility that occurred to me is that people might need an “ally” to be sexist. That is, if you’re the only person of your gender in the group, you don’t make sexist remarks, but if there’s at least one other person of your gender, you might make the remarks even if you are outnumbered. This does mean that there will be no sexism in one-on-one conversations, but there’s a lot of psychological research on the importance of having allies, so the assumption is not completely implausible.

    I suppose that one thing that comes clearly out of this discussion is that the precise shape of the predicted problem depends on your assumptions, but the broad outline is probably quite robust. Maybe I should run the simulations of some of the other versions.

    Thanks for the comments.

  29. Nick Johnson

    @David, I’m not sure what you mean by the context of the conversation. Can you elaborate?

  30. David Chart

    @Nick, I mean that it strikes me that someone who is the lone woman (or man) in a group of five is unlikely to make sexist comments about the majority. Similarly, someone who is aware that the other people in the group hold power over their career is less likely to be derogatory about them. You could even argue that basic politeness will mean that people should be less likely to make derogatory comments when the people who are the target of those comments are present.

    I’m not saying that the simple model is necessarily wrong; the fact that a woman is four times as likely to encounter a man as vice versa is clearly fundamental to all the imbalances that the various models find. However, the simple model is no more free of controversial assumptions than the more complex ones. They’re just different assumptions.

  31. Nick Johnson

    @David, all of that’s why I think it makes sense to simplify the model as much as possible. In the model I proposed, only two-party conversations are considered. Since we can’t make predictions about how people behave in multi-way conversations that everyone’s likely to agree on, why complicate matters by introducing them, when we can show the same effect more simply without them?

  32. David Chart

    Because I originally wrote this post in response to a different blog post that used a simpler model to show the effect, and was criticised for over-simplifying and being unrealistic.

    At some point, I will try to find time to do a follow-up, running the simulation with various different assumptions to see how that affects the frequencies. If it is taken to that stage, it might actually be possible to match the models to experience, and get some idea of the likely level of sexism in tech. That matters, because if the problem is entirely due to the Petrie Multiplier, further efforts aimed at educating men not to be sexist are likely to be ineffective. Instead, it would be important to work on getting more women into tech. On the other hand, if the numbers suggest relatively high levels of sexism among men, educational programs aimed at men would be useful.
