One of my friends on Facebook pointed out a blog entry on the Petrie Multiplier. The basic idea is this. If we assume that men and women are equally sexist, we might expect men and women to encounter equal amounts of sexism. However, that is not the case if the populations are unequal. There are more men making sexist remarks, and fewer women to encounter them, so women actually encounter far more sexism than men. In fact, the amount of encountered sexism differs by the square of the ratio between the sexes.
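To see where that square comes from, here is a minimal sketch of the original fixed-remark model. The 80/20 split matches the original; the five remarks per person, and the assumption that every remark lands on a randomly chosen member of the opposite sex, are illustrative.

# Minimal sketch of the fixed-remark argument; numbers are illustrative.
men, women = 80, 20                       # a 4:1 ratio
remarks_each = 5                          # every person makes 5 sexist remarks

per_woman = men * remarks_each / women    # remarks aimed at women, shared among them
per_man = women * remarks_each / men      # remarks aimed at men, shared among them
print(per_woman, per_man, per_woman / per_man)   # 20.0 1.25 16.0: the square of 4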
The basic idea here seems sound. However, the assumption that each person has a fixed number of sexist remarks to make is unrealistic: it effectively has sexists seeking out members of the opposite sex just to deliver their quota of remarks.
I got interested, so I wrote a Python script to simulate something more realistic. The conditions are as follows.
Men and women have the same probabilities of making a sexist remark in a conversation. 50% of both sexes never do. 10% have a 20% chance of making a sexist remark in any given conversation, 10% have a 40% chance, and so on, up to 10% who make one in every conversation. In keeping with the original, 80% of the population are men, and 20% are women.
Every conversation includes a random sample of people from the whole population (which includes 50 people, to have one woman with every level of sexism, and the corresponding number of men). 30% of conversations involve 2 people, 20% involve 3, and 10% each involve 4, 5, 6, 7, and 8.
There is one other condition. People only make sexist remarks if they are not outnumbered, in that conversation, by members of the opposite sex. In a one-on-one conversation, either side may be sexist.
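For reference, in the script at the end of this post the first two conditions are encoded as two lists: each person is assigned one probability from the first, and each conversation size is drawn uniformly from the second, in which each size appears in proportion to its probability.

# One entry per 10% slice of the population; the five zeros are the 50% who
# are never sexist.
probabilities = [1, 0.8, 0.6, 0.4, 0.2, 0, 0, 0, 0, 0]

# 2 appears three times (30% of conversations), 3 twice (20%), and 4 to 8
# once each (10% apiece).
group = [2, 2, 2, 3, 3, 4, 5, 6, 7, 8]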
The script then counts up the number of sexist remarks directed against their own sex encountered by each member of the population, over a total of 500 meetings. (Note that each member only participates in a few of those meetings.)
The results of one run, in increasing order of sexist remarks encountered, look like this:
Men who encountered 0 sexist remarks: 34 (85%)
Men who encountered 1 sexist remark: 4 (10%)
Men who encountered 3 sexist remarks: 2 (5%)
Women who encountered 29 sexist remarks: 1
Women who encountered 36 sexist remarks: 1
Women who encountered 39 sexist remarks: 1
Women who encountered 40 sexist remarks: 1
Women who encountered 41 sexist remarks: 1
Women who encountered 45 sexist remarks: 1
Women who encountered 47 sexist remarks: 1
Women who encountered 49 sexist remarks: 2
Women who encountered 50 sexist remarks: 1
The results are broadly similar if I re-run the script, although the precise numbers obviously change.
It is important to note that men and women are equally sexist in this model. Nevertheless, women suffer from overwhelmingly more sexism.
What happens if we drop the probability of sexism, so that only 10% of men and 10% of women ever make sexist remarks, and those who do make them only 20% of the time?
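In the script, that change amounts to replacing the probabilities list so that a single 10% slice has a 20% chance and everyone else never makes a remark:

probabilities = [0.2, 0, 0, 0, 0, 0, 0, 0, 0, 0]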
The results of one 500-encounter run look like this:
Men who encountered 0 sexist remarks: 40 (100%)
Women who encountered 1 sexist remark: 2
Women who encountered 2 sexist remarks: 3
Women who encountered 3 sexist remarks: 2
Women who encountered 4 sexist remarks: 1
Women who encountered 5 sexist remarks: 1
Women who encountered 8 sexist remarks: 1
So, even in a situation in which sexism has been almost completely eliminated, women still encounter a substantial amount of sexism. Indeed, because the two halves of the model are independent (the sexism men encounter depends only on women's behaviour, and vice versa), we can produce representative results for a situation in which women are far, far more sexist than men: women keep the original chances, so that half of them make sexist remarks at least sometimes, while only 10% of men ever make sexist remarks, and they do so only 20% of the time. We simply paste together the results for men from the first run and the results for women from the second. The results look like this:
Men who encountered 0 sexist remarks: 34 (85%)
Men who encountered 1 sexist remark: 4 (10%)
Men who encountered 3 sexist remarks: 2 (5%)
Women who encountered 1 sexist remark: 2
Women who encountered 2 sexist remarks: 3
Women who encountered 3 sexist remarks: 2
Women who encountered 4 sexist remarks: 1
Women who encountered 5 sexist remarks: 1
Women who encountered 8 sexist remarks: 1
In other words, given the gender imbalance, women will experience far more sexism than men even if women are far more sexist than men.
The assumptions here are only borderline realistic, but the results should give both sides in the debate pause. They make it overwhelmingly likely that, at the community level, there is a serious problem with sexism against women in tech, and no comparable problem with sexism against men. However, that fact is no evidence that men in tech are, individually, more sexist than women in tech.
Here is the original script (Python 3.3, and I have absolutely no idea whether that matters), which may contain glaring errors, as it is the first Python program I ever wrote. Yes, the above results might be drivel. The logic looks OK to me, and the probabilities must be the right way round, because reducing them reduced the amount of sexism. Still, approach with caution.
Edit 2014/02/09: I’ve added some more comments to the code.
Edit 2014/12/10: Thanks to Kim, I’ve formatted this to preserve the indentation. Pre tags!
import random

# Establish the list of sexism probabilities.
probabilities = [1, 0.8, 0.6, 0.4, 0.2, 0, 0, 0, 0, 0]
sex = ['male', 'male', 'male', 'male', 'female']
population = []
x = 0

# This section sets up the population. Each element is a person: w is their
# sex, v how likely they are to make sexist remarks, x their number in the
# population, and the final element is the number of sexist remarks they
# have encountered.
for i, v in enumerate(probabilities):
    for j, w in enumerate(sex):
        population.append([w, v, x, 0])
        x = x + 1
print(population)

group = [2, 2, 2, 3, 3, 4, 5, 6, 7, 8]

# The for loop does the 500 meetings.
for count in range(500):
    # Choose the group size.
    size = random.choice(group)
    # Choose the appropriate number of people randomly from the population.
    meeting = random.sample(population, size)
    print(meeting)
    # Initialise the number of men, women, and sexist remarks.
    men = 0
    women = 0
    msexist = 0
    fsexist = 0
    # Count the number of men and women in the group.
    for i, v in enumerate(meeting):
        if v[0] == 'male':
            men = men + 1
        else:
            women = women + 1
    print(men)
    print(women)
    # Check for sexism. First, if there are at least as many men as women,
    # check to see whether the men make sexist remarks. If they do, increase
    # the count of sexist remarks made by men by one.
    if men >= women:
        for i, v in enumerate(meeting):
            if v[0] == 'male':
                if v[1] >= random.random():
                    msexist = msexist + 1
    # Next, if there are at least as many women as men, do the same for
    # women. (An earlier version used elif here, which skipped this branch
    # whenever the numbers were equal; a plain if matches the stated rule
    # that in a one-on-one conversation either side may be sexist.)
    if women >= men:
        for i, v in enumerate(meeting):
            if v[0] == 'female':
                if v[1] >= random.random():
                    fsexist = fsexist + 1
    # For every man in the group, add the number of sexist remarks made by
    # women to the number of sexist remarks he has encountered, then copy
    # him back into the population. (The copy is in fact redundant:
    # random.sample returns references to the same person lists, so the
    # update to v[3] already changes the population in place.)
    for i, v in enumerate(meeting):
        if v[0] == 'male':
            v[3] = v[3] + fsexist
            population[v[2]] = v
        # For every woman in the group, add the number of sexist remarks
        # made by men.
        else:
            v[3] = v[3] + msexist
            population[v[2]] = v

# Sort the population into order by number of sexist remarks, because the
# final analysis is done by hand.
population.sort(key=lambda person: person[3])
print(population)
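Since the final analysis is done by hand from the sorted dump, a short tally appended after the loop could automate it. This is just a sketch: collections.Counter is in the standard library, and the string formatting avoids anything newer than Python 3.3.

from collections import Counter

# Tally how many people of each sex encountered each number of remarks.
for label in ('male', 'female'):
    counts = Counter(p[3] for p in population if p[0] == label)
    for remarks, people in sorted(counts.items()):
        print('{0}: {1} encountered {2} sexist remarks'.format(label, people, remarks))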