Sunday 28 November 2010

Collecting and interpreting rape statistics

[trigger warning]

Since some of this isn't completely obvious, and since I want to write this down for my own use later if nothing else, here's a post on the difficulties of collecting statistics on the prevalence of rape. A lot of this applies, in other forms, to collecting statistics on just about anything that happens to humans, but because it's working against privileged structures, statistics about rape get criticised more for the same inevitable problems.

A lot of the problems aren't as important as they're made out to be, though, from the point of view of having useful statistics. (Furthermore, despite the problems, the statistics are all relatively consistent to within an order of magnitude)

I'm discussing the problems here from the perspective of statistics on victims; statistics on perpetrators have basically the same sort of problems.

Definitions

The first difficulty is that the colloquial and legal definitions of rape vary considerably. Colloquially, rape is sex without consent. Legally, this could be any of "rape", "assault by penetration", "sexual assault" or "legal". Furthermore, the same sexual activity could be rape if A does not consent, but "only" sexual assault if A consents and B does not, because of the asymmetry in how the law treats penetration.

This is to an extent an inevitable problem with the law. The Sexual Offences Act 2003 defines "rape" and "assault by penetration" very precisely. Everything else, whether it would generally be considered rape or not, is "sexual assault". This matters, because the maximum sentence for sexual assault is ten years, whereas rape and assault by penetration have life sentences. Providing a loophole-free legal definition of rape is the same problem as providing a loophole-free definition of sex and adding "without consent" to the end of it.

Add to this the "reasonable belief" exemption in law, which means that many things the victim will call rape, the law will call "legal" or "no crime".

The problem comes, then, when you try to do any sort of prevalence measurement. People will correctly say that they were raped when it wasn't rape in legal terms. There seems to be some attempt to deal with this in the British Crime Survey's figures on sexual assault, which separate "serious sexual assaults" (rape and assault by penetration among them) from "less serious" ones (indecent exposure, sexual touching, sexual threats, etc.; note that "sexual touching" is in law the same crime as raping someone by forcing them to penetrate you)

If you classify according to the strict legal definition, then you inevitably lose quite a few rapes, buried in the sexual assault and no-crime categories. If you classify according to the colloquial definition, then it becomes much harder to do comparisons with the statistics produced by the criminal justice system (and apologists will claim that your statistics are worthless because they don't only include government-approved rapes)

There's also a question of whether to include the "attempted" types of crimes within the statistics. I think that one should, because the difference between the two is largely down to random circumstances, and either way there's a rapist to apprehend: that they didn't get quite as far as raping this victim (but far enough to make it clear that they were going to try) shouldn't be considered.

Reporting

The second problem is that being a victim of rape has an extremely strong stigma associated with it, and even if it didn't, as a traumatic act it's something that some victims block out of their minds. (Associated with this, misconceptions about consent aren't just confined to rapists, so it's common for people not to define what was done to them as rape until much later, even though they're dealing with the psychological consequences immediately)

So it becomes very difficult to find out, by surveying people, whether they have been raped. The British Crime Survey tries to do this by asking questions about actions rather than about legal definitions, and this does help - around twice as many people will say that someone did [action(s) constituting rape] to them as will say that someone raped them. However, you then inevitably have the situation that the survey is only as good as its questions are exhaustive (and even then, despite good methodology, people may decline to answer).

We can use the surveys to establish some upper and lower bounds, at least. The 2009/10 BCS data indicates that 0.4% of women and 0.1% of men had, in the preceding year, been subject to rape, assault by penetration, "serious sexual assault" and/or an attempt at either. We can't necessarily scale up from this to a lifetime prevalence (if we for simplicity assume the likelihood of being raped doesn't change with age[1], it gives a likelihood of 17.6% for women and 4.4% for men, not all of whom will be raped in the legal sense).
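To make that scaling explicit, here's a minimal sketch of the arithmetic. The assumptions are mine rather than the BCS's: the 16-59 age range covered by the survey gives 44 years of exposure, and the annual figures are simply added up.

```python
# Sketch of the lifetime-prevalence scaling above; assumptions are mine,
# not taken from the BCS release.
annual_risk_women = 0.004   # 0.4% per year (2009/10 BCS)
annual_risk_men = 0.001     # 0.1% per year
years = 44                  # ages 16-59 inclusive

# Simple linear scaling, which is what the 17.6% / 4.4% figures look like:
print(annual_risk_women * years)   # 0.176 -> 17.6%
print(annual_risk_men * years)     # 0.044 -> 4.4%

# Compounding instead of adding barely changes things at rates this small:
print(1 - (1 - annual_risk_women) ** years)   # ~0.162 -> about 16%
print(1 - (1 - annual_risk_men) ** years)     # ~0.043 -> about 4.3%
```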

The NSPCC survey (page 66 onwards) gives figures for sexual violence of around 27% for girls, with 16% reporting that they were pressured into intercourse (6% with physical force). There's no particular attempt to match these up to legal categories, but it's obvious here that the risk to 13-16 year old girls (not varying much by age within that range) is considerably higher than the average risk for the 16-59 year old adults covered by the BCS. The NSPCC survey notes that figures between 4% and 78% have been found by other surveys of children, with - as with adults - a significant gender split between perpetrators and victims.

Meanwhile the Havens survey says that 41% of 18-25 year old Londoners have felt pressured into unwanted sex. 9% of women in the sample had said no and been ignored, and 25% of women (almost certainly a strongly overlapping set) had said nothing and been ignored[2].

Measuring the legal system

It's relatively easy, within the provisos of the definition problems, to get statistics on the legal process - convictions, prosecutions, arrests, reports. Relying solely on those statistics is a mistake: the attrition rate from report to conviction has worsened considerably since the 1970s, while the number of convictions has increased. What's happening is that rapes that wouldn't previously have been reported - and marital rapes that were legal until 1991, for that matter - are now being reported ... and the justice system hasn't caught up.
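To see how both of those things can be true at once, here's a toy calculation with entirely made-up numbers (illustrative only, not real figures):

```python
# Made-up numbers, purely to illustrate how convictions can rise while the
# report-to-conviction attrition rate worsens.
reports_then, convictions_then = 1_000, 300
reports_now, convictions_now = 15_000, 900

print(convictions_then / reports_then)   # 0.30 -> 30% of reports end in conviction
print(convictions_now / reports_now)     # 0.06 -> 6%, i.e. 94% attrition

# Convictions have tripled, but reports have grown fifteen-fold, so the
# attrition rate looks dramatically worse even though more rapists are being
# convicted than before.
```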

There isn't any comparable prevalence survey going back that far - the BCS only started asking about sexual violence in 2004 - but despite everything, and despite appearances, the justice system is probably better now (with its 90%+ attrition rate, and widely reported failings) than it was in the 1970s with a much lower attrition rate - because most of the attrition was occurring before reporting.

Measures from the justice system aren't useful for measuring incidence of rape, but they are useful for measuring the (in)effectiveness of the justice system (and hopefully improving it). It's important, however, to note that the definitions problem makes it very difficult to compare prevalence statistics with justice system statistics (which makes getting meaningful figures about reporting rates - beyond "very low" - very difficult indeed).

Another problem is that the categorisations used for reporting and police activity - the report to charge stage - use one set of categories (managed by the Home Office), but the categories at the charge to conviction stages use a different set (managed by the Ministry of Justice). This makes sense, because the police often won't know exactly what crime has occurred until after they've investigated, whereas the CPS and courts do know the details of the charges, but it makes comparisons tricky. Kelly, Lovett and Regan's attrition study dealt with this by following cases right through the report to conviction process (or as far through the process as they got, anyway).
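As a rough illustration of that approach (the structure here is hypothetical, not the study's actual data), following each case individually means you never have to map one category system onto the other in aggregate:

```python
# Hypothetical case records: each case carries its own coding at each stage,
# so attrition can be measured stage by stage without reconciling the Home
# Office and Ministry of Justice category systems in bulk.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Case:
    reported_as: str                  # police / Home Office recording category
    charged_as: Optional[str] = None  # CPS / court category, if it got that far
    convicted: bool = False

cases = [
    Case("rape", charged_as="rape", convicted=True),
    Case("rape", charged_as="sexual assault"),   # charge downgraded
    Case("rape"),                                # dropped before charge
]

reported = len(cases)
charged = sum(c.charged_as is not None for c in cases)
convicted = sum(c.convicted for c in cases)
print(reported, charged, convicted)   # 3 2 1 - attrition visible at each stage
```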

The need for statistics

On the one hand, it doesn't really matter at this stage. It's very clear from the statistics that the (lifetime) chances of being raped (in the colloquial sense) are somewhere between 1 in 20 and 1 in 2 for women (and most probably around the 1 in 4 figure generally quoted), and lower (but still probably higher than most people would guess) for men. Wherever it falls within that range, it's still a massive problem (we view murder as a serious problem at the far lower 1 in 1000[3] lifetime prevalence, and rape is sentenced similarly)

On the other hand, if it's not to remain a problem, there's a need for accurate statistics to monitor things over time, so that it's possible to tell whether actions to deal with the problem (by bringing rapists to justice and, more importantly because we don't have space for all of them, by educating people so they don't become rapists) are actually working.

Fortunately, for the purposes of accuracy, it doesn't matter that much exactly what definitions you use, as long as you're consistent over time. You'll always only be asking about - and being told about - a particular subset of sexual violence, but you should be able to measure trends in it. The way that rape culture works, it's vanishingly unlikely that one particular form of sexual violence that you're surveying will disappear or expand while the rest remain unchanged.
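A minimal sketch of why consistency beats coverage here, with made-up numbers and assuming the survey captures a roughly constant fraction of the incidents it is designed to catch:

```python
# Made-up numbers: a survey that only ever captures a fixed subset of
# incidents still measures the trend in the underlying total correctly.
capture_fraction = 0.3                   # assumed constant over time
true_totals = [100_000, 90_000, 81_000]  # hypothetical true figures
measured = [round(t * capture_fraction) for t in true_totals]

print(measured)                                           # [30000, 27000, 24300]
print([measured[i + 1] / measured[i] for i in range(2)])  # [0.9, 0.9]

# The absolute level is out by a factor of 1/0.3, but the 10%-a-year downward
# trend comes through exactly - provided the capture fraction (the definitions
# and questions) doesn't change between surveys.
```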

As long as you remember that changes in methodology are likely to give changes in result significantly larger than any change in the underlying facts, then genuine changes can still be seen (and yes, this means sticking with methodology you know is flawed, at least until you've run it in parallel with the improvements for a few survey cycles to see what difference it makes).

A final thing to note - and bear with me here - is that accuracy can be overrated. For year-to-year comparisons about the scale of the problem, a repeatable survey that's not too vulnerable to random noise is needed. For surveys to establish the existence of a problem within a particular context, it's not. There was a recent NUS Women's Campaign survey looking at female university students' experience of sexual violence, which gave the predictable results. It wasn't at all statistically sound: self-selected sample, no attempt to normalise it demographically, massive difference in response rates between universities, etc., but that doesn't actually matter for establishing the existence of the problem.

Now, if universities were to take the NUS survey seriously and start doing (actually useful[4]) things to reduce sexual violence on campus, then a survey less sensitive to random noise would be needed. But if they were going to take it seriously they'd fund their own surveys for that purpose, and if they're not going to take it seriously a "yes, this is still a problem, what did you expect?" survey is all that's needed.

Footnotes

[1] I believe it decreases with age, but I don't have the figures for that.

[2] This brings us back to definitions. Silence is not consent, but the law usually takes it as such. But you'd never find out about this set of colloquial rapes/sexual assaults if you didn't ask that specific question.

[3] Massive variation with gender, cis/trans status, age, race, class, sexuality, disability, location, etc. As a society we don't view the elevated number of murders of certain non-default individuals as a problem, even if the murder rate as a whole is considered a problem.

[4] Putting up "have you considered not getting raped?" posters, for instance, is fairly common and massively counter-productive.