An article in the New York Times by Farhad Manjoo suggested that smartphones should be equipped with the ability to detect when the user is taking a naked photograph of themselves. The phone would then warn the user and propose encryption, password protection, and restrictions on cloud back-ups. The aim of this, states Manjoo, is harm reduction, in that it enables the protection of potentially damaging intimate photographs. Despite criticising Snapchat’s faulty security features, Manjoo then proposes the use of a slightly different technology (using the iPhone’s fingerprint scanner) which he assures us will make copying pictures impossible. Until, of course, it doesn’t.
A response piece in Forbes by Woodrow Hartzog and Evan Selinger pointed out that this approach was problematic, in that it proposed a technological solution to what should be a personal ethical choice. Their concern was that the technology would replace the user’s capacity to make decisions, although they concluded that an ‘opt-out’ design was preferable: detection software would be engaged on the phone by default, and it would be up to the user to turn it off and take matters into their own hands.
What neither article suggested was that this is not actually an issue for smartphone manufacturers to solve, but a social issue. Although I am all in favour of users being given the ability to encrypt their photographs, we can’t expect technology to protect us if we are unwilling ourselves to change the very attitudes that do the real damage. For naked selfies are not the problem here – it is the way they are used to marginalise the people who take them. A photograph doesn’t mean anything until we ascribe significance to it, and the meanings given to naked selfies reflect much wider social inequalities, which create a paradox for women: the simultaneous expectation of, and punishment for, sexual display.
I suspect that the ‘change social opinions’ option is not mentioned in either article because it’s not straightforward. But the equivalent in the world of, say, motoring would be to emphasise car safety at the point of design and manufacture without enforcing any sort of driving code. We simply can’t expect machines to protect us, and other people, if we are not willing to put in the work too. Otherwise we’re blaming the person hit by another driver for not having a safe enough car themselves.
We can see the division between easy and hard solutions described by Manjoo:

‘In some ways, naked selfies are more valuable than money — if your bank or credit card gets hacked, insurance will most likely make you whole, but if your private photo gets out, you’re hosed.’
So money can be quickly reimbursed, but a reputation is more difficult to repair. Does that mean we shouldn’t try to change attitudes, simply because it’s hard? Surely it’s much easier just to build an algorithm that can detect naked selfies and trigger a warning? But this technological quick-fix further consolidates the problem it is meant to be alleviating, by identifying naked images as wrong, dangerous and to-be-hidden.
The truly effective way in which reputations could be protected from the damage wrought by naked selfies is if we collectively resisted the urge to condemn users in the first place. If the knee-jerk reaction was not to blame the subject – for not encrypting their picture, for sharing, for taking it at all – then the problem would be radically reduced. But this seems impossible, doesn’t it, as there’s no feature on the iPhone that we can engage in order to make this happen.
So rather than asking our phones to scan and police the morality of our own behaviour, what these articles suggest is that we’re actually expecting technology to ameliorate our own prejudices against the behaviour of others. If we’re not willing to change our own attitudes, including the desire to punish other people’s use of photography, then presumably we can just get phones and code to do the hard moral work for us.