
As technology evolved, the ways abusers took advantage evolved too. Realizing that the advocacy community “was not up on tech,” Southworth founded the National Network to End Domestic Violence’s Safety Net Project in 2000 to provide a comprehensive training curriculum on how to “harness [technology] to help victims” and hold abusers accountable when they misuse it. Today, the project offers resources on its website, like tool kits that include guidance on strategies such as creating strong passwords and security questions. “When you’re in a relationship with someone,” explains director Audace Garnett, “they may know your mother’s maiden name.”
Big Tech safeguards
Southworth’s efforts later extended to advising tech companies on how to protect users who have experienced intimate partner violence. In 2020, she joined Facebook (now Meta) as its head of women’s safety. “What really drew me to Facebook was the work on intimate image abuse,” she says, noting that the company had come up with one of the first “sextortion” policies in 2012. Now she works on “reactive hashing,” which adds “digital fingerprints” to images that have been identified as nonconsensual so that survivors only need to report them once for all repeats to get blocked.
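To make the idea concrete, here is a minimal sketch of that report-once, block-everywhere flow. It is not Meta’s code: production systems rely on perceptual hashes (Meta has open-sourced one called PDQ, and Microsoft’s PhotoDNA is widely used) so that resized or recompressed copies still match, whereas the cryptographic hash below catches only exact duplicates, and every name in the sketch is illustrative.

```python
import hashlib

# Illustrative sketch of hash-based blocking, not Meta's actual system.
# Real deployments use perceptual hashes (e.g., PDQ, PhotoDNA) that tolerate
# resizing and re-compression; SHA-256 here only matches byte-for-byte copies.

blocked_hashes: set[str] = set()  # the database of "digital fingerprints"

def fingerprint(image_bytes: bytes) -> str:
    """Reduce an image to a short, fixed-length fingerprint."""
    return hashlib.sha256(image_bytes).hexdigest()

def report_nonconsensual(image_bytes: bytes) -> None:
    """A single report from a survivor adds the fingerprint to the blocklist."""
    blocked_hashes.add(fingerprint(image_bytes))

def allow_upload(image_bytes: bytes) -> bool:
    """Every later upload is checked against the blocklist automatically."""
    return fingerprint(image_bytes) not in blocked_hashes

# Report once; identical re-uploads are then refused without another report.
original = b"...image bytes..."
report_nonconsensual(original)
assert allow_upload(original) is False
assert allow_upload(b"...a different image...") is True
```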
Other areas of concern include “cyberflashing,” in which someone might share, say, unwanted explicit photos. Meta has worked to prevent that on Instagram by not allowing accounts to send images, videos, or voice notes unless they follow you. Besides that, though, many of Meta’s practices surrounding potential abuse appear to be more reactive than proactive. The company says it removes online threats that violate its policies against bullying and that promote “offline violence.” But earlier this year, Meta made its policies about speech on its platforms more permissive. Now users are allowed to refer to women as “household objects,” reported CNN, and to post transphobic and homophobic comments that had formerly been banned.
A key challenge is that the very same tech can be used for good or evil: A tracking function that’s dangerous for someone whose partner is using it to stalk them might help someone else stay abreast of a stalker’s whereabouts. When I asked sources what tech companies should be doing to mitigate technology-assisted abuse, researchers and lawyers alike tended to throw up their hands. One cited the problem of abusers using parental controls to monitor adults instead of children—tech companies won’t do away with those important features for keeping children safe, and there is only so much they can do to limit how customers use or misuse them. Safety Net’s Garnett said companies should design technology with safety in mind “from the get-go” but pointed out that in the case of many well-established products, it’s too late for that. A couple of computer scientists pointed to Apple as a company with especially effective security measures: Its closed ecosystem can block sneaky third-party apps and alert users when they’re being tracked. But these experts also acknowledged that none of these measures are foolproof.
Over roughly the past decade, major US-based tech companies including Google, Meta, Airbnb, Apple, and Amazon have launched safety advisory boards to address this conundrum. The strategies they have implemented vary. At Uber, board members share feedback on “potential blind spots” and have influenced the development of customizable safety tools, says Liz Dank, who leads work on women’s and personal safety at the company. One result of this collaboration is Uber’s PIN verification feature, in which riders have to give drivers a unique number assigned by the app in order for the ride to start. This ensures that they’re getting into the right car.
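The pattern is simple enough to sketch in a few lines. The code below illustrates the general verify-before-start idea rather than Uber’s actual implementation; the function names and the four-digit format are assumptions made for the example.

```python
import secrets

# Toy sketch of PIN-based ride verification, not Uber's code.
# The app shows the rider a short code; the trip can only begin once the
# driver enters the same code, confirming rider and car are correctly matched.

def issue_pin() -> str:
    """App generates a random 4-digit PIN and displays it to the rider."""
    return f"{secrets.randbelow(10_000):04d}"

def start_ride(rider_pin: str, pin_entered_by_driver: str) -> bool:
    """The ride starts only if the driver's entry matches the rider's PIN."""
    return secrets.compare_digest(rider_pin, pin_entered_by_driver)

pin = issue_pin()
print("Tell your driver:", pin)
print("Ride started" if start_ride(pin, pin) else "PIN mismatch: wrong car")
```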
Apple’s approach has included detailed guidance in the form of a 140-page “Personal Safety User Guide.” Under one heading, “I want to escape or am considering leaving a relationship that doesn’t feel safe,” it provides links to pages about blocking, evidence collection, and “safety steps that include unwanted tracking alerts.”
Creative abusers can bypass these sorts of precautions. Recently Elizabeth (for privacy, we’re using her first name only) found an AirTag her ex had hidden inside a wheel well of her car, attached to a magnet and wrapped in duct tape. Months after the AirTag debuted, Apple had received enough reports of unwanted tracking that it introduced a security measure: users alerted that an unknown AirTag is traveling with them can make the device play a sound to locate it. “That’s why he’d wrapped it in duct tape,” says Elizabeth. “To muffle the sound.”
Laws play catch-up
If tech companies can’t police TFA, law enforcement should—but its responses vary. “I’ve seen police say to a victim, ‘You shouldn’t have given him the picture,’” says Lisa Fontes, a psychologist and an expert on coercive control, about cases where intimate images are shared nonconsensually. When people have brought police hidden “nanny cams” planted by their abusers, Fontes has heard responses along the lines of “You can’t prove he bought it [or] that he was actually spying on you. So there’s nothing we can do.”