Ch 9: Helping With Tech Abuse
This chapter provides pragmatic suggestions about how to approach helping IPV survivors with technology abuse. A key challenge is that technology changes quickly, causing advice tailored to current technology systems to rapidly become obsolete. Relatedly, consultants who are intimidated by the complexities of cybersecurity may feel like they lack sufficient expertise to help clients. The goal of this chapter is to help readers realize that even a small amount of preparation has the potential to support many clients.
In this chapter, we provide general guidance about typical abuse issues, suggestions for structuring discussions with clients about technology (tech) abuse, strategies for researching unfamiliar tech situations, and advice for managing the inherent uncertainty about the real and perceived capabilities of abusers.
Introduction to Technology Abuse
We begin by discussing what technology abuse is in the context of intimate partner violence. Technology abuse includes any action taken by an abuser to threaten, monitor, harass, or otherwise harm their victim using digital means. In other words, digital technologies serve as tools to engage in long-standing patterns of coercively controlling behavior. A non-comprehensive set of examples of the types of harm that clients may experience includes:
Types of Technology Abuse
Monitoring: Using technology to monitor the survivor’s communications (e.g.: messages, who the survivor is contacting, phone calls), data (e.g.: photos, videos, documents, emails), behavior online (e.g.: websites visited), or physical environment (e.g.: hidden cameras, microphones in the home).
Tracking/stalking: Using technology to keep tabs on the location of the survivor.
Harassment: Sending unwanted contact to the survivor, including SMS or chat messages, social media contact, phone calls, or making visible comments on social media posts, etc.
Proxy harassment: Arranging for members of a common social network or strangers to harass the survivor or signing the survivor up for unwanted messages.
Disclosure: Disclosing private information online (sometimes called 'doxxing'), such as non-consensual intimate images (NCII, frequently known as 'revenge porn'), home address or phone number, sexual identity, HIV status, etc.
Impersonation: Pretending to be the survivor to cause reputational damage or to facilitate proxy harassment (e.g., pretending to be the survivor on a dating website and tricking people into visiting the survivor’s home).
Financial harms: Causing financial harm by accessing online bank accounts or by coercing spending through peer-to-peer payment and e-commerce apps (Venmo, CashApp, Amazon, etc.).
Most of these types of abuse have non-digital analogues, and abusers may use both digital and non-digital means to coercively control the survivor. This can make it challenging to diagnose technology issues based on symptoms alone, and consultants should keep in mind that technology isn’t the only plausible explanation for many harms.
For example, an abuser can cause financial harm by stealing money, by using a known credit card number, and/or by accessing an online bank account. Or, they could stalk a client by physically following them, by seeing pictures of the survivor at recognizable locations online, by covertly turning on location-sharing, by installing tracking software on a survivor's device, and/or by using a tracking device (a GPS or Bluetooth device, like an AirTag).
Types of intimate partner threats
There are unique considerations and dynamics to keep in mind when technology abuse occurs within the context of intimate partner violence. Due to the relationship, the abuser will have some amount of personal knowledge about the survivor, may be part of their social network, may have had (coerced or freely given) access to devices and accounts, and may be motivated by coercive control rather than simply financial gain. In this context, the typical methods by which an abuser causes these harms fall into a few broad categories of abuser techniques; note that these differ from the list above in that they describe how an abuser may cause harm:
Methods of Technology Abuse
Ownership-based access refers to problems that stem from the fact that the abuser may be the one who owns or sets up technology used by the survivor. For example, they may be the one who pays for a cellular phone plan or who set up the family's iCloud account.
Account compromise occurs when an abuser is able to access a survivor's online accounts, such as email, iCloud, or social media, most often by the simple expedient of knowing the password.
Device compromise arises when the abuser is able to access a device and unlock it, such as by knowing a PIN or password or by having a registered biometric such as a fingerprint. This allows them to reconfigure the device (e.g., add a new fingerprint, turn on location sharing), access sensitive data, and more.
Fake accounts / spoofing: The abuser may set up fake accounts online, disguise their phone number ('spoofing'), or use new, unrecognized emails or phone numbers. This doesn't require compromising accounts or devices, and usually arises in the context of online harassment or impersonation.
Use of IoT devices: The abuser may place devices such as GPS trackers, webcams, WiFi routers, and smart thermostats (often termed Internet of Things, or IoT, devices) in the survivor's home, vehicle, or workplace.
Core security concepts
Helping with technology abuse benefits from some understanding of key concepts in computer security. Many technology users may be familiar with some of these concepts, but here, we reframe them in the context of tech abuse. We focus on client-owned devices, abuser-owned devices, and client-owned accounts, and discuss fundamental properties of each that are relevant for mitigating harm.
Devices and their security
Device is a catch-all term for phones, computers, "smart" devices, and the "Internet-of-Things"; essentially anything that has computing built in. Devices consist of hardware plus software, and the combination of the two defines the functionality of the device. Phones can surf the Internet, take pictures, record sound, and more. Home devices like voice assistants can listen to conversations, perform Internet searches, or react to particular requests.
Operating Systems and Applications
Devices have operating systems (OS's). These are the lowest layer of software running on a device, and they control and limit the functionality of other software programs. For example, Windows is the OS running on most personal computers (PCs), and macOS runs on MacBooks and other Apple computers. On phones, the most common operating systems are iOS, which runs on iPhones, iPads, and other Apple products, and Android, which runs on most other phones. Other devices (tablets, IoT) are similar; in each case, you can install additional programs, like word processors or Internet browsers. These programs are often called "apps", short for applications.
The OS places limits on apps. For example, the OS will, by default, prevent one app installed on a phone (both spyware/stalkerware and other non-malicious apps) from reading other apps’ data without asking for explicit permission. What any app can or cannot do can be nuanced, and also evolves as OS’s change over time.
"Hacking" a device
The term "hacking" is used to describe a broad variety of activities. Here, we discuss what it means within the cybersecurity field as well as within lay usage.
Full device compromise
When security researchers talk about a "hacked device", they are most often referring to subverting the OS and taking full control over the software on the device. A compromised phone is said to be "rooted" (Android) or "jailbroken" (Apple). When this happens, the person doing the hacking can install software that deviates from the device's intended functionality. For example, a compromised OS could access the data of all apps installed and used on the device.
Jailbreaking or rooting, even when possible, requires physical access to the target device. Remote compromises, where an attacker sends a specially crafted message to compromise a device's OS, do exist, but are generally inaccessible to the public and, by extension, abusers. As a rule of thumb, for well-protected targets (popular OS's with good security teams, such as Apple, Android, and Windows), remotely exploitable software vulnerabilities require extensive resources to discover or buy.
While the news may breathlessly cover the latest "zero-day" vulnerabilities and hacks, such exploits are increasingly feasible only for specialized firms of security experts that do business exclusively with companies and governments. Of course, in rare cases an abuser may themselves be an employee at such a firm or otherwise have the rarefied expertise to perform remote exploits. Even then there are many limits to their "powers", and the threat can often be mitigated by resetting the device and updating it to the most recent version of the software.
Takeaway: Full device (OS-level) compromise can be fixed via a factory reset or purchasing a new device, but this may not fix spyware or unauthorized device access.
Spyware, stalkerware, and unauthorized device access
In summary, hacking a device in this sense requires rare, expert knowledge, exceptionally so for fully updated software. On the other hand, colloquial usage of the term 'hacking' often refers to covertly gaining access to a device, and this is often how clients use the term. This type of 'hacking' requires only the ability to unlock a device. Some devices can be unlocked by anyone, depending on how they are configured, such as a laptop or phone that does not require a password or biometric to wake it from sleep mode. Security practitioners refer to the means by which access is granted only to certain individuals as an authentication mechanism. Passwords are the traditional authentication mechanism, but increasingly devices use biometrics (fingerprints or face scans).
Unlike device hacking, the ability of an abuser to unlock a device is a widespread situation in IPV. When an abuser has access to a device, they can unlock it and then can utilize it via standard user interfaces (UIs) -- the same features and functionality that a regular user utilizes. Sometimes people refer to abusers in this case as UI-bound: their bad actions are limited to the functionality the device provides.
Unfortunately, almost every device has functionality that can be repurposed for tech abuse. Two high-level categories for repurposing include reconfiguring existing features and adding new apps.
Examples of reconfiguration are plentiful. For example, an abuser might change the settings for authentication mechanisms by resetting a password or enabling their fingerprint to unlock the device, or they might change the settings for OS-provided location tracking features or another location tracking app.
Abusers may also add new, unwanted apps to a target survivor's device. A commonly discussed class is IPV spyware (also called stalkerware), which can monitor the device's use quite pervasively, including location tracking and, in some cases, theft of information from the device such as text messages.
UI-bound abusers who install unwanted apps or reconfigure the OS or apps can be damaging, and clients will often call this "hacking". While it's fine to meet clients where they are in terms of terminology, keep in mind that the more common UI-bound adversaries do not achieve full device compromise. This has implications for the abuser's capabilities and for remediations: removing an unwanted app prevents its use, and changing an unwanted OS configuration setting fixes it. Importantly, resetting a device will help with a "hacked" (fully compromised) device but may not get rid of an unwanted app that is tied to the client's account (e.g., an iCloud or Google Play account), as the unwanted app may simply be re-downloaded when the client logs back in.
Takeaway: Abusers with access to a device can install apps or reconfigure existing tools to hinder survivor safety. Helping a client remove unwanted apps or change configurations can mitigate these threats, but a full factory reset or a new device may not be effective.
Electronic monitoring devices
Clients are often understandably concerned that an abuser may have placed external devices intended to monitor their activities in their personal space without the client's knowledge or consent. These often fall into the categories of either "Internet of Things" devices or Bluetooth trackers, including GPS trackers planted in a vehicle. Uncovering such devices may be difficult, as they are often very small or well-hidden. The challenge of discovering physical devices is compounded by clinic practices that prevent technology consultants from helping clients physically search their personal spaces.
However, some basic understanding of how such devices work can help technology consultants remotely screen for them and may help assuage client concerns.
Understanding how recording devices, such as mini-cameras or hidden microphones, operate can help with identifying creative solutions for finding them. Such devices require:
a memory or storage unit to store recordings,
a power source (either a battery or an electric outlet),
and, frequently, a network connection to transmit the recording (either a short-range Bluetooth connection, or a long-range WiFi or cabled Internet connection).
If not connected to a network, then the abuser must be able to physically access the device in order to see the data that is stored in it.
Bluetooth connections also require the abuser to be within a few hundred feet of the device in order to 'link' with the device.
Given the amount of data that is picked up by a camera or microphone and the limited range of Bluetooth, a recording device will most likely be connected to the Internet, usually via the client's home WiFi. The client can therefore be instructed to check which devices are connected to their Internet account (either through a third-party app such as Fing or through their account with their provider). Likewise, changing their WiFi password will 'knock off' any connected recording devices. Similarly, since recording is a power-intensive operation, many devices will be plugged into an outlet, and consultants can also advise clients to check all outlets for devices.
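For technically inclined consultants, a rough list of devices on a home network can also be obtained from a laptop connected to the same WiFi. The sketch below parses text in the format produced by the standard `arp -a` command, which lists devices the laptop has recently seen on the local network; the sample output shown is hypothetical, and a real check would run the command itself.

```python
import re

def parse_arp_table(arp_output: str) -> list[dict]:
    """Parse `arp -a` style output into name/ip/mac records."""
    devices = []
    # Typical line: "router.lan (192.168.1.1) at ab:cd:ef:01:23:45 [ether] on en0"
    pattern = re.compile(
        r"^(?P<name>\S+) \((?P<ip>[\d.]+)\) at (?P<mac>[0-9a-f:]{17})", re.I
    )
    for line in arp_output.splitlines():
        m = pattern.match(line.strip())
        if m:
            devices.append(m.groupdict())
    return devices

# Hypothetical sample output; on a real machine one would instead capture
#   subprocess.run(["arp", "-a"], capture_output=True, text=True).stdout
sample = """\
router.lan (192.168.1.1) at ab:cd:ef:01:23:45 [ether] on en0
? (192.168.1.37) at 11:22:33:44:55:66 [ether] on en0
"""
for device in parse_arp_table(sample):
    print(device["ip"], device["mac"])
```

Any entry the client cannot match to a device they recognize is a candidate for follow-up, though note that `arp -a` only shows devices active recently, so an all-clear is not definitive.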
Takeaway: Recording devices either require an Internet connection or for the abuser to have physical proximity to the client, so many devices can be disabled by changing the client's Internet password.
GPS and Location Tracking Devices
For clients who are concerned that a device is being used to track their location, it can be difficult to ascertain whether such a tracking device is being used. Tracking devices, unlike recording devices, do not have substantial power requirements, and can last for years on a battery. Scanning for tracking devices is also difficult, as a tracking device can broadcast its location to the abuser without an Internet connection and without being actively connected.
Bluetooth scanners will surface tracking devices, but will also likely surface many benign false positives, without enough information to distinguish them from actual threats. Clients and consultants can work together to develop creative safety plans, including seeking out information about the most common tracking devices and how to detect them, but until universal tracker detection technology is available, there are limited options for addressing survivor concerns about tracking devices. Given the rapidly changing nature of this area of technology, we refrain from offering more specific advice.
Accounts and their security
Online accounts are key components of our digital lives. Email, social media, work websites, banking, and so much more --- each has an associated account. Creating and using an online account typically requires a username and a method of authentication to verify who is accessing the account, such as a password. Your username is often, but not always, an email address.
Accounts are a prime target for abusers, likely due to the level of intrusiveness access can give them and the relative ease of remote compromise. Unlike devices, accounts are designed so that one can access them from anywhere, on any device --- assuming one can authenticate.
Authenticating online accounts
Authentication mechanisms for accounts are still predominantly password-based, though this has been evolving. We now see several forms of authentication:
Password/PIN authentication in which a sequence of characters or numbers grants access. Modern accounts may have requirements about the 'strength' of the password, such as minimum length, types of included characters, etc.
Email-based authentication in which a challenge (usually a numerical code or a URL to click) is sent to an email address associated with the account.
Phone-based authentication in which a code is texted to a phone number or delivered by an automated phone call.
Personal knowledge authentication in which you must provide answers to questions such as “what is your mother’s maiden name?” or “what city were you born in?”
Authenticator apps or previously registered "trusted devices", in which a code is generated by an app or a prompt to allow access is shown on a device the user has already registered.
Biometric authentication in which an application grants access by recognizing the user's face or fingerprint.
Multi-factor authentication (MFA) or 2-Factor Authentication (2FA) in which an account requires a user to pass through two or more of the above authentication mechanisms, such as entering the correct password and receiving a verification code via text message. Most often, only two forms are required, hence the special case of 2FA.
Authentication as an abuse mechanism
Accounts allow users to configure how they authenticate, such as by setting which biometrics are recognized or which devices are 'trusted devices'. These configurations are often where problems arise. Abusers may reconfigure authentication approaches, for example by using their access to turn off 2FA, or by adding their own phone number, email address, or device as a trusted one. Reconfiguring authentication settings can grant covert access to a wealth of information, or even lock the survivor out of their own accounts. Consequently, researching the authentication settings for online accounts that are a cause for concern and reviewing them together can be helpful for many survivors.
Identifying unauthorized, authenticated log-ins
It’s helpful to understand a bit more about how logins work. Generally, users can login to a service via a web browser (by typing the URL into e.g. Safari or Chrome) or through a dedicated app solely for that service (such as the Venmo app). In either case, after a successful log-in, the browser or app stores a small piece of information. This small piece of information is called a cookie. It is used to identify that this browser or app was recently authenticated, so that the user does not need to keep authenticating.
Some services have features to help users determine which browsers or apps have recently logged in, and which can still access the service. The web service keeps a list of the apps and browsers it has given a cookie to, and shows that list to the user. This is quite valuable since it can provide insight into who is accessing a service.
For example, if the survivor sees a device logged in that matches the abuser’s (e.g.: a particular version of a phone) this may be evidence that the abuser has accessed it. Sometimes these lists also show the time and approximate location of the device when login occurred. Documenting such information by downloading it or taking a screenshot may be helpful for survivors who are involved in legal proceedings. Even if the survivor is not attempting to gather evidence, it can help with safety planning to understand what information the abuser had access to and when they had it.
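To make the login-list idea concrete, the following toy sketch models how a service might record one session per issued cookie and let the account owner review and revoke them. All names here (`SessionStore`, `log_in`, `revoke`) are hypothetical and do not correspond to any real service's API.

```python
import secrets
import time

class SessionStore:
    """Toy model of how a service tracks issued login cookies."""

    def __init__(self):
        self._sessions = {}  # cookie value -> session metadata

    def log_in(self, device: str, location: str) -> str:
        """On successful authentication, issue a random cookie and record it."""
        cookie = secrets.token_hex(16)
        self._sessions[cookie] = {
            "device": device,
            "location": location,
            "logged_in_at": time.time(),
        }
        return cookie

    def active_sessions(self) -> list[dict]:
        """What a 'recent logins' page displays to the account owner."""
        return list(self._sessions.values())

    def revoke(self, cookie: str) -> None:
        """'Log out' a device by deleting its cookie server-side."""
        self._sessions.pop(cookie, None)

store = SessionStore()
own = store.log_in("iPhone 15", "Chicago, IL")
other = store.log_in("Pixel 7", "Unknown city")  # unrecognized device
print([s["device"] for s in store.active_sessions()])  # both sessions listed
store.revoke(other)  # "log out other devices" deletes the cookie server-side
print([s["device"] for s in store.active_sessions()])  # only the owner's remains
```

This is why the "log out other devices" button found on many services is effective: once the service deletes the cookie on its side, the abuser's browser or app must re-authenticate, which fails if the password has also been changed.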
Finally, most services, though not all, have one or more mechanisms for account recovery. This is meant to allow the legitimate user to regain access to their account should they forget their password or otherwise have problems authenticating in the normal way. Account recovery mechanisms typically involve sending a code to a designated email, phone number, or device; the term “recovery phone” or “recovery email” is how configurations often describe these. Security questions are sometimes also a way to recover an account.
Account recovery is a frequent “backdoor” for accessing an account. It can be useful to check an account’s settings to ensure that recovery emails/phone numbers are controlled by the survivor, rather than the abuser, or to help them navigate the account recovery process to regain access.
Security tools to mitigate harassment
Many common tech safety problems don't involve the abuser having to obtain access to a client's devices or accounts. Instead, they involve, for example, posting harmful content online from accounts set up and controlled by the abuser.
Tools available to clients and those working on their behalf include:
Blocking mechanisms that allow a client to prevent content or accounts from interacting with them. For example, most phones allow blocking particular numbers, and most social media platforms can block particular accounts from sending content to yours.
This may not prevent the abuser from using a spoofed account or phone number, such as a fake account or an app that allows the call to seem like it's coming from a different, even trusted, phone number.
Screening mechanisms such as using virtual phone numbers (e.g. Google Voice) as a 'safe' number or coordinating pass phrases with trusted contacts.
Reporting content to companies. Many companies allow reporting content or accounts, particularly in the context of social media. Whether content or accounts will be removed or banned is often up to the peculiarities of company policy and its implementation. Some companies have 'trusted flagger' statuses that clinics can attain to escalate content reports.
Takedown requests are a special type of report asking an Internet service provider or search engine to stop displaying content. There are also professional services that issue takedown requests, and clinics may develop relationships with them that allow their clients to obtain free services (CETA, at the time of writing, is able to provide a limited number of licenses to clients for such a service). In other cases, it helps to have lawyers assist with this effort.
Being prepared to help with unfamiliar technology
No consultant can be familiar with all the kinds of technology that will arise in discussions with clients. This is true even for technology experts -- the number of possible apps, devices, and other artifacts is too large and changes rapidly. Coping with the diversity and evolution of technology is a key challenge for clinics. Here we provide some advice for structuring a clinic to meet this challenge.
Normalize the necessity of researching problems: A clinic can normalize the need for consultants to look up information, either in the moment while helping a client or doing research between appointments. This includes telling clients that the consultant needs to do some research to try to help answer the question.
Assess reputability of advice: A lot of online advice is bad. Clinics should try to cultivate a sensibility about what are trusted sources of information and how sources of information map (or not) onto typical abuse threat models. This can be useful not only for consultants but also for them to help inform clients they serve about good sources of advice.
Develop connections within the tech community: A key research resource can be a network of people to whom technical questions can be posed. Clinics might consider recruiting tech workers as consultants, and/or seek out connections with tech workers who can be available as resources. CETA, for example, uses a chat platform with a broad range of technologists who have made themselves available to answer questions about technology issues from CETA consultants (without disclosing identifying client information).
Document common issues and solutions: Writing down common situations, and, ideally, sharing them with other support organizations can help build up a body of knowledge. For example, CETA has a number of public guides for common issues on its resource page along with other internal tools like the Technology Assessment Questionnaire and Tech Safety Checklists referenced in Chapter 7, available in Appendix II: Resources.