Why tech worker resistance is crucial to preventing large-scale human rights abuses in the U.S.

Today I signed the “Never Again” pledge along with hundreds of other tech workers. We pledged to take a variety of concrete actions to stop the U.S. government from using databases to target people for human rights abuses. One of those actions is “We refuse to participate in the creation of databases of identifying information for the United States government to target individuals based on race, religion, or national origin.”

Some people have criticized this pledge as empty because they claim these databases already exist and are available to anyone with the money to buy them. They often back this up with a screenshot from a commercial marketing data broker listing a few hundred thousand phone numbers or emails. They argue that tech workers are just fooling themselves by thinking that their actions as individuals matter now, after these databases have been created.

I believe that the resistance of individual tech workers against the creation and use of databases like this is highly relevant. I’ll briefly summarize my argument, then I’ll tell you my personal experience of working with one of these databases. I will finish up by going into detail about the lessons I learned from that experience.

TL;DR version: Many commercial databases are low quality and barely usable for the purposes of large-scale human rights abuses like mass deportations by race, religion, or national origin. Higher quality databases are expensive to create and update, and tend to be highly protected. Any existing databases require maintenance, support, and tools to keep them up to date and make them usable. All of these things are provided by tech workers. By refusing to do these things, we can materially block, slow down, and frustrate attempts to commit large-scale human rights abuses by the U.S. government.

Now for my personal experience with one of these databases. A few months ago, I volunteered with a political organization. My job was to send text messages to thousands of voters of a particular ethnicity in swing states in order to encourage them to vote in the U.S. presidential election. To do this, I used a computer-based tool to send and reply to text messages. The list of phone numbers we sent text messages to was bought from one of the commercial marketing data brokers. The text messages we sent included the purported first name of the person owning the phone number.

The first thing I noticed is that the most common reply we got (after no reply at all) was “I think you have the wrong number.” Many of the people with these phone numbers did not even match the names that we were given to go along with them, and if the people who owned them matched our target ethnicity and location, it was only by accident. I also noticed that a lot of the people we were texting were not of the ethnicity that we were targeting. We had one set of text messages that asked this question explicitly, but people also volunteered this information in their replies (sometimes using abusive Twitter hashtags).

We almost immediately started having problems with the software we were using. Some of the problems were volunteers having difficulty understanding how to use the software, but there were also out-and-out bugs that caused serious problems that couldn’t be fixed by users. The software programmers who wrote the text messaging tools had to make emergency fixes and edit the databases during our volunteer session. We ended up switching software tools entirely at one point. As one of the few tech-savvy volunteers, I spent a lot of time helping other volunteers figure out how to use the software and work around bugs.

This is just one person’s experience working with one database of people by ethnicity, and I’m sure there are better ones out there. But I also have over ten years of experience with data, software, and Murphy’s Law. Here are my beliefs about the role of tech workers in using existing databases of people by race, religion, or national origin:

  1. Many commercial databases are incomplete and error-riddled. These databases leave out a lot of people who should be in them, and include a lot of people who shouldn’t. This is fine if you are sending a mass marketing email, or targeting a Facebook ad. But if you want to send thugs to the doors of every person in that group (and not to people who aren’t in that group), you’ll need to put in a lot of work. Correcting these errors is extremely expensive because it takes human work and intelligence. (For example, the U.S. Census employs hundreds of thousands of temporary workers to create its gargantuan dataset.) It will require the cooperation of many individuals to make these databases usable for the purpose of deportation or other violations of human rights. We can refuse to do that.
  2. Databases of personally identifiable information need to be updated frequently. I’ve moved over a dozen times in my life. The DMV’s record for my address has been wrong more years than it’s been right. Gamergate can’t even get my address and phone number correct when I post it on my company web site. Updating these databases to reflect moves, changes in locations, new phone numbers, changes of religion, marriages, births, deaths, etc. will take ongoing support – from tech workers. We can refuse to do that.
  3. Higher quality databases tend to already have systems in place to make them harder to abuse. For example, the personally identifiable information in the U.S. Census data is protected by federal laws, and every person who has access to it has sworn for life to protect the confidentiality of that data. Will that prevent it from being misused? Ha ha, no – but it underlines the importance of individuals refusing to be complicit in human rights abuses. A limited number of people can turn this data over for use by a human rights abusing regime, and they have already thought deeply about their personal responsibility in this situation. They can refuse to do that, and we can stand in solidarity with them.
  4. Databases of millions of people require tech support to use. Even if we had access to a magical database that updated itself with the name, location, ethnicity, religion, and immigration status of every human in the U.S., we would still need tech workers to build and maintain and run the tools to use that data. We would need tech support to help people use the tools. We would need technical writers to document the tools. We can refuse to do that.

I’m not one of the people who seriously believes that the cost of deporting millions of people will deter the Trump administration from doing it (one easy way to reduce costs: don’t deport people humanely). But history tells us that, whether you do it humanely or not, this kind of large-scale human rights abuse requires huge numbers of people working together with the full knowledge that they are committing human rights abuses. Tech workers are a crucial part of this system, and if enough of them refuse to do that work, we can have an impact on history.

In the end though, I believe the indirect effects of this pledge may be even more powerful than the direct effects. Tech workers are notoriously difficult to organize, so when we do act in concert, it’s a newsworthy event. In my experience, tech company executives will pay close attention to any cause powerful enough to get tech workers to pledge solidarity with each other and with the most vulnerable in society.

DARPA contracts vs. dreams

A few years ago, I dreamed that I was walking into a giant underground bunker with a bunch of other scientists. Through crystal-clear dream logic, I immediately understood that I had joined an NSA project to re-implement modern computer hardware and software, starting with individual transistors.

In my dream, the NSA was worried about a Thompson-style backdoor in their hardware, even the hardware they designed and implemented themselves. Even visual inspection of the hardware design wouldn’t necessarily reveal the backdoor, because the backdoored hardware would build the backdoor into any new hardware it was used to design. (This isn’t feasible in exactly this form in reality, but it was a dream, give me a break.)
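For concreteness, the Thompson attack the dream riffs on – a compiler (or, in the dream, hardware) that re-injects its own backdoor – can be sketched in a few lines of toy Python. Everything here is my own illustration: the “compiler” just transforms source text, the function names are made up, and the “backdoor” is a harmless print statement.

```python
# Toy sketch of a Thompson-style self-propagating backdoor, in the
# spirit of "Reflections on Trusting Trust". Purely illustrative.

BACKDOOR = 'print("backdoor active")  # injected payload'

def trojan_compile(source: str) -> str:
    """Pretend to 'compile' source text, splicing in a backdoor.

    Case 1: compiling the target program (anything defining `login`)
            -> insert the backdoor payload.
    Case 2: compiling the compiler itself -> in the real attack, the
            trojan recognizes its own source and re-emits the injection
            logic (quine-style), so even a pristine compiler source
            yields a trojaned compiler binary. Sketched as a comment
            here, since a full quine would obscure the idea.
    """
    if "def login(" in source:
        # Case 1: inject the payload just before the login function.
        return source.replace("def login(", BACKDOOR + "\ndef login(", 1)
    if "def trojan_compile(" in source:
        # Case 2: a real trojan would re-insert itself here.
        return source
    return source

# A perfectly clean login program...
clean_login = "def login(user):\n    return user == 'admin'"
# ...comes out of the compiler with the backdoor baked in.
compiled = trojan_compile(clean_login)
```

The point of case 2 is what makes the attack (and the dream’s bootstrap-from-a-transistor response) so nasty: auditing the compiler’s source, or the hardware’s design files, tells you nothing, because the clean source still passes through a compromised build step.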

So obviously, we had to re-implement computers from scratch, starting with something too small to have a backdoor – a single transistor. The thing I remember most vividly from this dream is how happy I was, getting to bootstrap computers from the transistor up – yay! Even if I have to live underground for 5 years!

I was reminded of this dream while reading Charlie Stross’s plea for crazy military ideas he hadn’t already heard of. Someone pointed out the DARPA Trust in Integrated Circuits program (described in IEEE as “The Hunt for the Kill Switch”). Some American general noticed that we were building fighter jets out of foreign computer chips and convened a panel which concluded that we had to spend a lot of money trying to find backdoors and kill switches in hardware.

One can consider the F00F bug – an invalid instruction that hung Pentium processors – to be an unintentional version of the kill switch. If Intel can’t find unintentional “kill switches” in its own chips, what hope does some DARPA contractor have of finding an intentionally created and hidden kill switch or backdoor? I think my dream (literally and figuratively) of a secret underground NSA computing bootstrap project is more feasible, or at least more likely to succeed.

Does anyone have interesting links or ideas related to detecting (or planting) Thompson-style backdoors? One could imagine the techniques would be transferable in some way to hardware, given that hardware design is done almost entirely in hardware description languages – software, of a sort.