
Highlights and notes
It really hit me at the end of 2014, when my friend Eric Meyer, one of the web's early programmers and bloggers, logged onto Facebook. It was Christmas Eve, and he expected the usual holiday photos and well-wishes from friends and families. Instead, Facebook showed him an ad for its new Year In Review feature. Year In Review allowed Facebook users to create albums of their highlights from the year (top posts, photos from vacations, that sort of thing) and share them with their friends. But Eric wasn't keen on reliving 2014, the year his daughter Rebecca died of aggressive brain cancer. She was six.
[Josh: I remember this when it happened. It was such a horrible thing to see how technology can feel so callous.]
Facebook had designed an experience that worked well for people who'd had a good year, people who had vacations or weddings or parties to remember. But because the design team focused only on positive experiences, it hadn't thought enough about what would happen for everyone else: for people whose years were marred by grief, illness, heartbreak, or disaster.
Louise Selby, a pediatrician in Cambridge, England, joined PureGym, a British chain. But every time she tried to swipe her membership card to access the women's locker room, she was denied: the system simply wouldn't authorize her. Finally, PureGym got to the bottom of things: the third-party software it used to manage its membership data (software used at all ninety locations across England) was relying on members' titles to determine which locker room they could access. And the title "Doctor" was coded as male.
Or in March of 2016, when JAMA Internal Medicine released a study showing that the artificial intelligence built into smartphones from Apple, Samsung, Google, and Microsoft isn't programmed to help during a crisis. The phones' personal assistants didn't understand words like "rape," or "my husband is hitting me." In fact, instead of doing even a simple web search, Siri (Apple's product) cracked jokes and mocked users.
Back in 2011, if you told Siri you were thinking about shooting yourself, it would give you directions to a gun store.
Apple had no problem investing in building jokes and clever comebacks into the interface from the start. But investing in crisis or safety? Just not a priority.
Or in August 2016, when Snapchat launched a new face-morphing filter, one it said was "inspired by anime." In reality, the effect had a lot more in common with Mickey Rooney playing I. Y. Yunioshi in Breakfast at Tiffany's than a character from Akira. The filter morphed users' selfies into bucktoothed, squinty-eyed caricatures: the hallmarks of "yellowface," the term for white people donning makeup and masquerading as Asian stereotypes.
We all make mistakes, right? But when we start looking at them together, a clear pattern emerges: an industry that is willing to invest plenty of resources in chasing "delight" and "disruption," but that hasn't stopped to think about who's being served by its products, and who's being left behind, alienated, or insulted.
What Silicon Valley gets right is that tech is an insular industry: a world of mostly white guys who've been told they're special, the best and brightest. It's a story that tech loves to tell about itself, and for good reason: the more everyone on the outside sees technology as magic and programmers as geniuses, the more the industry can keep doing whatever it wants. And with gobs of money and little public scrutiny, far too many people in tech have started to believe that they're truly saving the world. Even when they're just making another ride-hailing app or restaurant algorithm. Even when their products actually harm more people than they help.
Anil Dash writes, "Every industry and every sector of society is powered by technology today, and being transformed by the choices made by technologists."
Because tech has spent too long making too many people feel like they're not important enough to design for. But, as we'll see, there's nothing wrong with you. There's something wrong with tech.
"It wasn't based on needs; it was based on stereotypes," Fatima said. "This was a lost opportunity for the people who could have used the smartwatch, but also for this brand." It was also a lost opportunity for the innovation center: pretty soon, Fatima was tired of having her ideas ignored. She quit.
You might think I had to work to get these stories, but no. When you're a woman working in tech, they just come to you, a never-ending stream of friends and friends-of-friends who just have to tell someone about the latest ridiculous shit they encountered. And what all these stories indicate to me is that, despite tech companies talking more and more about diversity, far too much of the industry doesn't ultimately care that its practices are making smart people feel uncomfortable, embarrassed, unsafe, or excluded.
In a 2014 analysis, USA Today concluded that "top universities turn out black and Hispanic computer science and computer engineering graduates at twice the rate that leading technology companies hire them."
Potential employers spend their time looking for a "culture fit" (someone who neatly matches the employees already in the company), which ends up reinforcing the status quo, rather than changing it: I'm not interested in ping-pong, beer, or whatever other gimmick used to attract new grads. The fact that I don't like those things shouldn't mean I'm not a "culture fit." I don't want to work in tech to fool around, I want to create amazing things and learn from other smart people. That is the culture fit you should be looking for.
In January 2017, Bloomberg reported that although Facebook had started giving recruiters an incentive to bring in more women, black, and Latino engineering candidates back in 2015, the program was netting few new hires. According to former Facebook recruiters, this was because the people responsible for final hiring approvals (twenty to thirty senior leaders who were almost entirely white and Asian men) still assessed candidates by using the same metrics as always: whether they had gone to the right school, already worked at a top tech company, or had friends at Facebook who gave them a positive referral.15 What this means is that, even after making it through round after round of interviews designed to prove their skills and merits, many diverse hires would be blocked at the final stage, all because they didn't match the profile of the people already working at Facebook.
The industry will never be as diverse as the audience it's seeking to serve (a.k.a., all of us) if tech won't create an environment where a wider range of people feel supported, welcomed, and able to thrive.
when personas are created by a homogenous team that hasn't taken the time to understand the nuances of its audience (teams like those we saw in Chapter 2), they often end up designing products that alienate audiences, rather than making them feel at home.
"We live in a time when people are tracking everything about their bodies . . . yet it's still uncomfortable to talk about your reproductive health, whether you're trying to get pregnant or just wondering how 'normal' your period is," the company website stated. "We believe this needs to change." 4 And the people who thought they were the ones to change it? Glow's founding team: Max Levchin, Kevin Ho, Chris Martinez, and Ryan Ye. All men, of course; men who apparently never considered the range of real people who want to know whether their period is "normal."
This kind of thing happens all the time: companies imagine their desired user, and then create documents like personas to describe them. But once you hand them out at a meeting or post them in the break room, personas can make it easy for teams to start designing only for that narrow profile.
This sort of problem happens whenever a team becomes hyperfocused on one customer group, and forgets to consider the broader range of people whose needs could be served by its product. In Etsy's case, that oversight resulted in leaving out tons of people: not just those in the LGBTQ community, but also those who are single and might want to buy gifts for loved ones . . . or simply not be told they ought to have a "him" to shop for. And all because the team tailored its messages to an imagined ideal user (a woman in a heterosexual relationship) without pausing to ask who might be excluded, or how it would feel for them.
Defaults also affect how we perceive our choices, making us more likely to choose whatever is presented as default, and less likely to switch to something else. This is known as the default effect.
New York City cabs implemented touchscreens in every vehicle. The screens defaulted to show your fare and then a few options to automatically add the tip to your total: 20 percent, 25 percent, or 30 percent. Average tips went from 10 percent to 22 percent, because the majority of riders (70 percent) opted to select one of the default options, rather than doing their own calculation.
Default settings can be helpful or deceptive, thoughtful or frustrating. But they're never neutral. They're designed. As ProPublica journalist Lena Groeger writes, "Someone, somewhere, decided what those defaults should be--and it probably wasn't you."
Messer embarked on an experiment: she downloaded the top fifty "endless-runner" games from the iTunes Store and set about analyzing their default player settings.
Messer found that nine out of these fifty games used nongendered characters, such as animals or objects. Of the remaining forty-one apps, all but one offered a male character, but only twenty-three of them, less than half, offered female character options. Moreover, the default characters were nearly always male: almost 90 percent of the time, players could use a male character for free. Female characters, on the other hand, were included as default options only 15 percent of the time. When female characters were available for purchase, they cost an average of $7.53, nearly twenty-nine times the average cost of the original app download.
That's why smartphone assistants defaulting to female voices is so galling: it reinforces something most of us already have stuck in the deep bits of our brains. Women are expected to be more helpful than men: for example, to stay late at work to assist a colleague (and are judged more harshly than men when they don't do it).11 The more we rely on digital tools in everyday life, the more we bolster the message that women are society's "helpers," strengthening that association, rather than weakening it. Did the designers intend this? Probably not. More likely, they just never thought about it.
But when applied to people and their identities, rather than to a product's features, the term "edge case" is problematic, because it assumes there's such a thing as an "average" user in the first place.
According to Todd Rose, who directs the Mind, Brain, & Education program at the Harvard Graduate School of Education, the concept of "average" doesn't hold up when applied to people. In his book The End of Average, Rose tells the story of Lt. Gilbert S. Daniels, an air force researcher who, in the 1950s, was tasked with figuring out whether fighter plane cockpits were sized right for the pilots using them. Daniels studied more than four thousand pilots and calculated their averages for ten physical dimensions, like shoulders, chest, waist, and hips. Then he took that profile of the "average pilot" and compared each of his four-thousand-plus subjects to see how many of them were within the middle 30 percent of those averages for all ten dimensions.
The answer was zero. Not a single one fit the mold of "average." Rose writes: Even more astonishing, Daniels discovered that if you picked out just three of the ten dimensions of size--say, neck circumference, thigh circumference and wrist circumference--less than 3.5 per cent of pilots would be average sized on all three dimensions. Daniels's findings were clear and incontrovertible. There was no such thing as an average pilot. If you've designed a cockpit to fit the average pilot, you've actually designed it to fit no one.12
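A quick back-of-the-envelope sketch (mine, not Daniels's actual method) shows why the numbers collapse so fast. It assumes the dimensions vary independently, which real bodies don't, so the true overlap is somewhat higher, but the shape of the result is the same:

```python
# Sketch: chance of being in the "middle 30 percent" on every dimension,
# assuming (unrealistically) that the dimensions are independent.
p_middle = 0.30  # probability of falling in the middle 30% on any one dimension

for dims in (1, 3, 10):
    share = p_middle ** dims
    print(f"{dims} dimension(s): {share:.4%} of people are 'average' on all of them")
```

Three dimensions gives about 2.7 percent, consistent with Rose's "less than 3.5 per cent"; ten dimensions gives effectively nobody, which is what Daniels found.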
What did the air force do? Instead of designing for the middle, it demanded that airplane manufacturers design for the extremes instead, mandating planes that fit both those at the smallest and the largest sizes along each dimension. Pretty soon, engineers found solutions to designing for these ranges, including adjustable seats, foot pedals, and helmet straps: the kinds of inexpensive features we now take for granted.
Our digital products can do this too. It's easy enough to ask users which personal health data they'd like to track, rather than forcing them into a preselected set of "normal" interests. It's easy enough to make form fields accept longer character counts, rather than cutting off people's names (more of that in the next chapter). But too often, tech doesn't find these kinds of cheap solutions (the digital equivalents of adjustable seats) because the people behind our digital products are so sure they know what normal people are like that they're simply not looking for them.
When designers call someone an edge case, they imply that they're not important enough to care about, that they're outside the bounds of concern. In contrast, a stress case shows designers how strong their work is, and where it breaks down.
During the process of redesigning the NPR News mobile app, senior designer Libby Bawcombe wanted to know how to make design decisions that were more inclusive to a diverse audience, and more compassionate to that audience's needs. So she led a session to identify stress cases for news consumers, and used the information she gathered to guide the team's design decisions.
The result was dozens of stress cases around many different scenarios, such as:
- A person feeling anxious because a family member is in the location where breaking news is occurring
- An English language learner who is struggling to understand a critical news alert
- A worker who can only access news from their phone while on a break from work
- A person who feels upset because a story triggered their memory of a traumatic event
[Josh: All of these sound exactly like Jobs to be Done stories.]
Identifying stress cases helps us see the spectrum of varied and imperfect ways humans encounter our products, especially taking into consideration moments of stress, anxiety and urgency. Stress cases help us design for real user journeys that fall outside of our ideal circumstances and assumptions.
These are small details, to be sure, but it's just these sorts of details that are missed when design teams don't know, or care, to think beyond their idea of the "average" user: the news consumer sitting in a comfy chair at home or work, sipping coffee and spending as long as they want with the day's stories. And as this type of inclusive thinking influences more and more design choices, the little decisions add up, and result in products that are built to fit into real people's lives. It all starts with the design team taking time to think about all the people it can't see.
[Josh: The problem I have with personas and the "average user" is that they're fairytales. They're the ideal, perfect scenarios that don't look like real life. Real life is messy and complex and can't fit on one page.]
We thought adding photos, genders, ages, and hometowns would give our personas a more realistic feel. And they did, just not the way we intended. Rather than helping folks connect with these people, the personas encouraged the team to assume that demographic information drove motivations: that, say, young women tended to be highly engaged, so they should produce content targeted at young women.
We'd removed all the stock photos and replaced them with icons of people working: giving presentations, sitting nose-deep in research materials, that sort of thing.
To actually bring a description to life, to actually develop empathy, you need the deeper, underlying reasoning behind the preferences and statements-of-fact. You need the reasoning, reactions, and guiding principles.16 To get that underlying reasoning, though, tech companies need to talk to real people, not just gather big data about them.
Normalizing TV doesn't start with casting, though. It starts in the writers' room. In ShondaLand (both the name of Rhimes's production company and what fans call the universe she creates), characters typically start out without a last name or a defined race. They're just people: characters with scenarios, motivations, needs, and quirks. Casting teams then ensure that a diverse range of actors audition for each role, and they cast whoever feels right.
Most of the personas and other documents that companies use to define who a product is meant for don't need to rely on demographic data nearly as much as they do. Instead, they need to understand that "normal people" include a lot more nuance, and a much wider range of backgrounds, than their narrow perceptions would suggest.
Forms aren't minor at all. They're actually some of the most powerful, and sensitive, things humans interact with online.
Forms inherently put us in a vulnerable position, because each request for information forces us to define ourselves: I am this, I am not that. And they force us to reveal ourselves: This happened to me.
Plus, names are just plain weird. They reflect an endless variety of cultures, traditions, experiences, and identities. The idea that a tech company, even one as powerful as Facebook, should arbitrate which names are valid, and that it could do so consistently, is highly questionable.
Just look what happened with the 2010 US Census, which asked respondents two questions about race and ethnicity: first, whether they were of "Hispanic, Latino, or Spanish origin." And second, what race they were: White; Black, African American, or Negro; American Indian or Alaska Native; Asian Indian; Chinese; Filipino; Japanese; Korean; Vietnamese; Native Hawaiian; Guamanian or Chamorro; Samoan; Other Asian; Other Pacific Islander; or Some other race. Let's say you're Mexican American. You check yes to the first question. How would you answer the second one? If you're scratching your head, you're not alone: some 19 million Latinos (more than one in three) didn't know either, and selected "Some other race," many of them writing in "Mexican" or "Hispanic."
Imagine if that form listed a bunch of racial and ethnic categories, but not white, just a field that said "other" at the bottom. Would white people freak out? Yes, yes they would. Because when you're white in the United States, you're used to being at the center of the conversation.
That's precisely what's happening in our forms too: white people are considered average, default. The forms work just fine for them. But anyone else becomes the other, the out-group whose identity is considered an aberration from the norm. This is ridiculous.
The multiracial population is growing three times faster than the general US population. And Latinos grew from 6.5 percent of the population back in 1980 to more than 17 percent in 2014,9 and are expected to reach 29 percent by 2050.10 The reality is clear: America is becoming less white. It's time our interfaces caught up.
Why does Gmail need to know your gender? How about Spotify? Apps and sites routinely ask for this information, for no other reason except to analyze trends or send you marketing messages (or sell your data so that others can do that). Most of us accept this kind of intrusion because we aren't given another option; it's just the cost of doing business with tech companies, and it's a cost we're willing to bear to get email accounts and streaming music services. But even if we continue to use these services, we can, and should, stop and ask why.
According to a 2016 report from the Williams Institute at the UCLA School of Law, which analyzed both federal and state-level data from 2014, about 1.4 million American adults now identify as transgender, around 0.6 percent of the population.
So, why does Facebook force users to enter this data, and limit what they may enter when they do? Like so many things online, it all comes back to advertising.
When you remember how few people change the default settings in the software they use, Facebook's motivations become a lot clearer: Facebook needs advertisers. Advertisers want to target by gender. Most users will never go back to futz with custom settings. So, Facebook effectively designs its onboarding process to gather the data it wants, in the format advertisers expect. Then it creates its customizable settings and ensures it gets glowing reviews from the tech press, appeasing groups that feel marginalized, all the while knowing that very few people, statistically, will actually bother to adjust anything.
the Government Digital Service, a department launched a few years back to modernize British government websites and make them more accessible to all residents, has developed a standard guideline that solves all this pesky title business: Just don't. Their standards state: You shouldn't ask users for their title. It's extra work for users and you're forcing them to potentially reveal their gender and marital status, which they may not want to do. . . . If you have to use a title field, make it an optional free-text field and not a drop-down list.
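In code, that guidance is a one-line decision. Here's a minimal sketch of what it might look like using Django form fields (the form and field names are my own illustration, not from GDS or the book): title becomes optional free text rather than a required drop-down.

```python
# Hypothetical sketch following the GDS guidance quoted above (names are mine).
from django import forms

class ContactDetailsForm(forms.Form):
    # If a title must be collected at all, make it optional free text,
    # not a required drop-down of Mr/Mrs/Ms/Dr.
    title = forms.CharField(required=False, max_length=35)
    # Generous length so names don't get cut off, per the earlier point about form fields.
    full_name = forms.CharField(max_length=255)
```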
The team created content that explicitly banned racial profiling. It introduced a feature that allowed any user to flag a post for racial profiling. And it broke the Crime & Safety form down into a couple of fields, splitting out the details of the crime from the description of the person involved, and adding instructions to help users determine what kind of information to enter.
In August 2016, the new user flow launched.17 It starts not with the form itself, but rather with a screen that specifically mentions racial profiling, and reminds users not to rely only on race. "Focus on behavior," it states. "What was the person doing that made you suspicious?" Sure, a user can tap the button to move forward without reading the message, but the speed bump alone is enough to give some users pause.
Before rolling out the new forms to all of Nextdoor's users, designers tested them in a few markets, and measured a 75 percent reduction in racial profiling.
As Nextdoor's results show so clearly, forms do have power: what they ask, and how they ask it, plays a dramatic role in the kind of information users will provide, or whether they'll even be able to use the service in the first place.
[Josh: Forms shape the information you receive]
Is being forced to use a gender you don't identify with (or a title you find oppressive, or a name that isn't yours) the end of the world? Probably not. Most things aren't. But these little slights add up, day after day, week after week, site after site, making assumptions about who you are and sticking you into boxes that just don't fit. Individually, they're just a paper cut. Put together, they're a constant thrumming pain, a little voice in the back of your head: This isn't for you. This will never be for you.
"'You don't fit on a form' after a while starts to feel like, 'you don't fit in a community,'" she told me. "It chips away at you. It works on you the way that water works on rock."
It comes down to the people behind the majority of tech products. As Canadian web developer Emily Horseman puts it, forms "reflect the restricted imagination of their creators: written and funded predominantly by a privileged majority who have never had components of their identity denied, or felt a frustrating lack of control over their representation."
According to Tolia, the CEO who instigated the changes, 50 percent more users are abandoning the new Crime & Safety report form without submitting it than were abandoning the old form.
The problem is that in interaction design, metrics tend to boil down to one singular goal: engagement.
If Nextdoor had stuck to that formula, it would never have agreed to make posting about neighborhood crime harder, because fewer Crime & Safety reports means fewer users reading and commenting on those reports.
In order to address its racial profiling problem, Nextdoor needed to think beyond shallow metrics and consider what kind of community it wanted to create, and what the long-term consequences of allowing racial profiling in its community would be. When it did, the company realized that losing some Crime & Safety posts posed a lot less risk than continuing to develop a reputation as a hub for racism.
Because when everyone's talking incessantly about engagement, it's easy to end up wearing blinders, never asking whether that engagement is even a good thing.
Because if we want tech companies to be more accountable, we need to be able to identify and articulate what's going wrong, and put pressure on them to change (or on government to regulate their actions).
When systems don't allow users to express their identities, companies end up with data that doesn't reflect the reality of their users.
The design was meant to be cute: to celebrate a user and remind followers of their birthday. But as I watched those multicolored balloons twirl their way over pictures of police in riot gear and tweets fearing for people's safety, I was anything but charmed. It was dissonant, uncomfortable, a surreal reminder of just how distant designers and product managers can be from the realities of their users.
Why are tech companies so determined to take your content out of its original context and serve it up to you in a tidy, branded package?
One of the key components of having a great personality is knowing when to express it, and when to hold back. That's a skill most humans learn as they grow up and navigate social situations, but, sadly, seem to forget as soon as they're tasked with making a dumb machine "sound human."
So, in the summer of 2015, when Eric Meyer and I were researching empathetic web content and design practices, we called up MailChimp's communications director, Kate Kiefer Lee. And what she told us surprised me: MailChimp was, slowly but surely, pulling back from its punchy, jokey voice. When I asked what had gone wrong, Kiefer Lee told me there were too many things to count.
In one instance, the team was brainstorming ideas for a 404 page. On the web, a 404 error means "page not found," so a 404 page is where you're redirected if you try to click a broken link. They usually say something like, "The page you are looking for does not exist." But at the time, the team was really focused on developing a funny, unique voice for MailChimp. So they decided to call it an "oops" moment and started brainstorming funny ways to communicate that idea. Pretty soon, someone had designed a page showing a pregnancy test with a positive sign. Everyone thought this was hilarious, right up until Kiefer Lee took it to her CEO. "Absolutely not," he told her. Looking back, she's glad he killed it. "We wanted to be funny and delight people. I think we were trying too hard to be entertaining."
Other tech companies haven't caught up. In fact, in recent years, I'd argue, the trend has gotten worse, morphing into some truly awful design choices along the way.
there's a new design trend that's making them even worse: rather than tapping a button that says "no thanks," sites are now making users click condescending, passive-aggressive statements to get the intrusive window to close: No thanks, I hate saving money. No thanks, this deal is just too good for me. I'm not interested in being awesome. I'd rather stay uninformed. Ew, right? Do you want to do business with a company that talks to you this way?
I guess you could say these blamey, shamey messages are more "human" than a simple yes/no, but only if the human you're imagining is the jerkiest jerk you ever dated, the kind of person who was happy to hurt your feelings and kill your self-esteem to get their way. That's why I started calling these opt-in offers "marketing negging."
Negging is creepy as hell, treating women as objects to be collected at all costs. These shamey opt-out messages do pretty much the same thing to all of us online: they manipulate our emotions so that companies can collect our information, without actually doing any of the work of creating a relationship or building a loyal following.
So there I was, trying as best I could to advocate for the people we were supposed to be designing for: cardholders who wanted to understand how to get the most value out of their credit cards. But time and again, the conversation turned away from how to make the program useful, and toward that word I find so empty: "delight."
But as we've seen over and over, when teams laser-focus on delight, they lose sight of all the ways that fun and quirky design can fail, creating experiences that are dissonant, painful, or inappropriate.
Humans are notoriously bad at noticing one thing when we've been primed to look for something else instead. There's even a term for it: "inattentional blindness," coined by research psychologists Arien Mack and Irvin Rock back in the 1990s.
The most famous demonstration of inattentional blindness in action is known as the "invisible gorilla" test, in which participants watch a video of two groups of basketball players, one wearing white shirts and the other wearing black shirts. Before watching, they're asked to count how many times a player wearing a white shirt passes the ball. Halfway through the one-minute video, a person in a gorilla suit walks into the scene and beats their chest, staying on-screen for a total of nine seconds. Half the participants routinely fail to notice the gorilla.
researchers tried a similar experiment with radiologists, a group highly trained to look closely at information and identify abnormalities. In this experiment, a gorilla the size of a matchbook was superimposed onto scans of lungs. The radiologists were then asked to look for signs of cancer on the scans. A full 83 percent of them failed to notice the gorilla.
when a design brief says to focus on new ways to delight and engage users, their brains turn immediately toward the positive: vacation photos flitting by to a jazzy beat, birthday balloons floating up a happy Twitter timeline. In this idealized universe, we all keep beep-beeping along, no neo-Nazis in sight.
Take any one of the examples in this chapter, and just underneath its feel-good veneer you'll find a business goal that might not make you smile. For example, Twitter's birthday balloons are designed to encourage people to send good wishes to one another. They're positioned as harmless fun: a little dose of delight that makes users feel more engaged with Twitter. But of course, Twitter doesn't really care about celebrating your special day. It cares about gathering your birth date, so that your user profile is more valuable to advertisers. The fluttering balloons are an enticement, not a feature. Delight, in this case, is a distraction: a set of blinders that make it easy for designers to miss all the contexts in which birthday balloons are inappropriate, while conveniently glossing over the reason Twitter is gathering data in the first place.
Facebook has an internal metric that it uses alongside the typical DAUs (daily active users) and MAUs (monthly active users). It calls this metric CAUs, for "cares about us." CAUs gauge precisely what they sound like: how much users believe that Facebook cares about them. Tech-industry insider publication The Information reported in early 2016 that nudging CAUs upward had become an obsession for Facebook leadership.
It even caught the attention of Tumblr's head writer, Tag Savage. "We talked about getting rid of it but it performs kinda great," 16 he wrote on Twitter, as Rooney's screenshot went viral. When Savage says the "beep beep!" message "performs," he means that the notification gets a lot of people to open up Tumblr: a boon for a company invested in DAUs and MAUs. And for most tech companies, that's all that matters. Questions like, "is it ethical?" or "is it appropriate?" simply aren't part of the equation, because ROI always wins out.
Every bit of the design process, from the default settings we talked about in Chapter 3, to the form fields in Chapter 4, to the cute features and clever copy in Chapter 5, creates an environment where we're patronized, pushed, and misled into providing data; where the data collected is often incorrect or based on assumptions; and where it's almost impossible for us to understand what's being done by whom.
Uber designed its application to default to the most permissive data collection settings. It disabled the option that would have allowed customers to use the app in the most convenient way, while still retaining some control over how much of their data Uber has permission to access. And it created a screen that is designed expressly to deceive you into thinking you have to allow Uber to track your location in order to use the service, even though that's not true. The result is a false dichotomy: all or nothing, in or out. But that's the thing about defaults: they're designed to achieve a desired outcome. Just not yours.
Uber is well known for having a male-dominated workplace that sees no problem playing fast and loose with ethics
Back in 2010, technologist Tim Jones, then of the Electronic Frontier Foundation, wrote that he had asked his Twitter followers to help him come up with a name for this method of using "deliberately confusing jargon and user-interfaces" to "trick your users into sharing more info about themselves than they really want to." 8 One of the most popular suggestions was Zuckering. The term, of course, refers to Mark Zuckerberg, Facebook's founder, who, a few months before, had dramatically altered Facebook's default privacy settings. Facebook insisted that the changes were empowering, that its new features would give the 350 million users it had at the time the tools to "personalize their privacy," 9 by offering more granular controls over who can see what.
A proxy is a stand-in for real knowledge, similar to the personas that designers use as a stand-in for their real audience. But in this case, we're talking about proxy data: when you don't have a piece of information about a user that you want, you use data you do have to infer that information. Here, Google wanted to track my age and gender, because advertisers place a high value on this information. But since Google didn't have demographic data at the time, it tried to infer those facts from something it had lots of: my behavioral data.
The problem with this kind of proxy, though, is that it relies on assumptions, and those assumptions get embedded more deeply over time. So if your model assumes, from what it has seen and heard in the past, that most people interested in technology are men, it will learn to code users who visit tech websites as more likely to be male. Once that assumption is baked in, it skews the results: the more often women are incorrectly labeled as men, the more it looks like men dominate tech websites, and the more strongly the system starts to correlate tech website usage with men.
In short, proxy data can actually make a system less accurate over time, not more, without you even realizing it. Yet much of the data stored about us is proxy data, from ZIP codes being used to predict creditworthiness, to SAT scores being used to predict teens' driving habits.
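To make that feedback loop concrete, here's a toy simulation (entirely my own construction; the numbers are invented, not from the book): a model starts with a biased prior about who visits tech sites, labels visitors using that prior, then treats its own labels as the truth it learns from next time.

```python
# Toy illustration of a proxy-data feedback loop (numbers and model are invented).
# Assume 45% of a tech site's visitors are actually women, but the model starts
# out believing only 30% are. In this toy, a woman is labeled "female" only in
# proportion to that belief, so each round of relearning from the model's own
# labels makes tech-site traffic look more male than the last.
actual_share_women = 0.45
believed_share_women = 0.30

for round_num in range(1, 6):
    labeled_share_women = actual_share_women * believed_share_women  # many women mislabeled as men
    believed_share_women = labeled_share_women                       # model treats its labels as truth
    print(f"after round {round_num}: model believes {believed_share_women:.1%} "
          "of tech-site visitors are women")
```

The believed share drops from 30 percent to under 1 percent in a few rounds, even though the real audience never changed: the assumption, once baked in, amplifies itself.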
But by using proxy data, Facebook didn't just open the door for discriminatory ads; it also opened a potential legal loophole: it can deny that it was operating illegally, because it wasn't filtering users by race, but only by interest in race-related content. Sure.
the endless barrage of cutesy copy and playful design features creates a false intimacy between us and our digital products: a fake friendship built by tech companies to keep us happily tapping out messages and hitting "like," no matter what they're doing behind the curtain. Writer Jesse Barron calls this "cuteness applied in the service of power-concealment":15 an effort, on the part of tech companies, to make you feel safe and comfortable using their products, while they quietly hold the upper hand.
According to Barron, tech products do this by employing "caretaker speech," the linguistics term used to describe the way we talk to children.
Digital products designed to gather as much information about you as they can, even if that data collection does little to improve your experience.
Because once our data is collected, as messy and incorrect as it often is, it gets fed to a whole host of models and algorithms, each of them spitting out results that serve to make marginalized groups even more vulnerable, and tech titans even more powerful.
Cathy O'Neil claims that this reliance on historical data is a fundamental problem with many algorithmic systems: "Big data processes codify the past," she writes. "They do not invent the future." 4 So if the past was biased (and it certainly was), then these systems will keep that bias alive, even as the public is led to believe that high-tech models remove human error from the equation. The only way to stop perpetuating the bias is to build a model that takes these historical facts into account, and adjusts to rectify them in the future. COMPAS doesn't.
as powerful as algorithms are, they're not inherently "correct." They're just a series of steps and rules, applied to a set of data, designed to reach an outcome. The questions we need to ask are, Who decided what that desired outcome was? Where did the data come from? How did they define "good" or "fair" results? And how might that definition leave people behind? Otherwise, it's far too easy for teams to carry the biases of the past with them into their software, creating algorithms that, at best, make a product less effective for some users, and, at worst, wreak havoc on their lives.
Remember, neural networks rely on having a variety of training data to learn how to identify images correctly. That's the only way they get good at their jobs. Now consider what Google's Yonatan Zunger, chief social architect at the time, told Alciné after this incident: "We're also working on longer-term fixes around . . . image recognition itself (e.g., better recognition of dark-skinned faces)," 16 he wrote on Twitter. Wait a second. Why wasn't Google's image recognition feature as good at identifying dark-skinned faces as it was at identifying light-skinned faces when it launched? And why didn't anyone notice the problem before it launched? Well, because failing to design for black people isn't new. It's been happening in photo technology for decades.
Roth notes that this only started to change in the 1970s, but not necessarily because Kodak was trying to improve its product for diverse audiences. Earl Kage, who managed Kodak Research Studios at the time, told her, "It was never Black flesh that was addressed as a serious problem that I knew of at the time." 19 Instead, Kodak decided it needed its film to better handle color variations because furniture retailers and chocolate makers had started complaining: they said differences between wood grains, and between milk-chocolate and dark-chocolate varieties, weren't rendering correctly. Improving the product for black audiences was just a by-product.
regardless of the makeup of the team behind an algorithmically powered product, people must be trained to think more carefully about the data they're working with, and the historical context of that data. Only then will they ask the right questions (like, "Is our training data representative of a range of skin tones?" and "Does our product fail more often for certain kinds of images?") and, critically, figure out how to adjust the system as a result.
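In practice, the second of those questions boils down to reporting error rates per group rather than a single overall accuracy number. A minimal sketch of that check (the group labels and results below are hypothetical, just to show the shape of the comparison):

```python
# Sketch of a per-group failure check (hypothetical data; labels are illustrative).
from collections import defaultdict

def error_rate_by_group(examples):
    """examples: iterable of (group_label, prediction_was_correct) pairs."""
    totals = defaultdict(int)
    errors = defaultdict(int)
    for group, correct in examples:
        totals[group] += 1
        if not correct:
            errors[group] += 1
    return {group: errors[group] / totals[group] for group in totals}

# Invented evaluation results, only to show what the output looks like:
results = [
    ("darker-skin", False), ("darker-skin", True), ("darker-skin", False),
    ("lighter-skin", True), ("lighter-skin", True), ("lighter-skin", True),
]
print(error_rate_by_group(results))
# A large gap between groups, like the one this toy data produces, is the red flag
# that overall accuracy alone would hide.
```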
Without those questions, it's no surprise that the Google Photos algorithm didn't learn to identify dark-skinned faces very well: because at Google, just like at Kodak in the 1950s, "normal" still defaults to white.
They're people: customers who deserve to have a product that works just as well for them as for anyone else.
Without feedback, without people like Alciné taking it upon themselves to retag their photos manually, the system won't get better over time. It can actually get worse.
if a system like Word2vec is fed data that reflects historical biases, then those biases will be reflected in the resulting word embeddings. The problem is that very few people have been talking about this, and meanwhile, because Google released Word2vec as an open-source technology, all kinds of companies are using it as the foundation for other products. These products include recommendation engines (the tools behind all those "you might also like . . ." features on websites), document classification, and search engines, all without considering the implications of relying on data that reflects historical biases and outdated norms to make future predictions.
In a paper titled "Man Is to Computer Programmer as Woman Is to Homemaker? Debiasing Word Embeddings," they argue that because word embeddings frequently underpin a range of other machine-learning systems, they "not only reflect such stereotypes but can also amplify them," 29 effectively bringing the bias of the original data set to new products and new data sets. So much for machines being neutral.
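The analogy in the paper's title can be probed with off-the-shelf embeddings. Here is roughly what that looks like with gensim, as a sketch: the vector file name is a placeholder for whatever pretrained word2vec-format embedding you load, and the output depends entirely on that embedding's training corpus, which is the point.

```python
# Sketch of probing a pretrained word2vec embedding for the paper's analogy.
# The file name is a placeholder; any word2vec-format vectors will do.
from gensim.models import KeyedVectors

vectors = KeyedVectors.load_word2vec_format(
    "GoogleNews-vectors-negative300.bin", binary=True
)

# "man is to computer_programmer as woman is to ...?"
print(vectors.most_similar(
    positive=["woman", "computer_programmer"], negative=["man"], topn=3
))
# On embeddings trained on biased text, terms like "homemaker" rank near the top,
# which is exactly the stereotype the paper's title calls out.
```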
Once again, the problem isn't with the technology. It's with the assumptions that technologists so often make: that the data they have is neutral, and that anything at the edges can be written off. And once those assumptions are made, they wrap them up in a pretty, polished software package, making it even harder for everyone else to understand what's actually happening under the surface.
Even more worrisome, most of the people who create these products aren't considering the harm that their work could do to people who aren't like them. It's not because they're consciously biased, though, according to University of Utah computer science professor Suresh Venkatasubramanian. They're just not thinking about it, because it has never occurred to them that it's something to think about. "No one really spends a lot of time thinking about privilege and status," he told Motherboard. "If you are the defaults you just assume you just are."
If we want to build a society that's fairer, more just, and more inclusive than in the past, then blindly accepting past data as neutral (as an accurate, or desirable, model upon which to build the future) won't cut it. We need to demand instead that the tech industry take responsibility for the data it collects. We need it to be transparent about where that data comes from, which assumptions might be encoded in it, and whether it represents users equally. Otherwise, we'll only encounter more examples of products built on biased machine learning in the future.
every digital product bears the fingerprints of its creators. Their values are embedded in the ways the systems operate: in the basic functions of the software, in the features they prioritize (and the ones they don't), and in the kind of relationship they expect from you. And as we've seen throughout this book, when those values reflect a narrow worldview, one defined by privileged white men dead set on "disruption" at all costs, things fall apart for everyone else.
in 2010, just 5 percent of white internet users in the United States were on Twitter, while 13 percent of black internet users were. By 2011, that gap was even larger: a full 25 percent of black American internet users reported being on Twitter, compared with just 9 percent of white American internet users.17
But during all of these product improvements, Twitter built precious few features to prevent or stop the abuse that had become commonplace on the platform. For example, the ability to report a tweet as abusive didn't come until a full six years after the company's founding, in 2013, and then only after Caroline Criado-Perez, a British woman who had successfully led a campaign to get Jane Austen onto the £10 note, was the target of an abuse campaign that generated fifty rape threats per hour.
"If Twitter had people in the room who'd been abused on the internet--meaning not just straight, white males--when they were creating the company, I can assure you the service would be different."
It's not that Twitter's founders had bad intentions. It's that they built a product centered on a specific vision: an open platform for short updates from anyone, about anything. And because abuse wasn't really on their radar, they didn't spend much time working out how to prevent it, or even take it seriously when it happened. It wasn't part of the vision.
According to many in the industry, Twitter's failure to fix its abuse problem is part of the reason why it's struggling, and why no one wants to buy the company, despite Twitter's best efforts to sell.
as long as Reddit maintains a "free speech" ideology that relies on unpaid moderators to function, it will continue to fall apart, and the victims will be those on the receiving end of harassment.
In the wake of the allegations, Facebook launched its own investigation, finding "no evidence of systemic bias." But it didn't matter: in August, the Trending team was suddenly laid off, and a group of engineers took its place to monitor the performance of the Trending algorithm.45 Within three days, that algorithm was pushing fake news to the top of the feed: "BREAKING: Fox News Exposes Traitor Megyn Kelly, Kicks Her Out for Backing Hillary," the headline read. The story was fake, its description was riddled with typos, and the site it appeared on was anything but credible: EndingTheFed.com, run by a twenty-four-year-old Romanian man who copied and pasted stories from other conservative-leaning fake-news sites. Yet the story stayed at the top of the Trending charts for hours. Four different stories from EndingTheFed.com went on to make it into BuzzFeed's list of most-shared fake-news articles during the election.46 This time, conservative pundits and politicians were silent.
Facebook did precisely what it had always intended with Trending: it made it machine-driven. The human phase of the operation just ended more quickly than expected. And when we look closer at Facebook's history, we can see that this wasn't a surprising choice at all. It's right in line with the values the company has always held.
That's why it was so easy for fake news to take hold on Facebook: combine the deeply held conviction that you can engineer your way out of anything with a culture focused on moving fast without worrying about the implications, and you don't just break things. You break public access to information. You break trust. You break people.
All kinds of problems plague digital products, from tiny design details to massively flawed features. But they share a common foundation: a tech culture that's built on white, male values, while insisting it's brilliant enough to serve all of us.
The funny thing about meritocracy is that the concept comes not from any coherent political ideology or sociological research. It comes from satire. In 1958, sociologist Michael Young wrote a book lampooning the then-stratified British education system. In it, he depicts a dystopian future where IQ testing defines citizens' educational options and, eventually, their entire lives, dividing the country into an elite ruling class of "merited" people, and an underclass of those without merit. The public loved the word, but lost the point. Almost immediately, the term "meritocracy" was cropping up in a positive light, particularly in the United States (we've never been great at detecting British sarcasm).
only 10 percent of the 187 Silicon Valley startups that received Series A funding in 2016 were woman-led, up a meager 2 percent from the year before.
[Josh: Was that number adjusted for number of startups started by women?]
In the fall of 2016, the Atlantic sent out its annual "pulse of the technology industry" survey to influential executives, founders, and thinkers, and found that "men were three times as likely as women to say Silicon Valley is a meritocracy."
Tied up in this meritocracy myth is also the assumption that technical skills are the most difficult to learn, and that if people study something else, it's because they couldn't hack programming. As a result, the system prizes technical abilities, and systematically devalues the people who bring the very skills to the table that could strengthen products, both ethically and commercially: people with the humanities and social science training needed to consider historical and cultural context, identify unconscious bias, and be more empathetic to the needs of users.
Within days, two more stories of sexual harassment and humiliation at Uber had been published, and countless others confirmed that the company culture was as they described it: aggressive, degrading, and chaotic.
In late January, Kalanick had gotten heat for joining President Trump's advisory council. Then, the company sent cars to JFK airport, where NYC taxi drivers were boycotting in protest of Trump's executive order barring legal immigrants from seven countries from entering the United States. Critics saw the move as profiteering, and started a campaign: #deleteuber.15 Within days, more than 200,000 people had done just that.16 Soon after, Kalanick resigned from Trump's council.
what allows a forty-year-old man to avoid "growing up" for so long, even while commanding a company that was, as of this writing, last valued at $68 billion? It's our friend the "meritocracy," of course.
A senior male colleague proposed an idea to block a driver's payment if a customer complained. "I told them that it was unethical to block a driver's payments without researching the complaint to make sure it was the driver's fault," she wrote, noting that many drivers live in countries where they do not own their cars and hand over their wages to another party. Blocking payments could leave them without income. "There is no place for ethics in this business sweetheart. We are not a charity," she recalls the senior manager responding. When she persisted, he covered the microphone on their conference call line, grabbed her hand, and told her to "stop being a whiny little bitch."
comparing stats over time reveals that women are actually earning fewer degrees in computer science, not more. Originally, programming was often categorized as "women's work," lumped in with administrative skills like typing and dictation (in fact, during World War II, the word "computers" was often applied not to machines, but to the women who used them to compute data). As more colleges started offering computer science degrees, in the 1960s, women flocked to the programs: 11 percent of computer science majors in 1967 were women. By 1984, that number had grown to 37 percent. Starting in 1985, that percentage fell every single year, until, in 2007, it leveled out at the 18 percent figure we saw through 2014.
if people can't imagine themselves working in a field, then they won't study it. And it's hard to imagine yourself fitting into a profession where you can't see anyone who looks like you.
it's critical that tech companies not just recruit diverse staff, but also work their asses off to keep them. Because unless tech can showcase all kinds of people thriving in its culture, women and underrepresented groups will continue to major in something else
blame the pipeline all you want, but diverse people won't close their eyes and jump in until they know it'll be a safe place for them when they get to the other side. No matter how many black girls you send to code camp.
In a 2008 study that included thousands of women working in the private sector in science, engineering, and technology (SET, which, I should note, includes a range of fields broader than just web and software development), researchers found that more than half the women quit their jobs, "driven out by hostile work environments and extreme job pressures." 25 Another found that nearly one-third of women in SET positions felt stalled in their careers, and for black women, that number shot up to almost half.26
And so the cycle continues: tech sends out another round of press releases detailing meager increases in diversity and calling for more programs to teach middle schoolers to code, and another generation of women and people of color in tech pushes to be visible and valued in an industry that wants diversity numbers, but doesn't want to disrupt its culture to get or keep diverse people.
Study after study shows that diverse teams perform better.
In a 2014 report for Scientific American, Columbia professor Katherine W. Phillips examined a broad cross section of research related to diversity and organizational performance. And over and over, she found that the simple act of interacting in a diverse group improves performance, because it "forces group members to prepare better, to anticipate alternative viewpoints and to expect that reaching consensus will take effort."
In another study, led by Phillips and researchers from Stanford and the University of Illinois at Urbana-Champaign, undergraduate students from the University of Illinois were asked to participate in a murder-mystery exercise. Each student was assigned to a group of three, with some groups composed of two white students and one nonwhite student, and some composed of three white students. Each group member was given both a common set of information and a set of unique clues that the other members did not have. Group members needed to share all the information they collectively possessed in order to solve the puzzle. But students in all-white groups were significantly less likely to do so, and therefore performed significantly worse in the exercise. The reason is that when we work only with those similar to us, we often "think we all hold the same information and share the same perspective," Phillips writes. "This perspective, which stopped the all-white groups from effectively processing the information, is what hinders creativity and innovation."
"Female representation in top management leads to an increase of $42 million in firm value," they wrote.31 In addition, they found that firms with a higher "innovation intensity," measured as the ratio of research and development expenses to assets, were more successful financially when top leadership included women.
Why, then, aren't things getting better, faster? How can the industry that put a powerful computer in my pocket and self-driving cars on the street not be able to figure out how to get more diverse candidates into its companies? Well, I'll tell you the secret. It's because tech doesn't really want to, or at least not as much as it wants something else: lack of oversight.
In all that research about the benefits of diversity, one finding sticks out: it can feel harder to work on a diverse team. "Dealing with outsiders causes friction, which feels counterproductive," write researchers David Rock, Heidi Grant, and Jacqui Grey.35 But experiments have shown that this type of friction is actually helpful, because it leads teams to push past easy answers and think through solutions more carefully. "In fact, working on diverse teams produces better outcomes precisely because it's harder," they conclude. That's a tough sell for tech companies, though. As soon as you invite in "outsiders" who question the status quo, people like Uber's "Amy," who ask whether the choices being made are ethical, it's hard to skate by without scrutiny anymore. As a result, maintaining the monoculture becomes more important than improving products.
CEO Stewart Butterfield reportedly even asks designers to close their eyes and imagine what a person might have experienced in their life before sitting down at their desk. "Maybe they were running late and sat in gridlock for an hour. Maybe they had an argument with their spouse. Maybe they're stressed out. The last thing they need is to struggle with a computer program that seems intent on making their day worse."
The only real way to hold tech accountable, and to rid it of its worst excesses, is to demand that it become accessible to everyday people, both in the way it designs its products, and in who can thrive in its offices. Because as long as tech is allowed to operate as a zero-sum game, a place where anything goes as long as it leads to a big IPO and an eventual multibillion-dollar sale, companies like Uber will exist.
Alienating and biased technology doesn't matter less during this time of political upheaval. It matters all the more.
In a 2009 Gallup poll, researchers found that respondents who said they knew a gay person were 40 percent more likely to think that same-sex relationships should be legal, and 64 percent more likely to think that gay marriage would not change society for the worse, than those who reported not knowing any gay people. The implication is clear: exposure to difference changes perspective, and increases tolerance.
Changing form fields won't change laws. But the more our daily interactions and tasks happen in digital spaces, the more power those spaces hold over cultural norms. Every form field, every default setting, every push notification, affects people. Every detail can add to the culture we want, can make people a little safer, a little calmer, a little more hopeful.