An AI Pattern Language

M.C. Elish and Tim Hwang
5.7" x 7.9", black & white, 52 pgs.

How are practitioners grappling with the social impacts of AI systems?

In An AI Pattern Language, we present a taxonomy of social challenges that emerged from interviews with a range of practitioners working in the intelligent systems and AI industry. In the book, we describe these challenges and articulate an array of patterns that practitioners have developed in response. You can find a preview of the patterns on this page, and more context, information, and analysis in the full text.

The inspirational frame (and title) for this project is Christopher Alexander’s unique collection of architectural theory, A Pattern Language (1977). For Alexander, the central problem is the built environment. While our goal here is not as grand as the city planner’s, we took inspiration from the values of equity and mutual responsibility, as well as the accessible form, found in A Pattern Language. Like Alexander’s patterns, our document attempts to develop a common language for problems and potential solutions that appear in different contexts and at different scales of intervention.

While we believe the views we present are significant and widely held, these patterns are neither comprehensive nor prescriptive. Rather, this document is an experiment in cataloguing and catalyzing. AI is not out of our control, and An AI Pattern Language calls attention to the ways in which humans make choices about the development and deployment of technology. This text was created in the spirit not of an answer, but of a question: how can we design the technological future in which we want to live?

Challenge: Assuring Users Perceive Good Intentions, p. 18

Pattern 1

Show the Man Behind the Curtain

“We want our users to know that our products are powered by people,” one product manager at a music recommendation company explained. This desire reflects one perspective on how to achieve trust between human and computer: make the role of humans visible in an interaction which can sometimes seem inhuman, even if it’s personalized. Speaking about a discovery feature, this product manager explained how, through the design process, they realized the product had to “not feel creepy”; it had to “feel human, powered by people.” He explained, “We’ve made it like a gift, a gift for you each week. Sometimes we get it right, but like a human, the product can have good weeks and bad weeks.”

Make the role of humans visible in an interaction which can sometimes seem inhuman, even if it’s personalized.
Challenge: Assuring Users Perceive Good Intentions, p. 18

Pattern 2

Open Up the Black Box

The idea of providing transparency about how a product works was also used as a design strategy. A lead designer on Cortana, Microsoft’s personal assistant application, explained how the idea of “Cortana’s notebook” was central to how they conceived of the relationship between a user and her Cortana. The notebook could be accessed by the user at any time, and, “as the place where all of the inferences that we use are stored and visible to users, [the user] can go and adjust them, turn them off, or delete them.” Providing a clear and easy access point to how Cortana builds its intelligence gives the user the sense that the relationship is evolving as they, the user, want. “What we found is there are a bunch of inferences we can determine that, when quantified to users, are just creepy,” the designer explained. He continued:

“For example, we found a lot of anxiety around knowing your [the user’s] home and work address. So as much as it only took us a short period of time to use GPS and other things to begin to locate where you were, actually using that data without talking to you was super creepy to our users. They thought it was neat but it was scary. We really looked at how we built this path of how do we introduce ourselves, how do we focus on setting you up to succeed with the tool, and how do we actually grow that success over a period of time so that relationship becomes more indispensable? Something we hadn’t realized was that there were a lot more people wanting to be able to tune and control.”
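In effect, the notebook is a user-facing store of inferences, each carrying its own provenance, with controls to inspect, adjust, disable, or delete it. The following is a minimal sketch of such a store, assuming nothing about Microsoft’s actual implementation; the class and method names here are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Inference:
    """One piece of inferred user data, with its provenance."""
    key: str             # e.g. "home_address"
    value: str           # the inferred value
    source: str          # how it was inferred, shown to the user
    enabled: bool = True

class InferenceNotebook:
    """A user-visible store of inferences the user can inspect and control."""

    def __init__(self):
        self._entries = {}

    def record(self, key, value, source):
        self._entries[key] = Inference(key, value, source)

    def list_all(self):
        # Everything the system has inferred, visible to the user.
        return list(self._entries.values())

    def adjust(self, key, new_value):
        self._entries[key].value = new_value   # user corrects an inference

    def disable(self, key):
        self._entries[key].enabled = False     # kept, but no longer used

    def delete(self, key):
        del self._entries[key]                 # removed entirely

    def usable(self):
        # Only enabled inferences may feed personalization.
        return [e for e in self._entries.values() if e.enabled]
```

The design choice this illustrates is that personalization reads only from the user-controlled store, so nothing the system infers can act on the user invisibly.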

The founder of a new machine learning start-up also explained how the principle of transparency could be used as a means of establishing trust:

“We were motivated by things like GitHub, open-source software, and Wikipedia, where you really see high-quality content emerge from an open refinement process where people can contribute. ... Similarly, people now trust Wikipedia for getting information, but most people do not dig behind the scenes or engage in fact checking or citation checking or anything like that. But nonetheless, people trust Wikipedia and trust open-source software because they know the process works like that and that people are behind the scenes, doing those things. So that’s going to be our approach to how we get the same kind of level of trust.”
Providing a clear and easy access point to how Cortana builds its intelligence gives the user the sense that the relationship is evolving as they, the user, want.
Challenge: Assuring Users Perceive Good Intentions, p. 18

Pattern 3

Demonstrate Fair and Equal Treatment

The idea of cultivating trust can at times look like grappling with the fairness of a system. The founder of an intelligent stock portfolio startup explained that while any kind of systematized and automated investing introduces new kinds of potential biases, this does not obviate the attention that must be paid when developing new systems: “You have fiduciary responsibility to make sure that it [the execution of trades] is fair across accounts. In our current company, there are only two of us, and we write the code ourselves, and each of us reviews the algorithms where things like that matter.” For instance, if all of a company’s clients are employing the same strategy, such as “buy shares of IBM,” the order in which the algorithm processes the list of clients is significant. It might seem straightforward to execute the trades down an alphabetical list, but over time this could give the clients whose names come earliest in the alphabet a slight advantage over the others. This founder explained:

“You have to think, ‘Is this actually fair? Is there any ordering bias? Is there any way that this client is going to be favored, executing and submitting the trades, more than others...?’ So there are plenty of details like that that you need to have in mind when writing the algorithms. ... A common way to solve the problem in the algorithm is to use a random number and order clients like that. It’s like pulling numbers from a hat, so that there’s no inherent bias in the system.”
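The “numbers from a hat” approach lends itself to a very small sketch. This is our illustration, not the founder’s code; the function and parameter names are hypothetical:

```python
import random

def execute_batch(clients, order, submit_trade):
    """Submit the same order for every client in a freshly randomized order.

    Re-shuffling for every batch means no client is systematically executed
    first, avoiding the ordering bias of, say, an alphabetical list.
    """
    drawn = clients[:]        # copy so the caller's list is untouched
    random.shuffle(drawn)     # a new draw from the hat for each batch
    for client in drawn:
        submit_trade(client, order)

# Hypothetical usage: record the execution order for a batch of IBM buys.
log = []
execute_batch(["Adams", "Baker", "Chen"], "BUY IBM",
              lambda client, order: log.append((client, order)))
```

A fuller treatment might also log each drawn ordering, so that fairness across accounts can be audited over time rather than merely asserted.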

Maintaining fiduciary responsibility was an obligation he took very seriously and wanted customers to understand.

While any kind of systematized and automated investing introduces new kinds of potential biases, this does not obviate the attention that must be paid when developing new systems.
Challenge: Protecting Privacy, p. 22

Pattern 4

Data Security Is the Foundation

One sales manager of a predictive analytics firm explained that explicitly addressing security and privacy was an important part of the pitch:

“We use it as a sales tactic to say that the marketplace is concerned about this, and we've got a very strong response to it, and we should be the trusted advisor to ease those concerns so that they can move forward.”

For this manager, and most of the others with whom we spoke, data security and data privacy are intertwined concepts: data security is what makes data privacy possible. When we spoke with interviewees and asked about the social aspects of product adoption, data privacy, for good and bad, was nearly always the first issue to be raised. There was a widespread sense that data privacy needed to be addressed in systems design because it was a primary concern for users and the general public.

When we spoke with interviewees and asked about the social aspects of product adoption, data privacy, for good and bad, was nearly always the first issue to be raised.
Challenge: Protecting Privacy, p. 22

Pattern 5

Establish a Catch and Release Data Pattern

The founder of a Miami facial recognition software company we interviewed explained that he has had to deal with privacy issues on both a practical and theoretical level:

“We have people who come to us all the time and say, ‘Oh, I love what you guys are doing, but I'm also so scared, like is this the end of my privacy? Will people know what I'm doing? I feel like Minority Report, right, like I walk into a mall and all the ads change just for me.’ And we actually try to assure them that no one, at least none of our customers that are coming to us, are interested at all in creeping anybody out. There is very little money to be made in creeping people out. In fact, there’s only money to be lost.”

This founder explained how the company has chosen to mitigate privacy concerns:

“So number one, we have a multi-tenant environment; for example, Walmart and Target can’t share information with each other. I mean, maybe they can on their side, if that’s what they want to do. But from our perspective, their data is completely separate. Number two, we take in a video stream, we identify certain points on the face, and from those points we know if you’re male or female or your age, and then we take that image and we throw it away on device. For instance, for a camera at a Walmart in, say, Arkansas, the image of your face comes into that camera on the device (we just use small Android-like hard drives on these devices). We process the video stream either in the device or in the store, depending on the configuration. From that, we get the demographic information out. That goes to a file or a reporting API. Just the ‘Female, 42, at this location’ goes up, but the image itself gets deleted and kind of thrown away right there on the spot. Actually, when I say ‘delete,’ the image just passes through; it’s voided and never even saved to begin with. It’s been processed in real time. So that’s how we, from a design perspective, keep those things [privacy violations] from happening.”
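Stripped to its essentials, the design he describes is a pipeline that derives the demographic record in memory and lets the frame itself vanish. A schematic sketch, with hypothetical function names rather than the company’s actual code:

```python
def process_frame(frame, location, analyze, report):
    """Catch-and-release: analyze a frame in memory, report only the derived
    demographic record, and never write the frame itself to storage."""
    result = analyze(frame)            # e.g. face points -> (gender, age)
    if result is None:
        return                         # no face detected: nothing to report
    gender, age = result
    report({"gender": gender, "age": age, "location": location})
    # The frame only ever exists as a local variable: it goes out of scope
    # here, is never saved, and so there is nothing to delete after the fact.

# Hypothetical usage with a stand-in analyzer and an in-memory reporter.
records = []
process_frame(b"...raw pixels...", "store #1042",
              analyze=lambda f: ("female", 42),
              report=records.append)
```

The point of the pattern is structural: because the image is never persisted, no later policy, bug, or breach can expose it.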

“There is very little money to be made in creeping people out. In fact, there’s only money to be lost.”
Challenge: Protecting Privacy, p. 22

Pattern 6

Tailor Expectations to Context

“Thinking globally, internationally, is a necessity when designing systems,” an autonomous vehicles researcher pointed out to us. Though she sat in Silicon Valley, the car company she worked for was headquartered on the other side of the world, and the car would eventually be shipped to dozens of countries. The reality of mass-produced international products, which operate in specific, local contexts, complicates how design problems are conceptualized. One area where we saw this reflected was in the cultural specificity of the notion of privacy. “We do a lot of research and we do a lot of thinking about this topic,” the founder of a facial recognition software company explained. “It has been interesting to us that in emerging markets, privacy is not always as important to people as it is in the United States.” Moreover, the context of use changes how privacy needs to be taken into account. Because privacy can mean different things about different kinds of information, and between different individuals or institutions, privacy needs to be considered as a relation, not a fixed attribute.

Privacy needs to be considered as a relation, not a fixed attribute.
Challenge: Protecting Privacy, p. 22

Pattern 7

Be Patient

Many of those with whom we spoke expressed a kind of “wait and see” attitude with regard to privacy concerns. “There’s no absolute notion of privacy,” explained an investor in financial technologies based in San Francisco. “And my bet,” he continued,

“is that over time, I mean over the long span of time, we will just all be much more willing to give away information to sets of services in a way that will probably make our relationship to the government or to companies unrecognizable, compared to today.”
Challenge: Protecting Privacy, p. 22

Pattern 8

Ignore the Anxiety Around Privacy: It’s a Red Herring

Indeed, given the fundamental relativity of privacy, a number of interviewees held the view that focusing on data privacy is misguided. The founder of a leading machine learning company based in Silicon Valley explained that privacy is simply “a historical construct” that is mistakenly treated as important today. He used the example of reactions to Gmail to illustrate his point:

“When Gmail launched, ‘privacy’ was a big issue. The Google engine could read all your mail! But it is a machine reading it, not a human. And it’s being read for purely the purpose of ad revenue. And now no one cares.”

He found that focusing on the notion of privacy held back innovation. He concluded, “Yes, there’s a sense of impatience on my part. We will adapt to new technology and we will evolve.”

Another venture capitalist based in New York had reached a similar conclusion. He emphasized that the current focus on privacy holds back not just innovation, but more importantly, the means to build a better society in an age of digital technology:

“I believe all the advocacy for privacy that’s currently taking place is a terrible, horrible, bad idea. I think that the only logical construction of society going forward is one of transparency and post-privacy. The reason I believe that is because I believe that democracy can only work in an environment of mutual trust, where we figure out how to construct a government by and for the people. I know that we distrust our government in the US, but to double down on that distrust in the way Apple is doing and others are doing right now, I think will lead us to the very government we fear. It will lead us to a totalitarian government, and it will lead us to computational devices that are locked down. And do we want a world of locked-down computing devices, or do we want a world of the free flow of information, even if that free flow means that a lot more is known about each individual?”
“Yes, there’s a sense of impatience on my part. We will adapt to new technology and we will evolve.”
Challenge: Establishing Successful and Long-term Adoption, p. 28

Pattern 9

Always Ask: Who is Being Made the Hero?

One human factors engineer working at a logistics software startup described how he thinks that “everybody wants to be the hero of their own story.” The central question for him and his team, as he saw it, is “how do you actually, under the hood, have autonomy going on, and yet make the user feel like they’re in control? If you have an autonomous, or semi-autonomous, technology coming in, and they’re not the hero, they are going to resist that.” He and his team have iteratively designed their software alongside a group of farmers who would potentially use it. This kind of user-centered design generates a product experience that effectively takes into account the people and contexts in which a product is to be used.

Similarly, a leading human-computer interaction designer explained how he framed the problem of integrating new intelligent systems into existing workflows by beginning with his own experience.

“At the beginning of my career, I worked in an IT department, and we were all about making people’s jobs simpler. But I realized that was a bad idea. No one wants that, no one wants their job to be simpler because that means their job isn't necessary. But everyone wants to be better at their job, because being better is about adding value to the company.”

For this designer, framing the assistance that technology brings as enabling a worker to do more, rather than as making a job simpler, was a means to build better intelligent systems that people would be glad to work with. Others we interviewed also expressed the sentiment that intelligent systems could “make people more effective in an increasingly difficult world,” as one engineer at a computer vision company put it.

“How do you actually, under the hood, have autonomy going on, and yet make the user feel like they’re in control?”
Challenge: Establishing Successful and Long-term Adoption, p. 28

Pattern 10

Plan for the Role of Human Resources

Still, others we spoke with who were closer to the actual use of these systems confronted unanticipated challenges no matter how usefully the systems were framed. One analyst working for the city of San Francisco expressed her frustration that “while everyone is talking about big data, we need, we use, little data. It’s not just what you can get, it’s what you can understand and get other people to understand.”

An unexpected challenge raised by several interviewees was correctly preparing for the process of introducing a new technology into an organization. Many initially underestimated the human resources that would be required to support new technical resources. A product salesperson at IBM described a scenario he had seen: “There’s a visionary within the organization, and they’re like, ‘Absolutely. This makes sense. This is the direction we need to go.’ They buy the software, and then reality sets in.” He explained that sometimes a company doesn’t realize it will need people and internal skills to integrate the system. He continued,

“That’s where I see more of the bumps and the hurdles. It’s when that vision doesn't marry up with the reality. I think some folks fall into the category of thinking it’s like, ‘I buy a solution. It’s like the iPhone, and within a minute I could start making phone calls,’ when in reality there is a methodology and there’s a process that takes time, expertise, and it’s iterative. There’s an immediacy that everyone wants in this day and age, where oftentimes the implementation can be more time consuming, resource involved, and more challenging than initially anticipated.”

“While everyone is talking about big data, we need, we use, little data. It’s not just what you can get, it’s what you can understand and get other people to understand.”
Challenge: Demonstrating Accuracy and Reliability, p. 31

Pattern 11

Explain the Conditions of Accuracy

“The tensions with clients are typically around accuracy,” explained an engineer at an intelligent sentiment analysis company. Some clients and users want to understand how this kind of analysis is better than what they may be currently using. The engineer continued,

“People just always ask, ‘I want to know how accurate it is. Give me the accuracy.’ Well, in our case, ‘accuracy’ is actually a really poor metric to know how good this API is because you have 111 categories. We could say, ‘Great, we've got 45% accuracy.’ They're like, ‘Forty-five percent? That’s terrible. I’m looking for something close to 100.’ But we’re like, ‘No wait, okay, hold on and think about it for a second. There are 111 categories. Many of them are very similar, like fitness and dieting for instance, very similar.’ They’re like, ‘Look, give me a number that I can say is accuracy.’ We usually come up with proxy numbers where we can give you the best sense that corresponds to what you believe is accuracy.”
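Proxy numbers of the kind the engineer mentions often amount to loosening the notion of a “hit.” Two common variants, sketched here as illustrations rather than as this company’s actual metrics, are top-k accuracy and accuracy after merging near-duplicate categories such as “fitness” and “dieting”:

```python
def top_k_accuracy(ranked_predictions, labels, k=5):
    """Fraction of examples whose true label appears in the model's top k.

    With 111 overlapping categories, exact-match (top-1) accuracy punishes
    near-misses; counting a hit anywhere in the top k is a gentler proxy.
    """
    hits = sum(label in ranked[:k]
               for ranked, label in zip(ranked_predictions, labels))
    return hits / len(labels)

def merged_accuracy(predictions, labels, groups):
    """Top-1 accuracy after collapsing similar categories into one group."""
    to_group = {cat: g for g, cats in groups.items() for cat in cats}
    hits = sum(to_group.get(p, p) == to_group.get(y, y)
               for p, y in zip(predictions, labels))
    return hits / len(labels)

# Hypothetical example: 'fitness' and 'dieting' count as the same group.
groups = {"health": ["fitness", "dieting"]}
print(merged_accuracy(["fitness"], ["dieting"], groups))  # 1.0
```

Either variant gives clients a single number while acknowledging that, with many similar categories, a near-miss is not the same as a mistake.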
Some clients and users want to understand how this kind of analysis is better than what they may be currently using.
Challenge: Demonstrating Accuracy and Reliability, p. 31

Pattern 12

Prove Success by Showing Failure

Alternatively, a few of those we interviewed used lapses in accuracy, as much as demonstrations of accuracy, as a way to establish trust (Pattern 2). A product manager for a recommendation system said that she and her team would ask themselves,

“How do we also communicate that the system is fallible? How do we let you [the user] know when we think we’re right, when we think we’re close, when we think we’re wrong, and how do we ultimately get to a place where the user actually decides? I [the user] decide that yes, that restaurant was a great suggestion or no, it’s terrible, I absolutely don't want to eat Thai tonight.”
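One way to make “right, close, wrong” concrete, sketched here with made-up thresholds rather than this team’s actual design, is to bucket the model’s confidence and hedge the copy accordingly, leaving the final judgment to the user:

```python
def frame_suggestion(item, confidence):
    """Map a model confidence score in [0, 1] to hedged user-facing copy.

    Thresholds are illustrative; a real system would tune them against
    user feedback on which suggestions were accepted or rejected.
    """
    if confidence >= 0.8:
        return f"We think you'll love {item}."
    elif confidence >= 0.5:
        return f"{item} might be worth a try."
    else:
        return f"A long shot, but have you considered {item}?"

print(frame_suggestion("the Thai place on 5th", 0.55))
```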
Challenge: Demonstrating Accuracy and Reliability, p. 31

Pattern 13

Establish a Baseline

Another way those we interviewed thought about accuracy was not as an absolute, but in comparison to humans. A recent graduate who had founded a machine learning start-up told us,

“The way that I think about it [accuracy] is against the human gold standard. The human gold standard is shockingly good and shockingly bad in different cases. This is actually a problem that I faced very tangibly in my past work in that we have a lot of assumptions that the human gold standard is very good but it turns out it’s really bad. For example, the ‘how old are you’ type software, if you take an average person and show them a picture of a face and ask them how old is the person in this picture, it’s really difficult. They are on average extraordinarily bad at it, though they may still be good at recognizing the face and things like that. But it becomes this interesting philosophical debate which is: is the goal to be most empirically correct or is it to match what a person thinks?”
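Benchmarking against a human gold standard can be made concrete by scoring the model and human annotators on the same examples. A toy illustration with invented numbers, using mean absolute error on age estimation:

```python
def mean_abs_error(estimates, true_ages):
    """Average absolute error of age estimates against ground truth."""
    return sum(abs(e - t) for e, t in zip(estimates, true_ages)) / len(true_ages)

true_ages = [23, 41, 67, 35]          # ground truth for four face photos
human_guesses = [30, 35, 55, 40]      # what an average annotator guessed
model_guesses = [25, 43, 63, 37]      # the model's estimates

print(mean_abs_error(human_guesses, true_ages))  # 7.5: the human baseline
print(mean_abs_error(model_guesses, true_ages))  # 2.5: the model
```

The founder’s philosophical question survives the arithmetic: a model can beat the human baseline on ground truth while still disagreeing with what people perceive.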

Unsurprisingly, all of the people we interviewed expressed the opinion that their software programs are generally better and more accurate than humans in the same context. As one venture capitalist in San Francisco put it,

“Machines are not biased. They don’t think all Asians look alike, you know? They don’t have these preconceptions that some of us have. We’re like, ‘Who cares whether it’s an Indian or Japanese face?’ For them, it’s just the data of the face, and so once they got better, they’ve lost all of those biases. They don’t wake up and they’re grumpy, or half bleary-eyed from a hangover from the night before. They... There could be biases in the training data, absolutely. But my point is once you’ve cracked the nut, those are things that you can study and correct, et cetera, and you can detect them much more precisely than you could detect them in a human.”

As the founder of a very successful machine learning company put it, “We are more flawed than our algorithms.”

Is the goal to be most empirically correct or is it to match what a person thinks?