Not many executives will step back and talk bluntly, in public, about what they do. But Microsoft President Brad Smith is no ordinary executive. He is also the company's top lawyer, and he has been influential in both roles because he operates not just as a lawyer but also as a humanist. His new book, Tools and Weapons, is a surprisingly elegant survey of the daunting challenges modern technology poses to society.

It is a far cry from any policy white paper. With stories drawn from daily operations of a company that has been pulled inexorably into controversy after controversy, Smith explains how Microsoft has navigated vexing issues for which there are no clear answers. His central takeaways: tech companies must think harder and do more to respond to society’s concerns; government, too, has to move faster; and business and government have to find better ways of working together. Techonomy’s David Kirkpatrick interviewed Smith at a book event in Los Angeles in October 2019. Here is an edited transcript.


Kirkpatrick: You’re president of Microsoft and oversee all legal work. Why take time to write a book?

Smith: Technology has become so socially ubiquitous. It’s reshaping every aspect of not just our lives but our societies around the world. Technology is both a tool and a weapon. And when you think about the broad impact it’s having, lots of important information isn’t easily accessible to most people. Some people know a lot about tech but not about the policy issues. Some people know about policy but not about tech. But we also didn’t want this to be one more self-serving book. We really tried to step back and be candid and introspective.

Kirkpatrick: You tell many stories about how Microsoft had to respond to governmental action that was ill-considered or illegal—one example being when Edward Snowden revealed the U.S. government was tapping into the servers of major tech companies and stealing data.

Smith: We start the book talking about the Snowden disclosures because, in our view, they were one of the two inflection points this past decade in the relationship between tech and governments around the world. We did decide to take a stand, and when you work for a company, you're sort of ready to be sued by the government. You hope you won't be. But it happens. To stand up and sue the government is very different.

Whether it’s standing up to our own government or responding to Russian government cyberattacks, we’ve got to decide what is worth fighting for. In the Snowden case and in these other cases there’s a single unifying principle. If you put customers first and you understand that in a cloud-based era, their data is in your data center and you’re its steward, not its owner, you realize you have to protect it, whether from surveillance or cyberattacks.


Kirkpatrick: But that also goes to one of the key points you make repeatedly in the book—that we need a new working relationship among three parties: government, civil society, and business, particularly the tech industry.

Smith: There are lots of smart people in government. Anybody who works in the tech sector who thinks that people in government aren’t smart enough to understand what we’re doing is fundamentally misguided. Just think about the world we live in. If you drove a car here this morning, it had safety standards because of government regulations. When you buy food, there are nutrition labels because of government rules. And if you use any kind of pharmaceutical product, it was approved by the Food and Drug Administration.

In every other area, the question isn't whether people in government are capable of regulating technology. The question is how we ensure that people in government have the information and knowledge they need to regulate technology effectively. That's what we need for digital technology. For those of us who create technology, it is actually the best path to a long-term, sustainable, profitable course, because it's what will be needed to sustain the public's trust.

We have to be more transparent, and we’ve got to think about how we can work together. If those of us who understand the technology best don’t propose ideas, then we shouldn’t complain if people who understand it less come up with something that won’t work.

Kirkpatrick: Surprisingly, you endorse regulation, even though most of your industry has resisted it. You applaud the new European privacy rules.

Smith: I gave a speech on Capitol Hill in D.C. in 2005 and called for a strong national privacy law. The rationale was straightforward. It had two parts. The first was that people are going to feel better about using technology if they trust it, and we shouldn’t just expect people to trust the companies that make it.

The second argument was, people ultimately are not going to want 50 states to have their own privacy laws, so you’ve got to get a federal act. But the second inflection point for tech this decade was last year with Cambridge Analytica. Overnight, that changed the conversation in Washington.

Kirkpatrick: You call it the tech equivalent of Three Mile Island.

Smith: For years, we’d been saying there will be a Three Mile Island for privacy. Three Mile Island was a nuclear power accident in Pennsylvania in 1979. One lesson was that the nuclear power industry had not engaged in a public discussion about the benefits and risks of nuclear power. So, when the accident happened, they were completely unequipped for it. And as a result, it would take 34 years before another nuclear power plant would be built in the United States.

In just a year and a half since Cambridge Analytica, the U.S. has adopted what will be understood to be a de facto national privacy law. It’s called the California Consumer Privacy Act. Starting in early 2020, every business in the country will have to ask: “Should I just implement these provisions for customers in California, or should I do it nationwide?” Immediately they’re going to realize it’s way too cumbersome and expensive to do it in a fragmented way. They’re going to say, “Let’s just apply the California law to the nation.”

Kirkpatrick: Nearly two decades ago, the federal government aggressively prosecuted Microsoft for antitrust violations. You prevailed. How did that experience shape the modern Microsoft?

Smith: We were the first graduates of this new school of hard knocks. We were forced to attend starting in the late 1990s with the big antitrust cases. The Department of Justice and 20 states, including California, sought to break up Microsoft. From that experience, I came away learning that you’ve got to start by just understanding why people are concerned about you. When we all see ourselves in the mirror in the morning, we say ‘I look pretty good.’ But you have to see what other people see in you, not what you see in yourself.

Kirkpatrick: Is that now partly why you come out differently from other giant tech companies on matters like privacy and artificial intelligence regulation?

Smith: It definitely plays a role. If we’re not understanding the problems technology is creating, not acknowledging them, and not working with other people to solve them, then we ultimately will fail. We’ll fail the people we create this technology for in the first place. So let’s get on with it and have these conversations, rather than just say, ‘Hey, we’ve got it right. We’re smarter. People don’t understand us. Please go away.’

Kirkpatrick: You talk about AI a lot in the book, and say government has to get involved.

Smith: We look at a lot of things through a historical lens. I think AI is going to reshape the economies of the world over the next three decades much the way the combustion engine did in the first half of the 20th century. It will change everything. You could say that the two most important questions for all of us alive today are, number one: are we going to do what it takes to build a healthier planet so that it's sustainable for the generations that follow? And number two: how are we going to address the fact that machines can make decisions that previously have only been made by people?

Hollywood has produced many successful movies that are basically: humanity creates machine, machine makes decision, machine destroys humanity. So this is a question on the general public's mind: what are you all about to do to us? It calls on us to ask what ethical rules will govern the decisions that artificial-intelligence-based machines make.

Kirkpatrick: There’s one sentence in your book that makes me sad it even has to be written: “Decision-making about war and peace needs to be reserved for humans.”

Smith: It does need to be written. Are we going to see weapons driven by machines deciding who to fire upon and who to kill? This is a classic issue that requires bringing different communities of people together. People in the human rights community have important things to say in this space, and so do the militaries of the world. These aren’t communities that often interact. So until you develop a common vocabulary, you can’t bring people together. We don’t want to wake up and find that machines started a war while we were sleeping.

Audience member: What do you look for when you’re filling a key role at Microsoft?

Smith: As we enter an AI-based world and try to empower computers to think like people, you actually need people who are more broad-minded. We're going to have more people from the liberal arts, albeit ones who are also steeped in technology to some degree. The single most important attribute is a desire to keep learning. If there's one thing I've been struck by—whether with Bill Gates or Steve Ballmer or [Microsoft CEO] Satya Nadella or other tech leaders, or even Angela Merkel—it's their amazing curiosity. Satya and I were with Chancellor Merkel in her office. Oh my gosh, she just sat down and started to quiz us. "How is this changing?" "How is Germany compared to Europe?" "How is Europe compared to the United States?" It was just question after question.

Audience member: What’s your advice for young people? How can they shine in an increasingly competitive environment?

Smith: It's disturbing to see how inequitable access to the fundamental skills of the future—things like coding and computer science—remains today. Something we point out in one chapter is that the people taking coding courses or Advanced Placement high school computer science courses in America today are more male, more white, more urban, and more affluent than the nation as a whole. As technology skills increasingly translate into the best jobs, there's a real danger that lack of access to those skills will exacerbate the other divides.

Audience member: How are we going to combat the adverse effects AI could have with things like facial recognition and biases, especially on black and Latino communities?

Smith: There has never been an issue in the quarter of a century I’ve been involved in the tech sector that has exploded politically like facial recognition. There is a real bias problem. It’s been well documented. Facial recognition has higher error rates for women and people of color. It’s likely to be a problem we’ll solve sooner rather than later, but it does require real work. We think the law should in effect say if you’re going to be in this business, you’ve got to make your service available for testing, to create a Consumer Reports of facial recognition, if you will. We’re also seeing the use of facial recognition to enable mass surveillance in some parts of the world.

Technologically, we are on the cusp of making possible the kind of future George Orwell warned about 70 years ago in 1984. At Microsoft we've adopted policies designed to ensure our technology isn't used that way. But we need a standardized approach across the industry and, ultimately, a recognition—I'll say starting with the world's democratic governments—that there must be clear limits on how facial recognition is used. That won't stop the world's authoritarian governments from using facial recognition for this purpose. But you perhaps start to create the basis for the dialogue that will be needed with other countries.

Audience member: What’s Microsoft doing to encourage smart tech regulation not just in the U.S. but across the world?

Smith: To change laws and regulations around the world today, your best place for success isn't Washington. Laws imposing more obligations on tech companies are moving from places like New Zealand to Australia to the United Kingdom to France to Germany, and then they will move back to Washington. We're seeing Europe in a leadership role. More than anything, this is a time when we should be thinking about how technology needs to serve democratic principles. You bring together the world's democratic governments—there are 76 of them—and you pursue a deliberate strategy, and hope to bring the U.S. on board. But when the U.S. is not prepared to get on board, you just don't worry about it. You create consensus and then you come back to Washington.

This story was originally published on Techonomy