So, I’ve Been Thinking

Recently, I had the chance to sit down for a drink with Grady Booch. For anyone who doesn’t know his name yet, he’s a technology pioneer, innovator, and all-around fascinating guy. He was a primary creator of the Unified Modeling Language, and his career has included everything from work at NASA (where he was literally the guy sitting in front of the big red self-destruct button during launches) to his current gig serving as Chief Scientist for Software Engineering at IBM Research. I can also tell you he makes a mean Hawaiian-twist margarita.

Grady’s been at the center of some of the greatest developments in coding and technology over the past few decades, which makes him a deep well for serious topics. Our conversation touched a lot of areas, but I was most fascinated by his take on one topic the technology sector wrestles with every day: the ethics of code.

I don’t think it’s contentious to say that digital innovations are driving changes in every industry and sector at a pace that we have never seen before. Some of those changes have led to large-scale, fundamental shifts in the business landscape, and some of them have led to smaller, more nuanced opportunities for new and existing businesses. All of those changes, however, have the potential to affect people in more than just the positive ways we have in mind when we code.

From the Luddite Rebellion of 1811 to the Lamplighter’s Union fight against the electric arc lamp in the 1890s, worries about automation displacing human jobs have been with us for centuries. Those fears have been offset by the reality that change typically takes place slowly. Robots, for a more modern example, didn’t take all the manufacturing jobs overnight. Instead, robotics has gradually reduced the need for “hands-on” humans in the factory over the past several decades. The jobs lost weren’t effortlessly absorbed into the economy, but the shift happened slowly enough that they ultimately could be.

Today, the fear that automation will displace jobs faster than the economy can absorb them is far more plausible. Technology is progressing at breakneck speed no matter where you turn, and no industry seems insulated from waves of innovation that use automation to do things more efficiently and effectively. What was once just a concern for manufacturing workers is now a concern for everyone whose work has any analytical or repetitive features. Want to build a car with hardly any factory workers? Look no further than the Tesla plant. If you need an appendectomy, on the other hand, you’ll still need a surgeon’s dexterity for the near future. It’s not far-fetched, however, to imagine a future when an attendant oversees an automated appendectomy the way a Starbucks barista makes digital selections on a Mastrena espresso machine.

Factories or hospitals, the work we’re doing in tech carries incredible weight that we too often take lightly. We are actively finding ways to increase efficiency in every field, and I laud that. But as our enterprise-level efficiencies move up the hockey stick, we need to start thinking about jobs the same way we weigh the environmental impacts of our work. The impact of our work goes well beyond the innovations we create. If it hasn’t been asked before, it’s time to ask now: what ethical responsibilities do we have as we use code to transform the world?

Concern over the ethics of code opens the door to a larger conversation about how Artificial Intelligence, along with the changing ways we work, is incubating a new economic model in the West. It’s a model that requires different competencies and job types, but it also has the potential to empower humans like never before in our history.

The Implications of AI

Visions of AI have tantalized, inspired, and terrified us for years. From HAL 9000 to Ex Machina, we portray AI as a conscious superintelligence or supervillain. The reality is much more benign in the Hollywood sense and more insidious in its potential impact on our economy. The AI that’s real today is known as “Narrow AI” or “Applied AI,” and it does very specific work: managing your calendar, finding a song similar to others you like, routing you around traffic, beating you at chess. It’s what many of us are working on every day, and, despite our fears of superintelligence, Narrow AI is what is actually changing everything.

Dr. Rand Hindi broke this down in detail in an article with a title that I love: “How My Research in AI Put My Dad Out of a Job.” Beyond the ethical jam, his point was that we shouldn’t worry about superintelligence, despite all the big names in tech who have come out with dire warnings. The reality is that superintelligence may be a distant dream, and as Dr. Hindi puts it, we’re “missing the point that in the next decade, Narrow AI will already have destroyed our society if we don’t handle it correctly.” Though the warning is a bit hyperbolic, it’s true that when we focus on superintelligence (also known as Strong AI or Artificial General Intelligence), we forget that Narrow AI’s inherently limited scope means coders are applying it to discrete uses in every imaginable field. Narrow AI will replace or transform any job where information gathering and pattern recognition drive a volume business. That’s not just laborers. That’s accountants, traders, realtors, lawyers, software developers, and on and on. The jobs can be low pay or high pay, but either way, AI can do them faster.

We’re already beginning to see how AI will become invaluable in these fields. One Canadian firm, Blue J Legal, is using AI to help accountants and tax lawyers predict how courts are likely to rule on a given set of facts and client circumstances. A Palo Alto-based legal startup, Casetext, is enabling lawyers to upload briefs and have AI do the case research work of hundreds of paralegals. In Japan, the insurance firm Fukoku Mutual is replacing 34 claims adjusters with AI built on IBM’s Watson. In the US, we are particularly susceptible to Narrow AI reshaping jobs: PwC found earlier this year that 38% of all US jobs are at high risk of automation in the next 15 years. That’s just one of a number of studies that have reached the same conclusion: the next two decades will be turbulent for our economy if we don’t make deliberate changes soon.

Immunity to AI

That’s certainly not to say that every kind of job in the US is at risk. There is such a thing as “immunity to AI,” at least for the next couple of decades. The simplest way to identify insulated jobs is to ask, “Does it require emotional intelligence or non-pattern-based decision making?” Ultimately, that leads to three broad categories of jobs.

The first category is jobs that require meaningful creative interactions with other people. Narrow AI can advise on the most successful closing strategies for a particular case, but it’s not capable of making a compelling closing argument in court. Even if we use an AI system to develop an argument based on the court’s preferences, to identify and incorporate all of the relevant case law, and to select the words and phrases most people find persuasive, Narrow AI has no clear path to replacing the human ability to deliver an argument to other humans, or to adapt mid-stride in reaction to them.

The same can be said for any number of professions. Marketing strategy and design will need human creativity and emotion. HR will need people to listen, empathize, and make the right context-based decisions. Nurses will need to bring humanity to patient interactions and treatment. Teachers will need to bring expertise and learner-specific strategies to education. Even customer service will need humans in place to receive escalations that go beyond an AI’s ability to address.

The second category is jobs that won’t be replaced (yet) due to the limitations of robotics. Our ability to code has progressed far faster than our ability to build machines capable of fine motor skills or of handling unpredictable physical challenges. Repetitive physical tasks are one thing, but as a report from McKinsey & Company pointed out last year, even maid service in a hotel goes beyond the capabilities of autonomous machines. Everyone throws towels and pillows in different places, and robots simply can’t deal with that degree of variation in a cost-effective way. And though we are aggressively developing more advanced robots, they are expensive and time-consuming to build, meaning fields like on-site construction will remain largely secure for the foreseeable future even as the tools of the job change. None of that is to say AI will not affect these first two categories. In fact, the most likely scenario is that many of these jobs will transform to work side by side with Narrow AI tools sooner rather than later.

The third category of AI-insulated jobs is entrepreneurship. Be it a startup founder or a food truck operator who works alone, entrepreneurial roles require aspects of the first and second categories to varying degrees. Small-business entrepreneurs and solopreneurs wear many hats on any given day: CEO, CMO, CFO, CIO, and so on. That diversity of work makes entrepreneurial work very difficult to automate.

Ethics of Code

So on one hand, we have jobs that are “safe from AI,” while on the other we have jobs that are likely to be displaced. Where does that leave us as coders and technologists? If you listen to Grady’s TED Talk on superintelligence, you’ll hear him say, “The rise of computing itself brings to us a number of human and societal issues to which we must now attend. How shall I best organize society when the need for human labor diminishes?”

I don’t believe we should ignore the “I” in that question. The ethical dilemma we face in technology is one of our own creation, and that, to me, means it’s incumbent on the tech community to deliver the solution as well. Said simply, if you’re aware that the work you’re doing is going to displace jobs, you should be intentional in your effort to leverage technology to create new opportunities for the displaced.

Dr. Rand Hindi proposes an interesting idea for social and governmental programs that would support an economic framework to make widespread, AI-driven transformation sustainable. His argument is that the end result of jobs displaced or altered by AI is a population that must be better educated to manage or interface with AI. That means we need to incentivize people to pursue ongoing, skills-based education in technology.

Dr. Hindi proposes a Universal Educational Income, a system in which people would receive a monthly salary as long as they are enrolled in some kind of educational program. Any number of challenges come to mind whenever a universal income is proposed, from who funds spending at that scale to whether it can ever be enough to make a difference. It’s not an obviously viable policy, but I can certainly appreciate the beauty of the idea: create a system that engineers people into the AI equation. By incentivizing people to constantly learn, you have a workforce better prepared for a new economy. It’s a fascinating possible solution, and I believe the spirit of engineering our culture into an AI-fueled economy is the right one. That said, I believe there are better ways to make that happen.

Engineering People In

First, I believe a simple premise is true: the faster we advance AI, the more we will drive demand for humans to manage and direct what AI makes possible. The reality is that we are heading toward a huge supply of Narrow AI in the economy. Look at marketing, for example, a field seeing enormous investment in predictive AI technologies. Even as AI becomes acutely capable of optimizing ad spend and placement, roles like Marketing Director and Creative Director actually grow in importance. The repetitive work is displaced, but demand for creative thinking is on the rise. In other words, there has never been a better time to have the entrepreneurial spirit, because technology and market forces are in place to support you.

Steve Case, CEO of Revolution LLC, gave a perfect example of this in a recent LinkedIn post. Two hundred years ago, farming represented 90% of the American workforce; now that number is less than 2%. Rather than purely displacing jobs, technology made farmers more efficient and productive, and new jobs were created by the need to supply and support modern agriculture. In a modern context, it’s easy to envision new entrepreneurial roles that wouldn’t be possible without AI, roles made possible by bundling creativity and dexterity with deep analytical insights.

What jobs will best augment or enhance what AI can do? How can the tech industry be as instrumental in creating jobs as we are in displacing them? These are questions that everyone driving tech automation should be thinking about. At GoDaddy, I’m pushing to build a platform that empowers entrepreneurs to make their ideas real, with machine learning tools and predictive analytics to guide their decision making. I think that’s one important way to help make our economy immune to AI, but I’d like to challenge the industry to think of a hundred more solutions, and then get working to test them.

For entrepreneurial options, our goal should be to deploy Narrow AI in a way that encourages more and more people to experiment with self-driven ventures. If we engineer tools that reduce barriers to access through elegantly simple systems and widespread availability, then the technology we build for efficiency can help us empower economic participants at the same time. There’s no doubt that we can be the drivers of a new economy with new companies and new careers, but we have to be intentional about that role.

Finally, I think the tech industry needs to be a louder voice on the real risks that Narrow AI is creating for our economy right now. Grady Booch and other luminaries shouldn’t be left to carry the entire load. More of us need to clearly articulate both the promise of AI and its real economic dangers. We aren’t building Skynet, but we might be building something just as dangerous for billions of people if we don’t purposefully create new opportunities as the old economy passes.

Where I Land

Larry Niven once said, “That’s the thing about people who think they hate computers. What they really hate is lousy programmers.” That’s a timely and true quip in its own right, but it should also remind us that we are the ones behind the code. We have an ethical opportunity to consider, and attempt to address, what will happen because of our code.

As we create new applications for AI that make it possible for seemingly once magical automation to happen, we should devote some of our time and energy to figuring out how to make more people magicians. Let’s help more people become builders of the new economy by putting the power of what we build in their hands as quickly and simply as possible. That’s how we’ll begin to see the new jobs and businesses emerge that will drive a new economy forward. No matter what, we need to bring our own humanity to bear every time we type a line of code. If we can do that, there will certainly be no reason to fear Skynet – but there will also be a lot to be excited about thanks to the future of AI.

Your Voice Wanted

One of the best ways for me to mature my thoughts on the ethics of code is to hear from you. Please share your thoughts below—I tend to be on my blog in the evenings, so look for my responses then.