Smart computers, smarter humans

If computers learn to code, what’s left for human coders to do?

Written by Andrew Leonard

Illustrations by YONIL

In January 2023, Zeke Sikelianos, a software developer in Berkeley, California, was eating a meal in a restaurant when the playlist switched to a string of 1980s New Wave hits. “Blondie, Devo, that sort of thing,” he recalls.

Sikelianos, who works for Replicate, a startup that builds open-source tools designed to work with machine learning programs, couldn’t help but think about ChatGPT, the large language model (LLM) chatbot released by OpenAI a month earlier.

“I wondered if you could write a short prompt to ChatGPT that — given a few sample artists — would produce a list of songs, search for them on YouTube, and download them,” says Sikelianos. “So I wrote one paragraph for ChatGPT and it produced an entire Python script that generated a list of twenty song titles, downloaded them all from YouTube as MP3s, and put them on my computer.”

“It worked flawlessly on the first try,” he recalls.
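
Neither the prompt nor the generated script was published, but a minimal sketch of what such a script might look like — assuming youtube-dl (plus ffmpeg, for the MP3 conversion) is installed, and using a hand-picked stand-in for the song list the model actually produced — runs to barely twenty lines of Python:

```python
# Illustrative reconstruction, not the script ChatGPT generated for Sikelianos.
# Assumes youtube-dl and ffmpeg are installed; the song list is a stand-in.
import subprocess

SONGS = [
    "Blondie Heart of Glass",
    "Devo Whip It",
    "Talking Heads Once in a Lifetime",
    # ...seventeen more in the same New Wave vein
]

for song in SONGS:
    # The "ytsearch1:" prefix tells youtube-dl to download the top search result.
    subprocess.run(
        [
            "youtube-dl",
            "-x",                       # extract audio only
            "--audio-format", "mp3",    # convert to MP3 (requires ffmpeg)
            "-o", "%(title)s.%(ext)s",  # name each file after the video title
            f"ytsearch1:{song}",
        ],
        check=True,
    )
```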

Sikelianos has been a working programmer for more than 20 years. He’s used PHP and MySQL to build online shopping carts and worked for five years as a staff engineer at GitHub, the open-source software code repository. Currently, he focuses on broadening access to tools like ChatGPT and the image generator Stable Diffusion by creating and maintaining open-source tools built with Python and JavaScript. He’s convinced that ChatGPT and GitHub Copilot, an “AI assistant” released a year earlier and likewise built on OpenAI models, “are game changers. It is at once scary and exciting.”

Exciting, because, as Seattle-based software engineer and founder Colin Megill puts it, “software development with Copilot is the difference between hiking up Mount Everest on foot or going up it with jump jets on your back.”

But also scary, because: if computers learn to code, what’s left for human coders to do?

"The most important thing I would recommend people keep in mind is not to focus on the current capabilities of AI. Instead, think about what next year's AI models will be able to do, and the year following. Then, consider the landscape in 10 or 20 years.”

The impact of automation on the labor market has been making workers nervous since at least the early 19th century, when the Luddites went on the warpath against textile machinery. But the latest plot twist in this long-running drama is the emergence of anxiety among the creators of automation themselves. Ever since the release of GitHub Copilot, murmurs of anxiety about AI’s threat to eliminate software developer jobs have been surfacing in the online discussion forums where programmers gather.

“It feels like I'm running against a clock until the career I am working very hard for will automate itself away,” wrote one programmer on Hacker News in February 2022. “I don't think I'm going to be fired tomorrow but with the astounding rapidity of advancements here,” wrote another on Reddit at the end of 2022, “it seems like the days are numbered for software engineering as a field — at least for this quantity of software engineers.” A survey of 500 web developers conducted by Bootstrap just a few months after GitHub Copilot’s release found that 61 percent of respondents “agreed that AI is likely to cause widespread unemployment in the web development industry.”

~

In late January, the same week I talked to Sikelianos, Microsoft made two announcements that seemed to confirm these worries. The company said it was planning to cut costs by laying off 10,000 employees. Simultaneously, it confirmed a multibillion-dollar investment in OpenAI. Soon, ChatGPT was integrated into Microsoft’s Bing search engine as well as into a “Premium” version of Microsoft’s video conferencing software, Teams, with the promise that it would take bulleted notes on live conversations as they happen.

At the World Economic Forum in Davos, Switzerland, Microsoft CEO Satya Nadella said that “every product of Microsoft” will eventually have some aspect of OpenAI’s capabilities “to completely transform the product.”

While Microsoft’s layoffs appear to cover a wide array of the company’s operations, not just engineering, for many the writing on the wall couldn’t have been clearer. Human labor: out. AI labor: in.

As Marc Andreessen famously declared in 2011, software is eating the world. Now, software also seems to be eating itself.

~

In late January, Jeff Clune, a computer scientist who recently did a stint at OpenAI, made a prediction at a conference: there is a 30 percent chance that AI will be capable of handling “50 percent of economically-valuable work” by the year 2030. When I followed up with him via email to ask him if that work included computer science jobs, Clune said yes.

“Software developers have lots of reasons to be worried about their job security,” says Clune. “Many of the tasks they currently spend lots of time on are being automated. The pace at which that occurs will accelerate.”

LLMs are a subset of AI that work their sorcery by digesting huge corpora of information that already exist in some digitally accessible form. GitHub Copilot was trained on the public open-source code hosted on GitHub. ChatGPT was trained on roughly 500 gigabytes of text and code from the internet (including repositories of information like the very Reddit discussion forums where programmers discuss their anxieties about ChatGPT). In theory, then, any code that humans have already shared publicly is available for AI assistants to absorb. As Sikelianos notes, this is literally the definition of software eating software, “and then regurgitating it.”

And yet, it's probably worth noting, at this point, that ChatGPT is hardly infallible. In fact, in December 2022, Stack Overflow, a hugely popular clearinghouse for software programming assistance, had to ban the posting of answers generated by ChatGPT because, according to site moderators, “the average rate of getting correct answers from ChatGPT is too low.”

At OpenAI, Clune led a research experiment in which an AI model “watched” 70,000 hours of Minecraft videos that had been uploaded to YouTube. By the end, the model was able to craft a “diamond pickaxe,” a task that would typically require hundreds of human mouse clicks, faster than a human could.

Clune’s choice of Minecraft as an AI playground actually reveals the limits of AI, says Colin Megill, the founder of Pol.is, a non-profit that uses machine learning to gather, analyze, and understand what large groups of people think.

A Minecraft diamond pickaxe “is produced in a closed system where everything is scriptable.” And that is exactly the kind of constrained scenario where AI performs best.

If you are coding what Megill calls “an optimizable function,” a program where there is a “spec that says here’s the input and here’s what we need the output to be,” then you are doing something that fits right into the LLM wheelhouse.

"The issue is that the tasks that are left over after automating all the things that can be automated are the ones that you can’t automate because they’re so fucking hard.”

A good example, incidentally, of an optimizable function might be the music program that Sikelianos asked ChatGPT to devise.

“I think the people whose job was to effectively write functions are in a lot of trouble,” says Megill. “But that’s also a lot of rote work and it's not a creative job. It’s important — optimizable I/O is definitely something that we spend an enormous amount of time on in very large companies — but I think what it means is that the programmer of the future will not just be a programmer but a manager of specs. Being able to anticipate what the machine does when you give it a spec will be very important.”
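
To make that concrete, here is a toy example (entirely hypothetical, not from Megill) of the kind of spec-complete function he has in mind: the docstring pins down input and output exactly, leaving nothing but rote translation of spec into code — precisely the work an LLM assistant handles well:

```python
# A toy "optimizable function": the spec fully determines the behavior.
# Hypothetical example for illustration; names are invented.
def normalize_phone(raw: str) -> str:
    """Spec: given a US phone number in any common format, return it as
    exactly ten digits, e.g. '(415) 555-0134' -> '4155550134'."""
    digits = "".join(ch for ch in raw if ch.isdigit())
    # Drop a leading country code if present.
    if len(digits) == 11 and digits.startswith("1"):
        digits = digits[1:]
    if len(digits) != 10:
        raise ValueError(f"not a valid US number: {raw!r}")
    return digits

# The spec doubles as a test suite: input in, required output out.
assert normalize_phone("(415) 555-0134") == "4155550134"
assert normalize_phone("+1 415-555-0134") == "4155550134"
```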

~

Mostly, experienced programmers echo the same point: new tools are always being introduced to make programming easier and faster than before. Programmers once had to tell computers what to do in machine code (raw ones and zeros) and, later, assembly language; the entire history of the profession since has been the development of ever more advanced programming languages that abstract the gnarly, hard stuff away and make it easier to write code.

Take compilers, version control systems, and integrated development environments. One can get even more basic: spellcheck programs, Word macros, and the very notion of a programmable spreadsheet.

Copilot and ChatGPT, says Adam Jacob, a software developer and the CEO of System Initiative, a startup that works on “infrastructure automation,” “are really just advanced autocomplete tools.” Time-savers for busy workers, but not existential threats.

This is not to say that the future won’t be different. He says the problems left to solve once computers take care of all the easy “boilerplate” stuff will be, by definition, difficult. If you’re a programmer who makes a living just doing the simple stuff, it might be time to level up.

The larger issue may not be about which jobs are replaced but about the nature of the jobs themselves. If AI assistants accelerate the speed with which grunt work is accomplished, then what’s left over will be harder. Instead of just connecting the dots, developers will be forced to be more creative.

“Copilot requires a change in the way you approach the task of writing the computer program,” says Sikelianos. “Instead of just code, code, code, you start with intent.” Even what Sikelianos calls “the art of the prompt,” phrasing your request to an AI assistant in a way that will get you a correct answer to a problem, can be tricky. To craft the prompt that built his music program, he says, “I had to know the right terminology to use. I had to know the word ‘function.’ I had to know that I wanted to write in Python, and that I needed a Python coding execution environment, in which I could take that script and do something with it. And I had to know that youtube-dl is an open-source tool that can be used to download stuff from YouTube directly.”

Even then, Clune says, the current state of the art is a poor basis for prediction. We should be looking further ahead: “The most important thing I would recommend people keep in mind is not to focus on the current capabilities of the best AI,” he says. “Instead, think about what next year's AI models will be able to do, and the year following, and then consider the landscape in 10 or 20 years.”

“We can think of an AI version of Moore’s Law, where AI capabilities double every two years. If that very rough guess at the doubling time is true, within 20 years we will have AI models over 1,000 times more capable than ChatGPT, Codex, and DALL-E 2. It’s hard to predict what the labor market will look like given that, but the one guarantee I can make is that it will be radically different from what we have today.”
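
The arithmetic behind that figure is straightforward: a doubling every two years compounds to ten doublings over 20 years, and 2^10 = 1,024, hence “over 1,000 times more capable.”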

~

But Clune and Jacob both argue that lowering the cost of something can actually increase demand for it, an economic observation known as the Jevons paradox. A common example is how gains in energy efficiency can lead to more overall energy use. To Jacob, then, the possibility that AI assistants could have a negative net impact on the overall software developer job market seems remote.

“So far,” says Jacob, “advancements in automation in tech have never caused us to build less tech.” But as the MIT economist Daron Acemoglu has been arguing for more than a decade, while we can’t say that automation leads to higher levels of unemployment, there is clear evidence that automation has had a significant impact on the distribution of wealth within industry sectors.


In other words, in industries affected by automation, highly skilled and highly educated workers end up with a larger share of available wages than low-skilled and less educated labor. Automation might drive income inequality.

I asked Acemoglu whether something similar could play out in software. “There are a lot of people working in the security part of IT, and I think their jobs can be automated,” he says.

Like Megill, Acemoglu foresees challenges for programmers who currently make a living writing easily optimizable functions with clearly defined specifications.

It raises the question: will those programmers have the wherewithal to start tackling problems whose solutions are by definition not easy to specify?

Viewed from this perspective, the whole premise of labor-saving automation is turned on its head: as computers get smarter, human work gets harder? But this is not a new observation. John Allspaw, former CTO at Etsy and principal of Adaptive Capacity Labs, a consultancy specializing in incident analysis and resilience engineering, told me that forty years ago the researcher Lisanne Bainbridge published a now-canonical paper titled “The Ironies of Automation.”

In her paper, Bainbridge makes a convincing case that automation actually raises the stakes for humans.

Bainbridge observed, says Allspaw, that “the more advanced a control system is, the more crucial the contribution of the human operator.” This is because automation doesn’t remove work completely; it just changes the nature of the work that remains to be done. “The tasks that are left over after automating all the things that can be automated are the ones that you can’t automate because they’re so fucking hard.”

The AI of the future will have to be really smart to solve those problems.
