Will human software developers be replaced by robots? Given recent news about ChatGPT and Copilot, it seemed a response from the trenches might be warranted. The Financial Times offered a decent summary of some of the recent related news in its 2023-01-27 op-ed “Coding is still a good career bet”.
For a few reasons (given below), I feel a career in software development still has legs. There are still some really hard problems to solve in the field, and they aren’t going away any time soon.
Background
Up front, I must confess that a good portion of my opinions were formed rather early in my software development career. I was especially fond of and influenced by some of the early industry luminaries, including the likes of Terry Winograd.
So, my opinions about the appropriate use of computers were channeled in a specific direction: to solve human problems and support human needs. Of course, like all tools, computers can be used for good or ill. Fears of ill-intentioned use by bad actors are entirely appropriate. Fears that computers might somehow become “sentient” and rise against us are simply ill-informed fantasy.
Opinion
My general take on so-called “artificial intelligence” (AI) is that it’s neither: neither artificial, nor intelligent. Calling intelligence artificial insults intelligence. The only artifice therein may lie in the marketers of such products, as they try to boost sales to those not properly informed about their history and prior art. Likewise for so-called “machine learning” (ML). Simply put, machines don’t learn.
Mind you, we do have tools that can be “trained” on vast amounts of data to detect and recognize patterns remarkably well within the narrow confines of a circumscribed domain. This generally requires very large data sets; with enough data, you can capture quite a lot of the more interesting patterns therein.
Just Tools
Bottom line: these tools are still just that, tools. They exhibit none of the characteristics to which we attribute intelligence. They could perhaps be better characterized as idiot savants, but that might be deemed a rather low estimation of savants, who are, after all, still human.
People are wowed by the results being produced by these glorified robots. Really? Any given human language has some interesting variations that allow us to communicate, but only after a fashion. Even with language as a tool, we still struggle to understand each other far too often.
And our children learn to speak; that’s considered a natural part of becoming human. So, how hard can it be, after all? Perhaps it’s time for us to lower our estimation of ourselves as a species. Language is no longer a defining characteristic of our species anyway, nor should it be. Language is just another interesting tool in our arsenal.
Marketing
To my view, so-called “artificial intelligence” and “machine learning” are both marketing terms used to sell the latest variations of automated pattern-matching tools, even generative ones like ChatGPT and Copilot. These were derived from a great deal of prior art around text pattern recognition engines. To be clear: there’s no real intelligence involved here. It’s all predicated on large sets of example data, from which are distilled the common features that best represent how the examples match certain selected criteria.
Examining and distilling patterns from large data sets amounts to a rather exhaustive examination of the cases at hand. This is only a slight improvement over what we in the field sometimes call the brute-force approach. This was how DeepMind’s programs were applied to chess. There are certainly more elegant solutions to the problems posed by chess, but DeepMind’s approach isn’t one of them; it’s not about that. Ultimately, it was a showcase for what could be done with such tools.
It’s important to note that there’s always a marketing element to the naming of these tools. Without it, there would be no funding, and work on them would likely stop; as long as those involved in developing the tools have sufficient funds, work on them will continue. Given the hype, excitement, and controversy these tools have generated, that’s likely to remain true for quite a while. For example, Microsoft reportedly just pledged another $10 billion to OpenAI.
It frankly feels to me a bit like snake oil.
Fallout?
So, do I feel that my livelihood as a software developer is threatened? No, not in the least. In coming years, the big tech companies will likely hire fewer code monkeys. So what?
The advent of such tools may require new developers still in school to put more effort into developing their communication and social skills. That’s OK; they need those skills to compete in the job market anyway, and I generally advise new developers to learn them early in their careers.
Advancing State of the Art
Code generation is similarly, and historically, underutilized. The tools for it have been around for many years. They require that we notice repeating, recognizable patterns in our code, which rarely happens. I use and promote these tools in my work now, and I’m a fan of code generation because it saves us time. When used appropriately and well, code generation tools ensure that all instances of code sharing a pattern have the correct “shape” and work as intended.
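To make the idea concrete, here is a minimal sketch of template-based code generation (the field names and template are hypothetical, purely for illustration): given a simple specification of repeating patterns, we emit the boilerplate mechanically, so every generated instance is guaranteed to have the same correct “shape”.

```python
# A toy code generator: render one accessor function per (field, type) pair
# from a shared template, so all accessors have an identical, correct shape.

FIELDS = [("name", "str"), ("age", "int"), ("email", "str")]  # hypothetical spec

TEMPLATE = '''\
def get_{field}(record: dict) -> {type}:
    """Return the '{field}' field, raising KeyError if missing."""
    return record["{field}"]
'''

def generate_accessors(fields):
    """Render the template once per field specification."""
    return "\n".join(TEMPLATE.format(field=f, type=t) for f, t in fields)

source = generate_accessors(FIELDS)

# The generated source could be written to a module file; here we just
# compile it in place to show that it runs.
namespace = {}
exec(source, namespace)
print(namespace["get_age"]({"name": "Ada", "age": 36, "email": "a@b.c"}))
```

The point isn’t the template engine (any would do); it’s that a human first had to notice the repeating pattern and capture its shape, which no amount of statistical pattern matching does for you.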
And there’s the rub: intention and understanding. The tools we use, whether marketed as “intelligent” or “learning”, have neither. Human intention and understanding are required for designing and building good software solutions. The real heart of a software design is its conceptual model of the problems the solution intends to solve, and the essential parts of those problems must be understood and shared.
EDUCE is tackling some of the harder parts of development that Fred Brooks identified back in the mid-80s in “No Silver Bullet”. How do we make our mental models sharable and comparable? These questions still remain to be solved.
I’d love to see the harder parts of EDUCE automated by a tool chain. Pulling apart technical narratives and building conceptual models from their essential conceptual parts is HARD work. Getting all the people involved to share a common understanding of the problems to be solved is HARD work. Domain-driven design (DDD) is another practice that aims to improve the process, and it can be combined with EDUCE.
I’d be happy with even a half order-of-magnitude improvement in the development process, especially in the context of developer teams. That goal is considerably more modest than the one Brooks described. So the challenges in this field are still relevant and difficult, largely because the really thorny problems remain conceptual: they live in the human domain, dealing with human needs and human communication, and will remain so for the foreseeable future.
So yes, I believe the field still has legs as a career choice.