[Ed. note: As we wrapped up the year and geared up for the next, we revisited our top-performing posts of the year. We hope you enjoy this piece, one of our favorites from 2023, as we look forward to bringing you more in 2024.]
Amidst the hype surrounding the rapid advancements in Artificial Intelligence, a wave of anxiety has washed over the software development community. Are we, as programmers, on the brink of obsolescence, soon to be replaced by intelligent machines? The fear is that business executives and product managers will bypass software developers altogether, directly instructing AI to build precisely what they envision. However, as someone with 15 years of experience translating vague specifications into functional software, I find these concerns largely unfounded.
While coding itself can present challenges, I can honestly say I’ve rarely spent more than a couple of weeks debugging a particularly stubborn piece of code. Once you grasp the syntax, logic, and common programming patterns, the act of writing code becomes relatively straightforward – most of the time. The real bottlenecks, the true complexities, invariably lie in understanding what the software is actually supposed to accomplish. The most demanding aspect of software creation isn’t the coding; it’s the meticulous process of defining clear, unambiguous requirements. And crucially, these software requirements remain firmly in the domain of human intellect and communication.
This article will delve into the critical relationship between well-defined requirements and successful software, highlighting what it truly takes to build a robust programming career, even as AI continues to evolve. We’ll explore why focusing on these fundamental aspects will not only future-proof your career but also position you for greater success in the ever-changing tech landscape.
“It’s not a bug, it’s a feature…no wait, it’s a bug” – The Requirement Conundrum
Early in my career as a software engineer, I joined a project already in progress, tasked with boosting the team’s development speed. The software’s core function was to enable the configuration of custom products on e-commerce platforms.
My specific assignment was to generate dynamic terms and conditions. These legal stipulations were conditional, varying based on the product type being purchased and the customer’s location within the US, due to differing state regulations.
During development, I stumbled upon what seemed like a potential flaw. A user could select a product type, generating the correct terms and conditions. However, further along in the workflow, the system allowed the user to switch to a different product type while retaining the initial, predefined terms and conditions. This directly contradicted a feature explicitly outlined and signed off on in the business requirements document.
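To make the flaw concrete, here’s a hypothetical reconstruction (all names invented): the terms are derived from the product type and the customer’s state, but nothing re-derives them when the product type later changes.

```python
from dataclasses import dataclass

# Invented lookup: terms depend on product type and US state.
TERMS = {
    ("widget", "CA"): "California widget terms...",
    ("gadget", "CA"): "California gadget terms...",
}

@dataclass
class Order:
    product_type: str
    state: str
    terms: str = ""

    def attach_terms(self) -> None:
        self.terms = TERMS[(self.product_type, self.state)]

order = Order("widget", "CA")
order.attach_terms()           # correct terms for this product
order.product_type = "gadget"  # later workflow step: the product switches...
# ...but the stale terms survive the switch:
assert order.terms == "California widget terms..."
```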
Naive as I was, I approached the client with a question: “Should I remove the option that allows a user to override the correct terms and conditions?” The response I received is etched in my memory. With unwavering confidence, the senior executive declared:
“That will never happen.”
This was a seasoned executive, deeply familiar with the company’s operations and business processes, chosen to oversee the software project for his expertise. The very functionality to override default terms had been explicitly requested by this same individual. Who was I, a junior developer, to question such authority, especially when this executive represented the client paying for our work? I dismissed my concern and moved on.
Months later, mere weeks before the software’s go-live date, a client-side tester reported a bug and assigned it to me. Upon reviewing the bug details, I couldn’t help but laugh.
The very issue I had flagged – the ability to override default terms and conditions, the scenario deemed impossible – was precisely what was happening. And guess who was tasked with fixing it? And who was subtly blamed for not catching it earlier?
The technical fix was relatively simple, and the bug’s immediate impact was minimal. However, this experience became a recurring pattern throughout my software development career. Conversations with fellow software engineers confirmed that I wasn’t alone. The problems grew in scale and complexity, becoming more challenging and costly to resolve, but the root cause often remained the same: unclear, inconsistent, or simply incorrect requirements.
AI Today: Chess Masters vs. Self-Driving Car Challenges
The concept of artificial intelligence is not new, but recent high-profile advancements have ignited a media frenzy and sparked debate even in governmental bodies. AI has already achieved remarkable success in specific domains, with chess being a prime example.
AI has been applied to chess since the earliest days of computing, and today it’s widely accepted that chess engines surpass human players. This is unsurprising when you consider that chess operates within finite parameters (though the game itself remains unsolved in a computational sense).
Chess always begins with 32 pieces on a 64-square board, governed by well-defined, universally accepted rules, and, most importantly, with a singular, clear objective: checkmate the opponent’s king. Each turn offers a finite number of possible moves. Playing chess, in essence, is executing a complex rules engine. AI systems excel at calculating the consequences of every potential move, selecting the optimal strategy to capture pieces, gain positional advantage, and ultimately, win.
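To make “executing a rules engine” concrete, here’s a toy brute-force search, a minimal negamax sketch. It assumes the third-party python-chess package for move generation, and the one-point-per-pawn material count is an invented stand-in for a real evaluation function:

```python
import chess  # assumes the third-party python-chess package

# Toy piece values; a real engine uses a far richer evaluation.
PIECE_VALUES = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
                chess.ROOK: 5, chess.QUEEN: 9, chess.KING: 0}

def material(board):
    """Material balance from the side-to-move's point of view."""
    score = 0
    for piece in board.piece_map().values():
        value = PIECE_VALUES[piece.piece_type]
        score += value if piece.color == board.turn else -value
    return score

def negamax(board, depth):
    """Enumerate every legal move, score the outcomes, keep the best."""
    if depth == 0 or board.is_game_over():
        return material(board)
    best = float("-inf")
    for move in board.legal_moves:  # finite, fully specified by the rules
        board.push(move)
        best = max(best, -negamax(board, depth - 1))
        board.pop()
    return best

print(negamax(chess.Board(), depth=2))  # look ahead two plies from the start
```

Every piece of this program is mechanical, because the rules of chess hand us the three hard parts for free: which moves are legal, when the game ends, and what counts as winning.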
Another prominent area of AI development is self-driving cars. Manufacturers have been promising autonomous vehicles for years. While some cars now possess self-driving capabilities, they often come with significant limitations. In many situations, the car requires active driver supervision; the driver may need to keep hands on the wheel, indicating the self-driving feature is not truly autonomous.
Similar to chess-playing AI, self-driving cars heavily rely on rule-based engines to make decisions. However, unlike the clearly defined rules of chess, the rules for navigating every conceivable real-world driving scenario are far from complete or unambiguous. Drivers constantly make countless split-second judgments – avoiding pedestrians, maneuvering around obstacles, navigating complex intersections. Getting these judgments right is the difference between a safe arrival and a potential accident.
The difficulty of getting those last few judgment calls right has a familiar parallel in technology: high availability, often measured in “nines” of uptime. Aiming for “five nines” (99.999%) or even “six nines” (99.9999%) of availability is common for critical systems. Achieving the first 99% of uptime isn’t overly challenging; it still permits a considerable downtime of over three days (87.6 hours) per year. However, each additional “nine” exponentially increases the complexity and cost. Reaching 99.9999% availability shrinks permissible downtime to a mere 31.5 seconds annually, demanding significantly more rigorous planning, effort, and investment.
Let’s break down the downtime for different availability levels (the short snippet after this list reproduces the arithmetic):
- 365 days x 24 hours x 60 minutes = 525,600 minutes per year
- 99% availability: downtime of 5,256 minutes (87.6 hours)
- 99.9% availability: downtime of 526 minutes (8.76 hours)
- 99.99% availability: downtime of 52 minutes (less than 1 hour)
- 99.999% availability: downtime of 5.26 minutes
- 99.9999% availability: downtime of 0.53 minutes (roughly 31.5 seconds)
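The arithmetic is simple enough to check in a few lines of Python:

```python
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes

for nines in range(2, 7):
    unavailability = 10 ** -nines  # 1% at two nines, 0.1% at three, and so on
    downtime = MINUTES_PER_YEAR * unavailability
    print(f"{(1 - unavailability) * 100:.4f}% availability: "
          f"{downtime:,.2f} minutes of downtime per year")
```

The code is trivial; the point is the shape of the curve. Each extra nine buys a tenfold reduction in allowed downtime, and the engineering required to honor it gets disproportionately harder.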
For self-driving cars, the challenge is reaching a level of reliability and safety that society will accept. No matter how advanced AI becomes, there will always be a residual risk of accidents and even fatalities. Human drivers themselves are fallible, and accidents are a daily reality. The crucial question is: what accident and fatality rate will governments and society deem acceptable for autonomous vehicles? It undoubtedly needs to be at least as good as human driving, if not significantly better.
The immense difficulty in reaching this acceptable safety threshold stems from the fact that driving involves vastly more variables than chess, and these variables are not finite. The first 95% or even 99% of driving scenarios might be predictable and manageable for AI. However, the remaining “edge cases” – unexpected events, unusual road conditions, the unpredictable actions of other drivers – are virtually limitless. Consider driving after road repaving when lane markings are missing, encountering unexpected construction, or reacting to sudden weather changes. Training AI to recognize and appropriately respond to these anomalies and edge cases, each subtly different from the last, is an extraordinary challenge.
AI Can Generate Code, But Not (Yet) Create Software
Creating and maintaining software shares far more similarities with driving a car than playing chess. Software development involves a significantly larger number of variables and often relies on nuanced judgment calls rather than rigid rules. While there’s a desired outcome when building software, it’s rarely as singular and well-defined as winning a chess game. Software is rarely, if ever, “finished”; it’s a living entity, constantly evolving with new features, bug fixes, and updates – an ongoing process. In contrast, a chess game concludes definitively: win, lose, or draw.
In software engineering, we employ tools to bring our software designs closer to the tightly controlled, rule-based environment of chess. Technical specifications are designed to serve this purpose. At their best, comprehensive technical specs meticulously detail expected user interactions and program workflows – “To buy an e-sandwich, the user clicks this button, which triggers the creation of this data structure and initiates this service.” However, the reality is often far from this ideal. Too often, developers are presented with vague wishlists disguised as feature specs, napkin sketches of wireframes, and ambiguous requirements documents, and then instructed to “make their best judgment.”
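When a spec really is that precise, translating it into code is nearly mechanical. Here’s a hypothetical sketch of the e-sandwich line above; every name in it is invented for illustration:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class SandwichOrder:       # "...triggers this data structure creation..."
    customer_id: str
    sku: str
    created_at: datetime

class FulfillmentService:  # "...and initiates this service"
    def submit(self, order: SandwichOrder) -> None:
        print(f"Submitting {order.sku} for {order.customer_id}")

fulfillment = FulfillmentService()

def on_buy_clicked(customer_id: str, sku: str) -> SandwichOrder:
    # "To buy an e-sandwich, the user clicks this button..."
    order = SandwichOrder(customer_id, sku, datetime.now(timezone.utc))
    fulfillment.submit(order)
    return order
```

The hard part was never typing this out; it was someone deciding, unambiguously, that this is what clicking the button should do.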
Even worse, requirements frequently change mid-project or are simply disregarded. Recently, I was asked to assist a team developing a system to disseminate information about COVID-19-related health issues in regions with unreliable internet access. The proposed solution was a survey application delivered via SMS – text messaging. Initially, I was enthusiastic about contributing to this project.
However, as the team described their vision, red flags began to emerge. Asking a retail customer on a scale of 1 to 10 how likely they are to return to a store is vastly different from conducting multi-step surveys with multiple-choice questions about complex COVID-19 symptoms via text message. While I didn’t outright refuse, I raised numerous potential points of failure in this approach and urged the team to clearly define how the system would handle incoming responses for every question. Would answers be comma-separated numbers mapped to options? What would happen if a user’s response didn’t correspond to any of the provided choices?
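Even a hypothetical sketch of the parsing code surfaces the questions the team couldn’t answer. Assume a reply like “2,1,4”, one comma-separated answer per question (the function and its policy here are invented for illustration):

```python
def parse_reply(raw_sms: str, option_counts: list[int]) -> list[int] | None:
    """Parse one SMS reply against a survey; None means 'unusable'."""
    parts = [p.strip() for p in raw_sms.split(",")]
    if len(parts) != len(option_counts):
        return None  # too few or too many answers -- now what?
    answers = []
    for part, n_options in zip(parts, option_counts):
        if not part.isdigit() or not 1 <= int(part) <= n_options:
            return None  # "yes", "0", or "7" on a five-option question
        answers.append(int(part))
    return answers

print(parse_reply("2,1,4", [5, 3, 4]))    # [2, 1, 4]
print(parse_reply("2,oui,4", [5, 3, 4]))  # None
```

The code is the easy half. The requirement nobody had written down is what `None` should trigger: re-ask the question, skip it, or discard the whole survey? That decision isn’t a programming problem at all.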
After thorough consideration of these challenges, the team reached a crucial conclusion: abandoning the project was the most prudent course of action. Believe it or not, I consider this a successful outcome. Proceeding without clear solutions for handling potential data errors and invalid user inputs would have been far more wasteful and ultimately detrimental.
Now, consider applying AI to this scenario. Is the idea to simply have stakeholders directly instruct an AI to create an SMS-based survey? Will the AI proactively ask probing questions about handling data validation, error conditions, and the myriad of potential issues inherent in collecting survey data via SMS? Will it anticipate and address the inevitable human errors and missteps that occur during real-world usage?
To generate functional, robust software using AI, you still need a clear, precise understanding of your desired outcome and the ability to articulate it meticulously. Even when developing software for personal use, developers often don’t fully grasp the nuances and challenges until they begin writing code and encountering unexpected complexities.
Over the past decade, the software industry has largely shifted from the waterfall methodology to agile development practices. Waterfall emphasizes upfront, comprehensive requirement definition before any coding begins, while agile embraces flexibility and iterative adjustments throughout the development process.
Countless waterfall projects have faltered because stakeholders, despite believing they knew exactly what they wanted and could accurately document it, were ultimately disappointed with the final product. Agile methodologies emerged as an antidote to these rigid, often unrealistic, processes.
AI might prove highly effective at rewriting existing software, perhaps to migrate it to newer hardware or a more modern programming language. Many organizations still rely on legacy software written in COBOL, but the pool of COBOL programmers is shrinking. If you possess an exceptionally clear and detailed understanding of existing software functionality, AI could potentially replicate it faster and cheaper than a team of human programmers. I believe AI’s strength lies in recreating software whose requirements are already fully understood and documented, because humans have already painstakingly figured out what that software should do.
AI might even excel at building software using the waterfall process – a methodology sometimes jokingly referred to as a “death march” due to its inherent challenges. Ironically, humans are often ill-suited for the waterfall approach, not because of the coding phase, but because of the critical upfront requirement definition. Artificial intelligence can perform extraordinary feats, but it cannot yet read minds or intuit what you truly need or want from a software system. To build a successful programming career, focus on honing the skills that remain uniquely human: critical thinking, problem decomposition, effective communication, and the ability to understand and translate complex human needs into clear, actionable software requirements. These are the skills that will keep you in demand, regardless of AI advancements.