Fast Fashion for Code: How the Power Grab Masks a Quality Crisis
First published on Tuesday, 31 March 2026
A rocket engine and a crashed car in a desert. Photo by Gabriel Jones on Unsplash.
We're witnessing something unprecedented in software development. After decades of slowly accumulating technical debt, AI code generation has arrived, and it's accelerating everything. But this isn't the industrialization of software development that vendors promise. It's something far more familiar, and far more troubling: it's fast fashion [1].
- 1
- Credit to Samir Talwar (https://functional.computer/) for the "fast fashion for code" metaphor that frames this essay. tante independently arrived at the same metaphor in "Software as Fast Fashion" (January 2026, https://tante.cc/2026/01/15/software-as-fast-fashion/), which is worth reading alongside this piece.
And underneath the quality crisis lies something even more troubling: a massive structural shift in power that may lock in this degradation permanently.
The Primark Problem
Think about how fast fashion works. Cheap materials. Rapid production. Wear it twice, throw it away. Ignore the externalities, the environmental waste, the exploitative labor, the disposal culture. Optimize only for speed and cost.
Now look at what's happening with AI-generated code.
Across 211 million changed lines of code from 2020 to 2024, GitClear reports an 8× rise in 5+-line duplicate blocks and a drop in refactoring ('moved' lines): a shift away from reuse and toward copy/paste patterns [2]. One technology veteran with 35 years of experience observed that he had never seen so much technical debt created in such a short period [3].
- 2
- GitClear's 2025 report (211M changed lines, 2020-2024) found an 8× rise in 5+-line duplicates and a multi-year decline in 'moved' lines. "How AI generated code compounds technical debt," LeadDev, August 2025. https://leaddev.com/technical-direction/how-ai-generated-code-accelerates-technical-debt
- 3
- GitClear's second-annual AI Copilot Code Quality research. "How AI generated code compounds technical debt," LeadDev, August 2025. https://leaddev.com/technical-direction/how-ai-generated-code-accelerates-technical-debt
Generating software is now impressively cheap and easy, as cheap and easy as buying clothes at Primark and throwing them away after two wears. And just like fast fashion, we're ignoring all the externalised costs.
The Original Sin: We Never Built the Standards
Here's the uncomfortable truth: software engineering has been broken from the start. While our colleagues in civil, mechanical, and electrical engineering developed rigorous professional standards, licensing requirements, and accountability mechanisms over decades, software took a different path.
Unlike civil, mechanical, and electrical engineering, software has no widely adopted licensure regime; in the U.S., NCEES ended the software PE exam in 2019 for lack of demand [4]. In traditional infrastructure, failures can and do lead to criminal trials (e.g., Genoa's Morandi Bridge: 43 fatalities; 18 first-instance convictions in 2024, with related proceedings ongoing) [5]. When software systems fail? Data breach fines are rounding errors. Nobody faces real consequences.
- 4
- NCEES discontinued the U.S. PE exam in Software Engineering after April 2019. National Council of Examiners for Engineering and Surveying. https://ncees.org/ncees-discontinuing-pe-software-engineering-exam/; National Society of Professional Engineers. https://www.nspe.org/career-growth/pe-magazine/may-2018/ncees-ends-software-engineering-pe-exam
- 5
- 2018 collapse (43 deaths); 2024 first-instance convictions for 18 managers; related 2025 trial strand ongoing. "Second Morandi Bridge collapse trial begins in Genoa," Trasporto Europa, January 2025. https://www.trasportoeuropa.it/english/second-morandi-bridge-collapse-trial-begins-in-genoa/
Research examining software companies found an absence of industry practices, professional codes of conduct, and guidelines for integrating ethical concerns; almost none of the companies studied had any method for identifying ethical considerations [6].
- 6
- "Ethical Issues in Software Requirements Engineering," MDPI, February 2022. https://www.mdpi.com/2674-113X/1/1/3
We should have created developer-focused unions while we had power and leverage. We should have established professional standards. We should have demanded accountability. We didn't. Now that leverage is evaporating, and we're about to learn just how catastrophic that failure was.
The Slow Rot We Normalized
Even before LLMs arrived, software quality was degrading. Research confirms that internal quality issues often exist from the moment a file is first checked into version control, and without active measures, these issues worsen over time [7].
- 7
- "How to Tackle Technical Debt and Maintain High Software Quality," Qt.io, January 2025. https://www.qt.io/quality-assurance/blog/how-to-tackle-technical-debt; "Technical Debt and Software Erosion," Qt.io. https://www.qt.io/quality-assurance/axivion/technical-debt
We've known about this for decades. Google's engineering satisfaction surveys found no single metric predicts technical debt, identifying common problem areas including dependencies, code quality, migration, and code degradation [8]. The research was there. The warnings were clear.
- 8
- Google EngSat/IEEE Software column (Jaspan & Green, 2025). IEEE Xplore. https://ieeexplore.ieee.org/ielx8/52/11316879/11316905.pdf?arnumber=11316905&isnumber=11316879; "Measuring Developer Productivity Per Google's Research," ShiftMag. https://shiftmag.dev/measuring-developer-productivity-per-googles-research-3739/
But without professional standards or accountability, the industry optimized for speed. Ship fast, break things, accumulate debt. We told ourselves it was innovation. It was really just slowly filling landfills.
The LLM Acceleration: Fast Fashion Goes Digital
Then LLMs arrived, and everything accelerated.
Multiple studies paint a consistent picture: LLMs generate code that often passes functional tests but systematically lacks non-functional qualities. Research shows maintainability is under-examined, and improvements in one quality often degrade another [9]. Developers writing code by hand more consistently follow coding standards than GPT-4, which tends toward more complex implementations even while passing more tests on some tasks [10]. Analysis of 4,442 Java tasks found bugs, security vulnerabilities, and code smells common across all major models [11].
- 9
- Review + empirical study: maintainability is under-examined; improvements in one quality often degrade another. "Quality Assurance of LLM-generated Code: Addressing Non-Functional Quality Characteristics," arXiv, November 2025. https://arxiv.org/html/2511.10271v1
- 10
- Study across 72 tasks: human code better adheres to coding standards; GPT-4 often more complex, though passes more tests in some tasks. "Comparing Human and LLM Generated Code: The Jury is Still Out!" arXiv, January 2025. https://arxiv.org/abs/2501.16857
- 11
- Sonar analysis of 4,442 Java tasks shows bugs, security vulnerabilities, and smells common across models. "Assessing the Quality and Security of AI-Generated Code," arXiv, August 2025. https://arxiv.org/html/2508.14727v1
The code works, in the same way a $5 shirt from Primark technically functions as clothing. But the quality? The longevity? The hidden costs?
Academic reviews consistently associate code cloning with higher maintenance cost and defect risk, especially when cloned blocks must be updated across multiple locations [12]. Analysis shows developers are now less likely to reuse previous work, leading to more redundant systems [13].
- 12
- Systematic review/meta-analysis of code cloning and defect risk. "Code Clone Detection: A Systematic Review and Meta-Analysis," arXiv, June 2023. https://arxiv.org/abs/2306.16171
- 13
- "How AI generated code compounds technical debt," LeadDev, August 2025. https://leaddev.com/technical-direction/how-ai-generated-code-accelerates-technical-debt
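A toy sketch of why clones carry that cost: when logic is pasted rather than extracted, every fix must be found and applied in each copy, or the copies silently diverge. The function names below are invented purely for illustration.

```python
# Cloned ("pasted") style: the same email check lives in two places.
# A bug fix to the check must be applied in both copies.
def register_user(email: str) -> bool:
    return "@" in email and "." in email.split("@")[-1]

def update_email(email: str) -> bool:
    return "@" in email and "." in email.split("@")[-1]  # second copy to keep in sync

# Refactored ("moved") style: one shared helper, one place to fix.
def is_valid_email(email: str) -> bool:
    return "@" in email and "." in email.split("@")[-1]

def register_user_v2(email: str) -> bool:
    return is_valid_email(email)

def update_email_v2(email: str) -> bool:
    return is_valid_email(email)
```

The "moved lines" GitClear tracks are essentially the second style: code relocated into shared helpers instead of duplicated. Their data shows generation tools pushing teams toward the first.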
LLM vendors like Anthropic and OpenAI are shifting the power of creation to businesses, but this isn't the industrialization of software development; we already had that. This is the fast-fashionization of software generation: degrading quality, creating more disposable code, throwing it against the wall to see what sticks.
The Expertise Pipeline Is Breaking
There's a less visible but potentially more damaging consequence: LLMs are hollowing out the pipeline that produces competent software engineers.
Learning to build software has always required doing the unglamorous work first. Debugging a race condition teaches you things about concurrency that no tutorial can. Untangling a poorly designed schema teaches you why data modelling matters. These are not obstacles on the way to expertise; they are expertise. Every senior engineer's judgment was built on years of wrestling with exactly the kinds of problems that LLMs now shortcut past.
When junior developers generate code instead of writing it, they skip the struggle that builds understanding. They produce working output without developing the mental models that would let them evaluate, debug, or improve that output. The result is a generation of developers who can produce code but cannot reason about it.
This is already visible in adjacent fields. Aviation has studied the problem for decades: pilots who rely heavily on automation lose the manual flying proficiency that matters most when automated systems fail, a phenomenon researchers call 'deskilling', which has been a contributing factor in multiple fatal accidents [14] . And the pattern is now repeating across white-collar work more broadly. An IDC survey found that 66% of enterprises expect to slow entry-level hiring due to AI, while 69% report fewer on-the-job development opportunities for junior employees and 71% report increasing difficulty recruiting and training future leaders because entry-level learning pathways have disappeared [15] . You cannot run an optimised, AI-assisted engineering team if nobody on the team understands the fundamentals well enough to catch the AI's mistakes, and you cannot build that understanding by prompting an LLM.
- 14
- Growing reliance on cockpit automation has reduced manual flying practice, contributing to loss-of-control accidents; the FAA, EASA, and IATA have all issued guidance on rebalancing automation with manual skill retention. "Methods for Preventing the Degradation of Manual Flying Skills in an Automated Cockpit Environment," The Collegiate Aviation Review International, December 2025. https://ojs.library.okstate.edu/osu/index.php/CARI/article/view/10345; "Lost Skills," Flight Safety Foundation, June 2021. https://flightsafety.org/asw-article/lost-skills/
- 15
- IDC survey on behalf of Deel: 66% of enterprises expect to slow entry-level hiring; 69% report fewer development opportunities for juniors; 71% report difficulty training future leaders. "Enterprises are cutting back on entry-level roles for AI," IT Pro, November 2025. https://www.itpro.com/business/careers-and-training/enterprises-are-cutting-back-on-entry-level-roles-for-ai-and-its-going-to-create-a-nightmarish-future-skills-shortage
The consequences compound over time. Today's juniors are tomorrow's seniors, the architects, the people who design the systems that LLMs will be asked to generate code for. If the pipeline that produces them is broken, no amount of AI capability can compensate. You end up with increasingly powerful code generation tools and nobody qualified to direct them. The dependency on LLM vendors doesn't just deepen, it becomes irreversible: not because the tools are indispensable, but because the humans who could work without them no longer exist.
The Structural Lock-In: Why This Won't Fix Itself
The quality crisis and the eroding expertise pipeline are serious on their own. But they are symptoms of a deeper problem: a structural shift in power from the people who build software to the companies that control the means of generating it. Developers have historically held leverage because their skills were scarce and hard to replace. That leverage is now being transferred to a handful of corporations that own the LLM infrastructure everyone increasingly depends on.
Here is how the lock-in works.
The dependency trap: Frontier LLMs can only be trained by massively funded companies. The computational infrastructure, the data access, the expertise: these require billions in capital. OpenAI, Anthropic, Google, Microsoft: these aren't tools developers own. They're services we rent from capital.
This creates a dependency unlike anything we've seen before. Your ability to be productive increasingly depends on access to models controlled by a handful of corporations. And as developers become less likely to reuse previous work and rely more on AI generation [16], the dependency deepens.
- 16
- "How AI generated code compounds technical debt," LeadDev, August 2025. https://leaddev.com/technical-direction/how-ai-generated-code-accelerates-technical-debt
The leverage collapse: Individual developers are losing bargaining power at an accelerating rate. When your "skill" becomes knowing how to prompt an LLM that anyone can access, what leverage do you have? The companies that control the models hold all the cards.
We had a window, decades, really, when developers had enough leverage to demand professional standards, to organize, to establish accountability. We didn't use it. We made choices, about licenses, about unions, about professional organization, that gave away our power. Now, as LLMs make individual developers increasingly fungible, that window is closing.
How we gave away our leverage: Look at open source. In 2015, MIT accounted for ~45% of licensed GitHub repos; GPLv2 ~13% [17]. We chose permissive licenses that let companies extract value from our work without requiring anything back. We could have used copyleft licenses (GPL, AGPL, EUPL) that demand reciprocity. We chose "maximizing adoption", which really meant maximizing capital's ability to profit from our labor.
- 17
- GitHub's 2015 analysis of licensed public repos. "Open source license usage on GitHub.com," GitHub Blog, March 2015. https://github.blog/open-source/open-source-license-usage-on-github-com/; GitHub Innovation Graph. https://innovationgraph.github.com/global-metrics/licenses
And now? Research investigating code license infringements in LLM training datasets found significant concerns about the legal implications of using copyrighted data in large-scale training, with many datasets claiming to use only permissively licensed code while actually containing GPL and other copyleft code [18]. LLMs train on everything, ignoring licenses entirely. We gave away our leverage through our choices, and then capital took the rest anyway.
- 18
- "An Exploratory Investigation into Code License Infringements in Large Language Model Training Datasets," arXiv, March 2024. https://arxiv.org/html/2403.15230v1
Why capital won't fix this: And here's the fundamental problem: it won't. History demonstrates this pattern repeatedly. Throughout the 1980s and 1990s, organized labor in North America and Western Europe was systematically disempowered as employers consolidated more flexible, lower-wage labor markets [19]. In industry after industry, trucking, airlines, railroads, telecommunications, deregulation led to the erosion of wages and working conditions as companies competed by cutting labor costs [20].
- 19
- "Labour market deregulation and the decline of labour power in North America and Western Europe," Policy and Society, September 2008. https://academic.oup.com/policyandsociety/article/27/1/83/6420847
- 20
- "Deregulation and the Labor Market," American Economic Association. https://pubs.aeaweb.org/doi/pdf/10.1257/jep.12.3.111
The lesson is clear: capital will not voluntarily establish quality standards or worker protections. Early unions and their economic resistance efforts were completely unprotected under the law and were often met with violent, even deadly, opposition [21]. Workers had to organize, strike, and fight for every protection, from child labor laws to minimum wages to workplace safety standards. And even once won, these protections are constantly under threat.
- 21
- "Declining Administrative Capacity in Worker Law," Georgetown Law. https://www.law.georgetown.edu/denny-center/blog/administrative-capacity/
In our capitalist economy, it's easier and more profitable to operate at the bare minimum of standards than to maintain quality. And we're watching it happen again in software.
Capital always wins (legally): Here's the uncomfortable political reality: law consistently sides with capital. And we have a stark, tragic example of exactly what this means.
Aaron Swartz's JSTOR downloads led to a criminal prosecution carrying up to 35 years in prison (he died before trial), despite JSTOR declining to press for further action and MIT later reporting on its own role [22]. Today's training-data disputes (e.g., the NYT and Authors Guild cases) are proceeding as civil copyright litigation [23].
- 22
- Swartz faced criminal charges (max 35 years/$1M); current AI data-training fights (NYT; Authors Guild) are civil copyright suits. U.S. Department of Justice. https://www.justice.gov/archive/usao/ma/news/2011/July/SwartzAaronPR.html; JSTOR Statement. http://docs.jstor.org/jstor-statement-misuse-incident-and-criminal-case.html; MIT Swartz Report. https://swartz-report.mit.edu/faq.html
- 23
- "NYT v. OpenAI: The Times's About Face," Harvard Law Review, April 2024. https://harvardlawreview.org/blog/2024/04/nyt-v-openai-the-timess-about-face/; "AG and Authors File Class Action Suit Against OpenAI," Authors Guild. https://authorsguild.org/news/ag-and-authors-file-class-action-suit-against-openai/
His crime? Downloading approximately 4.8 million academic journal articles, research largely funded by public money, with the intention of making them freely accessible. JSTOR itself settled with Swartz and told the United States Attorney's Office they had no further interest in the matter [24]. MIT did not request that federal charges be brought [25].
- 24
- "JSTOR Statement: Misuse Incident and Criminal Case," JSTOR Evidence. http://docs.jstor.org/jstor-statement-misuse-incident-and-criminal-case.html
- 25
- "MIT releases report on its actions in the Aaron Swartz case," MIT News, July 2013. https://news.mit.edu/2013/mit-releases-swartz-report-0730
Despite this, federal prosecutors pressed forward with charges that could result in 35 years in prison and career-destroying felony convictions [26]. Aaron Swartz died by suicide in January 2013, two days after prosecutors rejected his counter-offer to their plea deal.
- 26
- "Aaron Swartz," Wikipedia. https://en.wikipedia.org/wiki/Aaron_Swartz
Now compare this to OpenAI and Anthropic: They've trained their models on copyrighted content at a scale that dwarfs what Aaron did, not millions of articles, but essentially the entire internet. Not for idealistic purposes of making knowledge free, but for commercial profit. Not as individuals, but as massively funded corporations.
And what consequences do they face? Some lawsuits. Negotiations. Settlements. Business as usual.
No executives facing 35 years in prison. No criminal charges. No prosecutors calling it "stealing" and demanding felony convictions. The legal system treats it as a civil matter, a business negotiation between equals.
This pattern is not an aberration. It is the default relationship between law and concentrated capital. Shoshana Zuboff's analysis of surveillance capitalism documents how corporations systematically claimed new territories of human experience as free raw material, and how legal and regulatory systems consistently failed to intervene until the extraction was already entrenched [27]. The dynamic is structural: individuals who challenge capital face the full weight of the criminal justice system; corporations that extract at scale face civil proceedings and negotiated settlements. The law does not treat them differently by accident. It treats them differently by design.
- 27
- Zuboff, S. (2019). The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. PublicAffairs.
Aaron Swartz: 35 years for downloading academic papers to make knowledge free.
OpenAI: Lawsuits and negotiations for training on copyrighted content at massive scale for profit.
This is what "law sides with capital" means in practice. An individual idealist gets destroyed by the full force of the criminal justice system. Billion-dollar companies get to negotiate.
The end state: Software as pure extraction: Without standards, without accountability, without developer leverage, where does this end? Software becomes purely a vehicle for capital extraction. Not value creation, extraction.
The code quality doesn't matter as long as it ships. The technical debt doesn't matter because developers are replaceable and maintenance is someone else's problem. The security vulnerabilities don't matter because the fines are negligible and the legal liability flows downhill, never to capital.
The copyright violations that would destroy an individual's life become just another cost of doing business for corporations.
We're not building infrastructure for society. We're building disposable extraction mechanisms that will be thrown away the moment they stop generating profit, leaving everyone else to deal with the consequences.
The Costs We're Ignoring
Just like fast fashion, we're pushing all the costs onto someone else:
Environmental costs: When software support ends, otherwise functional hardware can become non-viable, a pattern scholars call 'software obsolescence' [28]. But LLMs add a massive new environmental burden. Recent benchmarking estimates that a short GPT-4o query consumes ~0.43 Wh, while the most energy-intensive models exceed 29 Wh on long prompts [29]. Training GPT-3 consumed an estimated 700,000 liters of water; inference uses ~500 ml per 20-50 prompts [30]. Lifecycle analysis of model development found ~493 metric tons of carbon emissions and ~2.769 million liters of water [31].
- 28
- Scholarly discussion; vendor lifecycle pages illustrate support sunsets as triggers of practical obsolescence. "Repair and Software: Updates, Obsolescence, and Mobile Culture's Operating Systems," Discard Studies, April 2017. https://discardstudies.com/2017/04/28/repair-and-software-updates-obsolescence-and-mobile-cultures-operating-systems/; Microsoft Lifecycle Policy. https://learn.microsoft.com/en-us/lifecycle/end-of-support/end-of-support-2026
- 29
- Short GPT-4o query ~0.42-0.43 Wh; some long-prompt models >29 Wh. "How Hungry is AI? Benchmarking Energy, Water, and Carbon Footprint of LLM Inference," arXiv, May 2025. https://arxiv.org/abs/2505.09598
- 30
- GPT-3 training ~700,000 L; ~500 ml per 20-50 prompts for inference (estimate, conditions vary). "Making AI Less Thirsty," Communications of the ACM. https://cacm.acm.org/sustainability-and-computing/making-ai-less-thirsty/
- 31
- Series of LLMs: ~493 tCO₂e and ~2.769 M L water (includes development and hardware manufacturing). "Holistically Evaluating the Environmental Impact of Creating Language Models," arXiv, March 2025. https://arxiv.org/html/2503.05804v1
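To make those per-query figures concrete, here is a back-of-the-envelope calculation using the cited estimates (~0.43 Wh per short query; ~500 ml of water per 20-50 prompts, midpoint ~35). The team size and query volume are invented for illustration only:

```python
# Rough scaling of the cited per-query estimates.
WH_PER_SHORT_QUERY = 0.43        # ~0.43 Wh per short GPT-4o query (cited benchmark)
ML_WATER_PER_PROMPT = 500 / 35   # ~500 ml per 20-50 prompts; midpoint ~35 prompts

# Hypothetical scenario: a 50-person team, 100 queries per person per day.
queries_per_day = 50 * 100

daily_kwh = queries_per_day * WH_PER_SHORT_QUERY / 1000      # Wh -> kWh
daily_water_l = queries_per_day * ML_WATER_PER_PROMPT / 1000 # ml -> L

print(f"{daily_kwh:.2f} kWh and {daily_water_l:.1f} L of water per day")
```

Small per query, but it compounds: across millions of users generating disposable code, the diffuse totals are exactly the kind of externality that, like fast fashion's carbon, never piles up anywhere photographable.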
There is a direct parallel here to fast fashion's own split between visible and invisible damage. The clothing graveyards of the Atacama Desert in Chile, an estimated 39,000 tonnes of discarded garments piled into a waste field nearly three square kilometres across, large enough to photograph from the air, generate outrage because they can be seen [32]. Ghana's Kantamanto market, where 15 million garments arrive weekly from the Global North and the overflow has made the adjacent Korle Lagoon what Ghana's own president describes as the most polluted site in the country, generates outrage because it can be visited, filmed, and reported on [33]. Fast fashion's carbon output, roughly 10% of global greenhouse gas emissions, more than all aviation and maritime shipping combined, generates far less outrage, because it is in the atmosphere, distributed, invisible until measured.
Software's environmental externalities follow the same pattern exactly. E-waste from accelerated hardware obsolescence, the water consumed by inference queries, the carbon emitted by training runs: none of it piles up somewhere photographable. It diffuses through supply chains, data centres, and atmospheric chemistry. The invisibility is structural, and it does much of the work of keeping things unchanged.
- 32
- ~39,000 tonnes of clothing dumped annually in Chile's Atacama; ~3 km² waste field visible from the air. "Chile's Atacama Desert has become a fast fashion dumping ground," National Geographic, 2021. https://www.nationalgeographic.com/environment/article/chile-fashion-pollution
- 33
- 15 million garments arrive weekly at Kantamanto; Korle Lagoon described by President Mahama as the country's most polluted spot. "The Race to Upcycle Africa's Fast Fashion Dumping Ground," TIME, August 2025. https://time.com/7307662/ghana-africa-fast-fashion-waste-pollution/
As we generate more and more disposable code with LLMs, we're compounding both the software waste problem and the massive environmental cost of the AI infrastructure itself.
Maintenance burden: Empirical research found that technical debt has both direct and indirect negative impacts on software quality, with larger projects experiencing more severe adverse effects [34]. Case studies and reviews report substantial maintenance and productivity benefits when teams actively manage technical debt through continuous refactoring, static analysis, and migration off obsolete components [35], but we're moving in the opposite direction.
- 34
- "EFFECT OF TECHNICAL DEBT ON SOFTWARE QUALITY: MEDIATING ROLE OF CODE MAINTAINABILITY AND MODERATING ROLE OF PROJECT SIZE," Spectrum of Engineering Sciences, December 2025. https://thesesjournal.com/index.php/1/article/view/1794
- 35
- Case studies and reviews report substantial maintenance and productivity benefits when teams actively manage TD. "Technical Debt Management: The Road Ahead for Successful Software Delivery," arXiv, March 2024. https://arxiv.org/html/2403.06484v1
Security and reliability: Security remains a significant concern as LLMs generate code based on patterns learned from large datasets that often contain coding flaws [36]. We're mass-producing vulnerabilities while simultaneously becoming more dependent on the companies that created this problem.
- 36
- Peer commentary on challenges in using LLMs for code gen/repair. "Security and Privacy Considerations for Code Generation with Large Language Models," MIT/IEEE, 2025. https://web.mit.edu/ha22286/www/papers/IEEESP25_1.pdf
Reclaiming Agency
Software runs hospitals, power grids, financial systems, transportation, communication, everything. The stakes have never been higher. And we're treating it like disposable fashion while simultaneously handing all the power to capital.
Other engineering disciplines learned this lesson the hard way. The history of traditional engineering shows that rigorous professional standards and accountability mechanisms emerged after major disasters. But with software, the "disasters" are distributed, invisible, or blamed on users. We haven't had our Morandi Bridge moment, or rather, we've had thousands of them, but none dramatic enough to force change.
Consider what happens when accountability does exist. The Grenfell Tower fire in London in 2017 killed 72 people. The subsequent public inquiry found that the building's cladding, chosen to save money, was the primary cause of the fire's rapid spread. The inquiry led directly to the Building Safety Act 2022, which created new regulatory bodies, established clearer lines of accountability for building safety, and gave residents stronger legal standing [37]. Criminal proceedings against individuals and corporations involved are ongoing [38]. The disaster was horrific, but the visibility of the harm, 72 people dead in a single building, a fire broadcast live, made it impossible for the system to look away. Standards changed because accountability was inescapable.
- 37
- UK Building Safety Act 2022, enacted in response to the Grenfell Tower Inquiry. UK Parliament. https://www.legislation.gov.uk/ukpga/2022/30/contents/enacted
- 38
- Grenfell Tower Inquiry final report published September 2024; criminal investigation and charges ongoing as of 2025. Grenfell Tower Inquiry. https://www.grenfelltowerinquiry.org.uk/
Software failures lack this visibility. When 147 million people have their financial records exposed in a breach like Equifax's in 2017 [39], the harm is diffuse and delayed: identity theft surfaces months later, credit scores erode quietly, financial distress accumulates in private. Research into the real impact of data breach victimisation documents financial, emotional, and psychological harms persisting across years [40]. In the United States in 2020, losses from identity fraud totalled an estimated $56 billion; the average victim lost around $1,100 [41]. The disaster is real. It just doesn't photograph.
This is precisely why the pressure for accountability never builds the way it does when concrete hits the water. The software industry has had thousands of bridge collapses. They happen in slow motion, distributed across millions of people, invisible to everyone but the individuals affected. And so nothing changes.
- 39
- Equifax 2017 breach: ~147 million people's records exposed. "What is PII Data?" BigID. https://bigid.com/blog/what-is-pii-data/
- 40
- Data breach victimisation linked to financial, emotional, health, and relationship harms, persisting beyond the initial incident. "Beyond fraud and identity theft: assessing the impact of data breaches on individual victims," Taylor & Francis, 2025. https://www.tandfonline.com/doi/full/10.1080/0735648X.2025.2535007
- 41
- 2020 US identity fraud losses ~$56 billion total; ~$1,100 average per victim. "What is Personally Identifiable Information (PII)?" Security.org. https://www.security.org/identity-theft/what-is-pii/
And now, the structural conditions that might have enabled standards are disappearing. When law sides with capital, when developers lose leverage, when quality becomes optional because code is disposable, who will enforce standards? Who even has the power to demand them?
Here's what we need, urgently:
Quality standards with teeth: Actual enforceable standards like ASTM, ASME, and ISO that govern traditional engineering. Not voluntary guidelines, requirements. And not self-regulation by the companies extracting value, but independent oversight.
Professional accountability: Licensing, certification, consequences for shipping unsafe code. Software engineers currently have no professional body to back them when they refuse to ship unsafe code; they risk job loss without recourse [42]. We need to build that body now, while we still have the leverage.
- 42
- "Why Software Engineers are Powerless to Keep You Safe," Cybersecurity Tribe, September 2025. https://www.cybersecuritytribe.com/articles/why-software-engineers-are-powerless-to-keep-you-safe
Union organization: This is no longer optional. We should have done this years ago when we had leverage. The window is closing fast as LLMs make individual developers more replaceable and dependent on capital-controlled infrastructure. This might be our last chance. History shows that workers must organize to establish standards; capital will not do it voluntarily.
Better governance and licensing practices: We need to stop making capital extraction frictionless. Use copyleft licenses (GPL, AGPL, EUPL) for new projects that demand reciprocity. Build governance structures that can't be overridden by capital. These are small levers, but they're levers we still control.
Legal frameworks that protect labor, not just capital: We need laws that establish real accountability for software failures, that protect developers who refuse to ship unsafe code, that prevent the legal system from destroying individuals while giving corporations a pass. Aaron Swartz should haunt us until we fix this.
The Choice Is Ours, For Now
The current penalties for data loss and security failures are so insignificant that nobody has really cared about software quality. We've never been held accountable for the software we've been developing. And as capital consolidates control over the means of software production through LLMs, that accountability will only decrease.
Aaron Swartz faced 35 years for trying to free academic knowledge. OpenAI faces... negotiations.
LLM-generated software isn't making this better, it's making it catastrophically worse while simultaneously shifting all the power to the companies that profit from the degradation.
But here's the thing about crossroads: you can still choose your direction. We can establish professional standards, demand accountability, organize collectively, and fight for public infrastructure. Or we can watch as LLMs turn software development into pure extraction, quality becomes irrelevant, and developers become completely fungible labor with zero leverage.
The research is clear. The trends are alarming. The structural dynamics are brutal. The legal system has shown us exactly whose side it's on. The choice is ours.
But the window is closing.
What will we choose?