jackdoe 9 hours ago [-]
God damn metanoia.
I feel like the internet is programming me.
At this point it is impossible to tell if AI writes like people or people write like AI.
jnpnj 8 hours ago [-]
I personally noticed that I'm starting to use some LLM idioms, like "it's not just .. it's ..", and I don't like it. I'm actually trying to stop using computers and read books instead, to replenish my mind with more diverse idioms.
jackdoe 8 hours ago [-]
same, I also try not to read claude's output that much; I have a copy of Gibson's Mona Lisa and just open it while it is thinking. for music and even for CS stuff, I search with before:2022 on youtube
but the ship has sailed :)
there is no hiding from it
of course the content we consume modifies us, but now everybody "reads" the same book, whatever they read.
jnpnj 6 hours ago [-]
> before:2022 on youtube
funny trick. similarly when I use LLMs I try to make them emulate people's writing patterns from previous eras.
jackdoe 6 hours ago [-]
it does, but it doesn't; there is a subtle collapse i think
wood_spirit 7 hours ago [-]
I write bad, but my text editor is putting little grammar and spelling squiggly lines under everything and I click through them and end up with very AI-like text. My emails even end up with emdash in them. It’s to shrug. You don’t know if text today is completely prompted or is just cleaned up by modern grammar and spell checkers?
throwanem 6 hours ago [-]
Sure I do. You're almost good enough, at pretending to lousy construction, to have fooled me. Use more words next time; the semiliterate invariably mistake volume for quality.
allanmacgregor 4 hours ago [-]
> Software is quietly becoming a probabilistic system, and almost no one is saying it out loud.
AI generated, or at least heavily edited, would be my guess. Although I'm with you, at this point it's hard to tell. I'm seeing those AI filler phrases and overused words like "here is what's actually happening" more and more, and not only on blog posts but on social media, video content, podcasts.
grebc 9 hours ago [-]
Tim’s definitely artificial.
tra3 10 hours ago [-]
> Agents are opening pull requests, reviewing each other's work, and closing them without a human ever touching the keyboard, with a continuously live log monitoring loop to rapidly fix issues.
I know gas town made a splash here a while back and some colleagues promote software factories, but I haven't seen much real output... have any of you?
I prefer the guided development approach where it’s a pretty detailed dialog with the LLM. The results are good but it’s hardly hands off.
If I squint I can almost see this fully automated development life cycle, so why aren’t there real life examples out there?
Flux159 9 hours ago [-]
I think the reason we're not seeing many examples yet is that the full loop doesn't work completely autonomously yet. There's still a human in the loop at some critical points - specifically testing against a spec (runtime testing if say working on web or mobile app before shipping to users). LLMs can do compile time testing and validation, unit tests, and can write your end to end tests, but if you're shipping software to users, there's still a human somewhere involved. This isn't even mentioning marketing and actually getting your software into the hands of users - which while it can be automated, a lot of marketing with AI is still sloppy.
jcims 9 hours ago [-]
No idea how automated it is but it's clearly accelerated since last Dec.
How do you know that there aren't? If you had a "robot software factory" that worked, and you were certain it was a source of not just life-changing or generational but potentially centenary wealth - well.
There was a time in my life when I too would give such a thing away free, on the idea that those who might do some good with it may make up for the ones who will certainly turn it to great evil. After 30 years' exposure, some consensual, to Bay Area/Silicon Valley "culture," I am no longer so sweetly naïve.
clapthewind 5 hours ago [-]
I'm seeing this in some teams I am aware of. It is usually a 3-4 person team working very closely. They're not using gas-town or such, but are typically creating abstraction after abstraction for reviews and for assimilating changes (usually with a claude 20x account). They keep a human in the loop until the system stabilizes and needs no further AI.
Becoming aware of this pattern was like science fiction, but as the OP says, it is indeed happening. It's going to change the shape of tech careers for sure. My 2c.
clapthewind 5 hours ago [-]
BTW the skill to develop to direct your career towards this: build deep understanding of one part of a domain, develop thinking in abstractions and systems, follow TDD in all agentic dev (converts probabilistic to deterministic).
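One way to read "converts probabilistic to deterministic" as a concrete mechanism (a minimal sketch, not anything the commenter described; the `slugify` spec and the commented agent loop are made-up stand-ins): the human-written tests are the fixed contract, and an agent's candidate code is accepted only if it passes them.

```python
# Hypothetical TDD gate for agentic dev: the test suite is written by a
# human first and never changes; each probabilistic candidate from the
# agent is accepted only if the whole suite passes.
import pathlib
import subprocess
import sys
import tempfile
import textwrap

# The fixed, human-written contract (toy example).
TESTS = textwrap.dedent("""
    from impl import slugify
    assert slugify("Hello, World!") == "hello-world"
    assert slugify("  spaces  ") == "spaces"
""")

def accept(candidate_source: str) -> bool:
    """Run the fixed tests against one generated candidate, in isolation."""
    with tempfile.TemporaryDirectory() as d:
        pathlib.Path(d, "impl.py").write_text(candidate_source)
        pathlib.Path(d, "test_impl.py").write_text(TESTS)
        result = subprocess.run(
            [sys.executable, "test_impl.py"], cwd=d, capture_output=True
        )
        return result.returncode == 0

# Usage with a hypothetical agent:
#     while not accept(agent.generate(spec)):
#         pass  # regenerate until the contract is met
```

The generation step stays probabilistic; what becomes deterministic is the bar the output has to clear before it lands.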
notpachet 9 hours ago [-]
Counterargument. The author is primarily looking at AI trend lines. Let's say our industry continues moving along alternate, equally compelling, trend lines: increasing global volatility, chaos in the energy markets, growing likelihood of great power conflict this century, climate collapse, mass migration, societal unrest, yada yada.
What happens to all of these AI-native companies if the AI bubble is not able to survive in these conditions? If your current development process is built on the metabolic equivalent of 400kg of leaves per day[0], then when the allegorical asteroid hits, you're going to be outperformed by smaller, nimbler companies with much lower resource requirements. Those companies may be better suited for survival in hostile macro conditions.
In other words, I think a lot of companies believe that they're trimming their metabolic fat by replacing engineers with AI. Lower salary costs! But at the same time, they're also increasing their reliance on brittle energy infrastructure that may not survive this century. (Not to mention the brittleness of the semiconductor fabrication pipeline, RAM availability, etc)
Predicting the future isn't about being right tomorrow, it's about selling you something today. - read that somewhere
Folks using AI aren't interested in the future, they are interested in buying today and maximizing profits today. If something goes wrong tomorrow, then that's when the problems are dealt with: tomorrow.
AI is an incredibly fragile technology; as you say, it depends on so many things going right that it's amazing it works at all. That fragility includes price: once that goes up and developer prices come down, the winds of change might blow again.
AI also forces folks to be online to code; without being online, companies cannot extend their products. Git was the first (open source) version control system that worked offline. We're literally turning back the hands of time with AI.
AI is another vendor lock-in with the big providers being the sole key-holders to the gates of coding heaven. Folks are blindly running into the hands of vendors who will raise prices as soon as their investors demand their money back.
AI is "improving" code bases in ways that make subtle errors and edge cases harder to detect; debugging without using AI will be impossible. Will a human developer actually be able to understand a code base that has been coded up by an AI? That's a problem for tomorrow; today we're making the profits and pumping up the shareholder value.
AI prompts depend on the versions of LLMs: change the LLM and the same prompt might generate different code. Upgrade LLMs or change prompts and suddenly generated code degrades without warning. But prompts are a single-use, one-way technology: once the generated code is in the code base, there is no need for the prompt - so that's a non-issue, except for auditors.
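If prompts are kept around at all, one cheap hedge against silent model drift might look like this (a sketch; the stamping scheme and all names are my own invention, not an established practice): record which model version and prompt produced each generated file, so that changing either shows up as a mismatch.

```python
# Hypothetical provenance stamp: fingerprint the (model version, prompt)
# pair that produced a generated file, so a model upgrade or prompt edit
# shows up as a detectable mismatch instead of silently degraded code.
import hashlib

def fingerprint(model: str, prompt: str) -> str:
    """Stable short ID for 'this model version ran this exact prompt'."""
    return hashlib.sha256(f"{model}\x00{prompt}".encode()).hexdigest()[:16]

def stamp(generated_code: str, model: str, prompt: str) -> str:
    """Prepend the fingerprint as a comment line on the generated file."""
    return f"# generated-by: {fingerprint(model, prompt)}\n{generated_code}"

def is_stale(stamped_code: str, model: str, prompt: str) -> bool:
    """True if the committed file no longer matches the pinned inputs."""
    first_line = stamped_code.splitlines()[0]
    return first_line != f"# generated-by: {fingerprint(model, prompt)}"
```

A CI check over `is_stale` would at least turn "upgrade the LLM and code degrades without warning" into a visible diff, and it keeps the prompt around for the auditors mentioned above.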
Having gone from levers to punch cards to transistors to keyboards to mice and finally AI, programming has fundamentally forgotten there is a second dimension. Most fields have moved to visual representations of data: graphs, photos, images, plans, etc. Programming is fundamentally a one-dimensional activity, lines and lines of algorithmic code, hard to understand and harder to visualize (see UML). Now AI comes along and entrenches this dependency on text-based programming, as if the keyboard were the single most (and only) important tool for programming.
It's a lack of imagination in exploring alternatives for programming that has led us here. Having non-understandable AI tools generate subtly failing code that we blindly deploy to our servers is not an approach that promises long-term stability.
tkocmathla 3 hours ago [-]
> AI also forces folks to be online to code
This isn't true in the broad sense you've used. It's true that most people don't have the hardware to run the bleeding-edge foundation models, but with a modest Macbook you can run very capable local models now (at least capable for coding, which is where my experience is).
Towaway69 3 hours ago [-]
Here I was talking about the AI vendors: they specifically provide inferior models for local usage while offering the "insanely" good models only online.
AI can be run locally, but with the growth of agent factories this is going to be less and less possible if you want to keep up with the Joneses.
andsoitis 5 hours ago [-]
> AI is "improving" code bases to make subtle errors and edge cases harder to detect debugging without using AI will be impossible. Will a human developer actually be able to understand a code base that has been coded up by an AI?
Huh? It’s just code that you can read. Why do you think the code will be impregnable by a team of human minds?
Towaway69 3 hours ago [-]
Because code does not include the thought processes that went into creating it. Take a second, have a look at the Linux kernel code base, and try to get into it. It's surprising how some code only makes sense if you understand the bigger picture.
So it will be with AI code that has just been generated and blindly added to the code base. It makes everything work, but sometimes, perhaps not always, the devil lies in the details.
Take any book, open it to a random middle section, and read it: I can read the words but I don't understand the story. And so it is with code.
coffeefirst 5 hours ago [-]
Let’s also be clear: the asteroid doesn’t even need to be an energy crisis.
If two money-losing companies decide that they would like to make money, the math gets ugly fast.
grebc 9 hours ago [-]
The one thing that’s true in that article is that the output of bad coders/programmers/developers/engineers is certainly increasing.
Good luck to anyone cleaning up the mess.
tacker2000 7 hours ago [-]
Actually I know an engineer at a startup who was hired to clean up the original slopcoded MVP.
So there is also opportunity in this space.
dingdongditchme 7 hours ago [-]
learned a new word today: "slopcoded MVP", love it! Thanks.
https://code.claude.com/docs/en/changelog

[0] https://en.wikipedia.org/wiki/Apatosaurus