How Vibe code is killing open source
I don't agree with this article, interesting as it is.

“Vibe coding” here is defined as software development that is assisted by an LLM-backed chatbot, where the developer asks the chatbot to effectively write the code for them. Arguably this turns the developer into more of a customer/client of the chatbot, with no requirement for the former to understand what the latter’s code does, just that what is generated does the thing that the chatbot was asked to create.
That paragraph is so wrong, IMHO. You can't trust the AI because it's prompt-driven, which implies that if the prompt is incomplete or wrong, the code it generates will also be wrong.
So how do you know the code is wrong? You need to be able to read it and test the end results, which means the developer is not just a customer/client but still the developer. He or she needs to know what they are doing, and is the one setting the goals to be achieved.
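To make that concrete, here is a toy sketch (the function and the bug are hypothetical, not from any real LLM session): a plausible-looking generated function passes the obvious case but breaks on an input the prompt never mentioned, and only reading and testing it reveals that.

```python
import unittest

def parse_version(tag):
    # Pretend an LLM generated this from the prompt "parse a version tag
    # like 1.2.3 into a tuple of ints". It looks fine at a glance...
    return tuple(int(part) for part in tag.split("."))

class TestParseVersion(unittest.TestCase):
    def test_plain_tag(self):
        self.assertEqual(parse_version("1.2.3"), (1, 2, 3))  # passes

    def test_v_prefixed_tag(self):
        # Real-world tags are often "v1.2.3"; int("v1") raises ValueError,
        # so this test blows up -- the kind of gap the prompt never covered.
        self.assertEqual(parse_version("v1.2.3"), (1, 2, 3))

if __name__ == "__main__":
    unittest.main()
```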
I have been led up the garden path so many times, only to find that the code produced via the LLM is incomplete, that I have overlooked problems I was not aware of at the time of generation, or that the LLM has limited or outdated training data (a common issue) on which to base its conclusions.
It’s also a topic that is highly controversial, ever since Microsoft launched GitHub Copilot in 2021. Since then, we saw reports in 2024 that ‘vibe coding’ using Copilot and similar chatbots offered no real benefits unless adding 41% more bugs is a measure of success.
This I agree with, and I have no argument with the rest of the article. It's not my fault that GitHub Copilot is faulty. The solution is simply to change your LLM to, say, Claude, which in the fortnight I have been using it has been significantly better, or to continuously query the results of the LLM in question. The quoted paragraph also proves that the developer had to read and understand the bugs the LLM introduced.
I use Claude to increase my productivity: writing code, proofreading code, speeding up research, and explaining topics I don't fully understand. This is not limited to just coding.
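For what it's worth, here is a minimal sketch of the code-proofreading use via the Anthropic Python SDK. Assumptions: the anthropic package is installed, ANTHROPIC_API_KEY is set in the environment, and the model name is illustrative; substitute whatever current Claude model you use.

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

snippet = """
def average(values):
    return sum(values) / len(values)  # what about an empty list?
"""

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # assumption: substitute your current model
    max_tokens=500,
    messages=[{
        "role": "user",
        "content": "Proofread this Python function for bugs and edge cases:\n" + snippet,
    }],
)
print(response.content[0].text)  # Claude's review, e.g. flagging the empty-list case
```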
I have not fully digested the following link, but it's an eye-opener.
The creator of Claude Code just revealed his workflow, and developers are losing their minds
"If you're not reading the Claude Code best practices straight from its creator, you're behind as a programmer," wrote Jeff Tang, a prominent voice in the developer community. Kyle McNease, another industry observer, went further, declaring that with Cherny's "game-changing updates," Anthropic is "on fire," potentially facing "their ChatGPT moment."
The excitement stems from a paradox: Cherny's workflow is surprisingly simple, yet it allows a single human to operate with the output capacity of a small engineering department. As one user noted on X after implementing Cherny's setup, the experience "feels more like Starcraft" than traditional coding — a shift from typing syntax to commanding autonomous units.
So, using an LLM, and understanding it well, is now considered mandatory if you are a developer or programmer.
This just came in:
Exclusive: Anthropic's new model is a pro at finding security flaws
Anthropic's latest AI model has found more than 500 previously unknown high-severity security flaws in open-source libraries with little to no prompting, the company shared first with Axios.
AI is not killing open source. It is enhancing it.
My code is a constant work in progress: finding and fixing bugs. I do not agree that AI is killing open source. The article mentions that the use of forums and various code repositories is declining; this is because people are no longer spending hours trawling the internet looking for solutions to code problems or answers to questions, but are simply using the LLM to do it for them. The forums and repositories still need to exist for the LLM to ingest. A chicken-and-egg scenario.
Significantly faster and more efficient.
I personally believe that AI will lead to a bloom in open-source projects, as those interested in the field will create projects that other people will contribute to in various ways. Gone are the days of large software engineering departments; AI will do more of the work, guided by the goals set by humans. How this plays out I don't know, but it's happening.
AI is being used by everyone in nearly every field, and I see no reason for this to slow down or stop, with the exception of the Terminator scenario.
If anything, AI will put various people out of work. This will be especially relevant when robots have uptimes exceeding the normal working day and are plugged into a hive AI, or are even capable of human tasks with their inbuilt "intelligence", with procurement costs declining as production ramps up to satisfy demand.
Welcome to a new age, whether you like it or not.
#enoughsaid
