It seems at this point, everyone and their mother, i.e. "we", are building the "tools" for which "we" mostly hope the VC money will materialise. Use cases are not important - if OpenAI can essentially work with Monopoly money, why can't "we" do it too?
Because "we" are just wrappers around OpenAI's model.
> if OpenAI can essentially work with Monopoly money, why can't "we" do it too?
The answer, in case anyone wonders: because OpenAI is providing a general-purpose tool that has the potential to subsume most of the software industry. "We" are merely setting up toll gates around what will ultimately become a bunch of tools for LLMs, and trying to pass it off as a "product".
For the huge amounts of capital already burnt, and another $1T in CapEx RPO announced over the last few weeks and months, aren't "potentially" and "ultimately" rather a lot of unspecific qualifiers to be throwing around here? It reminds me of Sam Altman's classic unspecific statements like "Codex is so good" or "I can only imagine how good it will get" by the end of 202x (insert year of the decade according to your own preference). After 10+ years of OpenAI and 4+ years of ChatGPT, why is the potential not materialising?
Counterargument, just to play devil's advocate: forming LLMs into useful shapes could become the game. If it turns out to be impossible to build a real moat around making LLMs - maybe China, or just anyone, will ultimately be able to run them locally and cheaply - then the game of spending a billion dollars training one is much more risky.
I don't think your Github example is accurate. The vast majority of developers started using git after Github became a thing. They may have used svn or another type of collaboration system before, but not git. And the main reason they started using git is because Github was such massive value on top of git, not because git was so amazing.
My memories are different. Git became amazing on its own and was a big advantage over SVN. GitHub was an open-source thing in the beginning; no company here had the idea to host proprietary closed-source code on another platform they did not control. That eventually became a thing later, though, and the mindset shifted.
I think you're both right. Post-Github, a lot of Git's adoption came from Github. But Github "worked" because a lot of people were already using Git and Github offered them amazing value, and that initial userbase created a viral effect: People increasingly came into contact with Github via projects hosted there, and those who did not already use Git picked it up as a result of that.
And now many companies do have the idea of hosting proprietary code on a shitty, buggy, closed-source platform they have no control over. Indeed a shifted mindset. Maybe it wasn't shitty, buggy and closed-source enough before.
Coming from Subversion, git was already so amazing without GitHub, so I'll kindly disagree with you on that front.
> And the main reason they started using git is because Github was such massive value on top of git, not because git was so amazing.
Github has always been mediocre and forgettable outside of convenience that you might already have an account on the site. Svn was just shitty compared to git, and cvs was a crime against humanity.
> Github has always been mediocre and forgettable outside of convenience that you might already have an account on the site.
Completely agree. I moved out of GitHub for my personal projects and I don't miss it a single nanosecond.
Not to mention Rad.
I have to hard disagree on that. I know of many developers personally who were on Source Forge and Google Code before and migrated to GitHub specifically because they offered git
Clear context. Write a sonnet in Shakespeare's style.
I don't think SVN and Mercurial were more widely used than git before Github became popular, but Github definitely killed off most of the use of those.
Git had already replaced Perforce and SVN most everywhere I'd seen, before GitHub came along. CVS was still horrible and still in use in a lot of places, though.
I mean, git was '05 and GitHub was '08, so it's not like the stats will say much one way or another. StackOverflow only added it to their survey in 2015. No source of truth, only anecdotes.
Lots of people were using svn and mercurial was also coming up around the time that GitHub launched. Both git and GitHub were superior to all the other options but for many people they did the switch to GitHub and git at the same time.
100%. I remember asking fellow devs why they switched to git from svn/cvs/whatever, and the answer was: oh, it can do branches. OK, no more questions )
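Tangentially, a minimal sketch of why "it can do branches" was such a selling point (repo name and identity below are placeholders): in git, a branch is a cheap local pointer, while an SVN branch was a server-side copy of the whole tree.

```shell
# Hypothetical throwaway repo; name and identity values are placeholders.
git init -q demo && cd demo
git config user.email "dev@example.com"
git config user.name  "Dev"
git commit --allow-empty -q -m "initial commit"

# Creating and switching to a branch is instant and purely local:
git checkout -q -b feature
git rev-parse --abbrev-ref HEAD   # prints: feature

# The SVN equivalent was a remote, whole-tree, server-side copy:
#   svn copy https://server/repo/trunk https://server/repo/branches/feature
```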
Git had amazing value and GitHub made it easy to access that value.
Of the thousands, a handful will prevail. Most of it is vaporware, just like in any boom. Every single industry has this problem; copy-cats, fakes & frauds.
"Buy my fancy oil for your coal shovel and the coal will turn into gold. If you pay for premium, you don't have to shovel yourself."
If everything goes right, there won't be a coal mine needed.
I'd bet that fewer people had their source code in git in 2008 than the number of developers using the various coding agents today. And the open-source project we published today hooks into the existing workflow for those developers, in Claude Code and in Gemini CLI. Time will tell the rest. We will publish regular updates, and you can judge us on those results.
At least for me, the chat history in an agent often feels just as important as, and potentially even more important than, the source code it generates. The code is merely the compiled result of my explanations of intent and goals. That is, the business logic and domain expertise is trapped in my brain, which isn't very scalable.
Versioning and tracking the true source code, my thoughts, or even the thoughts of other agents and their findings, seems like a logical next step. A hosted central place for it and the infrastructure required to store the immense data created by constantly churning agents that arrive at a certain result seems like the challenge many seem to be missing here.
I wish you the best of luck with your startup.
I'm building an "improvement agent" that kind of embraces that. It starts out by running exploration across a codebase or set of documents and extracts possible goals, and a vision, from that. It then starts producing improvement plans (tickets, effectively). If it gets things wrong, I nudge it in the right direction, and that gets incorporated into revisions of the documents via a review stage. It's an experiment for now, but it is both doing semi-self-directed implementation and helping me identify where my thoughts haven't been crystallised enough, by seeing where it fails to understand what I want.
I'm not just running it on code, but on my daily journal, and it produces actionable plans for building infrastructure to help me plan and execute better as a result.
Natural language is in fact a terrible way to express goals: it is imprecise, contradictory, subjective, full of redundancies, and constantly changing. So it is possibly the worst format in which to record business rules and logic.
This lesson has been learned over and over (see AppleScript) but it seems people need to keep learning it.
We use simple programming languages composed of logic and maths not just to talk to the machine but to codify our thoughts within a strict internally consistent and deterministic system.
So in no sense are the vague imprecise instructions fed to LLMs the true source code.
It was 2.4% in 2008.
https://web.archive.org/web/20090531152951/http://www.survey...
You don't need a workflow. The agent is the workflow. That's the idea, at least. Probably not a great idea IMHO, because producing high-quality code is the main difficulty of programming; everything else - committing to git, deploying, etc. - pales in comparison.
The hype is the product
> are we building tools for a workflow that actually exists, or are we building tools and hoping the workflow materializes?
You could ask that question about all the billions that went into crypto projects.
This is the irony: AI projects are comparable to crypto projects, but receiving 60M in seed-funding.
I do not think that's how it worked out for GitHub: I'd rather say that Git (as complex as it was to use) succeeded due to becoming the basis of GitHub (with simple, clean interface).
At the time, there were multiple code hosting platforms like Sourceforge, FSF Savannah, Canonical's Launchpad.net, and most development was still done in SVN, with Git, Bazaar, Mercurial the upstart "distributed" VCSes with similar penetration.
Yes, development was being done in SVN but it was a huge pain. Continuous communication was required with the server (history lookups took ages, changing a file required a checkout, etc.) and that was just horribly inefficient for distributed teams. Even within Europe, much more so when cross-continent.
A DVCS was definitely required. And I would say git won out due to Linus inventing and then backing it, not because of a platform that would serve it.
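A small shell sketch of the round-trip difference described above (repo and identity values are made up): every git history command below reads only the local .git directory, whereas the SVN equivalents each had to talk to the central server.

```shell
# Hypothetical local repo; identity values are placeholders.
git init -q demo && cd demo
git config user.email "dev@example.com"
git config user.name  "Dev"
echo "v1" > file.txt && git add file.txt && git commit -q -m "first"
echo "v2" > file.txt && git commit -q -am "second"

git log --oneline   # full history, read locally - no server round-trip
git diff HEAD~1     # diff against the previous revision, also purely local

# The SVN equivalents ('svn log', 'svn diff -r PREV') each queried the
# central server - exactly the cross-continent latency described above.
```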
I was involved with bzr and Launchpad: anybody using pure Git hated it. GitHub, even with fewer features compared to LP, was pretty well regarded.
Yes, the kernel and Linus used it, but before that he used a proprietary VCS that did not go anywhere anyway, really.
> changing a file required a checkout
SVN didn't need checkouts to edit that I recall? Perforce had that kind of model.
Yes to all that. And GitLab the company was only founded in 2014 (OSS project started in 2011) and ran through YC in 2015, seven years after GitHub launched.
and most of those, except maybe gitlab, were clunky AF to use
The goal here is just to piggyback on the AI bandwagon, gather a lot of funding, create a product nobody understands but that sparks imagination, and sell it to FAANG.
Nobody cares if it makes sense, it just has to appear futuristic and avant-garde.
We’re building to milk the bitch while the hype is at the top. Anyone who seriously believes agents are capable of operating completely autonomously right now without any human supervision is delusional.
HN is full of AI agent hype posts. I have yet to see a legitimate, functional agent orchestration solving real problems, whether for scale or for velocity.
> Entire, backed by a $60 million
This is the point of the post, and helpfully it was added at the top in a TL;DR - it was half of that two-sentence TL;DR. Will it succeed or not? Well, that's a coin toss; it always has been.
I mean, pretty much all big startups begin as "niche" things that people might care about later. Tesla, Airbnb, Twitch... and countless failures too. It's just how the game is.
We are building tools and hoping an exit materializes. There's so much funny money in AI right now that getting life-altering money seems easily attainable.
the workflow exists
my code is 90% ai generated at this point
Only in HN comments will you get down voted for making a fair and scoped claim about your personal experience with AI.