
Comment by alabhyajindal

19 hours ago

It's interesting, but I don't think it belongs as a comment under this post. I could use LLMs to create something tangential for each project posted on HN, and so could everyone else. If we all started doing this, the comment section would quickly become useless and off point.

Tangential would be “I wrote a Fibonacci function in this and it worked, just like it said on the tin!”

Compiling this to wasm and calling it from Python as a sandboxed runtime isn’t tangential. I wouldn’t have known from reading the project’s readme that this was possible, and it’s a really interesting use case. We might as well get mad at simonw for using an IDE while he explored the limits of a new library.

Software system is released, comments talk about how to integrate it with other software systems. Seems on-topic.

but there is signal in what people are inspired to do upon seeing a new project -- why not simply evaluate the interestingness of these sorts of mashups on their own terms? it actually feels very "hacker"-y to go out and show people possibilities like this. i have no particular comment on how "interesting" the derivative projects are in this case, but i have a feeling that if his original post had been framed more like "i think it's super interesting how easy it is to use via FFI on various runtimes X & Y (oh btw, in the spirit of full transparency: i used ai to help me, and you can see more details at <link>)", it would have landed better. especially because i think everyone who peruses HN with some regularity is likely to know of simon's work in at least some capacity, and i am -- speaking only for myself -- essentially always interested in any sort of project he embarks on, especially his llm-assisted experiments. but hey -- at the end of the day, all of this value judgment is realized as plainly as possible with +1 / -1 (up- and down-votes), and i guess it just is what it is. if number bad, HN no like. shrug.

  • I agree that there is signal, and that phrasing his original post as you pointed out would have been better.

    My issue is that the cost, in terms of time, of these experiments has really gone down with LLMs. Earlier, if someone played around with the posted project, we knew they had spent a reasonable amount of time on it, and thus cared about the subject. With LLMs, this is no longer the case.

    • That’s true - the assumed time investment is different now. We have to judge an experiment on its content and findings, rather than on the mere fact that someone ran it. I share your frustration with random GitHub repos, though. It used to be that if someone had created a new GitHub repository with a few commits, there was likely some intelligence or quality behind it, but now I commonly stumble across vibe-coded slop with AI-slop READMEs. Maybe you are describing a similar reaction to HN posts.

Off topic, but I went to your website and saw that you created hackernews-mute. Recently I was ranting in a comment that someone ought to create exactly such an extension. So kudos to you for having created it earlier on.

Maybe we HN users have minds in sync :)

https://news.ycombinator.com/item?id=46359396#46359695

Have a nice day! Awesome stuff; I'll keep an eye on your blog. Does your blog/website use mataroa, by any chance? There are some similarities, even though both are minimalist. Overall, very nice!