Comment by comeondude

3 days ago

I'm genuinely blown away by LLMs.

I’m an artist who’s always struggled to learn how to code. I can pick up computer science concepts, but when I try to sit down and write actual code, my brain just pretends it doesn’t exist.

Over like 20 years, despite numerous attempts, I could never get past a few beginner exercises. I viscerally can’t stand the headspace that coding puts me in.

Last night I managed to build a custom CDN to deliver cool fonts to my site a la Google Fonts, create a gorgeous site with custom code injected CSS and Java (while grokking most of it), and the best part … it was FUN! I have never remotely done anything like that in my entire life, and with ChatGPT’s help I managed to do it in like 3 hours. It’s bonkers.

AI is truly what you make of it, and I think it’s an incredible tool that allows you to learn things in a way that fits how your brain works.

I think schools should have a curriculum that teaches people how to use AI effectively. It’s truly a force multiplier for creativity.

Computers haven’t felt this fun for a long time.

> "I'm an artist"

This is actually what I'm most excited about: in the reasonably near future, productivity will be related to who is most creative and who has the most interesting problems rather than who's spent the most hours behind a specific toolchain/compiler/language. Solutions to practical problems won't be required to go through a layer of software engineer. It's going to be amazing, and I'm going to be without a job.

  • > productivity will be related to who is most creative and who has the most interesting problems rather than who's spent the most hours behind a specific toolchain/compiler/language.

    Why stop at software? AI will do this to pretty much every discipline and artform, from music and painting, to law and medicine. Learning, mastery, expertise, and craftsmanship are obsolete; there's no need to expend 10,000 hours developing a skill when the AI has already spent billions of hours in the cloud training in its hyperbolic time chamber. Academia and advanced degrees are worthless; you can compress four years of study into a prompt the size of a tweet.

    The idea guy will become the most important role in the coming aeon of AI.

    • Also, since none of us will have any expertise at all anymore, everything our AI makes will look great. No more “experts” pooping our parties. It’s gonna be awesome!

  • Why would you be out of a job? Nothing he described is something that someone is being paid to do. Look at everything he needs just to match a fraction of your power.

    Consumer apps may see fewer sales as people opt to just clone an app using AI for their own personal use, customized to their preferences.

    But there’s a lot of engineering being done out there that people don’t even know exists, and that has to be done by people who know exactly what they’re doing, not just weekend warriors shouting stuff at an LLM.

    • > But there’s a lot of engineering being done out there that people don’t even know exists, and that has to be done by people who know exactly what they’re doing, not just weekend warriors shouting stuff at an LLM.

      Except that the expectations of them are going to be higher and higher, possibly with downward pressure on compensation. I guess it'll find an equilibrium somewhere eventually.

  • So your understanding is that the chief value a software engineer provides is experience utilizing a specific toolchain/compiler/language to generate code. Is that correct?

I think much of HN has a blind spot that prevents them from engaging with the facts.

Yes, AI currently has limitations and isn't a panacea for cognitive tasks. But in many specific use cases it is enormously useful, and the rapid growth of ChatGPT, AI startups, etc. is evidence of that. Many will argue that it's all fake, that it's all artificial hype to prop up VC valuations, etc. They will literally see the billions in revenue as not real, same with all the real people upskilled via LLMs in ways that are entirely unique to the utility of AI.

I would trust many people's evaluations of the impacts of AI if they could at least engage with reality first.

  • Some people see it as a political identity issue.

    One person told me the other day that for the rest of time people will see using an AI as equivalent to crossing a picket line.

    • Well, if you use LLMs to write a book and claim to be a writer, that is crossing a line. A writer doesn't just write the book; they do a lot of thinking about the concepts of the book, and their writing style is somewhat unique to them. It's deceiving to pretend you're not using AI when you are. And if you are, that's fine as long as you disclose it.

  • To me, the progress achieved so far has been overhyped in many respects. The numbers out of Google claiming that 25% of their code is AI-generated, or some similarly high figure? BS. It’s gamified statistics that count command-completion suggestions (not AI trying to solve a problem) against what’s accepted, and it’s likely hyper-inflated even then.

    It works better than you at UI prototypes when you don’t know how to do UI (and maybe faster even if you do). It doesn’t work at all on problems it hasn’t seen. I literally just watched a coworker stare at code for hours, getting completely off track trying to correct AI output instead of stepping through the problem step by step the way we thought the algorithm should work.

    There’s a very real difference between where it could be in the future and what you can usefully do with it today, and you have to be very careful about utilizing it correctly. If you don’t know what you’re doing and AI helps you get it done, cool. But keep in mind that you also won’t know if it has catastrophic bugs, because you don’t understand the problem, and the conceptual idea of the solution, well enough to know if what it did is correct. For most people there’s not much difference, but for those of us who care it’s a huge problem.

  • I'm not sure if this post is ragebait or not but I'll bite...

    If anything, HN is in general very much on the LLM hype train. The contrarian takes tend to be from more experienced folks working on difficult problems that very much see the fundamental flaws in how we're talking about AI.

    > Many will argue that it's all fake, that it's all artificial hype to prop up VC valuations, etc. They will literally see the billions in revenue as not real

    That's not what people are saying. They're noting that revenue is meaningless without looking at cost. And it's true: investor money is propping up extremely costly ventures in AI. These services operate at a substantial loss. The only way they can hope to survive is by promising future pricing power, on the claim that they can one day (the proverbial next week) replace human labor.

    > same with all the real people upskilled via LLMs in ways that are entirely unique to the utility of AI.

    Again, no one really denies that LLMs can be useful in learning.

    This all feels like a strawman; it's important to approach these topics with nuance.

    • I was talking to a friend today about where AI would actually be useful in my personal life, but it would require much higher reliability.

      This is very basic stuff, not rewriting a codebase, creating a video game from text prompt or generating imagery.

      Simply - I would like to be able to verbally prompt my phone with something like "make sure the lights and AC are set so I will be comfortable when I get home, follow up with that plumber if they haven't gotten back to us, place my usual grocery order plus add some berries plus anything my wife put on our shared grocery list, and schedule a haircut for the end of next week some time after 5pm".

      Basically 15-30min of daily stupid personal time sucks that can all be accomplished via smartphone.

      Given the promise of IoT, smart homes, LLMs, voice assistants, etc., this should be possible.

      This would require giving it access to my calendar and location, plus the ability to navigate apps on my phone, read and send email/texts, and spend money. Given the current state of the tools, if there is even a 0.1% chance it changes my contact card photo to Hitler, replies to an email from my boss with an insult, purchases $100,000 in bananas, or sets the thermostats to 99F, then I couldn't imagine giving an LLM access to all those things.

      Are we 3 months, 5 years, or never away from that being achievable? These feel like the kind of things previous voice assistants promised 10 years ago.

  • Because the preachers preach how amazing it is with their greenfield "I built a todo list app in 5 minutes from scratch" demos, and then you use it on an established codebase with a bigger context than the LLM could ever possibly consume, spend 5x more time debugging the slop than it would've taken you to do the task yourself, and become jaded.

    Stop underestimating the amount of internalized knowledge people can have about projects in the real world; it's so annoying.

    An LLM can't ever possibly get close to it. There's some guy in a team in another building who knows why a certain weird piece of critical business logic was put there 6 years ago. The LLM will never know this, and wouldn't understand it even if it consumed the whole repository, because it would have to work there for years to understand how the business works.

    • A completely non-technical saleslady on our team prototyped a whole JS web app that generated some data based on user inputs (and even generated PDFs), which solved a problem our customers were having and our devs didn't have the time to develop yet.

      This obviously was a temporary tool we'd never let touch our GitHub repo, but it still very much worked and solved a niche problem. It even looked like our app, because the LLM could consume screenshots to copy our designs.

      I'm on board with vibe coding = non-maintainable, non-tested, mostly useless code by non-devs. But on the plus side, it will expose many, many people to basic programming and fill many tiny gaps not solved by bigger, more serious pieces of code. Especially once people start building infrastructure and tooling around these non-devs: hosting, deployment, webhook integrations, etc.


    • But that’s not good. You don’t want Bob to be the gatekeeper for why a process is the way it is.

      In my experience working with agents helps eliminate that crap, because you have to bring the agent along as it reads your code (or process or whatever) for it to be effective. Just like human co-workers need to be brought along, so it’s not all on poor Bob.

    • Totally. Especially when I'm debugging something for colleagues or friends: given a domain and a handful of ways of doing it, if I'm familiar with it I generally already get a sense of why it's failing or falling short. This has nothing to do with the codebase, any given language notwithstanding. It comes from years and decades of experience with systems and the idiosyncratic behaviors of systems, and from examples that strangely rear their heads in notable ways.

      These notable ways may also not be commonly known or put into words, but they persist nevertheless.

This post is about a specific, complex system that stretches from operating system to the physical world, as well as some philosophical problems.

What you're describing is a dead simple hobby project that could be completed by a complete novice in less than a week before the advent of LLMs.

It's like saying "I'm absolutely blown away by microwaves, I can have a meal hot and ready in just a few minutes with no effort or understanding. I think all culinary schools should have a curriculum that teaches people how to use microwaves effectively."

Maybe the goal of education should be giving people a foundation they can build on, not making them an expert in something with a low skill ceiling and diminishing returns.

  • I mean - going from “doable in a week with the right mindset” to “doable in a day when I’ve struggled before” is uhhh kinda worth being blown away by

> a gorgeous site with custom code injected CSS and Java (while grokking most of it)

From the context, it's not Java but JavaScript.

  • This was the most surprising/disturbing/enlightening part of the post imo. Surprising: this person literally had no clue! Disturbing: this person literally had no clue? Enlightening: this person literally did not need a clue.

    My takeaway as an AI skeptic is that AI as human augmentation may really have potential?

    • Just a vision and goal.

      I feel like AI makes learning way more accessible. At least it did for me: it evoked a childlike sense of curiosity and joy for learning new things.

      I’m also working on a Trading Card Game, where I feed it my drawings and it renders them into a final polished form based on a visual style that I spent some time building in ChatGPT. It’s like an amplifier / accelerator.

      I feel like, yes, while it can augment us, at the end of the day it depends on our desire to grow and learn. Otherwise, you will end up with the same result as everybody else.

I enjoy the fun attitude; a family member of mine said something similar. But I always warn: with powerful AI come serious consequences. Are we ready for those consequences? How far do we want AI to reach into our lives and livelihoods?

  • We managed to survive the nuke and environmental lead (two examples of humanity veering in drastically wrong directions).

    We are never ready for seismic changes. But we will have to adapt one way or another; might as well find a good use for it and develop awareness, as a child would around handling knives.

  • We cannot predict what the consequences will be, but as a species we are pretty good at navigating upheavals and opportunities. There are no guarantees that human ingenuity will always save the day, but evolution has bestowed us with risk-taking and curiosity, so we won’t stop.

    Enjoy the ride.

    • No, we are not good at that. We have had dire warnings of extreme catastrophes heading our way for decades now, and instead of fixing what is broken, we collectively decide to race to our extinction faster.


I do love the approachability they create. Personally as a coder, I don't find the AI experience at all fun, but I get that others do.

Have you tried using AI to further changes to any of these projects down the line?

  • Makes sense. I know what I built is nowhere near actual software development. Still, I was able to quickly learn how things work through GPT.

    Since I’ve literally been working on this project for two days, here’s a somewhat related answer to your question: I’ve been using ChatGPT to build art for a TCG. Initially I was resistant, and upset that AI companies were hoovering up people’s work wholesale for training data (which is why I think now is an excellent time to have a serious conversation about UBI, but I digress).

    But I finally realized that I could develop my own distinctive 3D visual style by feeding GPT my drawings and having it iterate in interesting directions. It’s fun to refine the style by having GPT simulate actual camera lenses and lighting setups.

    But yes, I’ve used AI to make numerous stylistic tweaks to my site, including building out a tagging system that allows me to customize the look of individual pages when I write a post.

    Hope I’ll be able to learn how to build an actual complex app one day, or games.

Genuine question: how would you feel about reading the dual of your comment here?

"I'm a computer scientist who's always struggled to learn how to paint." "Last night I managed to create a gorgeous illustration with Stable Diffusion, and best part ... it was FUN!" "Art hasn't felt this fun for a long time."

Interesting, CDNs require a lot of infrastructure to be effective, what did you (or the LLM) use to set that up?

  • Maybe CDN isn’t the right term after all; see, I’m not a software engineer!

    But, basically I wanted a way to have a custom repository of fonts a la Google Fonts (found their selection kinda boring) that I could pull from.

    Ran the fonts through transfonter to convert them to .woff2, set up a GitHub repository (which is not designed for people like me), set up an instance on Netlify, and then wrote custom CSS tags for my ghost.org site.

    The thing that amazes me is that, aside from my vague whiff of GitHub, I had absolutely no idea how to do this. Zilch. Nada. ChatGPT gave me a clear step-by-step plan, and exposed me to Netlify, how to write CSS injections, and how ghost.org tagging works from the styling side of things. And I’m able to have a back-and-forth dialogue with it, not only to figure out how to do it, but to understand how it works.
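    For the curious, the CSS-injection side of a setup like this usually boils down to an @font-face rule pointing at the externally hosted .woff2 file, then using that family name in your styles. A minimal sketch; the Netlify URL and font name here are made up, not the actual ones:

    ```css
    /* Hypothetical example: replace the URL and family name with your own. */
    @font-face {
      font-family: "MyCustomFont";
      src: url("https://example-fonts.netlify.app/mycustomfont.woff2") format("woff2");
      font-weight: 400;
      font-style: normal;
      font-display: swap; /* show fallback text while the font file loads */
    }

    body {
      font-family: "MyCustomFont", Georgia, serif;
    }
    ```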

    • Sounds more like a Continuous Integration / Continuous Deployment (CI/CD) pipeline: a set of practices that automate the process of building, testing, and deploying software. Or rather, fonts in this case.

      A Content Delivery Network (CDN) is a collection of geographically scattered servers that speeds up delivery of web content by being closer to users. Most video/image services use CDNs to efficiently serve up content to users around the world. For example, someone watching Netflix in California will connect to a different server than someone watching the same show in London.

      1 reply →

    • Yes, it's an ever-patient teacher that's willing to chop up any subject matter into bits that are just the correct shape and size for every brain, for as long and as deep as you're willing to go. That's definitely one effective way to use it.

  • CDNs are usually one of the more expensive things you can build, there's a reason most are run by large public companies.

    Those are probably near the top of the list of things you don't want to blindly trust an LLM with building.

Nice! That's what we all want, but without the content reappropriation, surveillance, and public-discourse meddling.

Think of a version that is even more fun, won't teach your kids wrong stuff, won't need a datacenter full of expensive chips, and won't hit the news with sensationalist headlines.