Comment by abe94

5 days ago

I don't know - I agree we haven't seen changes to our built environment, but as for an "explosion of new start-ups, products", we sort of are seeing that?

I see new AI-assisted products every day, and a lot of them have real usage. Beyond the code-assistant and gen-AI companies, which are very real examples, here's an anecdote.

I was thinking of writing a new story and found http://sudowrite.com/ via an ad - an AI assistant for helping you write. It's already used by a ton of journalists and serious writers, and I'm trying it out.

Then I wanted to plan a trip - I tried Google but saw nothing useful, then asked ChatGPT and now have a clear plan.

> I was thinking of writing a new story and found http://sudowrite.com/ via an ad - an AI assistant for helping you write. It's already used by a ton of journalists and serious writers, and I'm trying it out.

I am not seeing anything indicating it is actually used by a ton of journalists and serious writers, and I highly doubt it is. The FAQ is also paper thin as far as substance goes. I highly doubt they are training/hosting their own models, yet I see only vague third-party references in their privacy policy. Their pricing is less than transparent, given that they don't really explain how their "credits" translate to actual usage. And they blatantly advertise this to students, which is problematic in itself.

This ignores all the other issues with depending so heavily on LLMs for your writing. Here is an interesting quirk for starters: https://www.theguardian.com/technology/2024/apr/16/techscape... - and there are many more.

So this example, to me, actually exemplifies the overselling of capabilities, and the handwaving away of potential issues, that is so prevalent in the AI space.

  • Hey, co-founder of Sudowrite here. We indeed have thousands of writers paying for and using the platform. However, we aim to serve professional novelists, not journalists or students. We have some of both using it, but it's heavily designed and priced for novelists making a living off their work.

    We released our own fiction-specific model earlier this year - you can read more about it at https://www.sudowrite.com/muse

    A much-improved version 1.5 came out today -- it's preferred 2-to-1 vs Claude in blind tests with our users.

    You're right on the FAQ -- alas, we've been very product-focused and haven't done the best job keeping the marketing site up to date. What questions do you wish we'd answer there?

    • > We indeed have thousands of writers paying for and using the platform. However, we aim to serve professional novelists, not journalists or students. We have some of both using it

      Your marketing material quotes a lot of journalists, giving the impression that they, too, use it a lot. I have my reservations about LLMs being used for professional writing, but for the moment I'll assume Muse handles those concerns perfectly, and focus on the more immediate, concrete issues.

      Your pricing specifically has a "Hobby & Student" tier which mentions "Perfect for people who write for fun or for school". This is problematic to me; I'll get to why later, when I answer your question about things missing from the FAQ.

      > What questions do you wish we'd answer there?

      Well, it would be nice if you didn't handwave away some actual potential issues. The FAQ also reads more like loose marketing copy than a policy document.

      - What languages does Sudowrite work in?

      A very vague answer. Just be honest and say that it depends heavily on the amount of training material, and that for many languages the results likely won't be that good.

      - Is this magic?

      Cute, but it doesn't belong in a FAQ.

      - Can Sudowrite plagiarize?

      You are doing various things here that are disingenuous.

      You basically talk around the issue by saying "well, next-word prediction isn't exactly plagiarism". To me, that strongly suggests the models were trained on material that could be plagiarized, which is an issue in itself.

      Then there is the blame-shifting to the user, saying it is up to them not to plagiarize. That is not honest: the user has no insight into the training material.

      "As long as your own writing is original, you'll get more original writing out of Sudowrite." That is a probabilistic statement, not a guarantee. It is also, again, blame-shifting.

      - Is this cheating?

      Way too generic. Which also brings me to you actively marketing to students, which I feel is close to moral bankruptcy. Again, you sort of talk around the issue, basically saying "it isn't cheating as long as you don't use it to cheat". Which is technically true, but come on, guys...

      In many contexts (academia, specific writing competitions, journalism), using something like Sudowrite to generate or significantly augment text would be considered cheating or against guidelines, regardless of intent. In fact, in many school and academic settings, using tools like these undermines the very point of having students write their own text from scratch, without aid.

      - What public language models does Sudowrite use and how were they trained?

      Very vague. It also made me go back to your privacy policy, given that you clearly use multiple providers. I then noticed it was last updated in 2020? I highly doubt you have been around that long, which makes me think it was copy-pasted from elsewhere. That shows: it literally says "Policy was created with WebsitePolicies.", which makes it generic boilerplate. It honestly makes me wonder how much of it is abided by.

      Its being so generic also means the privacy policy does not clearly mention these providers, even though effectively all user data likely goes to them.

      *This is just about the current questions in the FAQ.* The FAQ is oddly lacking with regard to Muse; some of that is covered on the Muse page itself, but there I run into similar issues.

      - "Ethically trained on fiction": "Muse is exclusively trained on a curated dataset with 100% informed consent from the authors."

      A big, bold claim. I applaud it if true, but there is no way to verify it other than trusting your word.

      There is a lot more I could expand on, but frankly, that is not my job. You are far from the only AI-related service operating in this problematic way; it might even run deeper in general startup culture. But honestly, even if your service is awesome and ethically entirely sound, I don't feel taken seriously by the public information you provide. It is almost as if you are afraid to be real with customers; to me, you are overselling and overhyping. Again, you are far from the only company doing so - you just happened to be brought up by the other user.
