
Comment by keymasta

2 years ago

Yeah, for Cree it is definitely more suspect than trustworthy. Another thing I noticed: on another attempt I actually received different translations, so it's hard to say how this could be refined into something usable, or whether it can be at all.

And wow, yes we are all alone on google results for those strings.

EDIT 1: Another thought occurs to me: if it's getting the transliteration right, but not the syllabics, maybe I separate the tasks and go English -> transliteration -> syllabics. I will have to see if that approach works better.

Another idea might be to use that syllabics site to go from transliteration -> syllabics. I noticed the results were correct when converted there.
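A minimal sketch of that split, for anyone curious: let the model produce only the romanization, then do the romanization -> syllabics step deterministically with a lookup table instead of trusting the model's syllabics. The table below is a tiny hand-filled subset of the Plains Cree syllabics chart (just enough for "tânisi"), so treat the entries as illustrative assumptions rather than a complete or authoritative chart.

```python
# Tiny illustrative subset of a romanization -> syllabics table.
# A real chart has ~70+ syllables plus finals; these few entries
# are only enough to demo the approach.
SYLLABICS = {
    "tâ": "ᑖ", "ta": "ᑕ",
    "ni": "ᓂ",
    "si": "ᓯ",
}

def romanization_to_syllabics(word: str) -> str:
    """Greedy longest-match conversion of romanized Cree to syllabics."""
    out, i = [], 0
    while i < len(word):
        for length in (2, 1):  # try two-character syllables before one
            chunk = word[i:i + length]
            if chunk in SYLLABICS:
                out.append(SYLLABICS[chunk])
                i += length
                break
        else:
            out.append(word[i])  # pass through anything unmapped
            i += 1
    return "".join(out)

print(romanization_to_syllabics("tânisi"))  # → ᑖᓂᓯ
```

The point being: the second step is pure table lookup, so it can't hallucinate; any errors get confined to the romanization step where they're easier to spot.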

EDIT 2: By updating the system prompt I was able to get it to translate properly. I had to remind it to be correct!

  You are an expert in translating Cree. When translating you will include both the native writing system, and the romanization into the latin alphabet. When you romanize text you always include any accents or pronunciation marks. You use syllabics properly and in the modern usage

  Hello - ᑕᓂᓯ (Tânisi)
  Goodbye - ᐅᑲᕆ (Okaawii)
  Settings - ᐅᑌᕁ ᐟ (Otēw with Roman orthography)

> I had to remind it to be correct!

It's so funny to encounter the effects of language models producing the highest-probability completions of a prompt, and how those aren't necessarily the same as the most correct completions.

I also saw something like this with people asking GPT models to write poetry, and they wrote mediocre poetry. But then when asked to write good poetry, they wrote better poetry!

  • Yeah, I found that for that kind of use case you really do want to remind it. You could even say things like:

      written beautifully with an intricate sense of wordplay
      in the style of [multiple good poets] 
    

    If you're in the chat interface you could even do:

      that was really great! But I want you to write it better!