Comment by datpuz

10 months ago

Here's qwen-30b-a3b's response after I reworded your prompt:

The prompt was:

"Create a Python decorator that registers functions as handlers for MQTT topic patterns (including + and # wildcards). Internally, use a trie to store the topic patterns and match incoming topic strings to the correct handlers. Provide an example showing how to register multiple handlers and dispatch a message to the correct one based on an incoming topic."

https://pastebin.com/wefw7X2h
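In case the pastebin link goes stale: a minimal sketch of the kind of trie-based dispatcher the prompt asks for might look like the following. The names (`MqttRouter`, `_TrieNode`, etc.) are my own, not taken from the model's output — this is just one way to implement it, following MQTT's usual wildcard semantics (`+` matches exactly one topic level, `#` matches the rest of the topic).

```python
class _TrieNode:
    """One level of an MQTT topic pattern; children keyed by level segment."""
    def __init__(self):
        self.children = {}
        self.handlers = []


class MqttRouter:
    """Registers handlers for MQTT topic patterns and dispatches via a trie."""

    def __init__(self):
        self.root = _TrieNode()

    def route(self, pattern):
        """Decorator: store the handler under the pattern's path in the trie."""
        def decorator(func):
            node = self.root
            for level in pattern.split("/"):
                node = node.children.setdefault(level, _TrieNode())
            node.handlers.append(func)
            return func
        return decorator

    def match(self, topic):
        """Return every registered handler whose pattern matches the topic."""
        results = []
        self._walk(self.root, topic.split("/"), 0, results)
        return results

    def _walk(self, node, levels, i, results):
        # '#' matches the remainder of the topic, including zero levels
        if "#" in node.children:
            results.extend(node.children["#"].handlers)
        if i == len(levels):
            results.extend(node.handlers)
            return
        # exact level match
        if levels[i] in node.children:
            self._walk(node.children[levels[i]], levels, i + 1, results)
        # '+' matches exactly one level
        if "+" in node.children:
            self._walk(node.children["+"], levels, i + 1, results)

    def dispatch(self, topic, payload):
        for handler in self.match(topic):
            handler(topic, payload)


router = MqttRouter()

@router.route("sensors/+/temperature")
def on_temperature(topic, payload):
    print(f"temperature on {topic}: {payload}")

@router.route("sensors/#")
def on_any_sensor(topic, payload):
    print(f"sensor event on {topic}: {payload}")

# Dispatches to both handlers: '+' covers the 'kitchen' level, '#' the rest.
router.dispatch("sensors/kitchen/temperature", "21.5")
```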

I went back and used your prompt, and it is still looping:

https://pastebin.com/VfmhCTFm

  • Are you using Ollama? If so, the issue may be Ollama's default context length: just 2,048 tokens. Ollama silently truncates everything beyond that, so "thinking" models can't work with the default settings.

    If you are using Ollama, try explicitly setting the `num_ctx` parameter in your request to something higher like 16k or 32k, and then see if you still encounter the looping. I haven't run into that behavior once with this model.
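For reference, `num_ctx` goes in the `options` field of Ollama's REST API. A sketch of such a request (the model tag, prompt, and 16k value here are only illustrative):

```python
import json
import urllib.request

# Build an /api/generate request with a larger context window.
# Model name and num_ctx value are placeholders; adjust to your setup.
payload = {
    "model": "qwen3:30b-a3b",
    "prompt": "Create a Python decorator that registers MQTT handlers...",
    "stream": False,
    "options": {
        # Ollama's default is 2048; raise it so long prompts/outputs
        # aren't silently truncated.
        "num_ctx": 16384,
    },
}

request = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
# response = urllib.request.urlopen(request)  # uncomment with Ollama running
```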