Comment by wavemode

1 day ago

I take the opposite view. The semantics of visual programming are very simple compared to text-based programming, which makes it more amenable to AI autocomplete, not less. A visual IDE could just convert the blocks to text, prompt the LLM API with "here are the current blocks, and here are the blocks available to you; suggest some more blocks," and then insert the parsed response back into the canvas.
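
A minimal sketch of that loop, assuming a hypothetical JSON serialization of blocks (no real visual IDE's format) and using the OpenAI chat API as a stand-in for any LLM endpoint:

```python
# Sketch of block-based autocomplete via an LLM. Assumed, not real:
# blocks serialize to dicts like {"type": "...", "args": {...}} and
# the palette of available block types is a flat list of strings.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def suggest_blocks(current_blocks: list[dict],
                   available_types: list[str]) -> list[dict]:
    """Ask the model to extend the program with a few more blocks."""
    prompt = (
        "Here are the current blocks:\n"
        f"{json.dumps(current_blocks, indent=2)}\n\n"
        "Here are the block types available to you:\n"
        f"{json.dumps(available_types)}\n\n"
        "Suggest some more blocks. Respond with only a JSON array of "
        "block objects, using the same schema as the current blocks."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    # Assumes the model complied with the format; a real IDE would
    # validate before inserting anything.
    return json.loads(response.choices[0].message.content)

# Toy program plus palette; the IDE would insert the suggestions
# into the canvas rather than printing them.
program = [
    {"type": "read_input", "args": {"var": "x"}},
    {"type": "set_variable", "args": {"var": "y", "value": "x * 2"}},
]
palette = ["read_input", "set_variable", "if", "loop", "print"]
for block in suggest_blocks(program, palette):
    print(block)
```

And validation is the easy part: because the palette is a closed set, the IDE can simply reject any suggested block whose type isn't in it.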

I agree the abstraction level is perfect for AI, but not having the program accessible as text (or at least in a format that is well represented in the training data) is a problem. From a CTO's perspective, encoding our processes in something like this removes my ability to leverage the general AI programming capabilities that billions of dollars are currently being invested in developing.

It's a technical-debt dead end.