Comment by order-matters

18 hours ago

well to start, i'm not sure what the magical new input would be that doesn't involve hands or voice, but for the sake of conversation let's assume it's magic. the ideal is that i can use the magic input to manage cloud compute through my phone, which leaves me hands-free to exercise and listen to music, or play video games, or hold a baby in a quiet room, and still manage tasks.

realistically what would be best is flexible inputs. part of the trouble is that voice input is often neutered by being voice-only, or by requiring a click to start recording and another click to stop.

Getting a "magic" input is not as hard as it seems if you shrink its input space so it doesn't need to compete with keyboards and voice. a workflow could involve an assistant making suggestions, so the magic input only needs to be a yes/no, which could be a head nod/shake, eye tracking and blinking, foot pedals, or hand gestures.
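that suggest-then-confirm workflow is simple enough to sketch; this is a toy, with a canned input source standing in for whatever binary "magic" device (nod, blink, pedal) would really drive it, and the suggestion names are made up:

```python
# sketch: an assistant proposes actions; the "magic" input only has to
# say yes/no to each one. everything here is hypothetical demo data.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Suggestion:
    description: str
    run: Callable[[], None]

def confirmation_loop(suggestions, read_binary_input):
    """walk through suggestions; each needs only a single yes/no."""
    for s in suggestions:
        print(f"suggest: {s.description} (yes/no?)")
        if read_binary_input():   # True = nod / blink / pedal press
            s.run()

# demo: a fake input source answering yes, no, yes
answers = iter([True, False, True])
done = []
suggestions = [
    Suggestion("restart worker-3", lambda: done.append("restart")),
    Suggestion("scale pool to 8", lambda: done.append("scale")),
    Suggestion("rotate logs", lambda: done.append("rotate")),
]
confirmation_loop(suggestions, lambda: next(answers))
print(done)  # ['restart', 'rotate']
```

the point is that the device only ever answers one bit at a time; all the vocabulary lives in the assistant's suggestions.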

honestly though, i think the real quality-of-life improvement is going to come from OSes enabling multiple focus windows to be active simultaneously with multiple input devices. what i really want is a keyboard that can act as 10+ virtual keyboards with a way to switch which one is in use, and then, based on the active one, the inputs go directly to the app that has focus for that input. let my game controller keep feeding the game while i type something, or toggle my voice input to talk to an AI without transmitting to discord while i'm doing that. or i just get two keyboards and two mice and split-screen games with m&k input, with another person next to me, into two instances of the game running (one on each of two monitors). or the mic recognizes my partner's voice as a separate input from mine, managed independently.

there is so much that could be done with interfaces beyond layout and menus, even without AI, but AI-related tools could help with mapping different voices to different virtual inputs, or recognizing keywords to do the same.
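the voices-as-virtual-inputs idea reduces to the same routing shape: classify who is speaking, then append to that speaker's channel. here the speaker-id model is faked with a lookup table just so the routing is runnable; a real version would use an actual speaker-recognition model:

```python
# sketch: treat each recognized speaker as a separate virtual input.
# SPEAKER_OF fakes a speaker-id model; utterances are made up.
SPEAKER_OF = {"turn it up": "me", "pause the game": "partner"}

def identify_speaker(utterance):
    return SPEAKER_OF[utterance]     # stand-in for a real model

# each speaker gets their own virtual input channel
channels = {"me": [], "partner": []}

def route_utterance(utterance):
    channels[identify_speaker(utterance)].append(utterance)

route_utterance("turn it up")
route_utterance("pause the game")
print(channels)
# {'me': ['turn it up'], 'partner': ['pause the game']}
```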