Comment by qeternity

9 days ago

I don't really follow: a lot of the fragility of web automation comes from the programmatic vs. visual differences, which VLMs are able to overcome. Skipping the graphical rendering seems to be committing yourself to non-visual hell.

The web isn't made for agents and automation. It's made for people.

Yes and no. Getting a VLM to work on the web would definitely be great, but it comes with its own problems, mainly around generating and acting on bounding boxes. We have vision as a default fallback in Stagehand, but we've found that the screenshot sent to the VLM often needs pre-labeled elements on it. The catch is that pre-labeling everything produces a cluttered, barely usable image, while not pre-labeling risks missing important elements. I imagine a happy medium where the DOM + a11y tree is used for candidate generation for the VLM (rough sketch below).
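
For what it's worth, here's a minimal sketch of that candidate-generation idea using Playwright. This is not Stagehand's actual implementation; the element selector, the cap of 50 candidates, and the badge styling are all arbitrary choices. The idea: filter the DOM down to likely-interactive elements, badge them on the page ("set-of-marks" style), and hand the VLM both the screenshot and a compact text list of candidates.

```ts
import { Page } from "playwright";

// Collect candidate interactive elements from the DOM, draw numbered badges
// over them, then screenshot. The VLM gets a labeled image plus a text list,
// instead of either a cluttered fully-labeled page or a bare screenshot.
async function labeledScreenshot(page: Page) {
  const candidates = await page.evaluate(() => {
    const selector =
      "a, button, input, select, textarea, [role=button], [role=link]";
    return Array.from(document.querySelectorAll(selector))
      .filter((el) => {
        const r = el.getBoundingClientRect();
        return r.width > 0 && r.height > 0; // skip invisible elements
      })
      .slice(0, 50) // cap the candidate set to keep the image readable
      .map((el, i) => {
        const r = el.getBoundingClientRect();
        // Overlay a small numbered badge at the element's top-left corner.
        const badge = document.createElement("div");
        badge.textContent = String(i);
        badge.style.cssText =
          `position:fixed;left:${r.left}px;top:${r.top}px;` +
          "background:red;color:white;font:10px monospace;" +
          "z-index:99999;padding:1px 3px;";
        document.body.appendChild(badge);
        return {
          id: i,
          tag: el.tagName,
          text: (el.textContent ?? "").trim().slice(0, 40),
        };
      });
  });
  const image = await page.screenshot();
  return { image, candidates }; // send both to the VLM
}
```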

Solely depending on a VLM is indeed reminiscent of how humans interact with the web, but when a model thrives on more data, why restrict what you send it?
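
Concretely, the "why not both" approach could look like the following sketch with the OpenAI Node SDK. The model name, prompt wording, and the expectation that the model replies with a bare candidate id are all placeholder assumptions, not how Stagehand actually prompts:

```ts
import OpenAI from "openai";

// Hypothetical glue: send the labeled screenshot AND the text candidate list
// in one multimodal request, rather than restricting the model to either one.
async function pickElement(
  image: Buffer,
  candidates: object[],
  goal: string,
): Promise<string | null> {
  const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment
  const res = await openai.chat.completions.create({
    model: "gpt-4o", // placeholder model choice
    messages: [
      {
        role: "user",
        content: [
          {
            type: "text",
            text:
              `Goal: ${goal}\n` +
              `Candidates: ${JSON.stringify(candidates)}\n` +
              "Reply with only the id of the element to act on.",
          },
          {
            type: "image_url",
            image_url: {
              url: `data:image/png;base64,${image.toString("base64")}`,
            },
          },
        ],
      },
    ],
  });
  return res.choices[0].message.content;
}
```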