I'm not understanding how this supports Tailscale's initiatives and mission. That isn't to say it isn't a useful feature for a business, but it feels like a random grasp at "build something, anything, AI related." As a paying customer I'm concerned about the company's focus being blurred when there are 3.8k open issues on their GitHub repo and my company has been tracking some particular issues for years without progress.
Corporate/enterprise networks have nightmarish setups for centralizing access to LLMs. This seems like an extremely natural direction for Tailscale; it is to LLM interfaces what Tailscale itself was to VPNs, a drastically simplified system that, by making policy legible, actually allows security teams to do the access control that was mostly aspirational under the status quo ante.
Seems straightforward?
I think if you don't have friends working at e.g. big banks or whatever, you might not grok just how nutty it is to try to run simple agent workflows.
Yeah, I think it's better to think of Tailscale as an access control company that uses networks as the utility vector, not a network utility company that also has access controls.
>Corporate/enterprise networks have nightmarish setups for centralizing access to LLMs.
As someone on the other side of the fence, trying to keep the network secure and prevent data exfiltration, I can say there is often a good reason for those setups. More often than not we have folks doing all kinds of crazy things and ignoring what's in the handbook. For example, we had someone who didn't like MFA for remote access and used Tailscale to keep a permanent reverse proxy to their homelab for whatever work they were doing. What's funny is that we are not BOFHs and would have helped them set up whatever they needed had they just sent us an email or opened a ticket.
Another reason they could have built this is that they listened to their users. I do believe lots of people are spinning up agents in their workplaces, and managing yet another set of API keys is probably annoying for Tailscale's customers. This feels like a great solution to me.
Pressure to service larger customers to capture higher revenues is inevitable for Tailscale given the scale of VC funding, valuation, and operating costs involved.
Trying to be all things to all people will inevitably dilute focus, and it’s understandable that OP might be looking at this sub-product and wondering where the value is for their use cases.
They’re probably not the only ones questioning whether they’re still part of Tailscale’s core ICP (ideal customer profile), either.
Edit: expanded ICP for clarity.
I have a secrets manager; why would I want Tailscale involved in managing secrets? They are a networking company.
Tailscale is not a company I see being involved in my core AI ops. I don't need their visibility tools; I already have LGTM.
Tailscale should focus on their core competency, not chase the gilded AI hype cycle. I have enough complaints about their core product that this effort is a red flag for me. Doing this now, instead of years ago, shows how behind the times they are.
This ^^
There's a set of common needs across these gateways, and everyone is building their own proxies and reinventing the wheel, which just feels unnecessary.
~All of our customers at Oso (the launch partner in the article) have been asking us how to get a handle on this stuff... because their CEO/board/whatever is asking them. So to us it was a no-brainer. (We're also Tailscale customers.)
I realised I wasn't Tailscale's target customer when I reported a 100% reproducible iOS bug/regression over a year ago. It was confirmed, logged, and forgotten.
There's actually a mass acquisition game going on right now in this space. Companies want to use genAI, but don't necessarily want to hire people to run their own models in-house. It may not be obvious to startup-y employees, but keeping data in-house is huge for big companies. LLM traffic is a lot different from the established traffic that firewalls have been built up for. You can't block data leaks as easily as shutting down access to Google Drive. When you can't trust all of your employees, genAI presents a lot of new attack vectors.
In times of peace, the hardest part of running a military is keeping the troops busy.
This seems quite useful to me, especially for a larger org. If your devs are working on LLM features, they'll need access to the OpenAI APIs. So are you just gonna give all of them a key? The same key?
No idea how this is solved at the moment, so seems like a smart step
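To picture what the gateway pattern looks like from the developer side, here's a minimal sketch, my illustration rather than Tailscale's actual product interface, assuming a hypothetical OpenAI-compatible gateway reachable only over the tailnet at http://ai-gateway:8080. The real provider key lives on the gateway; the dev machine never holds one.

```python
# Minimal sketch, assuming a hypothetical OpenAI-compatible gateway on the tailnet.
# The hostname, port, and placeholder key are illustrative only.
from openai import OpenAI

client = OpenAI(
    base_url="http://ai-gateway:8080/v1",  # tailnet-only gateway, not api.openai.com
    api_key="placeholder",                 # no real provider key on the dev machine
)

resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello from behind the gateway"}],
)
print(resp.choices[0].message.content)
```

The point is that per-developer keys disappear entirely: access and attribution ride on whatever identity the gateway already has for the device or user.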
A huge chunk of the open issues are feature requests, many of which were implemented years ago but never marked closed. And the vast majority of the bugs are repeats; they clearly need someone to clean up their issue tracker.
Came to say this. It looks like a Mozilla move.
+1
I like Tailscale itself, but a lot of basic stuff (such as dynamic routing or ephemeral node auth) is still lacking. I wish they would concentrate more on the core product we all like and want to see improve.
> we all like
Building software users like doesn't make for a good business model. Especially if that model has to satisfy VC.
> my company has been tracking some particular issues for years without progress
Sounds like something your Account Manager or similar would need to work through. Development roadmaps are often driven by the largest, or loudest customers.
Not trying to diss or anything, but a capable engineer could spin this up within their organization in a day or two. So I'm not sure how useful this is going to be to the average customer. Perhaps to the largest customers, who have sophisticated security and compliance needs, but even for them it would need to be very competitively priced to be worthwhile (cheaper than the salary of two devs for a year).
The true moat of Tailscale is the core product. That can't be easily replicated (still). Perhaps a product that simplifies controlling which resources agents in the organization have access to, with 100% visibility and auditability, would be way more useful.
I built a similar gateway for my own stack and thought it would be a quick project, but the complexity is hidden in the details. A basic proxy is simple enough, but getting accurate token counts for streaming responses turned out to be a huge pain since every provider handles chunks differently. You also end up spending a lot of time writing adapters to unify the schemas so your application logic stays clean. If you care about precise billing or logging, it is definitely not a two day build.
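To make the streaming pain concrete, here's a rough sketch of the kind of per-provider adapter that piles up, assuming stream chunks have already been parsed from SSE into dicts. The field names follow OpenAI-style and Anthropic-style usage reporting as I understand them; treat them as assumptions rather than a spec.

```python
# Rough sketch: usage/token counts land in different places depending on the
# provider's streaming format, so every new provider means another adapter.
from typing import Optional

def usage_from_chunk(provider: str, chunk: dict) -> Optional[dict]:
    """Return {'input_tokens': n, 'output_tokens': n} if this chunk carries usage, else None."""
    if provider == "openai":
        # OpenAI-style streams: most chunks carry usage == None; a final chunk
        # can include a cumulative usage object when usage reporting is requested.
        u = chunk.get("usage")
        if u:
            return {"input_tokens": u["prompt_tokens"], "output_tokens": u["completion_tokens"]}
    elif provider == "anthropic":
        # Anthropic-style streams: input tokens arrive on the message_start event,
        # the cumulative output token count arrives on a later message_delta event.
        if chunk.get("type") == "message_start":
            return {"input_tokens": chunk["message"]["usage"]["input_tokens"], "output_tokens": 0}
        if chunk.get("type") == "message_delta" and "usage" in chunk:
            return {"input_tokens": 0, "output_tokens": chunk["usage"]["output_tokens"]}
    return None

def total_usage(provider: str, chunks: list[dict]) -> dict:
    # Both formats report cumulative totals, so keep the largest value seen
    # rather than summing across chunks.
    inp = out = 0
    for c in chunks:
        u = usage_from_chunk(provider, c)
        if u:
            inp = max(inp, u["input_tokens"])
            out = max(out, u["output_tokens"])
    return {"input_tokens": inp, "output_tokens": out}
```

And that's before you get to providers that don't report usage at all mid-stream, where you end up estimating token counts yourself.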
Unrelated, but what's the path of least resistance to expose a couple of localhost-bound services to the tailnet, ideally with each having its own hostname entry as the browser sees it?
they're not containerised, just plain old daemons.
This should work out of the box with MagicDNS (one of Tailscale's features). If machine A is named larrys-laptop and is running a service on :8080, then from sandras-laptop just navigate to http://larrys-laptop:8080 and it should work, provided both machines are on the same tailnet.
https://tailscale.com/kb/1552/tailscale-services
Tailscale Services will do that. You can do the proxying with tailscale serve; Services gives you the MagicDNS name and a virtual IP address bound to it.
oh, that indeed worked. cheers!
Give Tailscale serve a shot (https://tailscale.com/kb/1312/serve).
*Edited: I initially pointed to Funnel, which would be used for sharing outside your tailnet.
Will there be cake?
Hop on the hype train before it crashes!