I don't work in cybersecurity and, after looking at the site's homepage, couldn't figure out from all the buzzwords what exactly this product is. The most concerning takeaway from this article for me is that the maintainers of Huntress (whatever it is) can keep a log of, as well as personally access, users' browser history, history of launched executables, the device's hostname, and presumably a lot of other information. How is this product not a total security nightmare?
It's definitely not a product for an individual user. Controls like this are useful in certain arenas where you need total visibility of corporate devices. As with any highly privileged tool or service, compromise of it can be a big problem. That said, the goal with tools like this is usually to lock down and keep a close eye on company-issued laptops and the like, so you know when one gets stolen, hit by some malware, or somebody does things with it they aren't allowed to do (e.g. exfiltrating corp data, watching porn at work, running unauthorized executables, connecting to problematic networks, etc.).
As an example, if you're at a FedRAMP High certified service provider, the DoD wants to know that the devices your engineers are using to maintain the service they pay for aren't running a rootkit and that you can prove that said employee using that device isn't mishandling sensitive information.
This makes sense, but in this case, isn't the company behind Huntress having direct access to this data still a problem? For example, if the government purchased Outlook licenses, I'd assume the DoD can read clerks' emails, but Microsoft employees can't. I imagine in the worst case, compromising a lot of Huntress' users is just a question of compromising one of its developers, like one of the people in the authors section of this article.
It looks like Huntress is an "install this on your computer and we'll watch over your systems and keep you safe, for sure" product.
I also find it kind of funny that the "blunder" mentioned in the title, according to the article, is ... installing Huntress's agent. Do they look at every customer's Google searches to see if they're suspicious too?
It's stated in the article: "The standout red flag was that the unique machine name used by the individual was the same as one that we had tracked in several incidents prior to them installing the agent."
However, it's obvious that protection-ware like this is essentially spyware with alerts. My company uses a similar service, and it includes a remote desktop tool, which I immediately blocked from auto-startup. But the scanner, whatever it is, sends things to some central service. All in the name of security.
It's also a lot of assumptions. This probably is an attacker, or a wannabe at least. But you could be a student or researcher working through a cybersecurity course, and for some projects your search flow would look a lot like this.
I was also frustrated by this. I got about 25% of the way in and was annoyed that they still did such a poor job of communicating what their product is. An advertorial like this can often save the "And that's why Our Product is so great, it can protect you from attacks like these!" for the end, but here, where the article is about how merely installing their product gives Huntress the company full access to everything you do, it leaves me with more questions than answers.
As a corporate IT tool, I can see how Huntress ought to allow my IT department or my manager or my corporate counsel access to my browser history and everything I do, but I'm even still foggy on why Huntress grants themselves that level of access automatically.
Sure, a peek into what the bad guys do is neat, and the actual person here doesn't deserve privacy for his crimes, but I'd love a much clearer explanation of why they were able to do this to him and how if I were an IT manager choosing to deploy this software, someone who works at Huntress wouldn't be able to just pull up one of my employee's browser history or do any other investigating of their computers.
Their product is advertised as "Managed EDR". That usually means they employ a SOC that will review alerts and then triage and orchestrate responses accordingly. The use case here is when your IT manager chooses to deploy this and gives them full visibility into your assets, because your company wants to effectively outsource security response.
It's a relatively common model, with MDR and MSSP providers doing similar things. I don't see it as much with EDR providers though.
It pains me how this comment illustrates how ignorant most folks are of the consequences of installing software off the internet (even technically inclined folks that hang out on HN). How many of us have non-security software installed on our computers today that does exactly these things... but sells the information? Definitely a non-zero number!
If folks understood this better, there would be less reason for software like Huntress' EDR to exist.
I don't think anyone is unfamiliar with the consequences of installing potential malware. I think people are surprised that a seemingly(?) legit company is going off and having a little poke about on arbitrary computers based on nothing more than a hostname match. Then sharing screenshots on HN. I guess they're Canadian, but wow, does this seem to have CFAA written all over it?
Thanks for the feedback on not understanding what we sell from the homepage. We sell an Endpoint Detection and Response (EDR) product that we manage with our 24/7 SOC. To perform the investigations on potentially malicious activity, we can fetch files from the endpoint and review them. We log all of this activity and make it available to our customers. We are an extension of their security team, which means they trust us with this access. We’ve been doing this for more than 10 years and have built up a pretty good reputation, but I can see how that would freak some folks out. We also sell to businesses, so this is something that would be installed on a work computer.
>We are an extension of their security team, which means they trust us with this access
So if <bad actor> in this writeup read your pitch and decided to install your agent to secure their attack machine, it sounds like they "trusted you with this access". You used that access to surveil them, decide that you didn't approve of their illegal activity, and publish it to the internet.
Why should any company "trust you with this access"? If one of your customers is doing what looks to one of your analysts to be cooking their books, do you surveil all of that activity and then make a blog post about them? "Hey everyone here, it's Huntress showing how <company> made the blunder of giving us access to their systems, so we did a little surprise finance audit of them!"
Is it clear to users that their system is monitored and that they have consented to screengrabbing? Unless those screenshots were merely simulated from the Chrome history.
How was an individual user (in this article's case, a phishing sites developer) able to install your software and seemingly not notice the level of access they gave you to their computer?
Those things are what MTR/MDR solutions do. They track where you go, what processes are running, what other processes those spawn, etc. It allows tenants to see how an exploit progresses or is stopped. These systems can also do web filtering for the tenant, as well as keep logs of what sessions get established and so on. That's how these products work.
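To make "track what processes are running and spawn other processes" concrete, here's a toy sketch of the shape of spawn-event telemetry such agents report, and how an analyst replays a process lineage from it. Every hostname, PID, field name, and event here is invented for illustration; it is not Huntress's actual schema.

```python
import datetime
from dataclasses import dataclass, field

@dataclass
class ProcessEvent:
    """One process-creation record, roughly what an EDR sensor ships upstream."""
    hostname: str
    pid: int
    ppid: int       # parent process id, which is what lets us rebuild lineage
    image: str      # executable name
    cmdline: str
    ts: datetime.datetime = field(
        default_factory=lambda: datetime.datetime.now(datetime.timezone.utc))

def lineage(events, pid):
    """Walk parent links to reconstruct how a process came to be spawned."""
    by_pid = {e.pid: e for e in events}
    chain = []
    while pid in by_pid:
        e = by_pid[pid]
        chain.append(e.image)
        pid = e.ppid
    return list(reversed(chain))

# A classic suspicious chain: Office document spawning PowerShell.
events = [
    ProcessEvent("WKSTN-07", 400, 1, "explorer.exe", "explorer.exe"),
    ProcessEvent("WKSTN-07", 812, 400, "winword.exe", "winword.exe invoice.docm"),
    ProcessEvent("WKSTN-07", 990, 812, "powershell.exe", "powershell -enc ..."),
]
print(lineage(events, 990))  # ['explorer.exe', 'winword.exe', 'powershell.exe']
```

A real agent records far more (hashes, signers, network sessions), but the parent/child replay above is the core of how "we watched what his tooling did" works.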
If you work for a company that's bigger than a mom and pop, chances are very good that your IT department has this same level of access to any computer used in the organization. Huntress is basically an outsourced portion of the IT department for smaller companies that don't have their own 24/7 security team. It's a pretty common thing, with many vendors offering this type of service. Your work computer may have a similar product/service installed.
This makes total sense... Except who is the SMB in this case? It sounds like the person just downloaded this off the Internet; it wasn't pre-installed by IT. So it sounds like Huntress has full and complete access to whoever downloads their software to try it out/demo it... and isn't afraid to use this access for its own purposes, just doing a bit of poking around, because why not, when a hostname matches?
Their customers are companies. Almost every company of at least a certain size has one or more security tools installed on every host in the organization; these are called Endpoint Detection & Response (EDR) tools. Some marquee products are SentinelOne and CrowdStrike Falcon, but there are dozens. Huntress makes their own security tool but operates it for their customers as a service, which is called Managed Detection & Response (MDR). Everything on this page is legit.
Disturbing that they would be proud enough of spying on their users to post this. Threat intelligence is nearly as bad as the threats themselves. From CrowdStrike destroying computer systems to this type of spying on their own users, who wants to trust these people? What happened to holding Microsoft accountable for the security of their products?
So many of the comments here seem to be completely unaware of what an EDR does. Do none of you all work for companies with managed devices? There isn't anything abnormal here...
I work on a REM team in a SOC for a big finance company all you US people know. An employee can hardly fart in front of their corporate machine without us knowing about it. How do you all think managed cybersecurity works?
The point you seem to be (intentionally?) missing is that this wasn't installed by IT staff in an org onto an employee's computer; a user downloaded their software to trial it, and they just happened to take a peek at all of his activity based on tenuous evidence at best.
They might be under the impression that all this activity is looked at by someone for curiosity's sake, i.e. snooping. It isn't. People only look and discover if there is a reason (a critical alert or some legal action). No one goes snooping to see what sites Joe visited this morning for no reason at all.
For some value of 'spying', I guess. This is a product, as noted above, that, say, a corporate IT dept. is installing on your company-issued laptop. Which means the customer (that is, not you) is okay with this behavior; it is what they are paying for.
It’s not that we’re spying on users for fun. We’re analyzing the browser history to determine if it contains any sites that are associated with malicious activity. We definitely don’t care about your pr0n.
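In case it helps demystify "analyzing the browser history": the mechanics of that kind of check are mundane. Chrome keeps history in a SQLite file whose `urls` table includes `url` and `title` columns, so the check can be as simple as joining that table against an indicator list. A rough sketch follows; the blocklist domains are invented, and the demo runs against an in-memory stand-in for the real database rather than an actual Chrome profile.

```python
import sqlite3
from urllib.parse import urlparse

# Illustrative only: real indicators would come from a threat-intel feed.
BLOCKLIST = {"evil-phish.example", "stealer-c2.example"}

def flag_history(con):
    """Yield (url, title) rows whose domain appears on the blocklist."""
    for url, title in con.execute("SELECT url, title FROM urls"):
        if urlparse(url).hostname in BLOCKLIST:
            yield url, title

# Demo against an in-memory table mimicking the Chrome `urls` schema:
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE urls (url TEXT, title TEXT)")
con.executemany("INSERT INTO urls VALUES (?, ?)", [
    ("https://news.ycombinator.com/", "Hacker News"),
    ("https://evil-phish.example/kit.zip", "download"),
])
hits = list(flag_history(con))
print(hits)  # only the phishing-kit URL matches
```

Against a real profile you would open a copy of the `History` file (Chrome locks the live one) and match on registrable domains rather than exact hostnames, but the shape of the analysis is the same.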
I can rob people one at a time or I can go rob the bank. I can break into your clients one at a time or I can break into your "security" company.
Where is the product that keeps that data, your infrastructure, safe? Why aren't you selling that? Oh wait, there is no such thing, because it does not exist.
You are a compromise by a state-level actor waiting to happen. In fact, if you were compromised by a state-level actor, it would be in your company's best interest to cover it up rather than disclose it (as that would be the end of your organization).
It's the fox guarding the hen house.
At some point we're going to find out that a government (China, Russia, India...) used you, or one of your peers doing the same. This is "taking off my shoes at the airport" levels of stupid and ineffective.
I spend a fair bit of time talking to C-levels. The bulk of them use your services not because they think they are effective but because they know that they can point the finger at you when the shit hits the fan.
You're supposed to spy on an organization's users and machines for the benefit of the organization that has contracted you. That's not what you're doing here. You've taken an adversarial relationship with your (potential) customer, acting to harm them.
A lot of us are missing what actually happened here.
Some random person downloaded Huntress to try it out. Not a company. Not through IT. Just clicked "start trial" like you might with any software. Were they trying to figure out how to get around it? We have no idea!
Huntress employees then decided - based on a hostname that matched something in their private database - to watch everything this person did for three months. Their browser history, their work patterns, what tools they used, when they took breaks.
Then they published it.
The "but EDR needs these permissions!" comments are completely missing the point. Yeah, we know EDR is basically spyware. The issue is that Huntress engineers personally have access to trial user data and apparently just... browse it when they feel like it? Based on hostname matches???
Think about what they're saying: they run every trial signup against their threat intel database. If you match their criteria - which could be as weak as a hostname collision - their engineers start watching you. No warrant. No customer requesting it. No notification. Just "this looks interesting, let's see what they're up to."
Their ToS probably says something vague about "security monitoring" but I doubt it says "we reserve the right to extensively surveil individual trial users for months and publish the results if we think you're suspicious." And even if it did, that doesn't make it right or legal.
They got lucky this time - caught an actual attacker. But what about next time? What about the security researcher whose hostname happens to match? The pentester evaluating their product? Hell, what about corporate users whose hostname accidentally matches something in their database?
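The collision concern is easy to make concrete: Windows auto-generates machine names like DESKTOP-XXXXXX, so an exact match on one of those is a far weaker signal than a match on a distinctive custom name. Here's a toy triage sketch; the incident data, hostnames, and scoring labels are all invented, not how Huntress actually scores matches.

```python
import re

# Invented data: hostnames observed in earlier incidents, keyed to case IDs.
PRIOR_INCIDENTS = {
    "ACER-STEALER-01": ["case-1041", "case-1187"],  # distinctive custom name
    "DESKTOP-XK9QZ2": ["case-0993"],                # default-looking name
}

# Pattern of Windows auto-generated hostnames ("DESKTOP-" + random chars).
DEFAULT_NAME = re.compile(r"^DESKTOP-[0-9A-Z]{6,7}$")

def triage_enrollment(hostname):
    """Classify a trial enrollment's hostname against prior-incident data."""
    cases = PRIOR_INCIDENTS.get(hostname, [])
    if not cases:
        return "no match"
    # An auto-generated name can collide across unrelated machines,
    # so treat it as a weak indicator rather than an identification.
    return "weak match" if DEFAULT_NAME.match(hostname) else "strong match"

print(triage_enrollment("ACER-STEALER-01"))  # strong match
print(triage_enrollment("DESKTOP-XK9QZ2"))   # weak match
print(triage_enrollment("WKSTN-01"))         # no match
```

The article's claim hinges on the name being "unique", i.e. the strong-match branch; the worry in the parent comment is what happens to people who land in the weak-match branch.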
The fact that they thought publishing this was a good idea tells you a lot. This isn't some one-off investigation. This is apparently? how they operate.
Why would they NOT do this? They are a fucking cyber security company. It should be no surprise to anyone that a company that specializes in endpoint security software would be analyzing this shit non-stop, even for trial versions that users run. That's how their software works!
"Why wouldn't a locksmith make copies of all their customers' keys? They're a fucking locksmith company!"
Having technical capability doesn't create ethical permission.
The distinction between "can" and "should" is fundamental to data governance - a concept that exists precisely because unrestricted access to customer data, even for security purposes, creates massive ethical and legal problems.
Huntress didn't monitor a contracted customer's systems for that customer's benefit. They surveilled a trial user for three months based on a hostname match, then published the results. That's not "how their software works" - that's a choice about how to use the access their software provides.
If you genuinely can't see the difference between contracted security monitoring and opportunistic surveillance of trial users, you shouldn't be commenting on security practices at all, let alone so confidently.
> We knew this was an adversary, rather than a legitimate user, based on several telling clues. The standout red flag was that the unique machine name used by the individual was the same as one that we had tracked in several incidents prior to them installing the agent.
So in any other context, they probably wouldn't do any digging into the machine or user history, but they did this time because they already had high confidence of malicious use from this endpoint.
Cool insight into a (novice?) threat actor's operations and tooling. I personally knew nothing of "residential proxies" like LunaProxy, so I learned something new.
I personally would be careful about that sort of thing. I would imagine that few people would want to run a proxy on their home computer that can be accessed by others - and if they did, they'd probably have a specific reason for it, and thus would be looking for specific ways to make that proxy available to the people who they feel would want to use it.
So, I can only assume that a lot of residential machines that have proxies on them offered by companies like these have actually had those proxies installed by malware. The company themselves may not even be aware of this.
(I'm not saying that LunaProxy in particular is like this. I actually have never heard of LunaProxy before now, so the above may not even apply to it. Regardless, it's still worth applying caution.)
Reading through the article, it seems the "hacker" was pretty naive and junior, installing an EDR on his hacking box. Or it was just a way to distract you guys ;)
I caught that feeling through the whole article. Like, was this user really so distracted or inept as to forget he installed a Huntress trial, or was this all for some larger, more insidious reason, or a distraction?
After thinking about it for a while, I do not think it is such a big issue. The threat actor was probably an adversary to existing Huntress customers, and the EDR probably reacted to his tooling and mistakes.
When doing red team engagements, we do the same: install the same security solutions as the customer and work around them. Could that be what happened here?
That the analysts spotted him and were able to connect it to existing cases is just good craftsmanship.
I no longer feel that it’s relevant to discuss a red line here. Huntress just did their job.
Having worked in the computer security world for many years and been completely on board with the "it's good to open source attack tools so that everyone knows what can be done" ethos, it's still sometimes hard not to feel like a useful idiot when I see attackers operating with big stacks of almost entirely open-source tooling, now mature and full-featured enough to turn almost any skid into a decently effective procurer and vendor of stolen information with a bit of effort.
I've been through 2 offensive courses (SANS GPEN and Parrot Labs Offensive Methodology and Analysis) and yeah, that was the take I got even back then (5+ years ago). Everything we used was open source and near-fully functional. There was a lot of knowledge needed on the syntax for some tools, but otherwise it was insane to think how easily these could be used by a motivated person.
For some of them, it makes sense. Metasploit, Cobalt Strike, and similar tools are good because they can be used to give people a good idea of the impact of the vulnerabilities in their system as well as giving them knowledge of the TTPs that attackers use.
But some of these, like BloodHound, are not really telling you much you didn't know. They are tools to make exploiting access, whether authorized or otherwise, easier and more automated. Hell, even in the case of Cobalt Strike, they are doing their best to limit who can obtain it and chasing down rogue copies because it gets used for real attacks.
I'm not really saying anything should (or can) be done about this. Just ruminating on it, as after many years in the industry, seeing a mostly open-source stack used for every aspect of cybercrime sometimes surprises me at just how good a job we've done of equipping malicious actors. For all the high-minded talk of making everyone more secure, a lot of this just seems to be done for a mixture of bragging-rights ego and sharing things with each other to make our offensive-sec jobs a bit easier.
While amusing it probably isn't particularly informative.
A person like that obviously has extremely poor operational security and is therefore of low competence.
Competent actors likely utilize virtualization or, in cases where the software is adversarial and may detect virtualization, physical machines (e.g. cheap mini PCs) with isolated and managed networks (e.g. connections routed through a commercial VPN, or a residential proxy not under the control of the machine).
Also, styxmmarket doesn't appear to be in any way a dark-web marketplace/forum. It doesn't even have an onion address; it has a .com domain, something that should be easy for the authorities to seize. It's probably a honeypot of some kind.
A bunch of commenters are confused how this "blunder" even happened. I was too, except I recognized the company name. They have a history of making up or completely misunderstanding their own software. They make EDR products which trigger "events" except they don't really have the knowledge to triage them, so they come up with wild explanations for them that involve threat actors and anomalies which are not real. For example, earlier they posted this to their Twitter account: https://twitter.com/HuntressLabs/status/1865111713948852572
Anyone who knows anything about macOS knows that it is not possible to disable System Integrity Protection without rebooting into recovery (an environment that it is not possible to actually get events from). So their "detection" is just some random guy typing "csrutil disable" in their terminal and it doing absolutely nothing. I would not be surprised if there is some similar dumb explanation here that they missed, which would make for a substantially less interesting story.
I found that creepy too. Apparently `blunder == installing their software`
Well, let's be real: you don't decide one day "today is the day we read one user's entire history" and blammo, it's a hacker! Let's keep reading!
Indeed, this article makes them look bad. Seems completely tone deaf to release this as a puff piece about the product.
If you work in any mid to large enterprise, there is a tool like this installed on your laptop.
It was put there by your security team.
Huntress is a security company.
One of the tools they make is an Endpoint Detection and Response (EDR) product.
The kind of thing that goes on every laptop, server, and workstation in certain controlled environments (banks, government, etc.).
> couldn't exactly figure out from all the buzzwords what exactly is this product
I suspect this is deliberate.
Disturbing that they would be proud enough of spying on their users to post this. Threat intelligence is nearly as bad as the threats themselves. From crowdstrike destroying computer systems to this type of spying on their own users, who wants to trust these people? What happened to holding microsoft accountable for the security of their products?
So many of the comments here seem to be completely unaware of what an EDR does. Do none of you all work for companies with managed devices? There isn't anything abnormal here...
I work on a REM team in a SOC for a big finance company all you US people know. An employee can't hardly fart in front of their corporate machine without us knowing about it. How do you all think managed cyber security works?
The point you seem to be (intentionally?) missing is that this wasn't installed by the IT staff in an org to an employee's computer, an user downloaded their software to trial it an they just happened to take a peek on all of his activity based on tenuous evidence at best.
They might be under the impression that all this activity is looked at by someone for curiosity’s sake -snooping. It isn’t. People only look and discover if there is reason (a critical alert or some legal action). No one goes snooping to see what sites Joe visited this morning for no reason at all.
For some value of 'spying', I guess. This is a product, as noted above, that, say, a corporate IT dept. installs on your company-issued laptop. Which means the customer, that is, not you, is okay with this behavior; it is what they are paying for.
It's not that we're spying on users for fun. We're analyzing the browser history to determine if it contains any sites that are associated with malicious activity. We definitely don't care about your pr0n
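For what it's worth, the kind of check being described here is mechanical, not a human reading your history. A minimal, hypothetical sketch (the domains and the blocklist are made up; this is not Huntress's actual logic):

```python
# Hypothetical sketch: flag visited URLs whose host matches a
# threat-intel blocklist. All domain names here are illustrative.
from urllib.parse import urlparse

MALICIOUS_DOMAINS = {"evil-c2.example", "stealer-panel.example"}

def flag_history(visited_urls):
    """Return the subset of visited URLs whose host (or any parent
    domain of the host) appears on the blocklist."""
    hits = []
    for url in visited_urls:
        host = urlparse(url).hostname or ""
        parts = host.split(".")
        # Check the host itself plus every parent domain,
        # e.g. "a.b.example" also checks "b.example" and "example".
        candidates = {".".join(parts[i:]) for i in range(len(parts))}
        if candidates & MALICIOUS_DOMAINS:
            hits.append(url)
    return hits

history = [
    "https://news.ycombinator.com/item?id=1",
    "http://login.evil-c2.example/panel",
]
print(flag_history(history))  # only the second URL is flagged
```

A real pipeline would match against a large, curated indicator feed rather than a hardcoded set, but the point is the same: ordinary browsing produces no hits and no one looks at it.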
Jesus.
I can rob people one at a time or I can go rob the bank. I can break into your clients one at a time or I can break into your "security" company.
Where is the product that keeps that data, your infrastructure, safe? Why aren't you selling that? Oh wait, there is no such thing, as it does not exist.
You are a compromise by a state level actor waiting to happen. In fact, if you were compromised by a state level actor, it is in your company's best interest to cover it up rather than disclose it (as that would be the end of your organization).
It's the fox guarding the hen house.
At some point we're going to find out that a government, China, Russia, India.... used you, or one of your peers doing the same. This is taking-off-my-shoes-at-the-airport levels of stupid and ineffective.
I spend a fair bit of time talking to C-levels. The bulk of them use your services not because they think they are effective but because they know that they can point the finger at you when the shit hits the fan.
But you are spying on users?
I wouldn’t lead with this in the marketing. It’s entirely disturbing.
You're supposed to spy on an organization's users and machines for the benefit of the organization that has contracted you. That's not what you're doing here. You've taken an adversarial relationship with your (potential) customer, acting to harm them.
Presumably legal, but morally gray.
A lot of us are missing what actually happened here.
Some random person downloaded Huntress to try it out. Not a company. Not through IT. Just clicked "start trial" like you might with any software. Were they trying to figure out how to get around it? We have no idea!
Huntress employees then decided - based on a hostname that matched something in their private database - to watch everything this person did for three months. Their browser history, their work patterns, what tools they used, when they took breaks.
Then they published it.
The "but EDR needs these permissions!" comments are completely missing the point. Yeah, we know EDR is basically spyware. The issue is that Huntress engineers personally have access to trial user data and apparently just... browse it when they feel like it? Based on hostname matches???
Think about what they're saying: they run every trial signup against their threat intel database. If you match their criteria - which could be as weak as a hostname collision - their engineers start watching you. No warrant. No customer requesting it. No notification. Just "this looks interesting, let's see what they're up to."
Their ToS probably says something vague about "security monitoring" but I doubt it says "we reserve the right to extensively surveil individual trial users for months and publish the results if we think you're suspicious." And even if it did, that doesn't make it right or legal.
They got lucky this time - caught an actual attacker. But what about next time? What about the security researcher whose hostname happens to match? The pentester evaluating their product? Hell, what about corporate users whose hostname accidentally matches something in their database?
The fact that they thought publishing this was a good idea tells you a lot. This isn't some one-off investigation. This is, apparently, how they operate.
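On the hostname-collision point: for default-style Windows machine names, an accidental match against a large tracked-name database is not far-fetched. A hypothetical back-of-envelope (all numbers are assumptions, not Huntress's figures), treating names like "DESKTOP-XXXXXXX" as 7 random characters from a 36-symbol alphabet:

```python
# Hypothetical estimate of accidental hostname collisions.
# Assumptions: default names use 7 random chars from A-Z0-9 (36 symbols),
# drawn uniformly and independently; "tracked" and "trials" counts are
# made-up scale numbers for illustration.
ALPHABET = 36
SUFFIX_LEN = 7
SPACE = ALPHABET ** SUFFIX_LEN  # ~7.8e10 possible default names

def collision_probability(tracked, trials):
    """P(at least one trial user's hostname matches a tracked name)."""
    p_miss_per_trial = 1 - tracked / SPACE
    return 1 - p_miss_per_trial ** trials

# With 10k tracked names and 1M trial signups, the odds of at least
# one purely accidental match are already around 12%.
print(collision_probability(10_000, 1_000_000))
```

The article does say the machine name was "unique" (i.e. not a default-pattern name), which would make a coincidence much less likely, but it illustrates why a hostname match alone is a weak signal.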
> caught an actual attacker. But what about next time?
What about the time before this where it wasn't an attacker, so they didn't write an article about it, and so we never found out about it?
Why would they NOT do this? They are a fucking cyber security company. It should be no surprise to anyone that a company that specializes in endpoint security software would be analyzing this shit non-stop, even for trial versions that users run. That's how their software works!
"Why wouldn't a locksmith make copies of all their customers' keys? They're a fucking locksmith company!"
Having technical capability doesn't create ethical permission.
The distinction between "can" and "should" is fundamental to data governance - a concept that exists precisely because unrestricted access to customer data, even for security purposes, creates massive ethical and legal problems.
Huntress didn't monitor a contracted customer's systems for that customer's benefit. They surveilled a trial user for three months based on a hostname match, then published the results. That's not "how their software works" - that's a choice about how to use the access their software provides.
If you genuinely can't see the difference between contracted security monitoring and opportunistic surveillance of trial users, you shouldn't be commenting on security practices at all, let alone so confidently.
I don't understand how "we actively spy on our customers and blog about it" is a viable marketing strategy..?
Presumably this:
> We knew this was an adversary, rather than a legitimate user, based on several telling clues. The standout red flag was that the unique machine name used by the individual was the same as one that we had tracked in several incidents prior to them installing the agent.
So in any other context, they probably wouldn't do any digging into the machine or user history, but they did this time because they already had high confidence of malicious use from this endpoint.
The cybersecurity industry is dominated by companies who sell this as their "cyber threat intelligence" and "real-time protection" special sauce.
Cool insight into a (novice?) threat actor's operations and tooling. I personally knew nothing of "residential proxies" like LunaProxy, so I learned something new.
I personally would be careful about that sort of thing. I would imagine that few people would want to run a proxy on their home computer that can be accessed by others - and if they did, they'd probably have a specific reason for it, and thus would be looking for specific ways to make that proxy available to the people who they feel would want to use it.
So, I can only assume that a lot of residential machines that have proxies on them offered by companies like these have actually had those proxies installed by malware. The company themselves may not even be aware of this.
(I'm not saying that LunaProxy in particular is like this. I actually have never heard of LunaProxy before now, so the above may not even apply to it. Regardless, it's still worth applying caution.)
Reading through the article, the "hacker" was pretty naive and junior, installing an EDR on his hacking box. Or it was just a way to distract you guys ;)
I caught that feeling through the whole article. Like, was this user really that distracted or inept to forget he installed a Huntress trial, or was this all for some larger, more insidious reason, or distraction?
Ahh yes, 4D chess
Unrelated story; how politician gave us a look into their financial adventures.
I am curious where the red line is.
Any criminal activity or just behavior that the analysts find interesting?
After thinking of it for a while, I do not think it is such a big issue. The threat actor was probably an adversary to existing huntress customers and the EDR probably reacted to his tooling and mistakes.
When doing red team engagements, we do the same, install same security solutions as the customer and work around it. It could be what happened here?
That the analysts spotted him and were able to connect it to existing cases is just good craftsmanship.
I no longer feel that it’s relevant to discuss a red line here. Huntress just did their job.
Having worked in the computer security world for many years and been completely on board with the "it's good to open source attack tools so that everyone knows what can be done" ethos, it's still sometimes hard not to feel like a useful idiot when I see attackers operating with big stacks of almost entirely open source tooling, now mature and full-featured enough to turn almost any skid into a decently effective procurer and vendor of stolen information with a bit of effort.
I've been through 2 offensive courses (SANS GPEN and Parrot Labs Offensive Methodology and Analysis) and yeah, that was the take I got even back then (5+ years ago). Everything we used was open source and near-fully functional. There was a lot of knowledge needed on the syntax for some tools, but otherwise it was insane to think how easily these could be used by a motivated person.
For some of them, it makes sense. Metasploit, Cobalt Strike, and similar tools are good because they can be used to give people a good idea of the impact of the vulnerabilities in their system as well as giving them knowledge of the TTPs that attackers use.
But some of these, like BloodHound, are not really telling you much you didn't know. They are tools to make exploiting access, whether authorized or otherwise, easier and more automated. Hell, even in the case of Cobalt Strike, the vendor does their best to limit who can obtain it and chases down rogue copies because they get used for real attacks.
I'm not really saying anything should (or can) be done about this. Just ruminating about it; after many years in the industry, seeing a list of a mostly open source stack used for every aspect of cybercrime sometimes surprises me at just how good a job we've done of equipping malicious actors. For all the high-minded talk of making everyone more secure, a lot of things just seem to be done for a mixture of bragging rights, ego, and sharing things with each other to make our offensive sec jobs a bit easier.
While amusing it probably isn't particularly informative.
A person like that obviously has extremely poor operational security and is therefore of low competence.
Competent actors likely utilize virtualization or, in cases where the software is adversarial and may detect virtualization, physical machines (e.g. cheap mini PCs) with isolated and managed networks (e.g. connections routed through a commercial VPN or a residential proxy) not under the control of the machine.
Also styxmmarket doesn't appear to be in any way a dark web marketplace/forum. It doesn't even have an onion address? It has a .com domain, something that should be easy for the authorities to seize. Probably is a honeypot of some kind.
A bunch of commenters are confused how this "blunder" even happened. I was too, except I recognized the company name. They have a history of making up or completely misunderstanding their own software. They make EDR products which trigger "events" except they don't really have the knowledge to triage them, so they come up with wild explanations for them that involve threat actors and anomalies which are not real. For example, earlier they posted this to their Twitter account: https://twitter.com/HuntressLabs/status/1865111713948852572
Anyone who knows anything about macOS knows that it is not possible to disable System Integrity Protection without rebooting into recovery (an environment that it is not possible to actually get events from). So their "detection" is just some random guy typing "csrutil disable" in their terminal and it doing absolutely nothing. I would not be surprised if there is some similar dumb explanation here that they missed, which would make for a substantially less interesting story.
> Like most good stories, this one starts in the middle and works its way back and forth
Don't tell me your story is "good," let me read it and I'll be the judge of that.