Comment by nilslindemann

2 days ago

AFAIK, what this dude did (running a script which tries every password and actually accessing personal data of other people) is illegal in Germany. The reasoning is: just because the door of a car that is not yours is open, you have no right to sit inside and start the motor. Even if you just want to honk the horn to inform the guy that he has left the door open.

https://www.nilsbecker.de/rechtliche-grauzonen-fuer-ethische...

> running a script which tries every password

This isn't directly applicable to your point, but I need to correct this. They weren't guessing tons of passwords, they were trying one password on a large number of accounts.

For clarification, here's the actual quote from the article describing the process:

> I verified the issue with the minimum access necessary to confirm the scope - and stopped immediately after.

No notion of a script; "every password" out of a set containing a single default password is open to interpretation; no mention of data downloads (the wording suggests otherwise); no mention of the actual number of accesses (the text suggests a low number, as in "minimum access necessary to confirm the scope").

Still, some data was accessed, but based on the information provided in the article we don't know to what extent or what it actually was. There's a point to be made about how far any confirmation of a seemingly sound theory should go at a given moment. But in order to determine whether this is about a stalled number generator or rather a systematic, predictable scheme, there's probably no way around a minimal test. We may still debate whether a security alert should include dimensions like this (the scope of the vulnerability), or should be confined to a superficial observation only.

  • True, but the article also says:

    > That's it. No rate limiting. No account lockout.

    To me, if he confirmed that there’s no rate limiting on the auth API, this implies a scripted approach checking at least tens (if not more) of accounts in rapid succession.

    • Granted. I guess, unless it's applied very aggressively, assessing the existence of rate limiting may require some sort of automation (and probably some heuristics: how many data points do you actually need? Do you have to retrieve any data at all while looking for a single signal? The article doesn't say.) The same goes for lockout.

      On the other hand, as mentioned already, all that's required is really looking for a return code, not for any data. Is accessing an API endpoint the same as retrieving data? Is there proof or evidence of intent for the latter? I guess much remains to be defined, especially if it's not so much about protecting reputation as it is about protecting data and ensuring trust, and the intent is to protect and secure this in the first place.
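The "look only for a return code, not for any data" idea above can be sketched. This is a hypothetical illustration, not anything from the article: the status sequences are invented, and a real assessment would of course depend on the actual service's behavior. The point is that throttling or lockout can be inferred from HTTP status codes alone, without touching any account data:

```python
# Hypothetical illustration: inferring whether an auth endpoint rate-limits,
# using only HTTP status codes -- no response bodies, no account data.
# The status sequences below are made up for the sketch.

def shows_rate_limiting(status_codes):
    """Return True if a sequence of login-attempt status codes suggests
    throttling (HTTP 429 Too Many Requests) or a lockout-style refusal
    (HTTP 403) after repeated rapid attempts."""
    return any(code in (429, 403) for code in status_codes)

# A service with no throttling answers every rapid attempt the same way:
unprotected = [200] * 20
# A protected one starts refusing after a few attempts:
protected = [200, 200, 200, 429, 429, 429]

print(shows_rate_limiting(unprotected))  # False
print(shows_rate_limiting(protected))    # True
```

Even this tiny heuristic shows the tension discussed above: you need *some* number of requests (data points) before the absence of a 429/403 means anything.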

Maybe the law should be changed then. The companies that have this level of disregard for security in 2026 are not going to change without either a good samaritan or a data breach.

  • He didn't have to crack the site. He could have reported up to that point.

    We need a change in law but more to do with fining security breaches or requiring certification to run a site above X number of users.

Hopefully no criminals turn up to do the illegal thing.

  • You don't need to retrieve other people's data to demonstrate the vulnerability.

    It's readily evident that people keep an account with the default password on the site for some amount of time, some of them indefinitely. You know what data is in the account (as the person who creates the accounts) and you know the IDs are incremental. If you want to beat the dead horse of "there exist users who don't configure a strong password when not required to", you can do the login request and never use the retrieved access/session token (or use a HEAD request to avoid getting body data but still see the 200 OK for the login). OP evidenced that they went beyond that and saw at least the date of birth of a user on there, by saying "I found underage students on your site" in the email to the organization.

    If laws don't make it illegal to do this kind of thing, how would you differentiate between the white hat and the black hat? The former can choose to do the minimum set of actions necessary to verify and report the weakness, while the latter writes code to dump the whole database. That's a choice.

    To be fair, not everyone is aware that this line exists. It's common to prove the vulnerability, and such code does that as well. It's also sometimes extra work (setting a custom request method, say) to limit what the script retrieves, and not the default kind of code you're used to writing for your study/job. Going too far happens easily in that sense. So the rules should be applied leniently, and the circumstances and subsequent actions of the hacker matter. But I can see why the German rules are this way, and the Dutch ones are similar, for example.

    • > You don't need to retrieve other people's data to demonstrate the vulnerability.

      If you’re reporting to a nontechnical team…which sometimes you are…sometimes you do?

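The "HEAD request, never use the token" approach from the comment above can be sketched as follows. Everything here is invented for illustration (the URL, endpoint, and form field names are hypothetical, and `example.invalid` is a reserved non-resolving domain); the sketch only builds the request, and nothing goes over the wire unless `urlopen()` were actually called:

```python
# Hypothetical sketch: a login probe that would reveal only the status line.
# The endpoint and field names are invented; nothing is sent here.
import urllib.parse
import urllib.request

def build_login_probe(user_id, default_password="0000"):
    """Construct (but do not send) a HEAD request for a login attempt.
    A HEAD response carries headers and a status code but no body, so no
    account data would be downloaded even on a successful 200 OK."""
    data = urllib.parse.urlencode(
        {"user": str(user_id), "password": default_password}
    ).encode()
    return urllib.request.Request(
        "https://example.invalid/api/login", data=data, method="HEAD"
    )

req = build_login_probe(42)
print(req.method)    # HEAD
print(req.full_url)  # https://example.invalid/api/login
```

Whether a given backend honors HEAD on a login endpoint is its own question, but the design point stands: the probe is built to observe a status code, not to retrieve data.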

> is illegal in Germany

Germany is not exactly well-known for having reasonable IT security laws

  • It's not necessarily just Germany. Lots of countries have laws that basically say "you cannot log in to systems that you (should) know you're not allowed to". Technical details such as "how difficult is the password to guess" and "how badly designed is the system at play" may be used in court to argue for or against the severity of the crime, but hacking people in general is pretty damn illegal.

    He also didn't need to run the script to try more than one or maybe two accounts to verify the problem. He dumped more of the database than he needed to, and that's something the law doesn't particularly like.

    People don't like it when they find a well-intentioned lock specialist standing in their living room explaining they need better locks. Plenty of laws apply the same logic to digital "locksmiths".

    In reality, it's pretty improbable in most places for the police to bother with reports like these. There have been cases in Hungary where prestigious public projects and national operations were full of security holes with the researchers sued as a result, but that's closer to politics than it is to normal police operations.

    • And people wonder how the US can just turn off the electric grid of another country on demand...with laws like these, I expect there are local 6 year olds who can do the same.

    • The main problem I have with the real-world analogies we use for hacking is that we assume that, like a homeowner, these companies ultimately care about security and are trying in good faith to make secure systems.

      They're not. They're malicious actors themselves. They will expose the absolute maximum amount of data they can with the absolute maximum amount of parties they can to make money. They will also collect the absolute maximum amount of data. Your screen is 1920 by 1080? Cool, record that, we can sell that.

      All the common sense practices we were taught in school about data security, they do the opposite. And, to top it off, they don't actually want to fix ANYTHING because doing so threatens their image, their ego, and potentially their bottom line.

where did they mention a script to try passwords? all accounts apparently have the same default password

I agree. You have to know when to stop.

No expert, but I assume anything you do that is good-faith usage of the site is OK. And take screenshots and report the potential problem. But making a Python script to pull down data once you know? That is like getting in that car.

A real-life example of what would be fine: you walk past a bank at midnight when it is unstaffed and the doors are open, so you have access to the lobby (and it isn't just the night ATM area). You call the police on the non-emergency number and let them know.

This is exactly what I thought. The person did something illegal by accessing random accounts, and no explanation makes this better. He could have asked his diving students for their consent, could have asked past students for their consent to access their accounts, but random accounts you cannot access.

Since this is a Maltese company I would assume different rules apply, but no clue how this is dealt with in Malta.

How the company reacted is bad, no question, but I can't gloss over how the person did the initial "recon".

It's illegal in the US, too. This is an incredibly stupid thing to do. You never, ever test on other people's accounts. Once you know about the vulnerability, you stop and report it.

Knowing the front door is unlocked does not mean you can go inside.

  • Don't comment on topics you know nothing about. Nothing this guy did is illegal in the US. Everything this guy did followed standard procedures for reporting security issues. The company apparently didn't understand anything about running a secure software operation and did everything wrong. And therein lies the problem: without civil penalties for this type of bad behavior, it will continue. In the US, a lawyer doing this would risk disbarment, as this type of behavior dances on the edge of violating whistleblower laws.

    • I know exactly what I'm talking about, I'm a security engineer lol. Who has worked with plenty of lawyers.

      Yes, this is absolutely illegal. The CFAA is pretty fuzzy when it comes to vuln reporting but accessing other people's accounts without their permission is a line you don't cross. Having a badly secured site is usually not a crime, but hacking one is.

      Several jobs ago, some dumbass tested a bunch of API keys that people had accidentally committed on github and then "reported" the vulnerability to us.

      The in-house atty I was working with was furious and the guy narrowly avoided legal trouble. If he'd just emailed us about it, we'd've given him something.

      Also, whistleblower laws are for employees, not randos doing dumb shit online.