I found a Vulnerability. They found a Lawyer

8 hours ago (dixken.de)

Three thoughts from someone with no expertise.

1) If you make legal disclosure too hard, the only way you will find out is via criminals.

2) If other industries worked like this, you could sue an architect who discovered a flaw in a skyscraper. The difference is that knowledge of a bad foundation doesn’t inherently make a building more likely to collapse, while knowledge of a cyber vulnerability is an inherent risk.

3) Random audits by passers-by are way too haphazard. If a website can require my real PII, I should be able to require that the PII is kept secure. I’m not sure what the full list of industries would be, but insurance companies should be categorically required to have a cyber audit, and the laws mandating that should also protect white hats from lawyers and allow class actions from all users. That would change the incentives so that the most basic vulnerabilities are gone, and software engineers become more economical than lawyers.

  • In other industries there are professional engineers: people who have legal accountability. I wonder if the CS world will move that way, especially with AI, since those engineers are the ones who sign things off.

    For people unfamiliar: most engineers aren't professional engineers. Your average engineer is still held to more legal standards and is legally obligated to push back against management when they see danger or ethics violations, but that's a high bar, and very few ever get in legal trouble, only the most egregious cases. Professional engineers are the ones who check all the plans and sign off on inspections. They're more like a supervisor, someone who can look at the whole picture. They get paid a lot more for their work, and they're essential to making sure things are safe. They also end up with a lot of power/authority, though at the cost of liability. Think of how in the military a senior doctor can overrule all others (you've probably seen this in a movie); your average military doctor or nurse can't do that, and even for the senior ones it's rare and very circumstantial.

    • You'd be surprised how many SEs would love for this to happen. The biggest reason, as you said, is being able to push back.

      Having worked in low-level embedded systems that could be considered "system critical", it's a horrible feeling knowing what's in that code and having no actual recourse other than quitting (which I have done on a few occasions because I did not want to be tied to that disaster waiting to happen).

      I actually started a legal framework and got some basic bills together (mostly wording) and presented them to many of my colleagues. All agreed it was needed and loved it, and a few lawyers said the bill/framework was sound ... it even had some carve-outs for "mom-n-pops" and some other "obvious" things (like allowing for a transition period).

      Why didn't I push it through? 2 reasons:

      1.) I'd likely be blackballed (if not outright killed) because "the powers that be" (e.g. large corps in software) would absolutely -hate- this ... having actual accountability AND having to pay higher wages.

      2.) Doing what I wanted would require federal intervention, and the climate has not been ripe for new regulations, let alone governing bodies, in well over a decade.

      Hell, I even tried to get my PE in Software, but right as I was going to start the process, the PE for Software was removed from my state (and isn't likely to ever come back).

      I 100% agree we should even have a PE for Software, but it's not likely to happen any time soon because Software without accountability and regulation makes WAY too much money ... :(


  • Regarding your 2), in other industries and engineering professions, the architect (or civil engineer, or electrical engineer) who signed off carries insurance, and often is licensed by the state.

    I absolutely do not want to gatekeep beginners from being able to publish their work on the open internet, but I often wonder if we should require some sort of certification and insurance for large business sites that handle personal info or money. There'd be a Certified Professional Software Engineer who has to sign off on it, and thus maybe has the clout to push back on being forced to implement whatever dumb idea an MBA has to drive engagement or short-term sales.

    Maybe. It's not like it's worked very well lately for Boeing or Volkswagen.

    •   > I absolutely do not want to gatekeep beginners from being able to publish their work on the open internet
      

      FWIW there is no barrier like that for physical engineers, even though, as you note, professional engineers exist. Most engineers aren't professional engineers, and that's why the barrier doesn't exist. We could probably follow a similar framing. If anything, licensing already gets attached to even random software more often than it does to the physical engineers' equivalents.

    • It's kinda wild that you don't need to be a professional engineer to store PII. The GDPR and other frameworks for PII usually do have a minimum size (in # of users) before they apply, which would help hobbyists. The same could apply for the licensure requirement.

      But also maybe hobbyists don't have any business storing PII at scale just like they have no business building public bridges or commercial aircraft.


  • There are jurisdictions (and cultures) where truth is not an absolute defence against defamation. In other words, it's one thing to disclose the issue to the authorities, it's another to go to the press and trumpet it on the internet. The nail that sticks out gets hammered down.

    Given that this is Malta in particular, the author probably wants to avoid going there for a bit. It's a country full of organized crime and corruption where people like him would end up with convenient accidents.

    •   > it's one thing to disclose the issue to the authorities, it's another to go to the press and trumpet it on the internet.
      

      At least in the US there is a path of escalation. Usually if you have first contacted those who have authority over you, then you're fine. There are exceptions in both directions, where you aren't fine or where you can skip that step. Government work is different; for example, Snowden probably doesn't get whistleblower protection because he didn't first leak to Congress. It's arguable, though, and IANAL.

I use a different email address for every service. About 15 years ago, I began getting spam at my diversalertnetwork email address. I emailed DAN to tell them they'd been breached. They responded with an email telling me how to change my password.

I guess I should feel lucky they didn't try to have me criminally prosecuted.

  • Same with me. I started to get spam from the email I used for a Portuguese airline. They didn't even respond.

Since the author is apparently afraid to name the organisation in question, it seems the legal threats have worked perfectly.

  • Or maybe in the diving community, "Maltese insurance company for divers" is about as subtle as "Bird-themed social network with blue checkmarks".

  • If you follow the jurisdictional trail in the post, the field narrows quickly. The author describes a major international diving insurer, an instructor-driven student registration workflow, GDPR applicability, and explicit involvement of CSIRT Malta under the Maltese National Coordinated Vulnerability Disclosure Policy. That combination is highly specific.

    There are only a few globally relevant diving insurers. DAN America is US-based. DiveAssure is not Maltese. AquaMed is German. The one large diving insurer that is actually headquartered and registered in Malta is DAN Europe. Given that the organization is described as being registered in Malta and subject to Maltese supervisory processes, DAN Europe becomes the most plausible candidate based on structure and jurisdiction alone.

> Instead, I offered to sign a modified declaration confirming data deletion. I had no interest in retaining anyone’s personal data, but I was not going to agree to silence about the disclosure process itself.

Why sign anything at all? The company was obviously not interested in cooperation, but in domination.

Last year I found a vulnerability in a large annual event's ticket system, allowing me to download tickets from other users.

I had bought a ticket, which arrived as a link by email. The URL was something like example.com/tickets/[string]

The string was just the order number in base 64. The order number was, of course, sequential.
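
To make that concrete, here's a minimal sketch (with a made-up order number; the real URLs weren't shared) of why base64 adds no protection: anyone holding their own ticket URL can compute a neighbour's.

    import base64

    # Made-up order number; the real URL scheme was only described, not shared.
    order_number = 10452
    for n in (order_number, order_number + 1):
        token = base64.b64encode(str(n).encode()).decode()
        print(f"https://example.com/tickets/{token}")  # second URL would be someone else's ticket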

I emailed the organizer and the company that built the order system. They immediately fixed it... Just kidding. It's still wide open and I didn't hear anything from them.

I'm waiting for this year's edition. Maybe they'll have fixed it.

> The security research community has been dealing with this pattern for decades: find a vulnerability, report it responsibly, get threatened with legal action. It's so common it has a name - the chilling effect.

Governments and companies talk a big game about how important cybersecurity is. I'd like to see some legislation to prevent companies and governments [1] behaving with unwarranted hostility to security researchers who are helping them.

[1] https://news.ycombinator.com/item?id=46814614

Incrementing user IDs and a default password for everyone — so the real vulnerability was assuming the company had any security to disclose to in the first place.

At this point 'responsible disclosure' just means 'giving a company a head start on hiring a lawyer before you go public.'

AFAIK, what this dude did - running a script which tries every password and actually accessing other people's personal data - is illegal in Germany. The reasoning: just because the door of a car that isn't yours is open, you have no right to sit inside and start the motor, even if you just want to honk the horn to inform the owner that he left the door open.

https://www.nilsbecker.de/rechtliche-grauzonen-fuer-ethische...

  • I agree. You have to know when to stop.

    I'm no expert, but I assume anything you do that is good-faith usage of the site is OK. Take screenshots and report the potential problem. But writing a Python script to pull down data once you know? That is like getting into that car.

    A real-life example of what would be fine: you walk past a bank at midnight when it is unstaffed and the door is open, so you have access to the lobby (and it isn't just the night ATM area). You call the police on the non-emergency number and let them know.

  • Maybe the law should be changed then. The companies that have this level of disregard for security in 2026 are not going to change without either a good samaritan or a data breach.

    • He didn't have to crack the site. He could have reported up to that point.

      We need a change in law, but more to do with fining companies for security breaches or requiring certification to run a site above X number of users.

  • Hopefully no criminals turn up to do the illegal thing.

    • You don't need to retrieve other people's data to demonstrate the vulnerability.

      It's readily evident that people keep an account with the default password on the site for some amount of time, some of them indefinitely. You know what data is in each account (as the person who creates the accounts) and you know the IDs are incremental. You can perform the login request and never use the retrieved access/session token (or use a HEAD request to avoid receiving body data while still seeing the 200 OK for the login) if you want to beat the dead horse of "there exist users who don't configure a strong password when not required to". OP evidently went beyond that and saw at least the date of birth of a user, given the "I found underage students on your site" line in the email to the organization.
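
      A minimal sketch of that restraint (endpoint and field names are invented here): only the status code is inspected, and the session token is never used or stored.

          import requests

          # Hypothetical login endpoint and field names, for illustration only
          resp = requests.post(
              "https://portal.example/login",
              data={"user_id": "12346", "password": "default-password"},
              allow_redirects=False,
          )
          print(resp.status_code)  # 200/302 on a known default proves the weakness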

      If laws don't make it illegal to do this kind of thing, how would you differentiate between the white hat and the black hat? The former can choose to do the minimum set of actions necessary to verify and report the weakness, while the latter writes code to dump the whole database. That's a choice

      To be fair, not everyone is aware that this line exists. It's common to prove the vulnerability, and this code does that as well. It's also sometimes extra work (setting a custom request method, say) to limit what the script retrieves, and just not the default kind of code you're used to writing for your study/job. Going too far happens easily in that sense. So the rules are to be taken leniently, and the circumstances and subsequent actions of the hacker matter. But I can see why the German rules are this way, and the Dutch ones are similar, for example.


> the portal used incrementing numeric user IDs

> every account was provisioned with a static default password

Hehehe. I failed countless job interviews for mistakes much less serious than that. Yet someone got the job while making worse mistakes, and there are plenty of such systems in production handling real people's data.

  • Literally found the same issue in a password system, on top of passwords being clear text in the database... cleared all passwords, expanded the db field to hold a longer hash (the pw field was like 12 chars), set up a "recover password" feature, and emailed all users before end of day.

    My own suggestion to anyone reading this... version your password hashing mechanics so you can upgrade hashing methods as needed in the future. I usually use "v{version}.{salt}.{hash}", where the salt and resulting hash components are base64 strings. I could use multiple db fields for the same thing, but would rather not... I could also use JSON or some other wrapper, but the dot-separated base64 feels good enough.
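
    A minimal sketch of that dot-separated scheme, assuming PBKDF2 from the standard library as the per-version mechanics (the actual algorithm and parameters behind each version are an assumption, not the parent's code):

        import base64
        import hashlib
        import hmac
        import os

        # version -> (hash name, iteration count); bump the version to upgrade
        PARAMS = {1: ("sha256", 100_000), 2: ("sha512", 600_000)}

        def hash_password(password: str, version: int = 2) -> str:
            algo, iters = PARAMS[version]
            salt = os.urandom(16)
            digest = hashlib.pbkdf2_hmac(algo, password.encode(), salt, iters)
            b64 = lambda b: base64.b64encode(b).decode()
            return f"v{version}.{b64(salt)}.{b64(digest)}"

        def verify_password(password: str, stored: str) -> bool:
            version, salt_b64, hash_b64 = stored.split(".")
            algo, iters = PARAMS[int(version[1:])]
            salt = base64.b64decode(salt_b64)
            digest = hashlib.pbkdf2_hmac(algo, password.encode(), salt, iters)
            return hmac.compare_digest(digest, base64.b64decode(hash_b64))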

    I have had instances where hashing was indeed upgraded later, and a password was (re)hashed at login with the new encoding if the version had changed... after a given time frame, I'd notify users and wipe old passwords to force the recovery process.

    FWIW, I really wish there were better guides for moderately good implementations of login/auth systems out there. Too many applications for things like SSO, etc. just become a morass of complexity that isn't always necessary. I did write a nice system for a former employer that is somewhat widely deployed... I tried to get permission to open-source it, but couldn't get buy-in over "security concerns" (the irony). Maybe someday I'll make another one.

    • If you need to version your password hashes, then you are likely doing them incorrectly and not using a proper computationally hard hashing algorithm.

      For example, with unsuitable algorithms like MD5 or SHA-256, you get bare digests with no version field:

          import hashlib; print(f"MD5:      {hashlib.md5(b'password').hexdigest()}")
          print(f"SHA-256:  {hashlib.sha256(b'password').hexdigest()}")
      
      
          MD5:      5f4dcc3b5aa765d61d8327deb882cf99
          SHA-256:  5e884898da28047151d0e56f8dc6292773603d0d6aabbdd62a11ef721d1542d8
      

      But if you use a proper password hash, then your hashing library will automatically take care of versioning your hash, and you can just treat it as an opaque blob:

          import argon2; print(f"Argon2:   {argon2.PasswordHasher().hash('password')}")
          import bcrypt; print(f"bcrypt:   {bcrypt.hashpw(b'password', bcrypt.gensalt()).decode()}")
          from passlib.hash import scrypt; print(f"scrypt:   {scrypt.hash('password')}")
      
      
          Argon2:   $argon2id$v=19$m=65536,t=3,p=4$LZ/H9PWV2UV3YTgF3Ixrig$aXEtfkmdCMXX46a0ZiE0XjKABfJSgCHA4HmtlJzautU
          bcrypt:   $2b$12$xqsibRw1wikgk9qhce0CGO9G7k7j2nfpxCmmasmUoGX4Rt0B5umuG
          scrypt:   $scrypt$ln=16,r=8,p=1$/V8rpRTCmDOGcA5hjPFeCw$6N1e9QmxuwqbPJb4NjpGib5FxxILGoXmUX90lCXKXD4
      

      This isn't a new thing, and as far as I'm aware, it's derived from the old Apache htpasswd format (although no one else uses the leading colon):

          $ htpasswd -bnBC 10 "" password
          :$2y$10$Bh67PQAd4rqAkbFraTKZ/egfHdN392tyQ3I1U6VnjZhLoQLD3YzRe
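
      For what it's worth, with argon2-cffi the upgrade path is explicit too; a small sketch (my addition, not the library docs verbatim):

          import argon2

          ph = argon2.PasswordHasher()
          stored = ph.hash("password")
          ph.verify(stored, "password")      # raises VerifyMismatchError on failure
          if ph.check_needs_rehash(stored):  # True once the stored parameters are outdated
              stored = ph.hash("password")   # rehash at login time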

    • Several web frameworks, including Rails, Laravel, and Symfony, will automatically upgrade password hashes if the algorithm or work factor has changed since the password was last hashed.

  • Years ago I worked for a company that bought another company. Our QA folks were asked to give their site a once-over. What they found is still the butt of jokes in my circle of friends/former coworkers.

    * account ids are numeric, and incrementing

    * included in the URL after login, e.g. ?account=123456

    * no authentication on requests after login

    So anybody moderately curious can just increment to account_id=123457 to access another account. And then try 123458. And then enumerate the space to see if there is anything interesting... :face-palm: :cold-sweat:

    • I did some work ~15 years ago for a consulting company. The company pushes their own custom open-source CMS into most projects - built on top of MongoDB and written by the CEO. He’s a lovely guy, and a good coder. But he’s totally self-taught at programming and he has blind spots a mile wide. And he hates having his blind spots pointed out. He came back from a React conference once thinking the React team invented functional programming.

      A friend at the company started poking around in the CMS. Turns out the login system worked by giving the user a cookie containing the MongoDB document ID of the user they’re logged in as. Not signed or anything. Just the document ID in plain text. Document IDs are (or at least were) mostly sequential, so you could just enumerate document IDs in your cookie to log in as anyone.
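
      For contrast, a minimal sketch of the missing ingredient: HMAC-signing the cookie value server-side so enumerated document IDs get rejected (secret and format are made up here):

          import hashlib
          import hmac

          SECRET = b"server-side-secret"  # illustration only; load from config in practice

          def sign(doc_id: str) -> str:
              mac = hmac.new(SECRET, doc_id.encode(), hashlib.sha256).hexdigest()
              return f"{doc_id}.{mac}"

          def verify(cookie: str) -> str | None:
              doc_id, _, mac = cookie.rpartition(".")
              expected = hmac.new(SECRET, doc_id.encode(), hashlib.sha256).hexdigest()
              return doc_id if hmac.compare_digest(mac, expected) else None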

      The CEO told us it wasn’t actually a security vulnerability, then insisted we didn’t need to assign a CVE or tell any of our customers and users. He didn’t want to fix the code. When pushed, he wanted to slip a fix into the next version under the cover of night and not tell anyone, preferably hidden in a big commit with lots of other stuff.

      It’s become a joke between us too. He gives self-taught programmers a bad rep. These days, whenever I hear a product was architected by someone who’s self-taught, I always check how the login system works. It’s often enlightening.

When you are acting in good faith and the person/organization on the other end isn't, you aren't having a productive discussion or negotiation, just wasting your own time.

The only sensible approach here would have been to cease all correspondence after their very first email/threat. The nation of Malta would survive just fine without you looking out for them and their online security.

  • Agree - yet security researchers and our wider community also need to recognize that vulnerabilities are foreign to most non-technical users.

    Cold-approach vulnerability reports quite frankly scare non-technical organizations. It might be like someone you've never met telling you the door on your back bedroom balcony can be opened with a dummy key, and they know because they tried it.

    Such organizations don't know what to do. They're scared, thinking maybe someone also took financial information, etc. Internal strife and lots of discussions usually occur, with lots of wild speculation (as the norm), before any communication back occurs.

    It just isn't the same as what security-forward organizations do, so it often comes as a surprise to engineers when a "good deed" seems to be taken as malice.

    • > Such organizations don't know what to do.

      Maybe they should simply use some common sense? If someone could and would steal valuables, it seems highly unlikely that he/she/it would notify you before doing it.

      If they would want to extort you, they would possibly do so early on. And maybe encrypt some data as a "proof of concept" ...

      But some organizations seem to think that their lawyers will remedy every failure and that's enough.


  • Cynical. Worst part? It's the best one can do in this situation. Can't imagine continuing any further interaction with such an organization.

If this were in Costa Rica, the appropriate way would be to contact PRODHAB about the leak of personal information and the Costa Rica CSIRT ( csirt@micitt.go.cr ).

Here, all databases with personal information must be registered there, and the data must be kept secure.

  • > If this were in Costa Rica, the appropriate way would be to contact PRODHAB about the leak of personal information and the Costa Rica CSIRT ( csirt@micitt.go.cr ).

    They did. It's in the article. Search for 'CSIRT'. It's one of the key points of the story.

I suspect that the direction of these situations often depends on how your initial email is routed internally in these organizations. If they go to a lawyer first, you will get someone who tries to fix things with the application of the law. If it goes to an engineer first, you will get someone who tries to fix it with an application of engineering. If it were me, I would have avoided involving third party regulators in the initial contact at least.

  • > If it were me, I would have avoided involving third party regulators in the initial contact at least.

    I'm surprised to see this take mentioned only once in this thread. I think people here are not aware of the sheer amount of fraud in the "bug bounty" space. As soon as you have a public product, you get at least one of these attempts per week: someone trying to shake you down over a vulnerability they'll only disclose after you pay them something. Typically you just report them as spam and move on.

    But if I got one that had some credible evidence of them reporting me to a government agency already, I'd immediately get a lawyer to send a cease and desist.

    It seems like OP was trying to be a by-the-book, law-abiding citizen, but the sheer amount of fraud in this space makes a genuine report really hard to distinguish from a cold-email shakedown.

  • Yes, this routing is common. A German energy company recommended by a climate organization had a somewhat similar vulnerability and no security contact, so I call them up and... mhm, yes, okay, is that l-e-g-a-l-@-company-dot-de? You don't want me to just send it to the IT department that can fix it? Okay, I see, they will put it through, yes, thank you, bye for now!

    It was a bit of an "oh god, what am I getting into again" moment (also considering I don't speak legal-level German), but I knew they had nothing to stand on if they did file a complaint or court case, so I followed through, and they just thanked me for the report in the end and fixed it reasonably promptly. No stickers or even a discount as a customer, but oh well, no lawsuit either :)

    • In the early internet days, you could email root@company.com about a website bug, and somebody might reply.

I’ve worked in I.T. for nearly 3 decades, and I’m still astounded by the disconnect between security best practices, often with serious legal muscle behind them, and the reality of how companies operate.

I came across a pretty serious security concern at my company this week. The ramifications are alarming. My education, training and experience tells me one thing: identify, notify, fix. Then when I bring it to leadership, their agenda is to take these conversations offline, with no paper trail, and kill the conversation.

Anytime I see an article about a data breach, I wonder how long these vulnerabilities were known and ignored. Is that just how business is conducted? It appears so, for many companies. Then why such a focus on security in education, if it has very little real-world application?

By even flagging the issue and the potential fallout, I’ve put my career at risk. These are the sort of things that are supposed to lead to commendations and promotions. Maybe I live in fantasyland.

  • > I came across a pretty serious security concern at my company this week. The ramifications are alarming. […] Then when I bring it to leadership, their agenda is to take these conversations offline, with no paper trail, and kill the conversation.

  • I was in a very similar position some years ago. After a couple of rounds of “finish X for sale Y, then we'll prioritise those issues”, which I was young and scared enough to let happen, and pulling on heartstrings (“if we don't get this sale some people will have to go; can we risk that to [redacted] and her new kids?”), I just started fixing the problems and ignoring other tasks. I only got away with the insubordination because there were things I was the bus-count-of-one on at the time, and when they tried to butter me up with the promise of some training courses, I had already taken & passed some of those exams and had the rest booked in (the look of “good <deity>, he has an escape plan and is close to acting on it” on the manager's face during that conversation was wonderful!).

    The really worrying thing about that period is that a client had a pen-test done on their instance of the app, and it passed. I don't know how, but I know I'd never trust that penetration testing company (they have long since gone out of business, I can't think why).

    • I wish I could recall the name of a pen test company I worked with when I wrote my auth system... They were pretty great and found several serious issues.

      At least compared to our internal digital security group, who couldn't fathom that "your test is wrong for how this app is configured; that path leads to a different app and its default behavior" meant a canned test for a PHP exploit wasn't actually a failure. The app wasn't PHP; it was an SPA that always delivered the same default page unless you were in the /auth/* route.

      After that my response became: show me an actual exploit with an actual data leak, and I'll update my code instead of your test.

    • An older company I worked for went out of their way to find a pen tester that would basically rubberstamp everything and give them a pass. I actually uncovered major issues with the software during that process, to the point where it was unusable. Major components were severely out of date and open to attack. Other parts didn't even work as advertised. I didn't stick around much longer.

  • > By even flagging the issue and the potential fallout, I’ve put my career at risk.

    Simple as that. Not your company? Not your problem. Notify, move on.

    • I read that post as them talking about the company they were working for. In that case, an exploit of an unfixed security issue could very much affect them: generally, as part of the company, if the fallout is enough to massively harm the business; or specifically, if they had not properly documented their concerns, so that “we didn't know” could be the excuse from above and they could be blamed for not adequately communicating the problem.

      For an external company “not your company, not your problem” for security issues is not a good moral position IMO. “I can't risk the fallout in my direction that I'm pretty sure will result from this” is more understandable because of how often you see whistle-blowers getting black-listed, but I'd still have a major battle with the pernickety prick that is my conscience¹ and it would likely win out in the end.

      [1] oh, the things I could do if it wasn't for conscience and empathy :)

    • Their website says they're a freelance cloud architect.

      The article doesn't say exactly, but if they used their company e-mail account to send the e-mail it's difficult to argue it wasn't related to their business.

      They also put "I am offering" language in their e-mail, which I'm sure triggered the lawyers into interpreting this a different way. Not a choice of words I would recommend in a case like this.

  • > These are the sort of things that are supposed to lead to commendations and promotions. Maybe I live in fantasyland.

    I had a bit of a feral journey into tech, poor upbringing => self taught college dropout waiting tables => founded iPad point of sale startup in 2011 => sold it => Google in 2016 to 2023

    It was absolutely astounding to go to Google, and find out that all this work to ascend to an Ivy League-esque employment environment...I had been chasing a ghost. Because Google, at the end of the day, was an agglomeration of people, suffered from the same incentives and disincentives as any group, and thus also had the same boring, basic, social problems as any group.

    Put more concretely, couple vignettes:

    - Someone with ~5 years experience saying approximately: "You'd think we'd do a postmortem for this situation, but, you know how that goes. The people involved think they're an organization-wide announcement that you're coming for them, and someone higher ranked will get involved and make sure A) it doesn't happen or B) you end up looking stupid for writing it."

    - A horrible design flaw that made ~50% of users take 20 seconds to get a query answered was buried, because a manager involved was the one who wrote the code.

    • I've seen into some moderately high levels of "prestigious" business and government circles, and I've yet to find any level at which everyone suddenly becomes as competent and sharp as I'd have expected them to be as a child and young adult (before I saw what I've seen and learned that the norm is morons and liars running everything and operating terrifically dysfunctional organizations... everywhere, apparently, regardless of how high up the hierarchy you go). And actually, not only is there no step at which they suddenly become so; people don't even gradually trend brighter or generally better, on average, as you move "upward"... at all! Or perhaps only weakly so.

      Whatever the selection process is for gestures broadly at everything, it's not selecting for being both (hell, often not for either) able and willing to do a good job, so far as what the job is apparently supposed to be. This appears to hold for just about everything, reputation and power be damned. Exceptions of high-functioning small groups or individuals in positions of power or prestige exist, as they do at "lower" levels, but aren't the norm anywhere as far as I've been able to discern.


    • > A horrible design flaw that made ~50% of users take 20 seconds to get a query answered was buried, because a manager involved was the one who wrote the code.

      Maybe not when it is as much as 20 seconds, but an old manager of mine would save fixing something like that for a “quick win” at some later time! He would even have artificial delays put in, enough to be noticeable and perhaps reported but not enough to be massively inconvenient, so we could take them out during the UAT process - it didn't change what the client finally got, but it seemed to work, especially if they thought they'd forced us to spend time on performance issues (those talking to us on the client side could report this back up their chain as a win).


    • I would get fired at Google within seconds then. I’m more than happy to shine a light on bullshit like that.

> vulnerability in the member portal of a major diving insurer

What are the odds an insurer would reach for a lawyer? They probably have several on speed dial.

This is somewhat related, but I know of a fairly popular iOS application for iPads that stores passwords either in plaintext or encrypted (not as digests) because they will email it to you if you click Forgot Password. You also cannot change it. I have no experience with Apple development standards, so I thought I'd ask here if anyone knows whether this is something that should be reported to Apple, if Apple will do anything, or if it's even in violation of any standards?

  • If anything it’s just a violation of industry expectations. You as a consumer just don’t need to use the product.

  • FWIW, some types of applications may be better served by encryption than hashing for password access. Email is one of them: given the varying ways to authenticate, it gets pretty funky to support. This is why in things like O365 you get a separate password issued for use with legacy email apps.

  • >whether this is something that should be reported to Apple, if Apple will do anything

    Lmao Apple will not do anything for actual malware when reported with receipts, besides sending you a form letter assuring you "experts will look into it, now fuck off" then never contact you again. Ask me how I know. To their credit, I suspected they ran it through useless rudimentary automated checks which passed and they were back in business like a day later.

    If your expectation is they will do something about shitty coding practices half the App Store would be banned.

    • > Apple will not do anything for actual malware when reported with receipts, besides sending you a form letter assuring you "experts will look into it, now fuck off"

      Ask while you are in an EU country, request an appeal, and initiate out-of-court dispute resolution.

      Or better yet: let the platform suck, and let this be the year of the linux desktop on iPhone :)

the NDA demand with a same-day deadline is such a classic move. makes it clear they were more worried about reputation than fixing anything.

  • Reply: "sorry, before reaching out to you I already notified a major media organization with a 90 day release notice"

    • In case someone takes this as actual advice, I think this comment is best accompanied with a warning that this gets them to call a lawyer for sure ^^'

      (OP mentions a lawyer in the title, but the post only speaks of a data protection officer, which is a very different role and doesn't even represent the organization's interests but, instead, the users', at least under GDPR where I'm from)

  • Typical shakedown tactic. I used to have a boss who would issue these ridiculous emails with lines like "you agree to respond within 24 hours else you forfeit (blah blah blah)"

Contacting the authorities led the company to hire lawyers - for communication with the data protection authority.

The lever lawyers have to “make it go away” is “the law says so.” They’re not going to beg for mercy, they’re not going to invite you to coffee, no “bug bounty.” From their perspective, if they arm-wrestle the researcher into an NDA, they have retroactively patched the only known breach.

Perhaps it’s not prosocial or best practice, but you can clearly see how this went down from the company perspective, with a subject organization that has a tenuous grasp of cyber security concepts.

  • I think we should stop making excuses for shitty practices. I can understand why they might do it; I can also see there are much better ways to deal with this situation.

I found a vulnerability recently in a major online platform through HackerOne which could allow an attacker to cheaply DoS the service. I wrote up a detailed report (by hand) showing exactly how to reproduce and even explained exactly how a specially crafted request to a critical service took 10 seconds to get a response (just with a very simple, easy to reproduce example)... I then explained exactly how this vector could be scaled up to a DDoS...

They acknowledged it as a legitimate issue and marked my issue as 'useful info' but refused to pay me anything; they said that they would only pay if I physically demonstrate that it leads to a disruption of service; basically baiting me into doing something illegal! It was obvious from my description that this attack could easily be scaled up. I wasn't prepared to literally bring down the service to make my point. They didn't even offer the lowest tier of $200.

So bad. AI slop code is taking over the industry, vulnerabilities are popping up all over the place, so much so that companies are refusing to pay out bounties to humans. It's like neglect is being rewarded and diligence is being punished.

Then you read about how small the bug bounties are, even for established security researchers. It doesn't seem like a great industry. HackerOne seems like a honeypot to waste hackers' time. They reward a tiny number of hackers with big payouts to create PR to waste as many hackers' time as possible. Probably setting them up and collecting dirt on them behind the scenes. That's what it feels like at least.

  • This is sort of my issue with bug bounty programs: it can easily start to feel like extortion when a 'good samaritan' demands money. But they promised it to you by having a bug bounty program, then denied it. You feel rightfully cheated when the bug is legitimate, and doubly so when they acknowledge it. But demanding the money feels weird as well.

    I try to go into these things with zero expectations. Having a mediating party involved from the start is a bit like OP immediately CC'ing the CERT: extra legal steps in the disclosure process. Mediating parties are usually a pain to work with, and if it's deemed "out of scope" then they typically refuse to even notify the vulnerable party (or acknowledge to you that it hasn't been disclosed). I don't want a pay day, I just want them to fix their damn bug, but there's no way to report it besides through this middle person. Literally every time I've had to use a reporting procedure (like HackerOne), it has resulted in tone-deaf responses from the company or complete gatekeeping. All of those bugs exist to this day. Every time I can email a human directly, it gets fixed, and on some occasions they send a thank-you like some swag and chocolates, a t-shirt, something

    Based on what I hear in the community, my HackerOne experiences have been outliers, but it might still be more effective (if you're not looking to collect bounty money) to talk to organizations directly where possible and avoid the ones that use HackerOne or another mediation party

One way to improve cybersecurity is to let cybercriminals loose like predators hunting prey. Companies need to feel fear that any vulnerability in their systems is going to be weaponized against them. Only then will they appreciate an email telling them about a security issue which has not been exploited yet.

Another comment says the situation was fake. I don't know, but to avoid running afoul of the authorities, it's possible to document this without actually accessing user data without permission. In the US, the Computer Fraud and Abuse Act and various state laws are written extremely broadly, and they were written at a time when most access was either direct dial-up or internal. The meaning of "abuse" can be twisted to cover rewriting a URL to access the next user, or inputting a user ID that is not authorized to you.

Generally speaking, I think case law has avoided shooting the messenger, but if you use your unauthorized access to find PII on minors, you may be setting yourself up for problems, regardless of whether the goal is merely dramatic effect. You can, instead, document everything and hypothesize about the potential risks of the vulnerability without exposing yourself to accusations of wrongdoing.

For example, the article talks about registering divers. The author could ask permission from the next diver to attempt to set their password without reading their email, and that would clearly show the vulnerability. No kids "in harm's way".

  • Instead of understanding all of this, and when it does or does not apply, it's probably better to disclose vulnerabilities anonymously over Tor. It's not worth the hassle of being forced to hire a lawyer, just to be a white hat.

    • Part of the motivation of reporting is clout and reputation. That sounds harsh or critical but for some folks their reputation directly impacts their livelihood. Sure the data controller doesn't care, but if you want to get hired or invited to conferences then the clout matters.


This is extremely disappointing. The insurer in question has a very good reputation within the dive community for acting in good faith and for providing medical information free of charge to non-members.

This sounds like a cultural mismatch with their lawyers. Which is ironic, since the lawyers in question probably thought of themselves as being risk-averse and doing everything possible to protect the organisation's reputation.

  • I find often that conversations between lawyers and engineers are just two very different minded people talking past each other. I'm an engineer, and once I spent more time understanding lawyers, what they do, and how they do it, my ability to get them to do something increased tremendously. It's like programming in an extremely quirky programming language running on a very broken system that requires a ton of money to stay up.

    • Could you write an HN post about that? It would be worth reading.

      And are you only talking about cybersecurity disclosure, liability, patent applications... And the scenario when you're both working for the same party, or opposing parties?


    • I'm curious to hear your take on the situation in the article.

      Based on your experience, do you think there are specific ways the author could have communicated differently to elicit a better response from the lawyers?


  • > This sounds like a cultural mismatch with their lawyers.

    Note that the post never mentions lawyers, only the title. It sounds to me like chatgpt came up with two dozen titles and OP thought this was the most dramatic one. In the post, they mention it was a data protection officer who replied. This person has the user's interests as their goal and works for the organization only insofar as that they handle GDPR-related matters, including complaints. If I'm reading it right, they're supposed to be somewhat impartial per recital 97 of the GDPR: "data protection officers [...] should be in a position to perform their duties and tasks in an independent manner"

There should exist a vulnerability disclosure intermediary. It could function as a barrier to protect the scientist/researcher/enthusiast and do everything by the book for the different countries.

  • MSRC (Microsoft Security Response Center) — https://msrc.microsoft.com/

    They’ll close a report as “no action” if the issue isn’t related to Microsoft products. That said, in my experience they’ve been a reasonable intermediary for a few incidents I’ve reported involving government websites, especially where Microsoft software was part of the stack in some way.

    For example, I’ve reported issues in multiple countries where national ID numbers are sequential. Private companies like insurers, pension funds, and banks use those IDs to look up records, but some of them didn’t verify that the JSON Web Token (JWT) used for the session actually belonged to the person whose national ID was being queried. In practice, that meant an attacker could enumerate IDs and access other citizens’ financial and personal data.
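
    A hedged sketch of the check that was missing, using PyJWT (the claim name, key, and db helper are invented for illustration; the real services' code is unknown):

        import jwt  # PyJWT

        PUBLIC_KEY = "-----BEGIN PUBLIC KEY-----..."  # placeholder, not a real key

        def lookup_record(token: str, national_id: str, db):
            claims = jwt.decode(token, PUBLIC_KEY, algorithms=["RS256"])
            if claims["sub"] != national_id:  # the comparison that was absent
                raise PermissionError("token does not belong to this citizen")
            return db.fetch(national_id)  # hypothetical data-access helper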

    Reporting something like that directly to a government agency can be intimidating, so I reported it to Microsoft instead, since these organizations often use Azure AD B2C for customer authentication. The vulnerability itself wasn’t in Microsoft’s products, but MSRC’s reactive engineers still took ownership of triage and helped route it to the right contacts in those agencies through their existing partnerships.

  • National CERTs usually take up this role. I presume OP could have anonymously disclosed to the Maltese CERT, whom they already CC'd, though you'd have to check with them specifically to see if they offer that. Hackerspaces also often do this, especially if you're a member but probably also if not and they have faith that your actions were legal (best case, you can demonstrate exactly what you did, like by showing the script you ran, as OP could)

I've said before that we need strong legal protections for white-hat and even grey-hat security researchers or hackers. As long as they report what they have found and follow certain rules, they need to be protected from any prosecution or legal consequences. We need to give them the benefit of the doubt.

The problem is this is literally a matter of national security, and currently we sacrifice national security for the convenience of wealthy companies.

Also, we all have our private data leaked multiple times per month. We see millions of people having their private information leaked by these companies, and there are zero consequences. Currently, the companies say, "Well, it's our code, it's our responsibility; nobody is allowed to research or test the security of our code because it is our code and it is our responsibility." But then, when they leak the entire nation's private data, it's no longer their responsibility. They're not liable.

As security issues continue to become a bigger and bigger societal problem, remember that we are choosing to hamstring our security researchers. We can make a different choice and decide we want to utilize our security researchers instead, for the benefit of all and for better national security. It might cause some embarrassment for companies though, so I'm not holding my breath.

  • > we need strong legal protections for white-hat and even grey-hat security researchers or hackers.

    I have a radical idea which goes even further: we should have legally mandated bug bounties. A law which says that if someone makes a proper disclosure of an actual exploitable security problem, then your company has to pay out. Ideally we could scale the payout based on the importance of the infrastructure in question: vulnerabilities with little lasting consequence would pay little, while serious vulnerabilities with the potential for society-wide physical harm could pay out a few percent of the yearly revenue of the given company. For example, hacking the high score in a game would pay only a little; a vulnerability which can collapse the electric grid or remotely command a car would pay a king’s ransom. Enough to incentivise a cottage industry to find problems, hopefully resulting in a situation where the companies in question find it more profitable to find and fix the problems themselves.

    I’m sure there is potential for a lot of unintended consequences. For example, I’m not sure how we could handle insider threats. On one hand, insider threats are real and companies should be protecting against them as best they can. On the other hand, it would be perverse to force companies to pay developers for vulnerabilities the developers themselves intentionally created.

> No ..., no ..., no .... Just ...

Am I the only one who can't stand this AI slop pattern?

  • Between that and 'Read that again', my heart kinda sank as I went. When, if ever, will this awful trend end?

  • It's one thing for your blog post to be full of faux writing style, but that letter to the organization too... oof. I wouldn't enjoy receiving that from someone who attached a script that dumps all users from my database, when the email, as well as my access logs, confirms they ran it.

I find these tales of lawyerly threats completely validate the hacker's actions. They reported the bug to spur the company to resolve it. The company's reaction all but confirms that reporting it to them directly would not have been productive. Their management lacks good stewardship. They are not thinking about their responsibility to their customers and employees.

I think the problem is the process. Each country should have a reporting authority and it should be the one to deal with security issues.

So you never report to the actual organization but to the security organization, like the author did. And they would be better equipped to deal with this, maybe also validate how serious the issue is, and assign a reward as well.

So if you are a researcher, you report your finding and can't be sued or bullied by the organization that is at fault in the first place.

  • If the government weren't so famous for also locking up people who report security issues, I might agree, but boy, they are actually worse.

    Right now the climate in the world is that whistleblowers get their careers and livelihoods ended. This has been going on for quite a while.

    The only practical advice is to ignore that it exists, refuse to ever admit to having found a problem, and move on. Leave zero paper trail or evidence. It sucks, but it's career-ending to find these things and report them.

  • That’s almost what we already have with the CVE system, just without the legal protections. You report the vulnerability to the NSA, let them have their fun with it, then a fix is coordinated to be released much further down the line. Personally I don’t think it’s the best idea in the world, and entrenching it further seems like a net negative.

    • This is not how CVEs work at all. You can be pretty vague when registering it. In fact they’re usually annoyingly so and some companies are known for copy and pasting random text into the fields that completely lead you astray when trying to patch diff.

      Additionally, MITRE doesn’t coordinate a release date with you. They can be slow to respond sometimes but in the end you just tell them to set the CVE to public at some date and they’ll do it. You’re also free to publish information on the vulnerability before MITRE assigned a CVE.

    • Yeah, something like that. Nothing too heavy, just enough that individuals don't have to deal with evil corps on their own.

  • Does it have to be a government? Why not a third-party non-profit? The white hat gets shielded, and the non-profit has credible lawyers, which makes suing it harder than suing individuals.

    The idea is to make it easier to fix the vulnerability than to sue to shut people up.

    For credit assignment, the person could direct people to the non-profit's website, which would confirm discovery by CVE without exposing too many details that would allow the company to come after the individual.

    This business of going to the company directly and hoping they don’t sue you is bananas in my opinion.

  • This would only work if governments and companies cared about fixing issues.

    Also, it would prevent researchers from gaining public credit and reputation for their work. This seems to be a big motivator for many.

Unless the company has a bug-bounty program, never ever tell them about vulnerabilities. You'll get ignored at best and have legal issues at worst. Instead, sell them on the black market. Or better yet, just give them away for free if you don't care about money. That's how companies will eventually learn to at least have an official vulnerability disclosure policy.

Maintaining cybersecurity insurance is a big deal in the US; I don't know about Europe. So vulnerability disclosure is problematic for data controllers because it threatens their insurance and premiums. Today much of enterprise security is attestation-based, and vulnerability disclosure potentially exposes companies to accusations of insurance fraud. If they stated that they maintained certain levels of security, and a disclosure demonstrably proves they do not, that is grounds for dropping a policy or even a lawsuit to reclaim paid funds.

So it sort of makes sense that companies would go on the attack because there's a risk that their insurance company will catch wind and they'll be on the hook.

  • It's not generally good financial advice to pay the overhead of an insurance company for costs you can easily pay yourself (likewise, things like phone insurance and appliance warranty extensions won't make your device last longer, and the insurer knows better than you what premium covers the average repair cost plus a profit margin). If you have a decent understanding of where the line is between vulnerability disclosure and criminal activity, fronting any court fees and a little bit of lawyer time (iff you can afford these out of pocket) until you're acquitted should be the better route, assuming anyone ever even takes you to court.

  • Heh, what insurance company you use should be public information, and bug finders should report to them.

Malta has been mentioned? As a person living here, I can say that the workflow of the government here is bad. Same as in every other place, I guess.

By the way, I have a story from when I accidentally hacked an online portal at our school. It didn't go far and I was "caught", but anyway. This is how we learn to be more careful.

I believe in every single system like that it's fairly possible to find a vulnerability. Nobody cares about them, and the people who make those systems don't have enough skill to do it right. Data is going to be leaked. That's the unfortunate truth. It gets worse with the arrival of AI: since it has zero understanding of what it is actually doing, it will make mistakes that cause more data leaks.

Even if you don't consider yourself an evil person, would you stay the same knowing a real security vulnerability? Who knows. Some might take advantage. Some won't and will still be punished despite doing everything the "textbook way".

  • Being more careful is an option, or owning up to it and saying "hey, I just did this and noticed this thing unexpectedly happened; apparently you have an XSS here" (or whatever it was). In most cases, the organization you're reporting to is happy about this up-front information, and in the exceptional situation where someone decides to take it to court, there's a clear paper trail (backed up by access and email logs) of what actions were taken and why, making it obvious you did nothing wrong.

> No exploits, no buffer overflows, no zero-days. Just a login form, a number, and a default password that was set for each student on creation.

ai;dr

This is AI slop.

Use your own words!

I would rather read the original prompt!

  • Also in the email towards the organization. It comes across as condescending to the receiver, a "let me dumb this down to key points for you", well, as LLMs do. A bit off-putting, and the story itself is also common to the point of being trite. Heck, nothing even ended up happening in this case: no lawyer is mentioned outside of the title, no police complaint was filed, no civil case started, just the three emails saying he should agree not to talk about this. Scary as those demands can be (I have been on the butt end of such things as well, and every time I wish I had used Tor instead of a CIOT-traceable IP address as soon as my "huh, that's odd system behavior" senses went off; responsible disclosure just gives you grey hairs in the 10% of cases that respond like this, even if so far 0% actually filed a police complaint or court case).

Wish they named them. Usually I don't recommend it. But the combination of:

A) in EU; GDPR will trump whatever BS they want to try

B) no confirmation affected users were notified

C) aggro threats

D) nonsensical threats, sourced to a Data Privacy Officer with seemingly 0 scruples and little experience

Due to B), there's a strong responsibility rationale.

Due to rest, there's a strong name and shame rationale. Sort of equivalent to a bad Yelp review for a restaurant, but for SaaS.

  • DAN Europe has a flow as discussed in the article, and both the foundation and the regulated insurance branch are registered in Malta.

  • EU GDPR has very little enforcement. So while the regulation in theory prevents that, in practice you can just ignore it. If you're lucky a token fine comes up years down the line.

The same-day deadline on the NDA is the tell. If they had a real legal position, they wouldn't need a signature before close of business. That's a pressure tactic designed to work on someone who doesn't know any better. The fact that he pushed back and nothing happened confirms it was a bluff.

[flagged]

  • Not sure what the name of your complex is, maybe groveling deference to legalese? Whatever it is, I'm sure I would have applied it to your entire country of origin if I knew where you're from, and if I were developmentally around the age of twelve.

    He did everything exactly by the book and in the end was even nice enough to not publish the company's name, despite the legal threat being bullshit and him being entirely in the right.

[flagged]

  • How do you know? Some of the text has a slightly LLM-ish flavour to it (e.g. the numbered lists) but other than that I don’t see any solid evidence of that

    Edit: I looked into it a bit and things seems to check out, this person has scuba diving certifications on their LinkedIn and the site seems real and high-effort. While I also don’t have solid proof that it’s not AI generated either, making accusations like this based on no evidence doesn’t seem good at all

    • Not them, but the formatting screams LLM to me: random "bolding" (rendered on this website as blue text) of phrases, the heading layout, the lists at the end (bullet point followed by bolded text), common repeats of LLM-isms like "A. Not B". None of these alone proves it, but combined they provide strong evidence.

      You can also see that the format and pacing differ greatly from posts on their blog made before LLMs were mainstream, e.g. https://dixken.de/blog/monitoring-dremel-digilab-3d45

      While I wouldn't go so far as to say the post is entirely made up (it's possible the underlying story is true) - I would say that it's very likely that OP used an LLM to edit/write the post.


  • The HN comment section's new favourite sport: trying to guess if an article was generated by an LLM. It's completely pointless. Why not focus on what's being said instead?

    • I thought the same thing. With the rate LLMs are improving, it's not going to be too much longer before no one can tell.

      I also enjoy all the "vibes" people list out for why they can tell, as though there was any rhyme or reason to what they're saying. Models change and adapt daily so the "heading structure" or "numbered list" ideas become outdated as you're typing them.

  • What is the evidence that the content is entirely LLM-generated, rather than just LLM-assisted writing of a genuine story?

  • You know I had a thoughtful comment written in response to this that wouldn’t post because your comment got flagged to death when I tried to submit it!

    Your firebrand attitude is doing a disservice to everyone who takes vibe hunting vibecraft seriously!

    The intended audience doesn’t even care that this is LLM-assisted writing. Whether the narrative is affected by AI is second to the technical details. This is technical documentation communicated through a narrative, not personal narrative about someone’s experience with a technical problem. There’s a difference!

    What are you in this for?!

  • Can you share how you confirmed this is LLM-generated? I review vulnerability reports submitted by the general public, and it seems very plausible based on my experience (as someone who both reviews reports and has submitted them), hence why I submitted it. I am also very allergic to AI slop and did not get the slop vibe, nor would I knowingly submit slop posts.

    I assure you, the incompetence in both securing systems and operating these vulnerability management systems and programs is everywhere. You don't need an LLM to make it up.

    (my experience is roughly a decade in cybersecurity and risk management, ymmv)

    • The headers alone are a huge giveaway. It spams repetitive sensational writing tropes like "No X. No Y. No Z." and "X. Not Y" numerous times. Incoherent usage of bold type all throughout the article. Lack of any actually verifiable concrete details. The giant list of bullet points at the end that reads exactly like helpful LLM guidance. Many signals throughout the entire piece, but I don't have time to do a deep dive. It's fine if you don't believe me; I'm not suggesting the article be removed, just giving a heads-up for people who prefer not to read generated articles.

      Regarding your allergy, my best guess is that it is generated by Claude, not ChatGPT, and they have different tells, so you may be sensitive to one but not the other. Regarding plausibility, that's the thing that LLMs excel at. I do agree it is very plausible.


  • I'm very sensitive to this but disagree vehemently.

    I saw one or two sigils (ex. a little eager to jump to lists)

    It certainly has real substance and detail.

    It's not, like, generic LinkedIn post quality.

    You could tl;dr it to "autoincrementing user ids and a default password set = vulnerability, and the company responded poorly." and react as "Jeez, what a waste of time, I've heard 1000 of these stories."

    I don't think that reaction is wrong, per se, and I understand the impulse. I feel this sort of thing more and more as I get older.

    But, it fitting into a condensed structure you're familiar with isn't the same as "this is boring slop." Moby Dick is a book about some guy who wants revenge, Hamlet is about a king who dies.

    Additionally, I don't think what people will interpret from what you wrote is what you meant, necessarily. Note the other reply at this time, you're so confident and dismissive that they assume you're indicating the article should be removed from HN.

Why does someone with a .de website insure their diving using some company based in Malta?

Based on this interaction, you have to wonder what it's like to file a claim with them.

  • Divers Alert Network, which is probably the most well-known dive membership (and insurance) org out there, is registered in Malta in Europe.

  • Absolutely horrible according to DIVE TALK

    https://www.youtube.com/watch?v=O7NsjpiPK7o

    The insurance company would not cover a decompression chamber for someone who had severe decompression sickness, a life-threatening condition that requires immediate remediation.

    The idea that you possibly have neurological DCS and must argue on the phone with an insurance rep about whether you need to be life-flighted to the nearest chamber is just... mind-blowing.

  • It is probably among the standard forms required to participate in a diving class/excursion for travelers from other countries; and, Malta was probably chosen as the official HQ for legal or liability shelter reasons.