Comment by johnbarron
21 hours ago
>> I'm amazed that wasn't taken into account!
This was taken into account: https://news.ycombinator.com/item?id=47563392
You found a paper saying that contamination is possible. That doesn’t mean that most of these plastic studies are doing the necessary controls, let alone the (almost impossible) task of preventing the contamination in a laboratory setting where nanomolar detection levels are used to make broad claims.
Are more “controls” what is necessary here? The problem wasn’t plastic contamination, it was the presence of stearates. Distinguishing between stearates and microplastics sounds like a classification problem, not a control problem.
There is practically universal recognition among microplastics researchers that contamination is possible and that strong quality controls are needed, and to be transparent and reproducible, they have a habit of documenting their methodology. Many papers and discussions suggest avoiding all plastics as part of the methodology, e.g. “Do’s and don’ts of microplastic research: a comprehensive guide” https://www.oaepublish.com/articles/wecn.2023.61
Another thing to consider is that papers generally compare against baseline/control samples, and overestimating microplastics in baseline samples may lead to a lower ratio of reported microplastics in the test samples, not higher.
Many papers in this field are missing obvious controls, but you’re correct that controls alone are insufficient to solve this problem.
When you are taking measurements at the detection limit of any molecule that is widespread in the environment, you are going to have a difficult time distinguishing signal from background. This requires sampling and replication and rigorous application of statistical inference.
> Another thing to consider is that papers generally compare against baseline/control samples,
Right, that’s what a control is.
> and overestimating microplastics in baseline samples may lead to a lower ratio of reported microplastics in the test samples, not higher.
There’s no such thing as “overestimating in baseline samples”, unless you’re just doing a different measurement entirely.
What you’re trying to say is that if there’s a chemical everywhere, the prevalence makes it harder to claim that small measurement differences in the “treatment” arm are significant. This is a feature, not a bug.
Any scientific paper that does not document how things were done (methodologies) is basically worthless in the search for truth.
Luckily HN software developers, the foremost authority on literally every subject imaginable, are here to bless the world with their insights.
I think there's an important distinction of smug better-knowing instances.
"I have unique insight as a non-expert that all experts miss and the entire field is blind to" -> usually nonsense
"I think in this specific instance academically qualified people are missing something that's obvious to me" -> often true.
Spiritual equivalent of a life sciences forum discovering memory safety, one person who wrote code for a bit saying they wrote a memory bug in C once, then someone clutching pearls about why all programmers irresponsibly write memory unsafe code given it has a global impact.
Been here 16 years, it's always an adventure seeing whether stuff like this falls into:
A) Polite interest that doesn't turn into self-keyword-association
B) Science journalism bad
C) Can you believe no one else knows what they're doing.
(A) almost never happens, has to avoid being top 10 on front page and/or be early morning/late night for North America and Europe. (i.e. most of the audience)
(B) is reserved for physics and math.
(C) is default leftover.
Weekends are horrible because you'll get a "harshin' the vibe" penalty if you push back at all. People will pick at your link but not the main one and treat you like you're argumentative. (i.e. 'you're taking things too seriously' but a thoughtful person's version)
You joke, but given that SWE/AI researchers literally invented AI that does everything else for them, and that it is often superhuman across most tasks, I would unironically prefer the opinion of the creator of such a system over most others for most things.
Not OP, but:
> "You found a paper"
johnbarron didn't find it. The authors cited it as foundational to their own work. It's ref. 38 in the paper under discussion. From the paper: "this finding had not been reported in the MP literature until 2020, when Witzig et al. reported that laboratory gloves submerged in water leached residues that were misidentified as polyethylene."[1]
> "most of these plastic studies are [not] doing the necessary controls"
Which studies? The paper they linked surveys 26 QA/QC review articles[1]. Seems well understood.
> "a laboratory setting where nanomolar detection levels are used to make broad claims"
This is like saying "miles per gallon" when discussing weight. "nanomolar detection levels"...microplastics are individual particles identified by spectroscopy, reported as particles per mm^2. "Nanomolar" is a dissolved-species concentration unit. It has nothing to do with particle counting. (I, and other laymen, understand what you mean but you go on later in the thread to justify your unsourced and unjustified claims here via your subject-matter expertise.)
> "(almost impossible) task of preventing the contamination"
The paper provides open-access spectral libraries and conformal prediction workflows to identify and subtract stearate false positives from existing datasets[1]. Prevention isn't the strategy. Correction is. That's the entire point of the paper they linked and the follow-up in [2]
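For anyone curious what "conformal prediction" means in this context: the idea is to calibrate on particles of known identity, then for each new particle decide which labels (e.g. polyethylene vs. stearate) the data can't rule out at a chosen error rate. Here is a minimal split-conformal sketch; the function names, nonconformity scores, and thresholds are my own illustration, not the actual workflow from the linked paper.

```python
# Split-conformal classification sketch for flagging stearate false positives.
# A "nonconformity score" might be 1 minus the spectral correlation with a
# reference library for each candidate class (illustrative assumption).

def conformal_pvalue(cal_scores, test_score):
    """Conformal p-value: fraction of calibration nonconformity scores
    at least as large as the test score (with the +1 smoothing)."""
    ge = sum(1 for s in cal_scores if s >= test_score)
    return (ge + 1) / (len(cal_scores) + 1)

def classify_particle(pe_cal, stearate_cal, pe_score, stearate_score, alpha=0.05):
    """Return the set of labels NOT rejected at level alpha.
    {'PE'} -> confident polyethylene; {'stearate'} -> likely false positive;
    both -> ambiguous, flag for review; empty set -> fits neither library."""
    labels = set()
    if conformal_pvalue(pe_cal, pe_score) > alpha:
        labels.add('PE')
    if conformal_pvalue(stearate_cal, stearate_score) > alpha:
        labels.add('stearate')
    return labels
```

The practical upshot is that particles landing in the ambiguous or stearate sets can be subtracted from existing datasets after the fact, which is what makes correction (rather than prevention) a workable strategy.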
[1] https://pubs.rsc.org/en/content/articlehtml/2026/ay/d5ay0180...
[2] https://news.umich.edu/nitrile-and-latex-gloves-may-cause-ov...
> This is like saying "miles per gallon" when discussing weight. "nanomolar detection levels"...microplastics are individual particles identified by spectroscopy, reported as particles per mm^2. "Nanomolar" is a dissolved-species concentration unit. It has nothing to do with particle counting. (I, and other laymen, understand what you mean but you go on later in the thread to justify your unsourced and unjustified claims here via your subject-matter expertise.)
This paper used “light-based spectroscopy” [1]. Many others use methods that depend on gas chromatography or NMR. A relatively infamous recent example used pyrolysis GCMS to make low-concentration measurements (hence: nanomolar), which they credulously scaled up by some huge factor, and then made idiotic claims about plastic spoons in brains.
Relatively little quantitative science in this area depends on counting plastic particles in microscopic images, but it’s what gets headlines, because laypeople understand pictures.
[1] as an aside, the choice of terminology here is noteworthy. A simple visual light absorption spectrum is also "light based spectroscopy", but it measures the aggregate response of a sample of a heterogeneous mixture, and is conventionally converted to molar equivalents via some sort of calibration curve (otherwise you can't conclude anything). But there could be other approaches that are closer to microscopy, which they also discuss. "Particles per square millimeter" is also a unit of concentration (albeit a shitty one, unless your particles are of uniform mass).
Anyway, the point is that these kinds of quantitative analyses are all trying to do measurements that are fundamentally about concentration, which is why I chose the words that I did.
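To make the calibration-curve point concrete: in the linear (Beer-Lambert) regime, absorbance is proportional to molar concentration, so known standards fix the slope and an unknown reading divides through it. All numbers below are invented for illustration; real instruments need blank subtraction and a checked linear range.

```python
# Illustrative calibration curve: absorbance A = k * c, slope k fit from
# standards at known molar concentrations, then inverted for an unknown.

def fit_calibration(concentrations_M, absorbances):
    """Least-squares slope through the origin for A = k * c."""
    num = sum(c * a for c, a in zip(concentrations_M, absorbances))
    den = sum(c * c for c in concentrations_M)
    return num / den

def absorbance_to_molar(absorbance, slope):
    """Invert the calibration: c = A / k."""
    return absorbance / slope

# Hypothetical standards in the nanomolar range and their readings:
standards = [1e-9, 5e-9, 1e-8, 5e-8]   # mol/L
readings  = [0.02, 0.10, 0.20, 1.00]   # idealized linear response
k = fit_calibration(standards, readings)
unknown_conc = absorbance_to_molar(0.30, k)   # ~1.5e-8 M for this toy data
```

This is the sense in which "nanomolar detection levels" is a natural way to talk about bulk spectroscopic or GCMS measurements, even though particle-counting methods report in different units.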
>> That doesn’t mean that most of these plastic studies are doing the necessary controls
That was never my argument. Read it again.