Every year or so there's a new article about some new spectacular storage medium. Crystals, graphene, lasers, quartz, holograms, whatever. It never materializes.
Demonstrating that this stuff is possible isn't the hard part, it seems. Productionizing it is. You have to have exceedingly fast read and write speeds: who cares if it can store an exabyte if it takes all month to read it, or if you produce data faster than you can write it? It has to be durable under adverse conditions. It has to be practical to manufacture the medium and the drives. You probably don't want to need a separate device to read and another to write. By the time most of these problems are worked out, most of these technologies aren't a whole lot better than existing tech.
Stick this on the "Wouldn't it be nice if graphene..." pile.
> who cares if it can store an exabyte if it takes all month to read it
To be fair, if I'm reading an exabyte in a month, my hardware's pushing >3 Tbps, which I'd be very happy with.
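That ">3 Tbps" figure checks out; the conversion is a one-liner (assuming a 30-day month):

```python
# Sustained bandwidth needed to read 1 EB in 30 days.
EXABYTE_BITS = 8 * 10**18           # 1 EB = 1e18 bytes = 8e18 bits
SECONDS_PER_MONTH = 30 * 24 * 3600  # 2,592,000 s

tbps = EXABYTE_BITS / SECONDS_PER_MONTH / 1e12
print(f"{tbps:.2f} Tbps sustained")  # ~3.09 Tbps
```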
Basically you can just ignore the hyped-up press releases; coverage like this accompanies most semi-cool/exciting papers. The scientists probably know this isn't going to become some widespread new storage medium, but it's just part of the game to sell the story like this, and the administration wants it.
The fact that most of the world's data is still stored on little spinny disks, considering how many times in the last 40 years we've seen this story, is criminal.
Aren't lasers driving the current 32TB+ HDD tech?
yeah but that wasn't a straight upgrade, either. HAMR has all sorts of tradeoffs.
The concept is interesting, but I'm getting a lot of red flags from this - there's no experimental data or proof-of-concept work at all, which makes this feel more like a blue-sky "Look what we could do if we could arrange atoms however we wanted!" pipe dream in the Drexlerian mode. Something about the writing style is also pinging my LLM radar, which, while not disqualifying in and of itself, is very discouraging in combination with the other funkiness. The chemistry and manufacturability strike me as questionable in particular, and I'm not convinced the physics of reading and writing are nearly as clean as the author seems to think.
(I'm also unclear how the bit is supposed to actually flip under the applied electric charge without the fluorine and carbon having to pass through each other.)
The fluorine doesn't pass through carbon. It passes between two neighboring carbons through a C-C gap of 2.64 Å at the transition state. This is pyramidal inversion — the same mechanism as ammonia (NH₃), but with a 4.6 eV barrier instead of 0.25 eV. The transition state geometry is computed and verified with one imaginary frequency.
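For anyone wondering what a 4.6 eV barrier buys you in practice versus ammonia's 0.25 eV, a back-of-envelope Arrhenius estimate makes the point (the 1e13 Hz attempt frequency is a generic order-of-magnitude assumption, not a figure from the paper):

```python
import math

# Thermally activated flip rate: k = nu * exp(-Ea / (kB * T))
KB_EV = 8.617e-5   # Boltzmann constant, eV/K
NU = 1e13          # attempt frequency, Hz (order-of-magnitude assumption)

def flip_rate(barrier_ev, temp_k):
    """Arrhenius flip rate in 1/s for a given barrier at a given temperature."""
    return NU * math.exp(-barrier_ev / (KB_EV * temp_k))

# Ammonia-like barrier: inverts constantly at room temperature.
# 4.6 eV barrier: effectively frozen on any timescale that matters.
for ea in (0.25, 4.6):
    print(f"Ea = {ea} eV -> rate ~ {flip_rate(ea, 300.0):.2e} /s")
```

At 300 K the 0.25 eV barrier flips hundreds of millions of times per second, while the 4.6 eV barrier gives a rate so small the expected lifetime dwarfs the age of the universe; that exponential sensitivity to the barrier height is the whole retention argument.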
This is a pipe dream and I’m almost tempted to say a fever dream. The chemistry part seems somewhat sound, even though that’s outside of my field of expertise. But the entire readout process is questionable, and has clear signs of heavy AI writing.
The AFM mechanism described as “tier 1” (very strong LLMism, btw) is somewhat optimistic but realistic. The fields needed are large compared to usual values in solid state devices, but I’d guess achievable with an AFM. But “tier 2” is vague and completely speculative. Some random things I noted:

- handwaving that (not an exact quote) “the read controller is cached. No need to read the same bit twice”. Cached with what?? If this miraculous technology can achieve 25 PB/s, what can possibly hope to cache it? More generally, it’s a strange thing to point out.
- some magic and completely handwaved MEMS array that converts an 8 µm spot-size laser beam into atomic-resolution 2D addressing? In my opinion this is the biggest sin of the manuscript. What I understood to be depicted is just fundamentally physically impossible.
- a general misunderstanding of integrated electronics, and dishonest benchmarking: comparing real memory technologies being sold at scale right now against theoretical physical bounds on an untested idea. Also no mention of existing magnetic tape as far as I can tell.
- constantly pulling out specific numbers or estimates with no citation and insufficient justification. Too many examples to even count.
I’m sorry for the harsh language; I wouldn’t use it in a usual review. But in my opinion this needs very heavy toning down and a complete rewrite, and is unfit for a proper review. Final remark: electronics is, and will always fundamentally be, intrinsically denser than optics. Some techniques “described” here, if they were possible, would have been applied to existing optical tech (e.g. phase-change materials in Blu-ray).
Author here. Some fair points, some misreadings.
The caching comment refers to the Tier 1 controller holding a bitmap of bits it has already scanned — standard practice in any scanning probe system. It's not competing with the storage medium for capacity.
Tier 2 is explicitly labeled speculative. The paper's validation target is Tier 1: one C-AFM scan, one voltage pulse, existing equipment.
The core contribution is not the architecture — it's the physics: a verified transition state for C-F pyramidal inversion at 4.6 eV (B3LYP) and 4.8 eV (CCSD(T)), one imaginary frequency, barrier below bond dissociation. That's standard computational chemistry, not handwaving. The architecture sections are forward-looking by design.
The fluorine passes between two carbon neighbors through a C-C gap of 2.64 Å at the transition state — not through any atom. This is pyramidal inversion, the same mechanism as ammonia, but with a 4.6 eV barrier instead of 0.25 eV.
Magnetic tape comparison is in Table 2.
Yes, this paper is insane. The actual quote about caching is:
> Once a region of tape has been read, the controller stores the result. Subsequent operations reference the cache rather than re-interrogating the physical medium. Re-reading a known bit is unnecessary; the controller already holds its state
However, earlier, the paper claims:
> The transformer architectures underpinning modern large language models are bandwidth-limited, not compute-limited [1–3]. The energy consumed moving data between DRAM, NAND flash, and processor cache already exceeds the energy consumed by arithmetic in datacenter AI accelerators [2]. This is not an optimization problem. It is a materials problem [emphasis mine].
as part of a longer rant about the AI "memory wall" in the very first section. If we open with a long spiel about how memory is expensive in material cost and energy cost and this material is a solution for that then what are we caching the read in? On that note, what kind of computer engineer thinks about cache on the order of individual bits on a medium?
And, as you point out, 25 PB/s is a lot. Around 1000x the bandwidth of a typical on-die SRAM cache, I think.
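For scale (the ~20 TB/s figure below is my own rough assumption for aggregate on-die cache bandwidth, not a number from either post):

```python
# Ratio of the paper's claimed throughput to an assumed on-die SRAM bandwidth.
CLAIMED_BPS = 25e15   # 25 PB/s, the paper's Tier 2 projection
SRAM_BPS = 20e12      # ~20 TB/s aggregate on-die bandwidth (assumption)

print(f"ratio ~ {CLAIMED_BPS / SRAM_BPS:.0f}x")  # ~1250x
```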
A while later, the author speaks of using atomic force microscopy to read the data back. The sizes of AFM scans are, in practice, as I understand it, on the order of square micrometers. I think this whole paper is an AI-driven, as you put it, 'fever dream', enabling an author to put forth 60 pages of sciencey claims and sciencey math without -- as far as I can tell -- any concrete and novel scientific result of any kind. AI-driven reality warps are not new; the difference is that nowadays AIs are good enough at sounding smart to get past the barriers of a typical smart person who might want to be fooled or make a show of being open-minded. Later on, the author proposes using "shaped femtosecond IR pulses" -- without further elaboration -- to address single atoms! IR wavelengths are on the order of a micrometer at minimum!
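To put rough numbers on the scanning-probe throughput problem (every figure here is my own illustrative assumption: a 1 µm² frame, ~5 minutes per atomic-resolution frame, one bit per carbon-site-sized cell):

```python
# Back-of-envelope Tier 1 read throughput for a scanning probe.
BIT_AREA_NM2 = 0.026   # ~ area of one carbon site in a graphene-like lattice
SCAN_AREA_NM2 = 1e6    # a 1 um x 1 um AFM frame
FRAME_TIME_S = 300     # ~5 minutes per high-resolution frame (assumption)

bits_per_frame = SCAN_AREA_NM2 / BIT_AREA_NM2
throughput_bps = bits_per_frame / 8 / FRAME_TIME_S  # bytes per second
print(f"~{bits_per_frame:.1e} bits/frame, ~{throughput_bps/1e3:.0f} kB/s")
```

So even granting the density, a single probe reads on the order of tens of kB/s: megabytes per frame, minutes per frame. The areal density can be record-breaking while the throughput stays hopeless without massive parallelism.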
Sniff test: a paper with a single author and 53 revisions, listing a gmail address as contact information despite the author, after a brief internet search, appearing to have affiliations with CSU Global, (maybe) the University of Central Florida, and the San Jose State University Department of Aerospace.
Author here. Three PhDs (Mathematics, Pisa; Quantum Chemistry, UCF; Materials Science, UTD — in progress), plus MS degrees from SJSU and CSU. The gmail is because this is independent work, not affiliated with any institution. v53 reflects thirteen years of development since the original 2013 publication (Graphene 1, 107–109). The barrier is verified at two independent levels of theory with a confirmed transition state. Happy to discuss the physics.
What were the topics and titles of your dissertation in the first two PhD? Were they related to this topic or totally different?
Edit: https://www.mathgenealogy.org/id.php?id=61429 It looks quite unrelated
Curious if you've patented this? Very cool. The physics is way beyond me but I understand that each atom in the crystal can be in two states? And those are stable? There is no cross talk or decay at all?
You're comparing to current memory technologies but there are also some optical technologies like AIE-DDPR which presumably is (a lot?) less dense but has layers (I noticed you're also discussing a volumetric implementation), would devices based on your technology be simpler/faster? (I guess optical disks don't intend to replace high speed memory). What about access times?
Is there a reason you went for 3 PhDs? Especially since they're all in STEM? To me it's a red flag because the point of a PhD is to learn to do research, you don't need to get another one to move between fields (especially within STEM), just need to do research with people in those fields and gain experience.
That’s amazing. Do you have a home lab with an atomic microscope where you do your research?
And what’s the reason for going solo vs a research university, where I assume this type of research could be significantly sped up?
Hey -- I have 0 PHDs so take this with a grain of salt :)
I had thought for a while about a way to store data that makes use of an idea that I had for sub-diffraction limited imaging inspired by STED microscopy.
First an overview of STED. You have a "donut" shaped laser (or toroidal laser) that is fired on a sample. This laser has an inner hole that is below the diffraction limit. This laser is used to deplete the ability of the sample to fluoresce, and then immediately after a second laser is shone on the same spot. The parts of the sample depleted by the donut laser don't fluoresce and so you only see the donut hole fluoresce. This allows you to image below the diffraction limit.
My idea was to apply this along with a layer in the material that exhibits sum frequency generation (SFG). The idea is that you can shine the donut laser with frequency A and a gaussian laser with frequency B at the same spot. When they interact in the SFG material you get some third frequency C as a result of SFG. Then, below that material would be a material that doesn't transmit frequencies C and A.
Then what you'd be left with after the light shines through those two layers is some amount of light at frequency B. The brightness inside the hole and outside of the hole would depend on how much of the light from frequency B converts into frequency C. Sum frequency generation is a very inefficient process, with only some tiny portion of the light participating, but my thinking is that if laser B is significantly less bright than laser A, then what will happen is that most of the light from laser B will participate in sum frequency generation where it mixes with laser A, and that you'll be left with only a tiny bit of laser A outside of the hole, so that you get a nice contrast ratio for the light at frequency A between the hole and the surroundings that then allow you to image whatever is below these layers below the diffraction limit.
In my idea the final layer is some kind of optical storage medium that can be be read/written by the laser below the diffraction limit. Obviously aiming this would be hard :) My idea was that it would be some kind of spinning disk, but I never really got to that point.
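The donut-depletion trick above can be quantified with the standard STED resolution scaling (a modified Abbe limit), which shows how depletion-beam intensity buys sub-diffraction resolution; the wavelength and NA below are just example values:

```python
import math

# STED effective resolution:
#   d = lambda / (2 * NA * sqrt(1 + I / I_sat))
# where I/I_sat is the ratio of depletion intensity to saturation intensity.
def sted_resolution_nm(wavelength_nm, na, depletion_ratio):
    """Effective resolution in nm; depletion_ratio = 0 gives the Abbe limit."""
    return wavelength_nm / (2 * na * math.sqrt(1 + depletion_ratio))

# Confocal limit (no depletion) vs. increasingly strong depletion beams:
for ratio in (0, 10, 100, 1000):
    d = sted_resolution_nm(640, 1.4, ratio)
    print(f"I/I_sat = {ratio:>4} -> d ~ {d:.0f} nm")
```

The resolution improves only as the square root of the depletion intensity, which is why STED-style schemes need very bright donut beams to get well below the diffraction limit.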
Have you considered subjecting this to expert scrutiny by submitting to a journal? That's probably better than getting hot takes on HN by random technology enthusiasts, skeptics, anon experts, and trolls.
Sniff test as in you turned your nose up without even looking at it on a purely surface level based on affiliation.
Smells like laziness to me.
I suppose anyone can run the same computer simulations.
Yes — the input files, level of theory, and software (ORCA 6.1.1, free for academics) are all specified in the paper. The calculations are fully reproducible.
"A scanning-probe prototype already constitutes a functional non-volatile memory device with areal density exceeding all existing technologies by more than five orders of magnitude."
Does that mean a scanning tunneling microscope is the I/O mechanism? That's been demoed for atom-level storage in the past. But it's too slow for use.
Yes, Tier 1 is scanning probe — C-AFM specifically. Slow but sufficient for proof of concept. The paper describes a Tier 2 architecture using near-field mid-IR arrays for parallel read/write, projecting 25 PB/s aggregate throughput. Tier 1 proves the physics. Tier 2 is the engineering path to speed.
What do you need to build a demo of Tier 2? I am guessing if you can do that then you can get an investor.
Using a mid-IR array with sub 10nm resolution is anything but an engineering path. Tech like that has never left the lab afaik.
Perhaps the title had a typo? fluorographane -> fluorographene
I can't find a single page about fluorographane:
https://en.wikipedia.org/w/index.php?search=fluorographane&t...
But there is this:
https://en.wikipedia.org/wiki/Fluorographene
Not a typo. Fluorographene is the sp² form (Nair et al. 2010). Fluorographane uses the -ane suffix to denote full sp³ saturation — same convention as graphene → graphane. The sp³ hybridization is what creates the bistable C-F orientation that stores the bit.
TIL thanks!
Fluorographane: Synthesis and Properties (pdf): https://pubs.rsc.org/en/content/getauthorversionpdf/C4CC0884...
Remarkable. If this material works and is flexible enough, we could someday see tape drives with hundreds of exabytes of capacity.
Author here. The paper describes exactly this — a nanotape spool architecture with volumetric density of 0.4–9 ZB/cm³. Section 4.4 in the preprint.
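A quick sanity check on what the low end of that density claim implies per bit (pure unit conversion, no assumptions beyond the quoted 0.4 ZB/cm³ figure):

```python
# Volume per bit implied by 0.4 ZB/cm^3.
ZB_BYTES = 1e21       # 1 zettabyte in bytes
NM3_PER_CM3 = 1e21    # 1 cm^3 = 1e21 nm^3

bits_per_cm3 = 0.4 * ZB_BYTES * 8       # low end of the claimed range
nm3_per_bit = NM3_PER_CM3 / bits_per_cm3
print(f"~{nm3_per_bit:.2f} nm^3 per bit")  # ~0.31 nm^3 per bit
```

That works out to a fraction of a cubic nanometre per bit, i.e. roughly one bit per atomic site, which is consistent with storing a bit in each C-F orientation but leaves essentially zero volume for spacing, substrate, or addressing overhead.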
Too long, not gonna read. When do I get my 447TB iPhone?
The Now I Get non-technical version, because I need someone to explain this to me x)
https://nowigetit.us/pages/d7f94fd0-e608-47f9-8805-429898105...
Any sufficiently advanced technology is indistinguishable from magic, as proven by the number of comments treating the paper as an AI slop pipe dream.
Yeah, I've been baited by "breakthroughs" in storage technology for almost 40 years at this point [1]. I'll believe it when it's in Best Buy. Battery "breakthroughs" have really taken up the mantle of headline-grabbing research fund-raising articles so it's nice to see a throwback to the OG: storage.
[1]: https://www.tampabay.com/archive/1991/06/23/holograms-the-ne...
I am about the same age and started loading programs off cassette tapes. The fact that I can get a terabyte of storage in a micro SD card the size of my pinkie nail for under $200 still impresses me.
This is research...
It's always "research". I put that in quotes because any press like this isn't really "research", it's "fund-raising". It's the academic game of getting papers into the right publications, getting "street cred" by getting the right heavyweights as co-authors and to cite you, to become a "heavyweight" by doing the same thing and ultimately getting more grants to perpetuate the cycle.
Research can be interesting but so often none of it goes anywhere, it's just hype and there's a reproducibility crisis in academia. Look at the decades wasted on academic fraud and appeals to authority with Alzheimer's research [1].
Most of this media is the academic equivalent of "doctors HATE this guy".
[1]: https://pmc.ncbi.nlm.nih.gov/articles/PMC12397490/
I mean, battery breakthroughs are real though? BYD is now demoing 0-80% in 5 minutes on production vehicles in China.
The price of the 50kwh unit I had put into my house was very low.
Sodium-ion is ramping up too and is commercially available. That straight-up wasn't possible a few years ago, until the electrode breakthroughs.
Do you have any pointers on said 50kWh battery? Asking for a friend.