Comment by touisteur

13 days ago

Every sensor in the array is sampling at some frequency, so, to first order, you can use that sampling frequency and the sample size to get an idea of the input bandwidth in bytes/second. There are of course bandwidth-reduction steps (filtering, downsampling, beamforming)...
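
As a rough back-of-envelope sketch in Python (every number here is a made-up assumption for illustration, not a figure from any real instrument):

    # Rough input-bandwidth estimate for a sensor array; all values are
    # hypothetical placeholders.
    num_sensors = 1024            # hypothetical array size
    sample_rate = 50_000_000      # 50 MS/s per sensor, hypothetical
    bytes_per_sample = 2          # 16-bit samples

    bytes_per_second = num_sensors * sample_rate * bytes_per_sample
    print(f"{bytes_per_second / 1e9:.1f} GB/s")   # ~102.4 GB/s before any reduction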

This makes no sense though? Given the Nyquist theorem, simply increasing the sampling frequency past a certain point doesn't change the outcome.
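
To make that concrete, a small numpy/scipy sketch (illustrative numbers only): a band-limited tone sampled just above its Nyquist rate can be resampled up to a much higher rate and matches the directly oversampled version, so the extra raw samples carry no new information about the signal itself.

    import numpy as np
    from scipy import signal

    f0 = 100.0                                  # band-limited tone
    fs_lo, fs_hi = 250.0, 4000.0                # just above Nyquist vs. 16x that
    n_lo = 1000
    x_lo = np.sin(2 * np.pi * f0 * np.arange(n_lo) / fs_lo)

    # Reconstruct the high-rate signal from the low-rate samples...
    x_up = signal.resample(x_lo, int(n_lo * fs_hi / fs_lo))
    # ...and compare with the tone sampled directly at the high rate.
    x_hi = np.sin(2 * np.pi * f0 * np.arange(x_up.size) / fs_hi)

    print(np.max(np.abs(x_up - x_hi)))          # agrees to numerical precision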

  • Actually, it does. You can decimate from the higher sample rate to increase dynamic range and signal-to-noise ratio.

    Also, for direct downconversion, you can get better image (mirror-frequency) rejection by oversampling and filtering in software.
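
    As a rough numerical sketch of that decimation gain (numpy/scipy, made-up rates, not anyone's actual DSP chain): a tone buried in broadband noise, sampled 16x faster than needed and then low-pass filtered and decimated, comes out with roughly 10*log10(16) ≈ 12 dB better in-band SNR. This only illustrates the processing-gain part, not the image rejection.

        import numpy as np
        from scipy import signal

        fs_base, osr = 1_000.0, 16                    # target rate and oversampling ratio
        fs = fs_base * osr
        t = np.arange(int(fs) * 4) / fs               # 4 seconds of data
        rng = np.random.default_rng(0)
        x = np.sin(2 * np.pi * 100.0 * t) + rng.standard_normal(t.size)

        def snr_db(sig, fs, f0=100.0):
            # crude SNR: power in the tone bins vs. everything else
            spec = np.abs(np.fft.rfft(sig)) ** 2
            k = np.argmin(np.abs(np.fft.rfftfreq(sig.size, 1 / fs) - f0))
            tone = spec[k - 2:k + 3].sum()
            return 10 * np.log10(tone / (spec.sum() - tone))

        y = signal.decimate(x, osr, ftype='fir')      # low-pass filter + downsample
        print(snr_db(x, fs), snr_db(y, fs_base))      # gains ~10*log10(osr) ≈ 12 dB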

    • None of this changes the actual amount of information you have at the end of the day, though; that's what I mean, so long as you don't botch it and capture too little. In computing terms, a compressed archive and the uncompressed original contain the same real data, even though the file size is larger for the latter.
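
      The archive analogy in a short Python snippet (the bytes are just stand-in data): compressing and then decompressing returns exactly the original, so the information content is unchanged even though the stored sizes differ.

          import zlib

          original = bytes(range(256)) * 1000          # 256,000 bytes of stand-in "samples"
          packed = zlib.compress(original, 9)

          print(len(original), len(packed))            # stored sizes differ...
          print(zlib.decompress(packed) == original)   # ...but the data is identical: True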

Aren't they sampling broadband for later processing?

  • On SKA, from what I understand, they sample broadband but quickly beamform and downsample, since the data rates would be unsustainable to store across the whole array.
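
    As a toy illustration of why beamforming cuts the data volume (numpy, a phase-shift/delay-and-sum beamformer on made-up numbers, not SKA's actual pipeline): N raw channels get phase-aligned and summed into a single beam time series, so per formed beam the output rate is 1/N of the raw rate.

        import numpy as np

        num_antennas, fs, f0 = 64, 1_000_000.0, 100_000.0   # hypothetical numbers
        t = np.arange(4096) / fs

        # Simulated per-antenna geometric delays for a source at some direction.
        delays = np.linspace(0.0, 5e-6, num_antennas)
        channels = np.array([np.exp(2j * np.pi * f0 * (t - d)) for d in delays])

        # Undo each antenna's delay as a narrowband phase shift, then sum coherently:
        # N raw channels collapse into one beam time series.
        beam = (channels * np.exp(2j * np.pi * f0 * delays)[:, None]).mean(axis=0)

        print(channels.nbytes, beam.nbytes)   # raw array data vs. one formed beam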

    • Right, that makes sense; you'd be looking at an insane amount of data across the frequency ranges these sensors can cover. But they would still need to preserve phase information if they want to use the array for what it is best at, and that alone is a massive amount of data.
