
Comment by touisteur

8 hours ago

Every sensor in the array is sampling at some frequency, so - to first order - from that sampling frequency and the sample size you get an idea of the input bandwidth in bytes/second. There are of course bandwidth reduction steps (filtering, downsampling, beamforming)...
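
To make that first-order arithmetic concrete, here is a minimal sketch; all numbers are placeholders picked for illustration, not figures from the thread:

```python
# Back-of-envelope: raw input rate = sensors * sample rate * bytes per sample.
# All values below are hypothetical placeholders.
n_sensors        = 10_000      # hypothetical number of sensors in the array
sample_rate_hz   = 500e6       # hypothetical per-sensor sampling frequency
bytes_per_sample = 2           # e.g. 16-bit samples

raw_rate = n_sensors * sample_rate_hz * bytes_per_sample   # bytes/second
print(f"Raw input rate: {raw_rate / 1e12:.0f} TB/s, "
      f"{raw_rate * 86_400 / 1e15:.0f} PB/day before any reduction")
```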

This makes no sense though? Given the Nyquist theorem, simply increasing the sampling frequency past a certain point doesn't change the outcome.

  • Actually, it does. You can decimate the higher-rate samples to increase dynamic range and S/N ratio (see the sketch below).

    Also, for direct down-conversion, you can get better mirror-frequency (image) rejection by oversampling and filtering in software.
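
A minimal sketch (a toy example, not from the thread) of the processing gain from oversampling followed by decimation: broadband noise such as quantization noise is spread over the full Nyquist band, so low-pass filtering and decimating by a factor R keeps the narrowband signal while removing roughly a factor of R of the noise, about 10*log10(R) dB of S/N improvement.

```python
# Oversample a narrowband tone in broadband noise, then decimate by R.
# The anti-alias filter inside decimate() removes the out-of-band noise.
import numpy as np
from scipy.signal import decimate

fs_high = 1_000_000      # oversampled rate, Hz (assumed)
f_sig   = 1_000          # narrowband signal of interest, Hz (assumed)
R       = 16             # decimation factor (assumed)

t = np.arange(int(fs_high * 0.1)) / fs_high
signal = np.sin(2 * np.pi * f_sig * t)
noise  = np.random.randn(t.size)          # broadband noise, e.g. quantization
x = signal + noise

def snr_db(x, fs, f_sig, bw=200.0):
    """Crude SNR estimate: power near f_sig versus power everywhere else."""
    spec = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(x.size, 1 / fs)
    in_band = np.abs(freqs - f_sig) < bw
    return 10 * np.log10(spec[in_band].sum() / spec[~in_band].sum())

y = decimate(x, R, ftype="fir")           # anti-alias filter + downsample
print(f"SNR before decimation: {snr_db(x, fs_high, f_sig):5.1f} dB")
print(f"SNR after  decimation: {snr_db(y, fs_high / R, f_sig):5.1f} dB")
```

With R = 16 the in-band S/N should improve by roughly 12 dB, which is the processing gain being described.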

Aren't they sampling broadband for later processing?

  • On SKA, from what I understand, they're sampling broadband but quickly beamform and downsample, as the data rates would be unsustainable to store across the whole array.

    • Right, that makes sense: you'd be looking at an insane amount of data across the frequency ranges these sensors can cover. But they would still need to preserve phase information if they want to use the array for what it is best at, and that alone is a massive amount of data (see the beamforming sketch at the end).

      2 replies →
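
To illustrate why per-element phase has to be preserved for beamforming, here is a minimal narrowband phase-shift beamformer sketch; the geometry, frequency, and element count are toy values, not SKA's actual configuration or pipeline:

```python
# Toy narrowband phase-shift (delay-and-sum) beamformer on a uniform
# linear array: per-antenna complex voltages are phase-aligned toward a
# chosen direction and summed coherently.
import numpy as np

c       = 3e8                 # speed of light, m/s
f       = 150e6               # hypothetical observing frequency, Hz
n_ant   = 64                  # hypothetical number of antennas
spacing = 1.0                 # element spacing, m (half a wavelength here)
theta   = np.deg2rad(20.0)    # direction of the incoming wavefront

positions = np.arange(n_ant) * spacing
# Geometric delay of the wavefront at each antenna, expressed as a phase
delays = positions * np.sin(theta) / c
rng = np.random.default_rng(0)
voltages = np.exp(2j * np.pi * f * delays) + 0.1 * (
    rng.standard_normal(n_ant) + 1j * rng.standard_normal(n_ant))

def beam_power(voltages, steer_theta):
    """Apply conjugate phase weights for a steering direction and sum."""
    w = np.exp(-2j * np.pi * f * positions * np.sin(steer_theta) / c)
    return np.abs(np.sum(w * voltages)) ** 2

print("power toward source :", beam_power(voltages, theta))
print("power 30 deg away   :", beam_power(voltages, theta + np.deg2rad(30)))
```

Summing the phase-aligned voltages boosts power toward the steered direction by roughly the square of the element count; without each element's phase, that coherent gain (and the ability to re-steer or image in software) is lost, which is why the raw or beamformed data must carry phase and stays so large.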