Comment by grayhatter
1 year ago
your question was non-specific, so I'm guessing a bit at what you're asking, because some of it is already answered in the docs... but conceptually it's similar to how GPS trilateration works, just in the other direction (information flows from the source point, the speaker in this case, to the mic array) and with audio waves instead of RF waves. Each mic will have a slightly different view of the incoming audio, and using the timing differences between them, you can use the waveform that one mic records to figure out what arrives too early or too late to be audio from directly in front of the laptop. And then delete that audio, leaving just the audio from the speaker directly in front of the laptop.
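A back-of-the-envelope sketch of how big that timing difference is in practice (the mic spacing and sample rate here are assumed, not from the link):

```python
def tdoa_seconds(extra_path_m, speed_of_sound=343.0):
    """Time-difference-of-arrival: how much later sound reaches one mic
    when it has to travel extra_path_m farther than to the other mic."""
    return extra_path_m / speed_of_sound

# Assumed laptop-scale mic spacing of ~10 cm: sound arriving from the
# side, along the mic axis, reaches the far mic about 0.29 ms later.
spacing_m = 0.10
dt = tdoa_seconds(spacing_m)

# At an assumed 48 kHz sample rate that's roughly 14 samples of offset,
# which is plenty to detect and work with digitally.
samples = dt * 48000
```

Sound from directly in front reaches both mics at the same time (zero offset), so any audio showing a large inter-mic offset can be classified as off-axis.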
eg
A ------ MIC1 --- B --- MIC2 ------ C
any sound coming from A will be picked up by MIC1 well before MIC2, and likewise sounds coming from C reach MIC2 first. If you delete that audio from the incoming waveform, you have beamforming, and thus much better audio noise filtering.
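A minimal delay-and-sum sketch of the idea above, using toy signals and assumed numbers (this is an illustration of the general technique, not Apple's actual implementation): when you average the two channels, sound that hits both mics in phase (from straight ahead) survives, while off-axis sound whose inter-mic delay puts the copies out of phase cancels itself out.

```python
import numpy as np

def delay_and_sum(mic1, mic2, delay_samples=0):
    """Two-mic delay-and-sum beamformer: shift one channel so the wanted
    direction lines up across both mics, then average. delay_samples=0
    steers the beam straight ahead (broadside)."""
    return (mic1 + np.roll(mic2, delay_samples)) / 2.0

# Toy demo (all numbers assumed): a "speech" tone from straight ahead
# arrives at both mics in phase; "noise" from the side (direction A in
# the diagram above) hits MIC1 two samples before MIC2.
fs = 8000
n = np.arange(fs)
speech = np.sin(2 * np.pi * 440 * n / fs)
noise = np.sin(2 * np.pi * 2000 * n / fs)  # 2 samples = half a period
mic1 = speech + np.roll(noise, -2)         # noise arrives early at MIC1
mic2 = speech + noise

out = delay_and_sum(mic1, mic2)            # steer straight ahead
# The two noise copies are exactly out of phase and cancel in the
# average, leaving (essentially) just the speech.
```

The noise frequency was picked so the two-sample offset is exactly half a period, making the cancellation perfect; with real broadband noise the attenuation is partial and direction-dependent, which is why real beamformers use more mics and more sophisticated filtering.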
And as it says in the link, Apple decided to implement this in software, not hardware, so you'd need to reimplement it if you're not using macOS.