Comment by bee_rider
4 months ago
From their code:
import torch

A = torch.randn(2048, 2048, device='cuda', dtype=torch.bfloat16)
B = torch.randn(2048, 2048, device='cuda', dtype=torch.bfloat16)
ref = torch.mm(A, B)
for _ in range(1000):
    assert (torch.mm(A, B) - ref).abs().max().item() == 0
I’m sort of surprised that Torch doesn’t have some kind of lazy evaluation thing to avoid computing anything here. I thought that was one of the nice things about all these fancy frameworks (if I wanted the computer to actually do silly things when I asked it to, I would use BLAS directly, right?).
Maybe I'm missing something, but in this case, wouldn't being lazy be pure overhead? I don't see anything that can be lazy here. The reference is computed once, nanoseconds before it's needed, and the test cases are computed at the time of comparison, then tossed away.
What would one hope to achieve by making this case lazy? If you wanted these to run in parallel on a multi-GPU system, you would use the appropriate parallel interface.
I mean, if you wait long enough, it is asking for the repeated computation of something that can be identified as definitionally zero.
I don't understand. Since it's not using the parallel interface, only one operation can happen at a time, so this would be, literally, sequential execution with extra overhead. Again, in this case, what would one hope to achieve by doing things lazily, when each lazy operation would immediately be followed by its evaluation?
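A minimal sketch of that point in plain Python (the `Lazy` wrapper here is hypothetical, not anything from Torch): when a deferred computation is forced immediately after being created, you pay for the same work plus the cost of building and dispatching through the thunk.

```python
class Lazy:
    """Hypothetical thunk: defers a computation until force() is called."""
    def __init__(self, fn):
        self.fn = fn
        self.value = None
        self.forced = False

    def force(self):
        # Evaluate at most once, then cache the result.
        if not self.forced:
            self.value = self.fn()
            self.forced = True
        return self.value


def work():
    return sum(i * i for i in range(10_000))

# Eager: compute the result and use it right away.
eager = work()

# Lazy: build a thunk, then force it right away -- the same work is done,
# plus the overhead of allocating the wrapper and the extra indirection.
lazy = Lazy(work).force()

assert eager == lazy  # identical results; the laziness bought nothing here
```

Laziness only pays off when some deferred computations are never forced, or can be fused or batched before forcing; in this loop every result is consumed immediately.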
The parallel interface, which is async, is probably what you're looking for.