Comment by maccard
1 day ago
I mean, theoretically it's possible. A super basic example would be data that is known at compile time, which could be auto-parallelised, e.g.
int buf_size = 10000000;
auto vec = make_large_array(buf_size);
for (const auto& val : vec)
{
    do_expensive_thing(val);
}
This could clearly be parallelised. Such a C++ world doesn't exist, but we can see that the transformation is valid.
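For concreteness, here's a rough sketch of what the explicitly-parallel form might look like with C++17 parallel algorithms. make_large_array and do_expensive_thing are just the made-up names from the example above, stubbed out so the sketch compiles; the interesting part is the execution policy.

#include <algorithm>
#include <execution>
#include <vector>

// Hypothetical stand-ins for the names used above, just so this is self-contained.
std::vector<int> make_large_array(int n) { return std::vector<int>(n, 1); }
void do_expensive_thing(int val) { volatile int sink = val * val; (void)sink; }

int main()
{
    const int buf_size = 10000000;
    auto vec = make_large_array(buf_size);

    // Same loop as above, but with an explicit parallel execution policy.
    std::for_each(std::execution::par, vec.begin(), vec.end(),
                  [](int val) { do_expensive_thing(val); });
}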
If I replace it with:
int buf_size = 10000000;
cin >> buf_size;
auto vec = make_large_array(buf_size);
for (const auto& val : vec)
{
    do_expensive_thing(val);
}
the compiler could generate some code that looks like:
if buf_size >= SOME_LARGE_THRESHOLD { DO_IN_PARALLEL } else { DO_SERIAL }
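In real C++17 terms, that dispatch might look roughly like this. SOME_LARGE_THRESHOLD is picked out of thin air, and the stubs are the same hypothetical ones as in the sketch above.

#include <algorithm>
#include <execution>
#include <iostream>
#include <vector>

constexpr int SOME_LARGE_THRESHOLD = 100000; // made-up cut-off

std::vector<int> make_large_array(int n) { return std::vector<int>(n, 1); }
void do_expensive_thing(int val) { volatile int sink = val * val; (void)sink; }

int main()
{
    int buf_size = 10000000;
    std::cin >> buf_size;
    auto vec = make_large_array(buf_size);

    auto work = [](int val) { do_expensive_thing(val); };
    if (buf_size >= SOME_LARGE_THRESHOLD)
        std::for_each(std::execution::par, vec.begin(), vec.end(), work); // DO_IN_PARALLEL
    else
        std::for_each(std::execution::seq, vec.begin(), vec.end(), work); // DO_SERIAL
}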
With some background logic for managing threads, etc. In a C++-style world where "control" is important, it likely wouldn't fly, but if this were Python...
arr_size = 10000000
buf = [None] * arr_size
for x in buf:
    do_expensive_thing(x)
could be parallelised at compile time.
Which no one really does (data is generally provided at runtime). Which is why ‘super smart’ compilers kinda went nowhere, eh?
I dunno. I was promised the same things when I started programming and it never materialised.
It doesn’t matter what people do or don’t do, because this is a hypothetical feature of a hypothetical language that doesn’t exist.
huh?