Comment by szvsw
3 months ago
> feeling overwhelmed only for brief periods of time
There is something deeply, darkly comedic (depressing?) about the qualitative language here. Primarily the way it simultaneously intersects with modern discourse around wellness, anxiety, and mental health in such a banal manner at the same time as the latent/implicit violence of action (given that the obvious subtext is operating semi-autonomous killing machines).
Agreed - they write as if being overwhelmed 3% of the time is a victory. A good system would have people feeling overwhelmed 0% of the time.
>A good system would have people feeling overwhelmed 0% of the time.
There are benefits to being pushed past your limits from time to time. Also, there's just no such thing as 0. When you're designing limits you don't say "this never happens"; you say "this event happens at less than this rate for this cohort".
I'd agree that it is worth pushing your limits during training, but the best-case scenario during actual conflict is to be as close to 0% overwhelmed as you can be.
> they write as if being overwhelmed 3% of the time is a victory
We’re talking about a soldier. Commanding a company’s worth of firepower single-handedly from relative safety. 3% would be an exceptional improvement over the status quo.
The real question is what happens in that 3%. If the operator can still control the drones, that's very different from the drones being turned loose on your own people. (This is DARPA, so we can assume killing people is a goal in some form.) There's a lot in between, too.
This is a common error, if not outright fallacy. The correct amount of <negative event> is rarely zero due to diminishing returns -- it is where the cost curves intersect.
E.g. decreasing 3% to 0.3% might require operating only half the drones -- not a good trade.
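For what it's worth, here's a toy sketch of that tradeoff (the cost curves are invented purely for illustration, not taken from anything DARPA published): total cost is the harm from operators being overwhelmed plus the cost of driving that rate down, and the minimum sits well away from zero.

```python
# Toy illustration with made-up numbers: the cost-minimizing overwhelm rate
# is where the combined cost is lowest, which is generally not zero.

def overwhelm_cost(rate):
    # Harm from operators being overwhelmed, roughly proportional to the rate.
    return 100.0 * rate

def mitigation_cost(rate):
    # Cost of pushing the rate down (fewer drones per operator, more staff, ...);
    # it grows without bound as the rate approaches zero.
    return 1.0 / rate

rates = [r / 1000 for r in range(1, 201)]  # 0.1% .. 20.0%
best = min(rates, key=lambda r: overwhelm_cost(r) + mitigation_cost(r))
print(f"cost-minimizing overwhelm rate: {best:.1%}")  # ~10% with these toy curves
```

With these invented curves the optimum happens to land around 10%; the only point is that it isn't 0%.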
This is peacetime thinking. If you've got a whole army trying to kill you, you're going to get overwhelmed sometimes.
Compare it to a control group - I feel overwhelmed at least 5% of the time and I’m not even controlling any robots.
Yeah, I really don't like that phrasing. Takeoff and landing are the most dangerous parts of flying but make up only a tiny percentage of the total flight. If the 3% referenced here is the most dangerous or most critical 3%, then it hardly matters how easy the rest of it is.
This is about the army. Depending on the case, it's acceptable that 30% of people die if it serves strategic goals. That's how "a good system" is defined by those who have the power to enact it.
That sentence could come from an Onion news report about worker productivity.
It's DARPA, you're really past the moralizing about war stage here, that's just out of context. I don't see UX experts hand-wringing about the effects of advertising when they're designing their products.
>discourse around wellness, anxiety, and mental health in such a banal manner
It's not about "feelings", and that might disturb you, but many things really should be much less about feelings. A whole lot of "wellness, anxiety, and mental health" isn't about feelings; it's about being inside or outside the limits of what a person is capable of handling. A facts-based analysis of work, of life, and of people being pushed too far outside their comfort zone could do a lot for many people dealing with mental health issues.
DARPA does, and obviously _needs to_, study these things. One of the most important areas for this is pilots, especially during emergencies. It comes from both directions: designing the machine to be manageable, training the human to manage in exceptional circumstances, and _knowing the limits_ of both.
Congratulations, you cured the mental illness epidemic, depressed people just had to push their limits! Why didn't anyone think of that before?
> Why didn't anyone think of that before?
They did
https://oibr.uga.edu/low-to-moderate-levels-of-stress-can-be...
> It's DARPA, you're really past the moralizing about war stage here, that's just out of context.
I don’t really think I was moralizing… just commenting on the funny juxtaposition of the language and the context - or on the comedy of the language specifically when not considering the whole context. I was not saying DARPA should or should not be doing this - though I’ll grant that what I wrote could be read as an implicit criticism, even though it was not my intention.
> I don't see UX experts hand-wringing about the effects of advertising when they're designing their products.
Plenty do. Plenty don’t. Similarly, plenty of machine learning engineers might choose not to work on, say, a predictive algorithm for facial recognition or a product recommender system because they don’t feel like being a part of that system. Some people don’t have that luxury, or don’t care. It’s fine either way, though I of course encourage anyone to do some reflection on the social implications of their engineering projects from time to time. Hamming, who worked on everything from the A-bomb to telephones to the foundations of computer programming (and everything in between), strongly recommended this, and I agree. Working on weapons might be necessary, but you still need to reflect and make a conscious decision about it.
> It's not about "feelings" […] It comes from both directions, designing the machine to be manageable and training the human to manage in exceptional circumstances and _knowing the limits_ of both.
Of course, totally understand that. That doesn’t mean we can’t find humor in decontextualizing the language! Or in thinking about how science always must struggle with euphemism for the purposes of concision.