The most important clue to solving a difficult problem is knowing that somebody else has already solved it.
I had a professor in an additive combinatorics class who would (when appropriate) say “hint: it’s easy,” and as silly as it sounds, it usually helped a lot.
I worked on a problem for a couple of months once. My professor was mid-sentence telling me he had found someone with the solution when I rudely blurted it out.
My mind was so familiar with all the constraints that all I needed to know was that a solution existed, and then I knew exactly where it had to be.
But before knowing there was a solution, I hadn't realized that.
The 4-minute mile comes to mind.
The problem is time and resources.
Take building a viable company. You know that many people have solved this. But you also know that 9/10 fail.
So you need the time and the money to try enough times to make it work.
9/10 of VC-backed companies fail, not "companies" in general. Ignore the hype and you'll be more likely to succeed.
Failure doesn’t teach by default; it teaches only when you design for it. Three dials matter: cost, frequency, and observability.
Make failures cheap and reversible. Shrink the scope until a rollback is boring. If a failure requires a committee or a quarter to undo, you’ll avoid the very experiments you need.
Raise frequency deliberately. Schedule “bad ideas hour” or small probes so you don’t wait for organic disasters to learn.
Max out observability. Before you try, write the few assumptions the test could falsify. Log what would have changed your mind earlier (counterfactual triggers), not just what happened.
Two practices that compound:
1. Pre-mortem → post-mortem symmetry. In the pre-mortem, list concrete failure modes and “tripwires”; in the post-mortem, only record items that map back to one of those or add a new class with a guardrail/checklist—not “be more careful.”
2. Separate noise from surprise. Tag outcomes as variance vs. model error. Punishing variance breeds risk aversion; fixing model error improves judgment.
Hard problems rarely yield to heroics; they yield to lots of small, instrumented failures.
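A minimal sketch of what that bookkeeping could look like, assuming hypothetical `Tripwire` and `PostMortemEntry` records (the names and fields are illustrative, not a real tool's API):

```python
from dataclasses import dataclass
from enum import Enum


class OutcomeKind(Enum):
    VARIANCE = "variance"        # bad luck; the model of the world held up
    MODEL_ERROR = "model_error"  # an assumption was wrong; update the model


@dataclass
class Tripwire:
    """A concrete failure mode written down in the pre-mortem."""
    assumption: str  # what we believe going in
    trigger: str     # the observable signal that would falsify it


@dataclass
class PostMortemEntry:
    """Post-mortem items must map back to a tripwire (or add a new one)."""
    tripwire: Tripwire
    kind: OutcomeKind
    guardrail: str = ""  # a checklist item or guard, never "be more careful"


# Usage: write tripwires before the attempt, tag outcomes afterwards.
pre_mortem = [Tripwire("rollback takes under an hour",
                       "any rollback exceeding an hour")]
post_mortem = [PostMortemEntry(pre_mortem[0], OutcomeKind.MODEL_ERROR,
                               guardrail="rehearse the rollback before launch")]
```

The point of the structure is that variance entries close with no action while model-error entries close with a guardrail, which keeps the post-mortem from punishing noise.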
Is this related to the article?
qqxufo's recent posts read like a large langle mangle to me
> The [goal] of machine learning research is to [do better than humans at] theorem proving, algorithmic problem solving, and drug discovery.
Naively, one of those things is not like the others.
When I run into things like this, I just stop reading. My assumption is that a keyword is being thrown in for grant purposes. Who knows what other aspects of reality have been subordinated to politics by the writer.
These have all been stated as goals by various machine learning research efforts. And -- they're actually all examples in which a better search heuristic through an absolutely massive configuration space is helpful.
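For what it's worth, all three fit the same template: a scoring function (learned or hand-tuned) steering a best-first search through a space far too large to enumerate. A toy sketch, with `score` standing in for whatever heuristic a given system learns (the names are illustrative, not any particular system's API):

```python
import heapq
from typing import Callable, Iterable, Optional, TypeVar

State = TypeVar("State")


def guided_search(start: State,
                  expand: Callable[[State], Iterable[State]],
                  is_goal: Callable[[State], bool],
                  score: Callable[[State], float],
                  budget: int = 10_000) -> Optional[State]:
    """Best-first search: `score` decides which candidates get expanded,
    which is where a learned heuristic earns its keep when the configuration
    space (proofs, programs, molecules) is astronomically large."""
    frontier = [(score(start), 0, start)]
    counter = 1  # tie-breaker so the heap never compares states directly
    for _ in range(budget):
        if not frontier:
            return None
        _, _, state = heapq.heappop(frontier)
        if is_goal(state):
            return state
        for nxt in expand(state):
            heapq.heappush(frontier, (score(nxt), counter, nxt))
            counter += 1
    return None  # budget exhausted; a better heuristic wastes fewer expansions
```

Only `expand` and `score` change between a prover choosing the next tactic, a synthesizer choosing the next partial program, and a pipeline choosing the next candidate molecule.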
You must not end up reading much scientific literature then.
What's the issue with drug discovery? AI/ML assisted drug discovery is one of the better examples of successful AI utilization out there.
which one do you think is unlike the others?
How does this compare to just reducing the likelihood of negative samples?