Comment by kortilla

6 years ago

Those all proved useful, though, through real, empirically based systems. If your programming class were, "what do you think the author meant by this line of code?" with no evidence to support any kind of guess, then yes, it would be dumb.

As I said in another comment, at this level "literary criticism" is intended to develop rudimentary skills of textual interpretation. You learn what symbolism is and are asked to produce some examples. It doesn't matter what the poem "actually" means any more than it matters that you're ignoring friction and variable speeds and the changing mass of trains as they burn fuel when you work out how fast they're going in school math tests. When you get to more advanced textual analysis, you have to justify readings based on the work as a whole, the writer's oeuvre, historical context, linguistic concerns, philosophical frameworks, and so on.

  • > at this level "literary criticism" is intended to develop...skills of textual interpretation

    But it is also at this level that multiple-choice tests about which bits of a text mean what (as the original article illustrates) proliferate. It is at this level that the textbook is the One Truth about what a text means. As one other comment points out, at this level there is no recourse but "authority" which is, ironically, detached from the author.

    > It doesn't matter what the poem "actually" means any more than it matters that you're ignoring friction

    This is a false equivalence. When we ignore 'what the poem "actually" means' it opens up interpretation to a colossal array of extremely subjective possibilities. "The curtains are blue" could symbolize anything from the author's depression to his thoughts about the Yuan dynasty. In contrast, the effects of ignoring friction can be objectively proven (if not empirically demonstrated) and would give us a definite answer, no mental gymnastics required.

    In real-life engineering, we also tolerate a certain amount of slack and simplification in our models, so long as the discrepancy between computation and observation is not too drastic. In this sense, ignoring those variables is practice in iterative refinement (you are launching a projectile in an indoor gymnasium; you wouldn't really care if your computation is off by a foot or so because you ignored air resistance and the curvature of the Earth). The simplification can still provide value (not to mention practicality). But eking out an explanation just for the sake of giving an explanation, relegating "the work as a whole, the writer's oeuvre, historical context, linguistic concerns, philosophical frameworks, and so on" to more advanced levels, is just sloppy scholarship and builds poor habits.
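The gymnasium claim is easy to check numerically. The sketch below (illustrative numbers only: a dense projectile roughly like a 4 kg shot put, with an assumed quadratic-drag coefficient) integrates the trajectory with and without air resistance and compares the ranges; the discrepancy comes out well under a foot, which is the whole point of the simplification.

```python
import math

def range_of_projectile(v0, angle_deg, k_over_m=0.0, dt=1e-4, g=9.81):
    """Horizontal range of a 2D projectile launched from ground level.

    Simple Euler integration with optional quadratic drag:
        acceleration = -g (vertical) - (k/m) * |v| * v
    k_over_m = 0 recovers the idealized, drag-free model.
    """
    theta = math.radians(angle_deg)
    x, y = 0.0, 0.0
    vx, vy = v0 * math.cos(theta), v0 * math.sin(theta)
    while True:
        speed = math.hypot(vx, vy)
        ax = -k_over_m * speed * vx
        ay = -g - k_over_m * speed * vy
        x += vx * dt
        y += vy * dt
        vx += ax * dt
        vy += ay * dt
        if y < 0:          # projectile has returned to floor level
            return x

# Assumed, illustrative numbers: v0 = 10 m/s at 45 degrees indoors;
# k/m ~ 0.0008 1/m for a dense ball (0.5 * rho * Cd * A / m, rough estimate).
ideal = range_of_projectile(10, 45)                  # drag ignored
real = range_of_projectile(10, 45, k_over_m=0.0008)  # drag included
print(f"ideal: {ideal:.2f} m, with drag: {real:.2f} m, "
      f"difference: {ideal - real:.2f} m")
```

For these numbers the idealized range is about 10.2 m and the drag correction is on the order of a tenth of a metre, so the "ignore friction" model is off by far less than a foot. (For a light, high-drag object like a beach ball the error would of course be much larger; the assumed k/m is what makes the simplification safe here.)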

    • Your argument seems to boil down to this: only empirical disciplines count as valid scholarship; interpretive disciplines in the humanities are not empirical in the way engineering and scientific disciplines are; therefore they are not valid scholarship.

      Unless you're prepared to budge on the first premise, there's nothing much I can say to persuade you.