Comment by lcnPylGDnU4H9OF
3 days ago
I often do something like
    SUB_ME = ':sub_me'.freeze

    def my_method(method_argument)
      foo = 'foo_:sub_me'
      foo.sub!(SUB_ME, method_argument)
      foo
    end
which, without `# frozen_string_literal: true`, I believe allocates one string when the application loads (it sounds like it might be two) and another string at runtime, which it then mutates in place.
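A rough way to sanity-check the per-call allocation of that version (a sketch, assuming MRI; `GC.stat(:total_allocated_objects)` counts every allocated object, not just strings, so the delta is only approximate):

    SUB_ME = ':sub_me'.freeze

    def my_method(method_argument)
      foo = 'foo_:sub_me'               # new, mutable String on every call
      foo.sub!(SUB_ME, method_argument) # edits that String in place
      foo
    end

    before = GC.stat(:total_allocated_objects)
    result = my_method('bar')
    puts GC.stat(:total_allocated_objects) - before # small; includes 'bar' and internal temporaries
    puts result # => "foo_bar"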
That seems like it's better than doing
    # frozen_string_literal: true

    FOO = 'foo_:sub_me'
    SUB_ME = ':sub_me'

    def my_method(method_argument)
      FOO.sub(SUB_ME, method_argument)
    end
because that will allocate the frozen string to `FOO` when the application loads, then make a copy of it at runtime (the result of `sub`), then mutate that copy. That means two strings that never leave memory (`FOO`, `SUB_ME`) and one that has to be GCed (the return value), instead of just one that never leaves memory (`SUB_ME`) and one that has to be GCed (`foo`, which is also the return value).
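To make the bookkeeping visible, a sketch of the constant version with a few checks (assuming MRI; the `frozen?`/`equal?` calls are only there to show which objects are retained and which are per-call garbage):

    # frozen_string_literal: true

    FOO = 'foo_:sub_me'
    SUB_ME = ':sub_me'

    def my_method(method_argument)
      FOO.sub(SUB_ME, method_argument) # builds and returns a fresh String
    end

    result = my_method('bar')
    puts FOO.frozen?        # => true  (retained for the life of the process)
    puts result.frozen?     # => false (a new object each call, eligible for GC)
    puts result.equal?(FOO) # => false (the copy, not the constant itself)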
This is true in particular when `FOO` is only used in `my_method`. If it's also used in `my_other_method` and it logically makes sense for both methods to use the same base string, then it's beneficial to use the wider-scope constant.
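For example (a sketch; `my_other_method` is hypothetical):

    # frozen_string_literal: true

    FOO = 'foo_:sub_me'
    SUB_ME = ':sub_me'

    def my_method(method_argument)
      FOO.sub(SUB_ME, method_argument)
    end

    # Hypothetical second method: both reuse the single retained FOO,
    # which is when the wider-scope constant pays for itself.
    def my_other_method(method_argument)
      FOO.sub(SUB_ME, method_argument.upcase)
    end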
(The reason this seems reasonable in an application is that the method defines the string, mutates it, and sends it along, which primarily works because I work on a small team. Ostensibly it should send a frozen string, though I rarely do that in practice because my rule is don't mutate a string outside the context in which it was defined, and that seems sensible enough.)
Am I mistaken, and/or is there another, perhaps more common pattern that I'm not thinking of that makes this desirable? Presumably I can just add `# frozen_string_literal: false` to my files if I want, so this isn't a complaint. I'm just curious to know the reasoning, since it's not obvious to me.