Comment by gitfan86

1 year ago

This fundamentally misunderstands what LLMs are. They are compression algorithms. They have been trained on millions of descriptions and pictures of beaches, and because much of that input includes palm trees, the LLM is very likely to generate a palm tree when asked to generate a picture of a beach. It is impossible to "fix" this without making the LLM bigger.
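A toy sketch of the point, purely illustrative (the caption counts are made up and this is nothing like how an image model actually works internally): a "model" that just samples outputs in proportion to their frequency in its training data will reproduce whatever bias that data contains.

```python
import random
from collections import Counter

# Assumed, made-up training distribution for the prompt "a picture of a beach".
training_counts = {
    "beach with palm trees": 80,
    "beach with rocky cliffs": 15,
    "beach with pine trees": 5,
}

def generate_beach_caption():
    # Sample a completion in proportion to how often it appeared in training.
    captions = list(training_counts)
    weights = list(training_counts.values())
    return random.choices(captions, weights=weights, k=1)[0]

# Roughly 80% of samples mention palm trees, simply because that is what the
# training distribution looks like; prompting does not change the weights.
print(Counter(generate_beach_caption() for _ in range(1000)))
```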

The solution to this problem is to not use this technology for things it cannot do. It is a mistake to try to push your political agenda through this tool unless you have somehow curated a propagandized training dataset.