The large language model dilemma

How can we use large language models to improve the quality of life for our species? Well, AI arrived somewhat later than predicted in, for example, the film 2001: A Space Odyssey; we expected to have HAL by the early 21st century, yet large language models have emerged only in recent years. Was it a sudden breakthrough? Not exactly. One must look at the underlying framework that enabled this development. How do you train such a model? The idea of large neural networks has been discussed since the 1980s. We understood the theory, but the technical realization took nearly forty years of sustained work.

As these models emerged, many were amazed by their ability to converse with us almost like a real human. Earlier chatbot systems existed and were used by some companies, but this new generation is fundamentally different in its capacity to understand context, generate coherent responses, and sustain meaningful dialogue. This is achieved through training on vast datasets and fine-tuning with reinforcement learning techniques to align the models with human intent and conversational patterns. Their conversational depth allows them to recall previous turns, adjust tone dynamically, and even mimic empathy, creating an illusion of shared understanding.

It is no surprise that so many fear these systems might take their jobs; underneath lies the naive worry that the machine is somehow more intelligent than they are. But that could be a trap. Human intellect is not directly comparable to machine performance. A calculator does arithmetic better than any human, but does that make it more intelligent? The comparison itself may be flawed.

The current debate concerns human agency. Of course, we feel we exercise agency in our desire to solve problems, as far as we can, by using these machines. But what about the concept of work, and the meaning of human effort? That is the harder question.

It becomes especially complex when it comes to art. In bureaucracy, most people would agree that AI can genuinely improve quality of life by making systems more efficient. But when it comes to creating a movie or writing a book, what is the point if a machine can do it? Where does that leave the human need for expression, originality, and meaning? AI can replicate tasks, mimic styles, and even produce content that feels human, but it cannot experience meaning. Human agency does not lie merely in solving problems efficiently; it lies in choosing which problems matter, and why. In bureaucracy, AI can be a powerful tool for optimization. But in art, storytelling, and creation, the point is not just the end product; it is the process of making meaning, of expressing what it is like to be human. A machine can generate a poem, but it does not need to write one. You do.

So it is now up to us to decide what kind of art humans will be satisfied with. I think the analogy of fast food works here: fast food is still food, but for someone who seeks a coherent and healthy diet, it does not suffice.