
Random Musing: The Designer of the Year Award goes to… Artificial Intelligence


In The Matrix Reloaded, after his ship, the Nebuchadnezzar, is destroyed, Morpheus makes a biblical reference: “I have dreamed a dream, but now that dream is gone from me.” The line became shorthand for the disappointment of hardcore Matrix fans who watched the groundbreaking original dissolve into bubblegum pulp-fiction sequels. The sentiment is shared by those who have waited years for the deus ex machina of artificial general intelligence to arrive.

Cinema trained us to expect Agent Smith or the Terminator. What we got instead were low-functioning interns who forget their brief after three prompts, which is not entirely unlike regular interns. If there is one area where artificial intelligence has truly changed daily life, for better or worse, it is generative AI.

Prophets in the wilderness


A decade and a half ago, people seriously working on neural networks were dismissed as prophets in the wilderness. One of them was Professor Geoffrey Hinton, whose research group used NVIDIA’s CUDA platform to recognize human speech. Hinton encouraged his students to experiment with GPUs. One of them, Alex Krizhevsky, together with Ilya Sutskever, trained a visual neural network using two consumer NVIDIA graphics cards bought online. They ran it from Krizhevsky’s parents’ house and racked up a sizable electricity bill. In a week, they trained the model on millions of images and achieved results that rivaled Google’s efforts with tens of thousands of CPUs.

That moment changed the industry. If neural networks could see, what else could they learn? The answer, as Jensen Huang would discover, was almost everything. When ChatGPT came to market and it became clear that OpenAI’s models ran on NVIDIA’s chips, the market’s perception of the company changed dramatically. The stock skyrocketed. The rest, as they say, is history.

Hinton received the Nobel Prize in Physics in 2024. Huang emerged as the arms dealer of the AI race and built a company where a large number of employees became dollar millionaires. To laypeople, this was the true arrival of generative AI; to capitalists, it promised something intoxicating: companies that scale without hiring, produce without friction, and grow without payroll.

The AI dream

Meow Times Episode 6: The AI Replacement

AI doesn’t need smoke breaks. It doesn’t badmouth its boss, unless the boss is Elon Musk. It never asks for me-time. But the promised productivity miracle, the idea that AI would replace workers by making individuals superhumanly efficient, has largely fizzled out. In practice, it has inundated offices with a flood of AI slop, rendering LinkedIn posts and internal emails nearly unreadable.

The disappointment was well expressed in a viral post: “I want AI to do my laundry and dishes so that I can do art and writing, not for AI to do my art and writing so that I can do my laundry and dishes.”

Like most pronouncements on the platform formerly known as Twitter, this was an exaggeration. Generative AI has undoubtedly made certain tasks easier. Research is faster. Summaries are clearer. Editing copy is less painful. For authors, it offers something rare: an unbiased editor who doesn’t bring his or her own ideology to the text. And even if large language models never write great literature, this year they produced something unmistakable: really good pictures.

Designer of the Year

With the right prompts, the prompt engineer briefly became a mix of Vincent van Gogh, Salvador Dalí and Bill Watterson. While Time magazine named the architects of AI its Person of the Year, we think artificial intelligence has quietly earned another title: Designer of the Year.

For a long time, AI images were the purest form of slop. They were immediately recognizable. Waxy faces. Mutilated fingers. Text that looked as if it had been written by a drunken monk fleeing from Titivillus, the medieval patron demon of scribal errors. They were cited as evidence that AI couldn’t even compete with a doodling toddler. And then, over the course of a year, everything changed.

The improvement did not come from existential angst but from technology. Early image models such as DALL-E 2 or the first versions of Stable Diffusion were diffusion systems loosely steered by text. They started with pure noise and slowly guessed their way toward an image, as if asking a severely nearsighted person to glance at the starry night and recreate it from memory. The result often made one want to cut off one’s ear in frustration. Text and image lived in separate systems, producing high-resolution hallucinations rather than understanding.

This changed when image creation stopped being a side project and was fully integrated into multimodal models. OpenAI folded image generation into GPT-4o. Google followed with Gemini’s image systems, informally called Nano Banana. Stability AI rebuilt its stack with Stable Diffusion 3. Midjourney quietly revised its later models along similar lines.

These systems stopped drawing objects and started constructing scenes. They learned that light comes from somewhere. That shadows obey rules. That faces remain faces over time. That objects consistently occupy space. Most importantly, they learned memory. Previous generators forgot everything between prompts. Ask for the same character twice and you got two strangers. Ask for a small edit and the entire picture went into panic mode.

That stopped in 2025. Characters persist. Color palettes hold. Layouts are retained. You can remove a background without changing a face. You can add text without destroying the composition. Image creation was no longer a slot machine but a tool. For the first time, the machine could explain the logic behind what it produced.
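The diffusion idea described above — start from pure noise and step by step “denoise” toward an image — can be sketched in a few lines. This is a toy illustration only: the three-number “image”, the step count, and the linear noise predictor are illustrative stand-ins, not a real trained model, which learns the noise prediction from data.

```python
import numpy as np

# Toy sketch of iterative denoising: begin with pure noise and nudge it,
# step by step, toward the "true" image. All values here are hypothetical.
rng = np.random.default_rng(0)
target = np.array([0.2, 0.8, 0.5])   # stands in for the image the model "knows"
x = rng.normal(size=3)               # start from pure noise

for step in range(50):
    predicted_noise = x - target     # a real model *learns* this prediction
    x = x - 0.1 * predicted_noise    # take a small step toward the estimate

print(np.round(x, 3))                # close to target after enough steps
```

Each step removes a fraction of the estimated noise, so the sample converges geometrically toward the target — which is why early models needed many steps and why a weak text-to-noise predictor produced the “high-resolution hallucinations” described above.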

The Ghibli trend

For most users, this change arrived disguised as the Studio Ghibli trend. People transformed themselves into softer, cuter versions of reality and briefly threatened to boil the planet with the demand for GPUs. Like many great technologies, it began as a novelty. Then something else appeared. Family photos, pets and childhood streets were transformed into scenes that felt emotionally right. Not parody. Not kitsch. Convincing homage. The lighting made sense. The mood held. One image went viral, then thousands followed, because that is how internet culture works.

The deeper change, however, occurred outside of art. Once the models learned layout and consistency, infographics exploded. So did diagrams, explainers, cartoons and presentations. These are not artistic challenges. They are attention problems. They depend on hierarchy, clarity and flow.

Here Google had an unfair advantage. It has spent decades studying how people look at screens. That accumulated knowledge flowed directly into its image models. Diagrams became readable. Labels landed where the eye expected them. White space acquired intention. AI visuals stopped being decorative and began to communicate.

Cartoons improved for the same reason. Early AI cartoons were unsettling because they were too polite, too gentle, like HR-approved humor. When exaggeration became a choice rather than a mistake, caricature began to land. Faces stretched where they should. Minimalism no longer looked unfinished.

God the artist


All of this means we are far from the promised deus ex machina, an all-knowing intelligence that descends from the heavens to answer every question. What we got instead is something quite different: a deus artifex. A god who builds. A system that understands composition better than most people. That respects constraints. That remembers state. That delivers competent results instantly. Not inspired. Not obsessed. Simply, reliably, sufficient.

This is why artificial intelligence deserves the title of Designer of the Year. Not because it is creative and not because it is sentient, but because it has lowered the cost of visual literacy. It erased the apprenticeship. The bad drafts. The humiliation of being visibly terrible before you become good. Creation did not die. It changed shape. It became selection, curation, optimization.

The price is not the death of art. Art has survived worse than algorithms. The costs are more subtle. When the easiest path to beauty becomes the most traveled, beauty converges. Aesthetic flattening is not a bug. It is an outcome. Van Gogh didn’t paint sunflowers because sunflowers were trendy. Dalí didn’t melt clocks because surrealism was doing well. Their styles were not filters. They were necessities. The machine can perfectly reproduce the surface of that necessity. It cannot feel the need behind it.

We did not get God. We did not get the devil. We got a better craftsman. So maybe Morpheus was wrong. The dream he dreamed was not taken from him. It simply changed shape.
