Losing our taste
This week, as everyone was trying out the latest OpenAI image generation tools and flooding their social media with them, it occurred to me: are we losing our taste?
I don’t necessarily mean that AI-generated images are taste-less, just that the creator isn’t really engaging with the medium anymore. You ask for a picture of your dog in a meme and you get it! But is that exactly what you wanted? Are the details how you wanted them? Or are you just accepting what the machine is giving you?
The same goes for “vibe coding”: the AI is building a new app for you, but are you still exercising your taste for how the code should be structured, or what APIs to use?
We get some output, but are we really exercising our intention, our discretion, our will, our human creativity, our taste in creating that output? Or is it becoming more a case of “monkey push button; machine goes whrrrrrr”?
Corollary 1: How will we retain and develop our taste?
Maybe you’re still using your years of experience to say “no, I don’t want to use React for this project” or “I’d prefer to break this out to a new class”. Maybe you are. You probably are. But will you continue to?
Will you continue to exercise your critical judgement or will you slowly let that “taste” atrophy and just accept the output of the machine? Because exercising your will to get exactly what you want, in the way you wanted it built, is exhausting, and that tantalisingly easy path is going to become more and more tempting.
One thing we know about automation is that it can reduce attention and cause our skills to atrophy. It’s been observed in self-driving cars, aviation, and factory automation: when the mental load is reduced, cognitive engagement drops with it. That’s all well and good while the automation keeps working, but if the human needs to intervene they now lack the skills and attention required to do so.
Furthermore, how does anyone gain the required skills in the first place if the automation is doing all the work? The pain of learning a new skill, over thousands and thousands of hours, is what gives people the ability to step in when automation fails.
So, more concretely, how does a junior developer “vibe coding” their first app develop their understanding and appreciation (“taste”) for software architecture, coding standards, and principles?
Corollary 2: Does it even matter?
There’s also the argument that perhaps all the knowledge we’ve gained doesn’t really matter any more because the rules have been changed.
Why DRY-up your code if it doesn’t have to fit in the mind of a single developer? Why focus on readability if no humans will ever read this code? Why focus on security or maintenance if this is only destined to be used for a short period of time? Hell, why bother with maintenance at all if re-writing an app from scratch isn’t much more work than trying to convince an AI to upgrade it?
I don’t know. It’s interesting to question our assumptions built up over years of experience and whether they still hold true.
Don’t Repeat Yourself is a good practice for humans because it means we have a single place to make a change to a system’s behaviour. Does it still make sense when AI-enabled code editors can do smart find & replace easily? Is it better to repeat ourselves so the AI has more context and we don’t have the mental overhead of wondering where that variable or configuration is declared?
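To make that trade-off concrete, here’s a minimal, hypothetical sketch (the function names and the retry limit below are invented for illustration): in the DRY version a behaviour change is a single edit, while in the repeated version every call site carries the value itself.

```typescript
// DRY version: the retry limit lives in one place, so a human changing the
// system's behaviour only has to edit one line.
const MAX_RETRIES = 3;

function fetchReport(url: string) {
  return fetchWithRetries(url, MAX_RETRIES);
}

function fetchInvoice(url: string) {
  return fetchWithRetries(url, MAX_RETRIES);
}

// Repeated version: each call site spells out its own limit. More context is
// visible at every site (and an AI-enabled editor can update all of them with
// a smart find & replace), but a human has to remember every place it appears.
function fetchReportRepeated(url: string) {
  return fetchWithRetries(url, 3);
}

function fetchInvoiceRepeated(url: string) {
  return fetchWithRetries(url, 3);
}

// Self-contained helper so the sketch runs on its own; not a real library call.
async function fetchWithRetries(url: string, retries: number): Promise<Response> {
  let lastError: unknown;
  for (let attempt = 0; attempt < retries; attempt++) {
    try {
      return await fetch(url);
    } catch (err) {
      lastError = err;
    }
  }
  throw lastError;
}
```

Neither answer is obviously right any more; the question is whose overhead you’re optimising for.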
AI-generated code may be inefficient because it lacks a true understanding of the system, but does that matter? My spicy take is that, outside of specific niches like developers, users are rarely willing to pay for performance improvements if they come at an increased cost (whether monetary, or an opportunity cost in the guise of reduced features).
Likewise, users may prefer a human to answer their support requests but aren’t willing to pay for such support, especially the cost of staffing a team 24x7 at a capacity that will answer queries in a reasonable time. They may not love AI bots, but they may prefer a cheaper service with 24x7 support and instantaneous answers that are mostly right.
I can’t help feeling like much of what we’ve learned about our craft does still matter, but we need to keep an open mind about the assumptions we’re blindly making. AI isn’t even particularly good right now: it’s frustrating to work with, with frequent mistakes and poor reasoning. And whilst it will get better, I don’t think it’s going to get expert-human-level better, and that means humans will still need to step in. We will still be in the loop, so many of our practices and skills are still going to be valuable, though harder to come by and maintain.
Corollary 3: Easy things have zero value
You can now vibe code a new course platform in a weekend but that doesn’t mean it has any value. At least, nothing like the same value that course platforms built with hundreds of person-years of effort have had over the past decade.
Because hard things are valuable but easy things aren’t. I will not be moved from the viewpoint that all good and worthwhile things are hard and take time.
If there were a pill you could take today for €99 that would turn you into a national-level competitive triathlete, with no training, then the value of being a national-level triathlete drops to roughly zero. When everyone can do it, there’s no prestige or value in doing it.
I received a 3D printer for Christmas and it’s interesting how quickly the value of anything made of plastic has basically dropped to zero. When I need a small gift to mark an occasion, I can easily print something. Since they’re not yet commonplace, I can get away with printing a very nice flower pot in a green metallic filament for my mother on Mother’s Day. But make no mistake: the value is not in the plastic pot, or in my ability to print it. It’s with the designer and the printer manufacturer.
When an AI vendor proudly promotes that a new task can be accomplished with AI, without requiring years of experience, knowledge and training, then our response should not be “that’s amazing!”. It should be “that task no longer has value”. They have not created opportunity, they’ve destroyed and captured value.
With the release last week of OpenAI’s GPT-4o image generator, the value of Studio Ghibli-style family portraits has fallen to zero. Perhaps even less than zero, because they became passé so quickly that no one wants to be associated with them. They were a thing for about 36 hours.
This is how it goes time and time again: AI gains a new capability, and for a brief moment you can exercise some temporal arbitrage by receiving the high value attributed to that creation in the past in exchange for the near-zero effort now required. And then the market corrects itself, supply skyrockets, and the demand and value drop to zero.
We’ve seen this pattern repeatedly: the AI tools move in, make a hard task incredibly easy, which destroys almost all the value of the work, and then capture 90% of whatever value remains. Vibe coding a SaaS app in a weekend doesn’t make that app valuable; it makes the AI tools that created it valuable.
This is large-scale value capture across numerous industries, destroying value and concentrating what’s left into a small number of AI companies.
Corollary 4: Seek out new value
So, how do you compete with vibe-coded apps? Or AI-written books? Or auto-summarised news feeds? Or customised AI-chat bots loaded with a creator’s content? Or meme images in any style you want?
That’s the main question we should all be asking. When the AI-Cthulhu comes for your work, where do you go to survive and earn a living? You have to seek out the things it can’t do and the things it won’t do.
Think about all the reasons you aren’t going to pay for a vibe-coded app… it probably has more bugs, it’s probably insecure, no person actually understands the code, it likely has a very short shelf life and poor maintenance, etc. So build an app which is crafted with care and attention, and knowledge, and a deep, deep appreciation and understanding of the user. Care more, not less. You can still use AI to help build it, just don’t depend on it.
As AI-generated images flood our media, leaning away from them is where the value is: creating your own paintings, your own drawings, your own unique photos. Stock photos had already had this effect of de-valuing many images: there isn’t really much room in the world for more photos of Big Ben, or the Cliffs of Moher, or any other well known landmark. Unless you could capture a unique event, convey an emotion in a unique way, or make it your own by including real people in the image. A stock photo of a beach in Mallorca is worthless; a family selfie from your holiday together is priceless. Or, even better, eliminate images altogether and lean heavily into other visual experiences like typography and animation.
These days, or very soon, anyone can build a customised AI-chat bot trained on their social media, blog posts, videos etc. For just $10/mo, you’ll be able to chat to an expert! What was previously hundreds or thousands of Euros in value has been reduced to pennies. In the new world, a more valuable proposition is an actual hour of that expert’s brain engaged on your problem. When everyone has an AI-chat bot, not having one makes you valuable. Or prioritise in-person experiences. Or build a brand around you, your personality, and your uniqueness.
I’m not an investor but I think the areas best poised for the future would be people (and relationships), places, and physical goods. Things like high-quality user-focused apps, anything built around a lasting relationship, enterprise sales, travel, in-person events, live training, personal branding, housing, construction, mechanical maintenance, crafts like wood or ceramics… and AI (look, it’s either run away from Cthulhu or embrace him).
It’s not so much “skate to where the puck is going” as “skate to anywhere Cthulhu isn’t”.