“I prompt, therefore I am.”
Working with code in an age of omnipresent Artificial Intelligence raises some serious questions for many of us. Do we still need to think for ourselves? Do we still need to build? What is the worth of “Human Intelligence” (Tim Rodenbröker) in a technology-driven world?
A lot of the current debate is dominated by fear. Did we let a genie out of the bottle that we can no longer control? Did we open Pandora’s box? It seems we woke up in a helpless state of shock, like Goethe’s sorcerer’s apprentice, and all we have left to say is: “The spirits that I summoned / I now cannot rid myself of again.”
Machines making their own decisions. Large language models generating ideas, text, even images that are increasingly indistinguishable from reality.
Like so many times before with human invention, we’ve built something that enables us to do unimaginable things while threatening to render us useless at the same time.
So, what can we do about it?
When paralyzed by fear, it is always a good idea to take a step back and change perspective. “Fear is the mind-killer” (Frank Herbert).
Sigmund Freud described human beings as prosthetic gods (“Prothesengott”). To his mind, technology augments our bodies and unlocks godlike possibilities. Fire was one of the first tools that set us on the path of controlling our environment. Is AI the logical next step in this development? Have we developed something that can finally augment our minds?
Of course we cannot ignore the risks of working with AI. As in any well-written magic system, everything with power has its price: the waste of resources, the problematic training on copyrighted work, the uncertainty about the worth of original human labor and thought in the future.
How can we navigate this minefield and find responsible and productive ways of collaborating with Artificial Intelligence?
Of one thing, at least, I am certain: we still need to learn the craft ourselves. And for a very simple reason: we need to be able to ask the right questions.
Even in 2026 there is no way around doing the work and learning the basics. How could you instruct someone to build a house for you if you have no concept of plumbing, electricity and structural engineering? This is one of the things I learned quickly: AI may seem powerful to absolute beginners, but its real potential unfolds in the hands of experts.
Here is a typical situation: a student generates, or rather hallucinates, a piece of code. It kind of works. Great! Are we done yet? Well, no.
Actually, we would like to adapt it a little. Tweak it in a new direction. Make it make new things. The further we go along this path, the more it dawns on us that what’s really missing is a basic understanding of how code works, what it can do and how it needs to be structured to do those things. What we want as creatives is something we have control over, something we can develop further, customize, rethink, repurpose and bend to our own will.
We want to be able to keep systems at human scale. Think of a blacksmith working on a piece of red-hot metal. In his mind there is a clear image of the shape he wants to bend it into. Through experience, hard work, training and a deep understanding of his material, he has gained the imagination that enables him to transform a simple piece of metal into the most elaborate tool. How can you expect to become a great painter if you never learned how to sketch?
The better you understand your material, code in our case, the more freedom you gain. In other words: with experience, you learn to ask better questions. Which is what prompting really boils down to.
But what is so interesting about code in the first place? In a sense, it is a bit like magic. As Arthur C. Clarke put it, any sufficiently advanced technology is indistinguishable from magic.
To me it really is a basic structural language. It can describe the world and holds the potential to manipulate and reshape it to our will. It grants us access to the underlying technologies of our modern environment. Computers, machines, even our daily infrastructure. It forces us into new modes of thinking. It teaches us to think in conditions, logic and systems.
It gives us literacy over how things work.
What can this look like in practice? My main driver is curiosity. It helps me keep a positive, constructive view on the world. The side effect is empathy towards things, ideas, people, technologies. It helps me see potential and opportunities, without ignoring certain dangers.
First of all, we need to get a clear idea of what AI is, and what it is not. There are a lot of misconceptions floating around.
In simple terms: it is very good at pattern recognition. It is a tool trained on large sets of data, and it gives probable answers. Answers that are statistically likely to please us, the ones asking. It predicts patterns from data.
It does not think. It does not understand. It does not have a concept of reality. It never made any sensory experience of the real world that would help it understand it the way we do.
It is simply good at analyzing and imitating language. Sometimes so convincingly that we are deceived into assuming a living counterpart. This, of course, is rooted in a long tradition of humans anthropomorphizing inanimate objects.
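As a toy illustration of what “predicting patterns from data” means: below is a minimal bigram model that counts which word follows which in its training text, then always predicts the most frequent successor. Real language models are vastly more sophisticated, but the underlying principle of statistical prediction is the same. The function names and the training sentence are my own invention for this sketch.

```javascript
// Toy "language model": count which word follows which word,
// then predict by picking the most frequent successor.
function train(text) {
  const counts = {};
  const words = text.toLowerCase().split(/\s+/);
  for (let i = 0; i < words.length - 1; i++) {
    const cur = words[i];
    const next = words[i + 1];
    counts[cur] = counts[cur] || {};
    counts[cur][next] = (counts[cur][next] || 0) + 1;
  }
  return counts;
}

function predict(counts, word) {
  const successors = counts[word.toLowerCase()] || {};
  let best = null;
  for (const [next, n] of Object.entries(successors)) {
    if (best === null || n > successors[best]) best = next;
  }
  return best; // null if the word was never seen in training
}

const model = train("the cat sat on the mat the cat ran");
console.log(predict(model, "the")); // "cat" (seen twice, vs. "mat" once)
```

The model has no idea what a cat is. It has only ever seen which words tend to follow other words, and that is exactly the kind of statistical mimicry described above, scaled down to a few lines.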
It is safe to assume that it was trained on giant amounts of text, data and code, written with the sweat and tears of real humans. The way this was done may well be one of the biggest thefts in human history, and finding ethical ways of achieving it is a task we cannot simply skip. But still: this is an incredibly powerful condensation of human knowledge. This is a real opportunity.
After all, code is language, and generative AI is particularly good at imitating it. Over time, I saw three modes of collaboration evolve in my work.
Mode 1:
Think for yourself first, then ask for help with the implementation.
Start by sketching out the logic yourself. What is the thing you want to achieve? What could it look and feel like? What are the basic puzzle pieces you would need for this to work? Where is room for experimentation and play? What is unclear, and where do you really need help?
A simple task for students can look like this: write down what you want your code to achieve in your own words. Come up with a system of instructions that leads to the desired output. Make sketches with pen and paper. Forcing yourself to make it concrete confronts you with a variety of things you have to think about and decide on.
Then, as a last step, work with the AI. Use it as a translator to transfer your own words into code syntax. Use it for debugging and cleaning up dead ends. This is a very focused way of collaboration that leaves the creative part up to you.
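To make Mode 1 concrete, here is a hypothetical miniature example: the plain-language sketch comes first, and only then is it translated into code syntax. The drawing idea and the function name are my own illustration, not a prescribed exercise.

```javascript
// Plain-language sketch, written before touching the AI:
// "Draw ten circles in a row. Each circle should be a bit
//  larger than the last, so the row grows from left to right."

// Translation into code syntax (what you might ask an AI to
// produce, and then verify line by line):
function circleRow(count, startSize, growth) {
  const circles = [];
  for (let i = 0; i < count; i++) {
    circles.push({
      x: i * 40,                     // fixed horizontal spacing
      size: startSize + i * growth,  // each circle grows a little
    });
  }
  return circles;
}

const row = circleRow(10, 5, 3);
console.log(row[0].size); // 5
console.log(row[9].size); // 32
```

Because the logic was yours from the start, you can check every line of the generated code against your own sketch, and the creative decisions (spacing, growth, count) stay in your hands.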
Mode 2:
Build a catalog of building bricks, then start combining.
Again: Don’t skip the basics! Use AI to internalize basic concepts instead. Build small reusable prototypes that you can fully understand and control. Deconstruct the code until you understand every bit of it. Use AI as a collaborative analyst that helps you identify how things work and interact. Rework generated code into your own vocabulary. Analyze work by others and abstract useful parts into your own atoms.
Get a grip on loops, functions, conditions, arrays and structure. After a while you’ll be able to combine these bricks into larger puzzle pieces.
They will become your vocabulary, enabling you to articulate sentences, trains of thought, even essays or full-blown books.
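A building brick in this sense can be as small as a single well-understood function. Here is a hypothetical sketch of what such atoms might look like, and how one brick combines with itself to form a larger piece (the names `range` and `grid` are my own, not part of any library):

```javascript
// Brick 1: build an array [0, 1, ..., n-1] to loop over.
function range(n) {
  return Array.from({ length: n }, (_, i) => i);
}

// Brick 2: combine the range brick with itself
// to lay out a grid of points.
function grid(cols, rows, cellSize) {
  return range(cols).flatMap((x) =>
    range(rows).map((y) => ({ x: x * cellSize, y: y * cellSize }))
  );
}

const points = grid(3, 2, 10);
console.log(points.length); // 6 points, one per grid cell
```

Each brick does exactly one thing you fully control; the `grid` piece is already a small sentence built from the `range` word, and it can in turn become a brick inside something larger.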
Mode 3:
Push the comfort zone
Sometimes I find myself pushing forward so hard that I fear getting lost. How far can I still go? Where are the limits? What kinds of things happen that seemed impossible before? When the point of frustration and loss of control is reached, take a step back.
Go back to systematic observation. Isolate what works and what doesn’t. If you find something new that works but you have no idea why, study it. Break it down to its core until you have a new building block for your collection. Take back control, start reorganizing, and build something new that you can control, now with the added insight of your previous exploration.
All these modes are mixed, looped and iterated upon over and over again. They will form a working mode that enables you to work in systems on a human scale.
Practical note
Since I document most of my work in markdown with Obsidian, using AI to write short, minimal documentation on the side is really useful. You might be working on a bigger project and discover something small that you want to reuse. Embedded AI can help you isolate these pieces and document them for you. Of course you should never skip re-reading, re-phrasing and simplifying the output to make sure it’s actually usable for you.
What do we learn from all this? For me it boils down to how we approach our collaboration with AI. In simple words, treat it as a learning opportunity!
AI can help us access code and internalize it as knowledge. It can teach us to think and work systematically. Relying on our original thoughts, emotion and intention, it can become a powerful tool to amplify our voice. It can help us find new paths, take on new perspectives and do things that seemed unimaginable before.
Push your limits, experiment, challenge yourself. Stay sceptical, stay in control.
Work modularly, isolate problems, document and internalize. Build a sketchbook of prototypes, building bricks and ideas. Build up a vocabulary and use it to express yourself freely.
Change perspectives: be a designer, be a coder, be an artist. Learn to ask better questions!
This text wasn’t written with AI.
It made some good suggestions on structure, but the tone and wording came out bland and meaningless.
As a general rule of thumb: it’s fine to make AI part of your process, but it should never dictate the final outcome. Original human thought and voice are what capture our imagination. After all, we respond to emotion rooted in real human experience.