M.V. Baks

Alter modernism - where technology meets nature

You Don’t Need to Understand AI — You Need to Use It

25-03-2026 11:00

You see the potential of AI agents, but getting started feels overwhelming.


AI tools promise a lot. They are presented as ways to speed up your work, bring structure to chaos, and take over repetitive tasks so you can focus on what actually matters. And in a way, that promise is real. The first time you use them, you feel it immediately. An answer arrives faster than expected, an idea unfolds while you are still typing. There is something there that pulls you in.


But after those first moments, something begins to shift.


You notice that it does not always work the same way. What worked yesterday feels different today. You adjust your prompt, try another tool, search for a better way to get the same result. Slowly, the experience moves from discovery to hesitation. Not in a dramatic way, but just enough to hold you back. It becomes something you return to occasionally, but not something you truly rely on.


What sits underneath that is rarely the technology itself. In most cases, the tools work just fine. The problem is in how they are used and how they are set up. Without a clear structure, everything remains fragmented. There is no flow connecting one action to the next, no system that creates consistency. As a result, everything depends on the moment, on how well a prompt is written, on how much attention you give it. There is no real foundation.


As soon as you try to use it more seriously, the questions begin to change. It is no longer just about what it can do, but whether you can trust it. You start wondering what happens to your data, how safe your setup is, and whether the output is reliable when it actually matters. These are not questions you solve with a better prompt. They require an understanding of how the system itself is built.


There is also another layer. Not everyone is technical, and not everyone wants to be. Concepts like APIs, integrations, and security quickly become barriers. Yet these are exactly the elements that determine whether something is stable and safe to use. Without that foundation, it becomes difficult to build something you can confidently integrate into your daily work.


And that is where it often goes wrong. Not because people do not understand it, but because they have never seen a clear, working system. The technology is there, but the way it is set up prevents it from reaching its full potential. AI remains something you experiment with, instead of something that actually works for you.


In the end, the difference is not in what the tools can do, but in how they come together. Without structure, everything stays scattered. With the right setup, something shifts. AI stops being a tool you occasionally use and becomes an extension of your work — something that does not just respond, but actively supports you.

There comes a moment when you realize it is not about another tool.


Not about a better prompt.

Not about the latest AI that just launched.


But something else.


What is missing is not more tools, but a coherent way of applying them. Without that cohesion, you lose both overview and control.

AI is often used as if it were a separate assistant. Something you open when you need it, ask a question, and then close again. But in that way, it remains reactive. It waits for you, instead of moving with you. You are chatting, not building your own practice.


And that is where the difference lies.


The power of an AI agent is not in the answer it gives, but in the place it takes within what you do. When that place is missing, it remains a loose interaction. Something that helps in the moment, but does not build anything. There is no continuity, no repetition, no depth. Only isolated moments of use.


But when that begins to shift — when an agent does not just respond, but takes on a role — the experience changes. It becomes something that returns. Something that moves with your work, instead of something you have to reach for every time.


That point is rarely reached, not because it is complex, but because it is seldom made visible. It often stays stuck in tools, in possibilities, in everything that could be, without showing how it comes together into something that actually works.

And because of that, it remains diffuse.


You feel the potential is there, but it keeps slipping away. As if you almost have hold of something, but it never fully takes shape.


And yet the moment it does come together is surprisingly quiet.


Not bigger. Not more complex.

But clearer.


You notice things start to flow without you having to constantly manage them. That a form begins to emerge that you can trust. Not because everything is perfect, but because it is set up in a way that makes sense.


And that is where the role of AI shifts.


It is no longer something you experiment with.

It becomes part of how you work.


And in that moment, space opens up.


Space to spend less time managing tools,

and more time on what you actually wanted to do in the first place.

An AI agent is, at its core, not much more than a piece of software that can act on your behalf.


It does not just respond to a question, but operates within a small defined context. It remembers, retrieves, processes, and returns — not as a one-off interaction, but as part of something ongoing. That is what makes the difference. It is not the intelligence alone, but the continuity.


Instead of asking something and starting over every time, an agent carries a thread. It becomes part of your workflow, not just a tool you visit.
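That thread can be made concrete with a small sketch. The class below is purely illustrative, not any product's API: `Agent`, `handle`, and the string-based reply are all hypothetical stand-ins, and a real system would call a language model where the comment says so. The point is only the shape of it: remember, retrieve, process, return, with memory that persists between calls.

```python
# A minimal sketch of agent "continuity": memory that persists
# across interactions instead of resetting every time.
# Everything here is illustrative; `handle` stands in for a real
# model call in an actual system.

class Agent:
    def __init__(self, role: str):
        self.role = role              # the place the agent takes in your work
        self.memory: list[str] = []   # persists across interactions

    def handle(self, message: str) -> str:
        # Remember: store the incoming message in the ongoing thread.
        self.memory.append(f"user: {message}")
        # Retrieve: bring recent context back into view.
        context = "\n".join(self.memory[-5:])
        # Process: a real system would pass `context` to a model here.
        reply = f"[{self.role}] seen {len(self.memory)} messages; latest: {message}"
        # Return: the reply itself becomes part of the thread.
        self.memory.append(f"agent: {reply}")
        return reply

agent = Agent(role="notes assistant")
agent.handle("Summarise yesterday's meeting")
second = agent.handle("And add the action points")
# The second call still carries the first one in its memory.
```

A chat window without memory restarts this loop from zero on every question; the only structural difference an agent adds is that `self.memory` survives between calls.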

The name “Lobster” is not accidental.


It comes from the idea of long-living, persistent systems — something that does not disappear after a single interaction, but remains, grows, and adapts over time. The reference can be traced back to speculative futures described in Accelerando by Charles Stross, where autonomous agents evolve alongside human intention, as well as conversations around long-term thinking and system design in the Moonshot Podcast.

I know myself well enough to admit I am an early adopter.


I get curious about new technology. Not because it is new, but because it opens something. A different way of working, a different way of looking at what is possible. That is how I started with AI agents, somewhere around mid-2024. At first cautiously, searching. Trying a lot, leaving things behind, picking them up again.


Until the moment it started to click.


With the arrival of OpenClaw, something fell into place. Not because it suddenly became magic, but because the separate pieces came together. What once felt fragmented gained structure. What used to take time started to give time back — the moment everything shifted for me.


We have seen this before. When computers first entered the workplace, the way we worked changed fundamentally. Not immediately visible to everyone, but inevitable for those in the middle of it. What started as a tool slowly became the standard. And before you realized it, it was no longer a choice, but a requirement.


With AI, something similar is happening now, only on a larger scale. We have not yet seen the ceiling of what it can become. This is bigger than the arrival of the internet.


What now still feels like an experiment, something you occasionally look at, will gradually shift into something that becomes woven into everything you do. Not through a sudden change, but by slowly embedding itself into your processes. First small, almost unnoticeable. Until one day it becomes so natural that you start wondering how you ever worked without it.


It is not a loud, explicit decision, but a quiet one. The choice to move with it while it is still taking shape, or to wait until everything is fully defined. But waiting in moments like this is rarely neutral. The world keeps moving, whether you do or not.

Maybe you recognize it.


That feeling that something is there, just out of reach. Not because you do not understand it, but because it has not yet taken shape. Because it stays somewhere between observing and doing.


And that is exactly where this workshop begins.


Not as an explanation of what AI is, but as a moment where it starts working for you. Where something that felt scattered and intangible comes together into something that makes sense. Something you do not just see, but experience as you build it.


In those three hours, something fundamental happens. You are no longer working with isolated tools or disconnected prompts, but begin to see how it comes together. How an AI agent does not just respond, but takes a place within what you do. Not as an experiment, but as part of your practice.


The process is calm. Without rush, without noise. Because there needs to be space to understand what you are doing, and why it works. Not to master everything, but to make it your own.


And maybe that is where the shift happens.


Not in what is built, but in the trust that emerges. That you are no longer dependent on what you happen to come across, but that you have something you can continue to use, shape, and grow over time.


It is not an endpoint.


It is a beginning that makes sense.



If you feel that this is your moment, now is the time to take that step. Seats are limited, and once they are filled, they are gone.


👉 Sign up