Perspective
The shift.
Every previous technology wave reorganised physical work. The factory. The assembly line. The warehouse. Each one changed who did what kind of labour and how organisations structured themselves around it.
This one reorganises cognitive work — the thinking, decision-making, and knowledge that organisations are built on. Intelligence used to be locked inside people. If you needed legal reasoning, you hired a lawyer. If you needed analysis, you hired an analyst. Organising intelligence was the entire point of firms, hierarchies, and job titles.
Once you can encode cognitive tasks as modular units, that architecture starts to dissolve.
Most of what is being sold as AI transformation does not engage with this. It makes existing work faster. It does not make it different. The marketing team that uses AI to write drafts still has the same process. The consulting firm that uses AI for research still bills the same way. The operations team that automates a workflow still depends on someone to maintain the automation.
Speed is valuable. But speed is not structural change.
Structural change is when the work itself is different. When roles look different. When an organisation operates in a way that was not possible before. That is what the shift actually looks like, and it starts with one question.
The interface.
Between every person and every AI system there is an interface. That interface determines what the person needs to know to use the system effectively.
If the interface requires you to build workflows, write prompts, configure data sources, or understand system architecture, it has failed. You are not using AI. You are operating a system. And when that system breaks, you cannot fix it.
The right interface requires only your expertise. You describe what you need using the language of your domain. The system handles everything else.
A lawyer should be able to say 'review this contract against our standard terms and flag anything non-standard.' Not configure a RAG pipeline, select an embedding model, and write a system prompt. A consultant should be able to say 'generate the assessment report from my site visit notes in our format.' Not build a template in n8n with conditional logic and API connections.
That is the line. When you cross it, the person is no longer doing their job. They are doing someone else's.
Two failure modes.
Most companies get their response to AI wrong. Two patterns repeat everywhere.
The parallel tool. AI deployed alongside existing workflows. People use it sometimes. The incentives, processes, and measurement systems still point at the old way. The tool becomes shelfware.
The optimised process. AI makes an existing process faster. Real gains, but bounded. The underlying structure — who decides what, how the organisation learns — hasn't changed.
Both ask the wrong question. The first asks 'can people use this tool?' The second asks 'how do we do this faster?' Neither asks: should this work exist at all, and if so, who should be doing it?
Most knowledge work has become artifact production. Status updates, slide decks, strategy documents, meeting notes, quarterly roadmaps. AI can now produce all of it in seconds, which forces an uncomfortable question: if a machine can write it and a machine can read it, why does it exist? The 10x knowledge worker is not the one who produces ten times more output with AI. It is the one who eliminates most of the output entirely and focuses on the work that actually creates value.
Friction displacement.
Friction displacement is when AI adoption moves the bottleneck rather than dissolving it. The dependency changes shape, but it does not go away.
The marketing person who used to wait for a data analyst now waits for someone to debug the automation. The consultant whose methodology was locked in their head now has it locked in a system they cannot modify.
You set up a custom GPT that worked well for a month. Then the model updated and the outputs changed. You do not know why and you do not know how to fix it. The time you saved in month one, you are spending on debugging in month two. The friction moved.
Friction displacement happens whenever the interface between you and the AI requires knowledge you should not need. It is the central failure mode of AI adoption today.
Where we are now.
Workflow automations, RAG systems, copilot configurations, custom GPTs. These are real tools solving real problems. They represent where the technology is right now. They work. But they encode today's level of abstraction, and that level is rising fast.
The workflow you built in n8n today might be a single prompt in a tool that does not exist yet. The custom GPT you configured might be replaced by a system that does not need configuration.
The question is not whether to use these tools. Use them. The question is whether the skills you are building while using them will still matter in two years.
There is a simple test. Does this tool require you to think like a builder, or like a domain expert? Tools where you configure triggers, nodes, and connections require builder thinking. Tools where you describe what you need and the system handles implementation require domain thinking. That is the line.
What actually changes.
Follow one person through the shift.
Every role is a bundle of cognitive tasks stitched together by historical accident. AI unbundles them. Execution shifts to the machine. What remains for humans is judgment, direction, and orchestration.
Now imagine everyone makes this shift.
Not one person with a copilot. Everyone, working differently, simultaneously. Three things happen.
Coordination layers dissolve. Organisations have layers because information needs to be routed, synthesised, and checked at each level. A junior analyst gathers data and passes it to a senior analyst who interprets it and passes it to a manager who makes a decision. Three roles that exist partly because of information flow. When AI handles the gathering and the first-pass interpretation, the senior analyst reviews exceptions and the manager makes decisions on better information, faster. The layer doesn't disappear because anyone was fired. It disappears because the work that justified it was absorbed.
Silos become permeable. Departments exist partly because different functions require different specialised knowledge. AI makes that knowledge more accessible. The marketing team that needed to request data analysis from the analytics team can do basic analysis themselves. The engineering team that needed a technical writer can generate first-draft documentation. The boundaries between functions become more permeable because AI handles the translation between domains.
The structure evolves continuously. Most companies that adopt AI add it to the existing structure. Same departments, same reporting lines, same meetings, just with AI tools. That misses the point. If AI changes what each person does and how information flows between people, the structure itself should change. Not as a one-time reorganisation but as a continuous evolution. The organisation that gets this right does not just move faster. It compounds. The systems get smarter the longer they run because the infrastructure captures what the organisation learns.
The hard part.
Most people's professional identity is built around their execution skills. When that skill becomes commoditised, the shift isn't just a capability change — it's an identity renegotiation. Organisations that ignore this will face resistance that looks like stubbornness but is actually grief.
The question is not whether AI creates capacity. It will. The question is whether the organisation has decided what to do with it before it arrives.
Five skills that translate.
Everything above describes what is changing. Here is what translates. These are the skills that will matter regardless of which tools win. They are domain skills, not technical skills. They compound over time. They do not depend on any specific tool. And they are what separate people who genuinely work with AI from people who have AI tools installed.
01 — Specification
Describing what needs to happen precisely enough that a system produces something good. Not 'write me a report' but 'write a three-page assessment in our standard format, using these site visit notes, referencing ISO 9001 where relevant, with recommendations prioritised by implementation cost.'
02 — Evaluation
Telling whether the output is actually good. Catching what looks right but is not. The AI draft reads well. But does it accurately reflect what you observed on site? Are the regulatory references current? Would you put your name on it?
03 — Taste
When everyone can generate, what matters is knowing which draft is actually good. Twelve AI-generated versions of the client email; only one sounds like your firm. Knowing which one, and why, is taste.
04 — Second-order judgment
Not 'is this good?' but 'am I asking the right question?' The AI answered your question perfectly. But you asked the wrong question. Recognising that requires understanding the problem, not the tool.
05 — Orchestration
Coordinating what the system does and what humans do. The AI handles research and first drafts. You handle client relationships and final judgment. The junior analyst handles data validation. Knowing who does what, and where the handoffs are, is orchestration.
The honest line.
The quality of judgment about whether to build, buy, or outsource — that's the core of the work. It's also the hardest to teach. Most teams don't have anyone who thinks clearly about that tradeoff.
So they hire a consultant. The consultant comes in, builds something, and leaves. The team learns nothing. The consultant becomes the dependency. The cycle repeats.
Or they try to build it themselves. They hire engineers. Engineers build what they think makes sense, not what the business needs. Months later, they have a system that works but that nobody understands or can operate.
Or they buy a platform. The platform doesn't fit. They hire someone to configure it. The configurator becomes a specialist. The specialist becomes a bottleneck.
The only way to break the cycle is to build that muscle internally. To invest in the thinking, not just the doing. It's not fast. But it's the move that compounds. That is what our workshop is designed to start.
The real question.
Are you adding AI to the business you have, or asking what business you'd build if you were starting today?
The first — AI-Enhanced — adds features to existing workflows. Real gains, bounded ones. You move faster within the current structure. You optimise what exists.
The second — AI-First — forces you to rethink the product, the team structure, the economics. What roles exist? What does the org chart look like? What can you offer that was not possible before? It is harder. It is also where the structural advantage comes from.
The gap between those two grows every month. AI-First companies are being built right now, asking exactly this question. They do not have legacy structure to work around. They start from the cognitive task and build up.
The question is not whether you should be AI-First. Some organisations should not be. The question is whether you have made that choice deliberately, or whether you are AI-Enhanced by default because nobody asked the harder question.
What we believe.
AI that works is AI you can run yourself.
Your domain knowledge is the durable asset. The tools change. The vendors change. The cost structure changes. The regulatory environment changes. The only thing that stays is your understanding of your problem and your ability to evolve the capability as the landscape shifts.
We don't believe in selling you solutions. We believe in teaching you to build solutions.
We don't believe in creating dependency. We believe in building independence.
We don't believe in the tool as the product. We believe in the thinking as the product.
We don't believe that AI is a layer you add on top of work. We believe it's a fundamental way of reorganising work. Not more efficiency with the same structure. Different structure with different roles and different judgment points.
We don't believe that this works for every organisation. Some problems benefit from outsourcing. Some are too small to warrant investment. Some organisations prefer dependency. That's fine. But we won't pretend that's self-sufficiency.
Why now.
The tools are good enough. They were never good enough before. You could build a proof of concept. You couldn't build something a non-specialist could operate.
Now you can. A domain expert with no machine learning background can learn to specify, evaluate, and iterate on an AI-powered workflow. The tools have caught up to the requirement.
The only missing piece is the thinking. How to make the tradeoffs. How to tell local optimisation from genuine system-level change. How to build something that lasts.
That's where we come in.
This is what we work on.
Put AI to Work — Workshop
A focused session on your team's real work. Build agents, workflows, and tools that handle it, and walk out with the skills to keep building.
See the workshop →
Not sure if a workshop is the right starting point?
We can help you figure it out.
Guillaume Picard
Founder of SmartAI Labs. Previously built AI systems at Apple, Electricity Maps, and Force Technology.
About →