A couple of years into the AI shift, the gap between engineers isn't expertise. It's coordination: shared norms and a shared language for how AI fits into everyday engineering work. Some teams are already getting real value. They've moved past one-off experiments and begun building repeatable ways of working with AI. Others haven't, even when the motivation is there. The reason is often simple: the cost of orientation has exploded. The landscape is saturated with tools and advice, and it's hard to know what matters, where to start, and what "good" looks like when you care about production realities.
The missing map
What's missing is a shared reference model. Not another tool. A map. Which engineering activities can AI responsibly support? What does quality mean for those outputs? What changes when part of the workflow becomes probabilistic? And what guardrails keep integration safe, observable, and accountable? Without that map, it's easy to drown in novelty, and easy to confuse widespread experimentation with reliable integration. Teams with the least time, budget, and local support pay the highest price, and the gap compounds.
That gap is now visible at the organizational level. More organizations are trying to turn AI into business value, and the difference between hype and integration is showing up in practice. It's easy to ship impressive demos. It's much harder to make AI-assisted work reliable under real-world constraints: measurable quality, controllable failure modes, clear data boundaries, operational ownership, and predictable cost and latency. This is where engineering discipline matters most. AI doesn't remove the need for it; it amplifies the cost of missing it. The question is how we move from scattered experimentation to integrated practice without burning cycles on tool churn. To do that at scale, we need shared scaffolding: a public model and shared language for what "good" looks like in AI-native engineering.
We have seen why this kind of shared scaffolding matters before. In the early internet era, promise and noise moved faster than standards and shared practice. What made the internet durable was not a single vendor or methodology but a cultural infrastructure: open knowledge sharing, global collaboration, and a shared language that made practices comparable and teachable. AI-native engineering needs the same kind of cultural infrastructure, because integration only scales when the industry can coordinate on what "good" means. AI doesn't remove the need for careful engineering. On the contrary, it punishes the absence of it.
A public scaffold for AI-native engineering
In the second half of 2025, I began to notice growing unease among engineers I worked with and friends in IT. There was a clear sense that AI would change our work in profound ways, but far less clarity on what that actually meant for a person's role, skills, and daily practice. There was no shortage of trainings, guides, blogs, or tools, but the more resources appeared, the harder it became to judge what was relevant, what was useful, and where to begin. It felt overwhelming. How do you know which topics actually matter to you when suddenly everything is labeled AI? How do you move from hype to useful integration?
I was feeling much of that same uncertainty myself. I was trying to make sense of the shift too, and for a while I think I was waiting for a clearer structure to emerge from elsewhere. It was only when friends started reaching out to me for help and guidance that I realized I might have something meaningful to contribute. I don't consider myself an AI expert. I'm finding my way through these changes just like many other engineers. But over time, I had become known for my work in IT workforce development, skill and capability frameworks, and engineering excellence and enablement. I know how to help people navigate complexity in a practical and sustainable way, and I enjoy bringing clarity to chaos.
That's what led me to start working on the AI Flower as a hobby project in early October 2025, building on frameworks and methods I already had experience with.
Once I started sharing it with associates in IT to collect suggestions, I noticed how a lot it resonated. It helped them make sense of the complexity round AI, suppose extra clearly about their very own upskilling, and start shaping AI adoption methods of their very own. That’s once I realized this informal experiment held actual worth, and determined I needed to publish it so it might assist empower different engineers and IT organizations in the identical manner it had helped my associates.
With the AI Flower, I'm offering a public scaffold for AI-native engineering work: a shared reference model that helps engineers, teams, and organizations adopt and integrate AI sustainably and reliably. It's meant to steer and organize the conversation around AI-assisted engineering, and to invite targeted feedback on what breaks, what's missing, and what "good" should mean in real production contexts. It's not meant to be perfect. It's meant to be useful, freely available, open to contribution, and shaped by the strongest resource our industry has: collective intelligence.
Open knowledge sharing and collaboration can't be optional. If AI is becoming part of how we design, build, operate, secure, and govern systems, we need more than tools and enthusiasm. Many of us work on systems people rely on every day. When those systems fail, the impact is real. That's why we owe it to the people who depend on these systems to do this with care, and why we won't get there in isolation. We need the industry, globally, to converge on shared standards for trustworthy practice.

About the AI Flower
The AI Flower maps the core activities that make up engineering work across the main engineering disciplines. For each activity, it defines what good looks like, based on practices that should already feel familiar to engineers. It then helps people explore how AI can support those activities in practice, providing guidance on how to begin using AI in that work, sharing links to useful learning resources, and outlining the main risks, trade-offs, and mitigations.
But the AI landscape is changing quickly. This activity-based approach helps engineers understand how AI can support core engineering tasks, where risks may arise, and how to start building practical skills. On its own, though, it isn't enough as a long-term model for AI adoption.
As AI capabilities evolve, many engineering activities will become more abstracted, more automated, or absorbed into the infrastructure layer. That means engineers will need to do more than learn how to use AI within today's activities. They will also need to work with emerging approaches such as context engineering and agentic workflows, which are already reshaping what we consider core engineering work. A concept I call the Skill Fossilization Model captures that progression. It shows how both engineering skills and AI-related skills evolve over time, and how some of them become less visible as work moves to a higher level of abstraction. Together, the AI Flower and the Skill Fossilization Model are meant to help engineers stay adaptable as the field continues to shift.
The primary goal of the AI Flower is to help engineers find their way through these rapid changes and grow with them. While I provide content for each section and activity, the real value lies in the framework and structure itself. To become truly valuable, it will need the insight, care, and contribution of engineers across disciplines, perspectives, and regions.
I genuinely believe the AI Flower, as an open and freely available framework, can serve as a scaffold for that work. This is my contribution to a changing industry. But it will only be useful, it will only "bloom", if the community tests it, challenges it, and improves it over time.
And if any industry can turn open critique and contribution into shared standards at a global scale, it's ours, isn't it?
Join me at AI Codecon to learn more
If the AI Flower resonates and you want the full walkthrough, I'll be presenting it at O'Reilly's upcoming AI Codecon. (Registration is free and open to all.)
If you're concerned about how quickly AI engineering patterns are evolving, that concern is valid. We've already seen the center of gravity shift from ad hoc prompt work, to context engineering, to increasingly agentic workflows, and there's more coming. A core design goal of the AI Flower is to stay stable across those shifts by focusing on underlying capabilities rather than specific techniques. I'll go deeper on that stability principle, along with the Skill Fossilization Model, at AI Codecon as well.
