Developers are doing incredible things with AI. Tools like Copilot, ChatGPT, and Claude have quickly become indispensable for developers, offering unprecedented speed and efficiency in tasks like writing code, debugging tricky behavior, generating tests, and exploring unfamiliar libraries and frameworks. When it works, it's effective, and it feels incredibly satisfying.
But if you've spent any real time coding with AI, you've probably hit a point where things stall. You keep refining your prompt and adjusting your approach, but the model keeps producing the same kind of answer, just phrased a little differently each time, and returning slight variations on the same incomplete solution. It feels close, but it's not getting there. And worse, it's not clear how to get back on track.
That moment is familiar to a lot of people trying to apply AI in real work. It's what my recent talk at O'Reilly's AI Codecon event was all about.
Over the last two years, while working on the latest edition of Head First C#, I've been developing a new kind of learning path, one that helps developers get better at both coding and using AI. I call it Sens-AI, and it came out of something I kept seeing:
There's a learning gap with AI that's creating real challenges for people who are still building their development skills.
My recent O'Reilly Radar article "Bridging the AI Learning Gap" looked at what happens when developers try to learn AI and coding at the same time. It's not just a tooling problem; it's a thinking problem. A lot of developers are figuring things out by trial and error, and it became clear to me that they needed a better way to move from improvising to actually solving problems.
From Vibe Coding to Problem Solving
Ask developers how they use AI, and many will describe a kind of improvisational prompting technique: Give the model a task, see what it returns, and nudge it toward something better. It can be an effective approach because it's fast, fluid, and almost effortless when it works.
That pattern is common enough to have a name: vibe coding. It's a great starting point, and it works because it draws on real prompt engineering fundamentals: iterating, reacting to output, and refining based on feedback. But when something breaks, the code doesn't behave as expected, or the AI keeps rehashing the same unhelpful answers, it's not always clear what to try next. That's when vibe coding starts to fall apart.
Senior developers tend to pick up AI more quickly than junior ones, but that's not a hard-and-fast rule. I've seen brand-new developers pick it up quickly, and I've seen experienced ones get stuck. The difference is in what they do next. The people who succeed with AI tend to stop and rethink: They figure out what's going wrong, step back to look at the problem, and reframe their prompt to give the model something better to work with.

The Sens-AI Framework
As I started working more closely with developers who were using AI tools, trying to find ways to help them ramp up more easily, I paid attention to where they were getting stuck, and I started noticing that the pattern of an AI rehashing the same "almost there" suggestions kept coming up in training sessions and real projects. I saw it happen in my own work too. At first it felt like a weird quirk in the model's behavior, but over time I realized it was a signal: The AI had used up the context I'd given it. That signal tells us we need a better understanding of the problem so we can give the model the information it's missing. That realization was a turning point. Once I started paying attention to those breakdown moments, I began to see the same root cause across many developers' experiences: not a flaw in the tools but a lack of framing, context, or understanding that the AI couldn't supply on its own.

Over time, and after a lot of testing, iteration, and feedback from developers, I distilled the core of the Sens-AI learning path into five specific habits. They came directly from watching where learners got stuck, what kinds of questions they asked, and what helped them move forward. These habits form a framework that's the intellectual foundation behind how Head First C# teaches developers to work with AI:
- Context: Paying attention to what information you supply to the model, trying to figure out what else it needs to know, and supplying it clearly. This includes code, comments, structure, intent, and anything else that helps the model understand what you're trying to do.
- Research: Actively using AI and external sources to deepen your own understanding of the problem. This means running examples, consulting documentation, and checking references to verify what's really going on.
- Problem framing: Using the information you've gathered to define the problem more clearly so the model can respond more usefully. This involves digging deeper into the problem you're trying to solve, recognizing what the AI still needs to know about it, and shaping your prompt to steer it in a more productive direction, then going back to do more research when you realize it needs more context.
- Refining: Iterating your prompts deliberately. This isn't about random tweaks; it's about making targeted changes based on what the model got right and what it missed, and using those results to guide the next step.
- Critical thinking: Judging the quality of AI output rather than simply accepting it. Does the suggestion make sense? Is it correct, relevant, plausible? This habit is especially important because it helps developers avoid the trap of trusting confident-sounding answers that don't actually work.
These habits let developers get more out of AI while keeping control over the direction of their work.
From Stuck to Solved: Getting Better Results from AI
I've watched a lot of developers use tools like Copilot and ChatGPT: during training sessions, in hands-on exercises, and when they've asked me directly for help. What stood out to me was how often they assumed the AI had done a bad job. In reality, the prompt just didn't include the information the model needed to solve the problem. No one had shown them how to supply the right context. That's what the five Sens-AI habits are designed to address: not by handing developers a checklist but by helping them build a mental model for how to work with AI more effectively.
In my AI Codecon talk, I shared a story about my colleague Luis, a very experienced developer with over three decades of coding experience. He's a seasoned engineer and an advanced AI user who builds content for training other developers, works with large language models directly, uses sophisticated prompting techniques, and has built AI-based assessment tools.
Luis was building a desktop wrapper for a React app using Tauri, a Rust-based toolkit. He pulled in both Copilot and ChatGPT, cross-checking output, exploring alternatives, and trying different approaches. But the code still wasn't working.
Each AI suggestion seemed to fix part of the problem but break another part. The model kept offering slightly different versions of the same incomplete solution, never quite resolving the issue. For a while, he vibe-coded through it, adjusting the prompt and trying again to see if a small nudge would help, but the answers kept circling the same spot. Eventually, he realized the AI had run out of context and changed his approach. He stepped back, did some focused research to better understand what the AI was trying (and failing) to do, and applied the same habits I emphasize in the Sens-AI framework.
That shift changed the outcome. Once he understood the pattern the AI was trying to use, he could guide it. He reframed his prompt, added more context, and finally started getting suggestions that worked. The suggestions only started working once Luis gave the model the missing pieces it needed to make sense of the problem.
Applying the Sens-AI Framework: A Real-World Example
Before I developed the Sens-AI framework, I ran into a problem that later became a textbook case for it. I was curious whether COBOL, a decades-old language developed for mainframes that I had never used before but wanted to learn more about, could handle the basic mechanics of an interactive game. So I did some experimental vibe coding to build a simple terminal app that would let the user move an asterisk around the screen using the W/A/S/D keys. It was a weird little side project; I just wanted to see if I could make COBOL do something it was never really meant for, and learn something about it along the way.
The initial AI-generated code compiled and ran just fine, and at first I made some progress. I was able to get it to clear the screen, draw the asterisk in the right place, handle raw keyboard input that didn't require the user to press Enter, and get past some initial bugs that caused a lot of flickering.
But once I hit a subtler bug, where ANSI escape codes like ";10H" were printing literally instead of controlling the cursor, ChatGPT got stuck. I'd describe the problem, and it would generate a slightly different version of the same answer each time. One suggestion used different variable names. Another changed the order of operations. A few tried to reformat the STRING statement. But none of them addressed the root cause.
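If you haven't worked with ANSI escape codes before, the sequence the app relied on is the standard cursor-positioning command: the ESC character, then "[", then row;column, then "H". Here's a minimal sketch of what the program was supposed to do, assuming GnuCOBOL and a terminal that honors ANSI sequences; the program and field names are illustrative, not taken from the generated code:
```cobol
       IDENTIFICATION DIVISION.
       PROGRAM-ID. CURSOR-DEMO.
       DATA DIVISION.
       WORKING-STORAGE SECTION.
      *    X"1B" is the ESC character that starts an ANSI sequence.
       01  WS-ESC  PIC X VALUE X"1B".
       PROCEDURE DIVISION.
      *    ESC[5;10H moves the cursor to row 5, column 10, and the
      *    asterisk is drawn at that position. In the buggy version,
      *    fragments like ";10H" showed up on screen as text instead
      *    of moving the cursor.
           DISPLAY WS-ESC "[5;10H" "*" WITH NO ADVANCING
           STOP RUN.
```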

The pattern was always the same: slight code rewrites that looked plausible but didn't actually change the behavior. That's what a rehash loop looks like. The AI wasn't giving me worse answers; it was just circling, stuck on the same conceptual idea. So I did what many developers do: I assumed the AI just couldn't answer my question and moved on to another problem.
At the time, I didn't recognize the rehash loop for what it was. I assumed ChatGPT just didn't know the answer and gave up. But revisiting the project after developing the Sens-AI framework, I saw the whole exchange in a new light. The rehash loop was a signal that the AI needed more context. It got stuck because I hadn't told it what it needed to know.
When I started working on the framework, I remembered this old failure and thought it would make a perfect test case. Now I had a set of steps that I could follow:
- First, I recognized that the AI had run out of context. The model wasn't failing randomly; it was repeating itself because it didn't understand what I was asking it to do.
- Next, I did some targeted research. I brushed up on ANSI escape codes and started reading the AI's earlier explanations more carefully. That's when I noticed a detail I'd skimmed past the first time while vibe coding: When I went back through the AI's explanation of the code it had generated, I saw that the PIC ZZ COBOL syntax defines a numeric-edited field. I suspected that could introduce leading spaces into strings and wondered if that could break an escape sequence (there's a short sketch of this after the list).
- Then I reframed the problem. I opened a new chat and explained what I was trying to build, what I was seeing, and what I suspected. I told the AI I'd noticed it was circling the same solution and treated that as a signal that we were missing something fundamental. I also told it that I'd done some research and had three leads I suspected were related: how COBOL displays multiple items in sequence, how terminal escape codes need to be formatted, and how spacing in numeric fields might be corrupting the output. The prompt didn't provide answers; it just gave the AI some promising areas to investigate. That gave it what it needed to find the missing context and break out of the rehash loop.
- Once the model was unstuck, I refined my prompt. I asked follow-up questions to clarify exactly what the output should look like and how to construct the strings more reliably. I wasn't just looking for a fix; I was guiding the model toward a better approach.
- And most of all, I used critical thinking. I read the answers closely, compared them to what I already knew, and decided what to try based on what actually made sense. The explanation checked out. I implemented the fix, and the program worked.
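To make that diagnosis concrete, here's a minimal sketch of the leading-space problem and one way to correct it, assuming GnuCOBOL and its FUNCTION TRIM intrinsic; the field names are illustrative rather than taken from the app in the Gist:
```cobol
       IDENTIFICATION DIVISION.
       PROGRAM-ID. ESCAPE-FIELDS.
       DATA DIVISION.
       WORKING-STORAGE SECTION.
       01  WS-ESC        PIC X VALUE X"1B".
       01  WS-ROW        PIC 99 VALUE 5.
       01  WS-ROW-EDITED PIC ZZ.
       01  WS-SEQUENCE   PIC X(20).
       PROCEDURE DIVISION.
      *    PIC ZZ zero-suppresses with a space, so 5 edits to " 5"
      *    and the built sequence is ESC[ 5;10H, which the terminal
      *    no longer reads as a cursor-positioning command.
           MOVE WS-ROW TO WS-ROW-EDITED
           MOVE SPACES TO WS-SEQUENCE
           STRING WS-ESC "[" WS-ROW-EDITED ";10H"
               DELIMITED BY SIZE INTO WS-SEQUENCE
           END-STRING
      *    Trimming the edited field (or using an unedited PIC 99
      *    field) keeps the sequence well formed: ESC[5;10H.
           MOVE SPACES TO WS-SEQUENCE
           STRING WS-ESC "[" FUNCTION TRIM(WS-ROW-EDITED) ";10H"
               DELIMITED BY SIZE INTO WS-SEQUENCE
           END-STRING
      *    Trailing filler in the PIC X(20) field is trimmed so only
      *    the escape sequence reaches the terminal; the cursor jumps
      *    to row 5, column 10 and the asterisk is drawn there.
           DISPLAY FUNCTION TRIM(WS-SEQUENCE) "*" WITH NO ADVANCING
           STOP RUN.
```
The specific fix matters less than the principle: keep the escape sequence free of the spaces that numeric editing introduces, whether by trimming the field or by formatting the row number without editing.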

Once I took the time to understand the problem, and did just enough research to give the AI a few hints about what context it was missing, I was able to write a prompt that broke ChatGPT out of the rehash loop, and it generated code that did exactly what I needed. The generated code for the working COBOL app is available in this GitHub Gist.

Why These Habits Matter for New Developers
I built the Sens-AI learning path in Head First C# around the five habits in the framework. These habits aren't checklists, scripts, or hard-and-fast rules. They're ways of thinking that help people use AI more productively, and they don't require years of experience. I've seen new developers pick them up quickly, often faster than seasoned developers who didn't realize they were stuck in shallow prompting loops.
The key insight into these habits came to me when I was updating the coding exercises in the most recent edition of Head First C#. I test the exercises using AI by pasting the instructions and starter code into tools like ChatGPT and Copilot. If they produce the correct solution, that means I've given the model enough information to solve it, which means I've given readers enough information too. But if it fails to solve the problem, something's missing from the exercise instructions.
The process of using AI to test the exercises in the book reminded me of a problem I ran into in the first edition, back in 2007. One exercise kept tripping people up, and after reading a lot of feedback, I realized the problem: I hadn't given readers all the information they needed to solve it. That helped connect the dots for me. The AI struggles with some coding problems for the same reason the learners were struggling with that exercise: the context wasn't there. Writing a good coding exercise and writing a good prompt both depend on understanding what the other side needs to make sense of the problem.
That experience helped me realize that making developers successful with AI takes more than just teaching the basics of prompt engineering. We need to explicitly instill these thinking habits and give developers a way to build them alongside their core coding skills. If we want developers to succeed, we can't just tell them to "prompt better." We need to show them how to think with AI.
Where We Go from Here
If AI really is changing how we write software, and I believe it is, then we need to change how we teach it. We've made it easy to give people access to the tools. The harder part is helping them develop the habits and judgment to use those tools well, especially when things go wrong. That's not just an education problem; it's also a design problem, a documentation problem, and a tooling problem. Sens-AI is one answer, but it's just the beginning. We still need clearer examples and better ways to guide, debug, and refine the model's output. If we teach developers how to think with AI, we can help them become not just code generators but thoughtful engineers who understand what their code is doing and why it matters.