When AI Copilots Crash (And How to Save Your Users) [for AI builders]

Badr Eddial
August 12, 2025
8 min read

Welcome to Pulling Levers - behind-the-scenes lessons, hard-won insights, and unexpected learnings from building Lleverage.


Generative AI is changing who gets to build software. Tools like Lovable let anyone create websites through conversation. At Lleverage, we're solving how process experts can automate complex workflows without having to think like engineers. The shift is significant: we're moving from technical implementation to intent-based creation, from syntax to natural language.

The typical AI-powered development experience follows a familiar pattern: conversational interface on the left, your evolving output on the right, and natural language as the bridge between idea and implementation. It's magical - until it isn't.

Here's the uncomfortable truth: as AI systems grow more capable, they enable increasingly complex solutions. And complexity breeds fragility. When your AI-generated system hits a bug it can't fix, or when you need a feature it can't comprehend, you tumble into what I call the valley of abandonment.

Recent studies on vibe coding highlight a real risk: as we rely more on AI-generated code, we stop engaging deeply with the code itself. Instead of building real understanding, we just review or approve what the AI spits out. AI-empowered users cruise along the peak of the UX curve, building sophisticated systems they don't fully understand - until suddenly the AI hits a wall. When that happens, users find themselves stranded in the valley: they neither understand what was created nor have the skills to solve the problem themselves.

The typical fallback is to dump users into a code editor. It's like teaching someone to fly a plane on autopilot, then handing them the controls mid-flight when the computer fails.

Limitations are inherent to any system. What matters is how gracefully we handle them.

1. The missing middle layer

Many AI products bank on the magical experience of going from intention to results. But when building complex systems, you need an intermediate representation - a conceptual expression that keeps users connected to what's actually being built.

At Lleverage, we think about AI-powered creation through three layers:

  • Intention Layer: Natural language copilot on the left, where users describe desired outcomes
  • Implementation Layer: Visual canvas in the center, breaking down intentions into understandable components
  • Results Layer: The “Run Workflow” panel on the right, where users interact with their created app, chatbot or automation

The implementation layer provides a conceptual anchor. When users say "extract purchase orders from incoming emails and create sales orders," they see their intent decomposed into logical steps: email parsing rules, product matching logic, validation checks, order creation fields. Each component is more specific than natural language but more abstract than code.
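To make that concrete, here is a minimal sketch of what such an intermediate representation could look like as data. The node types, field names, and thresholds are hypothetical, not Lleverage's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class WorkflowStep:
    """One visual node: more specific than prose, more abstract than code."""
    name: str    # what the user sees on the canvas
    kind: str    # e.g. "email_parser", "product_matcher", "validator"
    config: dict = field(default_factory=dict)  # the knobs a user can inspect and tweak

# "Extract purchase orders from incoming emails and create sales orders",
# decomposed into inspectable components:
purchase_order_flow = [
    WorkflowStep("Parse incoming email", "email_parser",
                 {"fields": ["customer", "sku", "quantity", "delivery_date"]}),
    WorkflowStep("Match products to catalog", "product_matcher",
                 {"strategy": "fuzzy", "min_confidence": 0.85}),
    WorkflowStep("Validate order data", "validator",
                 {"required": ["customer", "sku", "quantity"]}),
    WorkflowStep("Create sales order", "order_creator",
                 {"target": "ERP", "status": "draft"}),
]
```

Because every step exposes its configuration, a user who never writes code can still see that product matching demands 85% confidence and loosen it when legitimate orders start getting rejected.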

This layer must speak the user's language. For developers, this could be code. For Lovable users, a WYSIWYG editor. For process experts, visual nodes. The modality adapts to existing mental models, bridging expertise with system capabilities.

When AI fails, users land here instead of crashing into code. They might not write the implementation, but they can see where the product matching is too strict or the PDF extraction missed a field. The cliff becomes a slope.

2. The holy trinity of trustworthy AI products

Trust is the foundation of effective human-AI collaboration. Building this trust happens naturally through three key elements: latency, quality, and correctability.

Latency keeps users engaged. Quick responses under 5 seconds maintain conversation momentum. For longer operations, progress updates like "Analyzing workflow... identifying bottlenecks..." keep users connected. Once users mentally check out, they lose track of what's being built.
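As a trivial illustration of that pattern (the messages and timing below are invented, not taken from any specific product), a long-running operation can stream status lines instead of going silent:

```python
import time

def analyze_workflow(steps, report=print):
    """Stream status updates so the user stays oriented during a slow operation."""
    report("Analyzing workflow...")
    time.sleep(1)   # stand-in for real work
    report(f"Checked {len(steps)} steps, identifying bottlenecks...")
    time.sleep(1)
    report("Analysis complete.")
```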

Quality determines how often users hit the cliff edge. Better quality means fewer abandonment moments. But every AI has limits.

Correctability turns black-box changes into clear collaboration. Users need three things, sketched in code after this list:

  • Visibility into changes (like Cursor's color-coded diffs)
  • Understanding of rationale ("Making product matching more flexible because...")
  • Easy undo/rollback options
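One way to tie those three needs together - a hypothetical shape, not any particular product's API - is to treat every copilot edit as a change record that carries its own diff, its rationale, and enough state to undo it:

```python
from dataclasses import dataclass

@dataclass
class ChangeRecord:
    """A single copilot edit the user can inspect, accept, or roll back."""
    target: str     # which node or setting was touched
    before: dict    # state prior to the change (enables undo)
    after: dict     # proposed new state (enables a visual diff)
    rationale: str  # why the copilot made the change, in plain language

    def undo(self) -> dict:
        """Restore the previous state."""
        return dict(self.before)

change = ChangeRecord(
    target="Match products to catalog",
    before={"strategy": "exact"},
    after={"strategy": "fuzzy", "min_confidence": 0.85},
    rationale="Making product matching more flexible because several incoming "
              "SKUs differ from the catalog entries by formatting only.",
)
```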

These elements create an interconnected safety net. High quality reduces the need for correctability. Fast iteration through low latency can offset quality issues - users quickly course-correct through trial and error. Strong correctability compensates for both quality and latency problems. The optimal balance depends on context: mission-critical systems need quality first, exploratory tools can trade quality for speed, and complex domains should prioritize correctability.

3. Design for human empowerment, not replacement

Effective human-AI collaboration goes beyond simple override controls. As Andrej Karpathy recently pointed out, autonomy works best as a slider, not a switch. This key insight separates truly empowering tools from black box solutions.

At Lleverage, users can direct the copilot to focus on a single node for precision or let it handle the entire workflow for speed. This granular control keeps users in charge at every level.
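In code terms, that slider can be as simple as a scope parameter on every copilot request. The names below are illustrative, not Lleverage's actual API:

```python
from typing import Literal, Optional

Scope = Literal["node", "branch", "workflow"]

def copilot_request(instruction: str, scope: Scope = "node",
                    node_id: Optional[str] = None) -> dict:
    """Constrain how much of the workflow the copilot is allowed to touch."""
    if scope == "node" and node_id is None:
        raise ValueError("node-level requests must name the node to edit")
    return {"instruction": instruction, "scope": scope, "node_id": node_id}

# Precision: only touch the product-matching node.
copilot_request("Loosen the SKU matching threshold", scope="node",
                node_id="product_matcher")

# Speed: let the copilot restructure the whole workflow.
copilot_request("Add an approval step for large orders", scope="workflow")
```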

The copilot should also teach, not just build. Users can ask questions like "What does this do?" or "Why structure it this way?" They can explore what-if scenarios and learn about the platform itself. Through these conversations, users gradually build stronger mental models.

Progressive building reinforces this approach. Instead of generating entire systems at once, AI systems build incrementally: "I'll start with extracting order data from emails. Does this capture all your fields?" Then: "Now for product matching. Here's how I'll map customer SKUs to your catalog - does this look right?" The AI asks instead of guesses: "You mentioned 'rush orders' - what delivery timeframe defines that?" Users validate each step, preventing incorrect assumptions and keeping humans in the loop.
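A rough sketch of that loop, with the generation, confirmation, and revision calls left as placeholders you would wire up to your own model and UI:

```python
def build_incrementally(plan, generate_step, confirm, revise_step):
    """Generate one step at a time and pause for user validation before moving on."""
    workflow = []
    for intent in plan:   # e.g. ["extract order data", "match products", "create order"]
        step = generate_step(intent, context=workflow)
        # Ask instead of guessing: the user validates each step before the next is built.
        while not confirm(f"I'll add '{step['name']}'. Does this capture what you need?"):
            step = revise_step(step, feedback=input("What should change? "))
        workflow.append(step)
    return workflow
```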

When users understand what's happening, learn as they build, and choose their level of assistance, trust follows naturally - and so does continued use.

4. Push the cliff further through corrective behavior

The distance to the valley isn't determined by model intelligence but by thoughtful systems engineering. Smart AI products implement multi-layered validation that catches problems before users encounter them.

When AI creates a workflow, it validates the logic flow: Are all paths reachable? Are all variable references valid? Are all required accounts connected? These aren't model limitations but engineering choices about what to check and when.
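A minimal version of two of those checks, written over a plain list of step dictionaries (the field names are assumptions, not a real schema):

```python
def validate_workflow(steps, connected_accounts):
    """Catch structural problems before the user ever runs the workflow."""
    issues = []
    available_outputs = set()
    for step in steps:
        # Are all variable references valid? A step may only read outputs of earlier steps.
        for ref in step.get("inputs", []):
            if ref not in available_outputs:
                issues.append(f"'{step['name']}' references '{ref}', "
                              "which no earlier step produces.")
        available_outputs.update(step.get("outputs", []))

        # Are all required accounts connected?
        account = step.get("requires_account")
        if account and account not in connected_accounts:
            issues.append(f"'{step['name']}' needs a connected {account} account.")
    return issues   # each issue states the why, not just the what
```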

Validation becomes teaching when the AI explains its corrections: "The approval workflow has a loop. I'm adding an escalation path to prevent infinite cycles." Runtime feedback works the same way: "The data import failed because the format changed. I'll add a transformation step for both formats." Each fix includes the why, not just the what.

Good engineering builds these checks into the architecture rather than hoping models catch everything. Each corrective cycle helps users understand their systems while making the AI more contextually aware. Users learn to prevent issues, gradually building intuition. This pushes the valley further away through systematic engineering that turns corrections into bridges.

5. Measure your way out of the valley

Most teams test success cases. Smart teams systematically explore failure modes. Your diff mechanism reveals where comprehension breaks down: track acceptance rates, hesitation patterns, and reverts.

Measure what matters: What is the change acceptance rate on each interaction? How many messages does it take to get a single request done?
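Those two questions translate directly into numbers you can compute from interaction logs. The event shape below is an assumption about how such a log might be structured:

```python
def copilot_health(events):
    """Compute change-acceptance rate and messages-per-request from an interaction log.

    Assumes events shaped like {"type": "proposal", "accepted": True} and
    {"type": "message", "request_id": "r1"} - a hypothetical log format.
    """
    proposals = [e for e in events if e["type"] == "proposal"]
    accepted = sum(1 for e in proposals if e["accepted"])
    acceptance_rate = accepted / len(proposals) if proposals else None

    messages = [e for e in events if e["type"] == "message"]
    requests = {e["request_id"] for e in messages}
    messages_per_request = len(messages) / len(requests) if requests else None

    return {"acceptance_rate": acceptance_rate,
            "messages_per_request": messages_per_request}
```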

Every interaction provides signal. Users asking for repeated explanations indicate clarity issues. Manual adjustments reveal patterns the AI should learn. Create a virtuous cycle: detect struggles through behavioural data, reproduce them in benchmarks, validate improvements through the same metrics.

This data-driven approach transforms the valley from a fixed chasm into a narrowing gap. When rejection rates climb or recovery times extend, you'll know users are approaching the edge before they fall.

6. When all else fails: Premium human expertise

Acknowledging limits isn't admitting defeat - it's architecting resilience. Premium support becomes your final bridge across the valley of abandonment and a great way to surface blind spots in your data-driven approach.

Effective human support in AI products requires context preservation. Support engineers must see not just the problem but the full journey - what the user tried, what the AI attempted, where the breakdown occurred. This context transforms support from troubleshooting to bridge-building.
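One way to preserve that journey - a hypothetical payload, not a prescribed format - is to hand the support engineer a structured snapshot of the session rather than just the error:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class HandoffContext:
    """Everything a support engineer needs to pick up where the copilot left off."""
    user_goal: str                # the original intent, in the user's own words
    user_attempts: List[str]      # what the user tried manually
    copilot_attempts: List[str]   # what the AI tried, and why
    failing_step: str             # where the breakdown occurred
    workflow_snapshot: dict = field(default_factory=dict)  # current state of the build

escalation = HandoffContext(
    user_goal="Create sales orders from purchase-order emails",
    user_attempts=["Loosened product matching", "Re-ran the failing email"],
    copilot_attempts=["Added a transformation step after detecting a format change"],
    failing_step="Match products to catalog",
)
```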

The handoff itself becomes a product feature. Users should understand they're transitioning from AI to human help, with clear expectations about response times and resolution approaches. The support interaction should feed back into the product, turning today's manual intervention into tomorrow's automated capability.

Bridging the Valley

The valley of abandonment in AI development tools isn't a problem to solve - it's a reality to design around. As AI capability increases, so does the experience for AI-empowered users - right up until the AI hits its wall and they need skills to continue that they may not possess. The question is whether that moment becomes a trap or a traversable challenge.

Success requires embracing the valley's existence while building infrastructure to traverse it:

  • Develop an implementation layer that provides conceptual anchors and intermediate representation before users reach the valley
  • Design a building experience that keeps users engaged and in the loop
  • Ensure users maintain control and develop trust with the AI
  • Implement a data-driven approach to system improvement
  • Offer human expertise as a bridge when users' capabilities fall short

The future belongs to AI products that amplify human capability while preserving understanding. By acknowledging the valley upfront and architecting systems to navigate it, we can build tools that genuinely democratize creation - where process experts become automation architects, where understanding scales with capability, and where the valley of abandonment has bridges across, not drops below.