AI-Assisted Software Development

AI-assisted software development is a human-in-the-loop practice where LLMs and coding agents help research codebases, draft plans, write tests, implement changes, debug failures, and generate support tooling, while the human supplies intent, taste, constraints, review, and final responsibility. source: mark-erikson-ai-thoughts-part-1-2026.md

Mark Erikson's "My Thoughts on AI" describes a shift from fear and refusal to pragmatic use: first asking AI to explain unfamiliar architecture, then using it to write targeted tests, attempt library features, build lint rules, optimize Immer, and expand Replay/React instrumentation. The common pattern is not "turn the brain off forever," but using agents to accelerate work while repeatedly rebuilding a mental model and correcting the output. source: mark-erikson-ai-thoughts-part-1-2026.md

The article's most useful technical stance is that non-deterministic AI output can be made sufficiently bounded by deterministic scaffolding: tests, typechecking, linting, CI, code formatters, static analysis, prompt/context files, explicit plans, and human review. Erikson still wants deterministic code and predictable systems; he argues the trick is to minimize what the LLM must improvise, encode repeatable knowledge into scripts and tools, and use the LLM around that automation. source: mark-erikson-ai-thoughts-part-1-2026.md
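The scaffolding idea can be sketched as a small deterministic gate: the agent's change is accepted only if every check passes. This is an illustrative sketch, not code from the article; the command names (`pytest`, `mypy`, `ruff`) are assumptions to be swapped for whatever a given project's harness actually runs.

```python
import subprocess
import sys

# Deterministic checks that bound the agent's non-deterministic output.
# These command names are placeholders; substitute your project's real
# test, typecheck, and lint invocations.
CHECKS = [
    ["pytest", "-q"],
    ["mypy", "src/"],
    ["ruff", "check", "src/"],
]

def run_checks(checks, runner=subprocess.call):
    """Run each check command; return the list of commands that failed.

    `runner` is injectable so the gate logic stays testable without
    invoking the real tools.
    """
    return [cmd for cmd in checks if runner(cmd) != 0]

if __name__ == "__main__":
    failed = run_checks(CHECKS)
    for cmd in failed:
        print("REJECT:", " ".join(cmd), file=sys.stderr)
    sys.exit(1 if failed else 0)
```

The point of the design is that the verdict is repeatable: the same diff always passes or fails the same way, so human review can focus on intent rather than mechanics.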

This connects directly to harness-engineering: agent productivity depends on the surrounding harness, not just model capability. It also complicates personal-agents: delegated work may feel effortless, but it still depends on context, permissions, diagnostics, and verification loops.

Addy Osmani's cognitive-surrender framing adds a sharper failure mode: AI help becomes dangerous when generated output replaces the engineer's independent view instead of extending it. In this view, the same tool can either build skill through conceptual inquiry and review, or create comprehension debt when code ships faster than human understanding grows. source: addy-osmani-cognitive-surrender-2026.md

Thariq's html-artifacts proposal adds an interface layer to this practice: if agents are producing large specs, reviews, reports, and plans, the output format should help humans understand and steer the work. HTML artifacts can render diagrams, annotated diffs, prototypes, and custom editors, making review and iteration easier than reading a long markdown file. source: thariq-unreasonable-effectiveness-html-2026.md
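A minimal version of the idea is rendering an agent's plan as collapsible HTML rather than a flat markdown dump. The structure below (one `<details>` block per step) is a hypothetical format for illustration, not one the proposal specifies.

```python
import html

def plan_to_html(title, steps):
    """Render an agent plan as a skimmable HTML artifact.

    `steps` is a list of (heading, body) pairs. Each step becomes a
    <details> block so a reviewer can scan headings and expand only the
    parts they want to inspect; all text is escaped to keep agent output
    from injecting markup.
    """
    parts = [f"<h1>{html.escape(title)}</h1>"]
    for heading, body in steps:
        parts.append(
            f"<details><summary>{html.escape(heading)}</summary>"
            f"<p>{html.escape(body)}</p></details>"
        )
    return "<!doctype html><html><body>" + "".join(parts) + "</body></html>"
```

Even this tiny step changes the review experience: the reviewer steers by expanding sections, instead of scrolling a wall of prose.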

At Shopify scale, shopify-river shows AI-assisted development as a social system rather than a private IDE loop: agent-created PRs are authored by River but reviewed by humans, and the surrounding Slack conversations let employees observe how others scope requests, debug, query logs, and improve shared instructions. source: tobi-lutke-learning-shop-floor-river-2026.md

Related pages: mark-erikson, addy-osmani, harness-engineering, personal-agents, self-improving-knowledge-base, cognitive-surrender, html-artifacts, shopify-river, public-agent-collaboration.

Resources