AI for Project Scaffolding: Generating Boilerplate Without Losing Your Mind

Boilerplate code is a necessary evil. AI tools can now scaffold entire project structures. But how usable is the output in real-world projects?

Jean-Pierre Broeders

Freelance DevOps Engineer

March 30, 2026 · 5 min. read

Every new project starts with the same ritual. Create folder structures, copy config files from a previous project, install dependencies, set up linting, configure the CI pipeline. Half a day gone before a single line of business logic gets written.

AI tools promise to change that. Not by producing magical code, but by handling the tedious parts. The question is whether they actually deliver.

The Boilerplate Problem

A typical .NET project setup involves:

  • Solution file with multiple projects
  • Dependency injection configuration
  • Logging setup (Serilog, OpenTelemetry)
  • Dockerfile and docker-compose
  • GitHub Actions workflow
  • Health checks, middleware, error handling
  • Entity Framework setup with migrations

That's dozens of files following mostly the same patterns. Copying from a template works, but templates go stale quickly. And manually adjusting each time still eats hours.

What AI Tools Actually Do Here

Current-generation code assistants — Copilot, Cursor, Claude — can automate a decent chunk of this setup. Not through a magic "generate project" button, but via targeted prompts.

A prompt like "Generate an ASP.NET 8 Web API project with Serilog, health checks, and Entity Framework with PostgreSQL" typically produces a working base structure. Including Program.cs, appsettings, and the required NuGet packages.

// Typical AI-generated Program.cs - works but needs fine-tuning
using Microsoft.EntityFrameworkCore;
using Serilog;

var builder = WebApplication.CreateBuilder(args);

builder.Host.UseSerilog((context, config) =>
    config.ReadFrom.Configuration(context.Configuration));

// Often missing in generated output: without AddControllers(),
// MapControllers() below throws at startup.
builder.Services.AddControllers();

builder.Services.AddDbContext<AppDbContext>(options =>
    options.UseNpgsql(builder.Configuration
        .GetConnectionString("DefaultConnection")));

builder.Services.AddHealthChecks()
    .AddNpgSql(builder.Configuration
        .GetConnectionString("DefaultConnection")!);

var app = builder.Build();

app.UseSerilogRequestLogging();
app.MapHealthChecks("/health");
app.MapControllers();

app.Run();

That's usable. Not perfect — error handling is missing, there's no middleware pipeline, and the connection string handling is naive — but as a starting point it saves fifteen to twenty minutes.
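For reference, this is the kind of addition a first review pass typically makes. A hedged sketch only — the exception handler body and the fail-fast connection string check are illustrative, not part of the generated output:

```csharp
// Illustrative review-pass additions, not the generated code itself.

// Fail fast on a missing connection string instead of passing null along.
var connectionString = builder.Configuration.GetConnectionString("DefaultConnection")
    ?? throw new InvalidOperationException(
        "Connection string 'DefaultConnection' is not configured.");

// A minimal global error handler at the front of the pipeline,
// so unhandled exceptions return JSON instead of a blank 500.
app.UseExceptionHandler(errorApp =>
{
    errorApp.Run(async context =>
    {
        context.Response.StatusCode = StatusCodes.Status500InternalServerError;
        await context.Response.WriteAsJsonAsync(
            new { error = "An unexpected error occurred." });
    });
});
```

Small changes, but exactly the kind the generator won't make unprompted.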

Where It Works Well

Some types of boilerplate are generated near-perfectly:

Docker configuration. A Dockerfile for a .NET API is standardized enough that AI tools produce it flawlessly. Multi-stage builds, correct base images, properly placed COPY and RUN statements. Hard to argue with the results.
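The shape of that output looks roughly like this — a sketch assuming a .NET 8 API; the project and assembly names ("Api") are placeholders:

```dockerfile
# Multi-stage build: SDK image compiles, slim runtime image ships.
FROM mcr.microsoft.com/dotnet/sdk:8.0 AS build
WORKDIR /src
COPY *.csproj ./
RUN dotnet restore
COPY . .
RUN dotnet publish -c Release -o /app/publish

FROM mcr.microsoft.com/dotnet/aspnet:8.0 AS final
WORKDIR /app
COPY --from=build /app/publish .
EXPOSE 8080
ENTRYPOINT ["dotnet", "Api.dll"]
```

Copying the .csproj before the rest of the source keeps the restore layer cached between builds — a detail the generators consistently get right.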

CI/CD pipelines. GitHub Actions workflows for build-test-deploy are repetitive enough that the generated output is immediately usable. Especially for standard flows: build, run tests, push Docker image, deploy to staging.
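A typical generated workflow, sketched here with illustrative names and triggers — the image-push and deploy steps follow the same pattern and are omitted for brevity:

```yaml
name: ci
on:
  push:
    branches: [main]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-dotnet@v4
        with:
          dotnet-version: '8.0.x'
      - run: dotnet restore
      - run: dotnet build --no-restore
      - run: dotnet test --no-build
```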

CRUD endpoints. The classic controller-service-repository pattern for a simple entity? AI generates that faster than typing it by hand. Including DTO mapping and basic validation.
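The generated shape looks something like the sketch below. The entity ("Product"), the DTOs, and the service interface are all illustrative stand-ins, not from a real project:

```csharp
using Microsoft.AspNetCore.Mvc;

public record ProductDto(int Id, string Name);
public record CreateProductRequest(string Name);

public interface IProductService
{
    Task<ProductDto?> GetByIdAsync(int id);
    Task<ProductDto> CreateAsync(CreateProductRequest request);
}

[ApiController]
[Route("api/[controller]")]
public class ProductsController : ControllerBase
{
    private readonly IProductService _service;

    public ProductsController(IProductService service) => _service = service;

    [HttpGet("{id:int}")]
    public async Task<ActionResult<ProductDto>> Get(int id)
    {
        var product = await _service.GetByIdAsync(id);
        return product is null ? NotFound() : Ok(product);
    }

    [HttpPost]
    public async Task<ActionResult<ProductDto>> Create(CreateProductRequest request)
    {
        var created = await _service.CreateAsync(request);
        return CreatedAtAction(nameof(Get), new { id = created.Id }, created);
    }
}
```

Mechanical code, correctly wired — which is the point: it's exactly the kind of pattern that exists thousands of times in training data.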

Where It Falls Apart

Things get harder with project-specific configuration.

Environment variables and secrets. AI knows the standard patterns but not which secrets a specific project needs. Generated code uses hardcoded strings or generic placeholder names that don't match the actual infrastructure.

Authentication and authorization. This is where things regularly go sideways. The generated JWT configuration works technically but often misses critical details: token rotation, refresh token flows, proper claim mapping. Not the kind of code that should hit production without review.
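To make that concrete, here is the kind of JWT setup the tools typically emit — technically valid, with the usual gaps flagged in comments. Configuration keys ("Jwt:Issuer" etc.) are illustrative:

```csharp
using System.Text;
using Microsoft.AspNetCore.Authentication.JwtBearer;
using Microsoft.IdentityModel.Tokens;

builder.Services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
    .AddJwtBearer(options =>
    {
        options.TokenValidationParameters = new TokenValidationParameters
        {
            ValidateIssuer = true,
            ValidateAudience = true,
            ValidateLifetime = true,
            ValidIssuer = builder.Configuration["Jwt:Issuer"],
            ValidAudience = builder.Configuration["Jwt:Audience"],
            IssuerSigningKey = new SymmetricSecurityKey(
                Encoding.UTF8.GetBytes(builder.Configuration["Jwt:Key"]!))
        };
        // Typically absent from generated output: refresh token flow,
        // key/token rotation, ClockSkew tuning, and mapping custom
        // claims to application roles.
    });
```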

Project-specific conventions. Every team has its own patterns. Naming conventions, folder structure, how errors get logged. AI defaults to the most common patterns from training data, not what a specific team actually uses.

A Practical Approach

The most effective way to use AI for scaffolding:

Step  Task                                 AI Usefulness
1     Generate base structure              ⭐⭐⭐⭐⭐
2     Adjust config files to project       ⭐⭐⭐
3     Authentication/authorization setup   ⭐⭐
4     Apply team-specific conventions      ⭐

The trick is to let AI handle step 1 entirely and gradually take more manual control through steps 2-4. Avoid cramming everything into a single prompt. Generate the base, review it, then build iteratively from there.

Context Makes All the Difference

A bare prompt yields generic code. But feed the AI an existing Program.cs or docker-compose.yml as reference, and suddenly the output follows project conventions. Tools like Cursor and Continue do this automatically by indexing the codebase. The quality difference is massive.

The same applies to .editorconfig, Directory.Build.props, and other project-wide configuration. More context means better scaffolding that matches what's already in place.

Time Savings

Based on a few months of daily use: spinning up a new project or microservice now takes roughly 30-45 minutes instead of 3-4 hours. Most of that time goes to review and adjustments, not typing.

That's not a revolution. But for teams that regularly stand up new services — microservice architectures, event-driven systems — it adds up. Ten new services per quarter, each saving three hours: that's close to a full work week back.

The Bottom Line

AI scaffolding isn't a replacement for a well-maintained template repository or a Yeoman generator. It's a complement. For standard patterns, it works remarkably well. For project-specific concerns, human knowledge is still required.

The biggest gain isn't in the initial generation but in iterative refinement. Generate the base, adjust it, then use that adjusted version as context for the next component. That's how to build fast without sacrificing the quality production code demands.

Want to stay updated?

Subscribe to my newsletter or get in touch for freelance projects.