Skip to content

chore(deps)(deps): Bump actions/checkout from 4 to 6

9fe86ba
Select commit
Loading
Failed to load commit list.
Sign in for the full log view
Open

chore(deps)(deps): Bump actions/checkout from 4 to 6 #12

chore(deps)(deps): Bump actions/checkout from 4 to 6
9fe86ba
Select commit
Loading
Failed to load commit list.
GitHub Actions / Test Results succeeded Feb 2, 2026 in 1s

29 passed, 18 failed and 8 skipped

Tests failed

❌ ./test-results/e2e/artifacts/test-results/e2e-tests.trx

55 tests were completed in 1382s with 29 passed, 18 failed and 8 skipped.

Test suite Passed Failed Skipped Time
JdhPro.Tests.E2E.Features.BlogListingPageFeature 3✅ 5⚪ 76s
JdhPro.Tests.E2E.Features.BlogPostDetailPageFeature 11❌ 802s
JdhPro.Tests.E2E.Features.BlogSyndicationFeature 9✅ 2❌ 14s
JdhPro.Tests.E2E.Features.ContactPageFeature 3✅ 12s
JdhPro.Tests.E2E.Features.HomepageFeature 3❌ 88s
JdhPro.Tests.E2E.Features.NavigationFeature 2✅ 47s
JdhPro.Tests.E2E.Features.PerformanceAndLoadingFeature 11✅ 1❌ 127s
JdhPro.Tests.E2E.Features.ProjectsPageFeature 3⚪ 3ms
JdhPro.Tests.E2E.Features.ServicesPageFeature 1✅ 1❌ 7s

❌ JdhPro.Tests.E2E.Features.BlogPostDetailPageFeature

❌ Blog post detail page loads successfully
	Expected title not to be <null> because Post should have a title.
❌ Blog post displays full content
	Expected content "<html><head></head><body></body></html>" to contain "<article" because Post content should be rendered as article.
❌ Blog post displays metadata
	Expected date not to be <null> because Post should display publish date.
❌ Blog post displays reading time
	Expected readingTime not to be <null> because Post should display reading time.
❌ Blog post has proper meta tags for SEO
	Expected title not to be <null> or empty because Page should have a title, but found "".
❌ Blog post has table of contents
	Reqnroll.xUnit.ReqnrollPlugin.XUnitInconclusiveException : Test inconclusive: No matching step definition found for one or more steps.
	using System;
	using Reqnroll;
	
	namespace MyNamespace
	{
	    [Binding]
	    public class StepDefinitions
	    {
	        private readonly IReqnrollOutputHelper _outputHelper;
	
	        public StepDefinitions(IReqnrollOutputHelper outputHelper)
	        {
	            _outputHelper = outputHelper;
	        }
	        [When(@"^I navigate to the blog post detail page$")]
	        public void WhenINavigateToTheBlogPostDetailPage()
	        {
	            throw new PendingStepException();
	        }
	    }
	}
	
❌ Blog post URL uses slug
	Expected matchingPost not to be <null> because URL should contain a valid post slug.
❌ Code snippets are properly formatted
	Reqnroll.xUnit.ReqnrollPlugin.XUnitInconclusiveException : Test inconclusive: No matching step definition found for one or more steps.
	using System;
	using Reqnroll;
	
	namespace MyNamespace
	{
	    [Binding]
	    public class StepDefinitions
	    {
	        private readonly IReqnrollOutputHelper _outputHelper;
	
	        public StepDefinitions(IReqnrollOutputHelper outputHelper)
	        {
	            _outputHelper = outputHelper;
	        }
	        [When(@"^I navigate to the blog post detail page$")]
	        public void WhenINavigateToTheBlogPostDetailPage()
	        {
	            throw new PendingStepException();
	        }
	        
	        [Then(@"^code blocks should have syntax highlighting$")]
	        public void ThenCodeBlocksShouldHaveSyntaxHighlighting()
	        {
	            throw new PendingStepException();
	        }
	    }
	}
	
❌ Navigate between blog posts
	Expected link not to be <null> because Next Post link should exist.
❌ Related posts are displayed
	Expected relatedPosts not to be <null> because Post should have related posts section.
❌ Syndicated posts display canonical URL
	Expected canonicalLink not to be <null> because Syndicated post should have a canonical URL link.

❌ JdhPro.Tests.E2E.Features.BlogSyndicationFeature

❌ Filter by included categories
	Expected _posts to contain only items matching p.Categories.Contains("Technical", StringComparer.OrdinalIgnoreCase) because All posts should be in the Technical category, but JdhPro.Tests.E2E.Models.BlogPostDto
	    {
	        CanonicalUrl = "https://jerrettdavis.com/blog/jd.efcpt.build",
	        Categories = {"Programming", "Programming/Tooling", "Programming/Databases"},
	        Content = "# "Where Did Database First Go?"
	
	If you were using Entity Framework when EF Core first dropped, you probably remember the moment you went looking for database-first support and found... nothing.
	
	EF Core launched as a code-first framework. The Reverse Engineer tooling that EF6 developers relied on (the right-click, point at a database, generate your models workflow) wasn't there. Microsoft's position was essentially "migrations are the future, figure it out." And if your team had an existing database, or a DBA who actually owned the schema, or compliance requirements that meant the database was the source of truth... well, good luck with that.
	
	The community's response was immediate and loud. "Where did database first go?" became a recurring theme in GitHub issues, Stack Overflow questions, and the quiet frustration of developers who just wanted to talk to their database without hand-writing a hundred entity classes.
	
	Eventually, tooling caught up. EF Core Power Tools emerged as the community answer: a Visual Studio extension that brought back the reverse engineering workflow. You could point it at a database or a DACPAC, configure some options, and generate your models. Problem solved, mostly.
	
	But here's the thing about manual processes: they work fine right up until they don't.
	
	---
	
	## The Problem That Keeps Happening
	
	I've spent enough time in codebases with legacy data layers to recognize a pattern. It goes something like this:
	
	A project starts with good intentions. Someone sets up EF Core Power Tools, generates the initial models, commits everything, and documents the process. "When the schema changes, regenerate the models using this tool with these settings." Clear enough.
	
	Then time passes.
	
	The developer who set it up leaves. The documentation gets stale. Someone regenerates with slightly different settings and commits the result. Someone else forgets to regenerate entirely after a schema change. The models drift. The configuration drifts. Nobody's quite sure what the "correct" regeneration process is anymore, so people just... stop doing it consistently.
	
	This isn't a dramatic failure. It's a slow erosion. The kind of problem that doesn't announce itself until you're debugging a production issue and realize the entity class doesn't have a column that's been in the database for six months.
	
	If you've worked in a codebase long enough, you've probably seen some version of this. Maybe you've been the person who discovered the drift. Maybe you've been the person who caused it. (No judgment. We've all been there.)
	
	The frustrating part is that the fix is always the same: regenerate the models, commit the changes, remind everyone to regenerate after schema changes. And then six months later, you're having the same conversation again.
	
	---
	
	## Why Manual Regeneration Fails
	
	Let's be specific about what goes wrong, because understanding the failure modes is the first step toward fixing them.
	
	**The ownership problem.** Whose job is it to regenerate after a schema change? The person who changed the schema? The person who owns the data layer? The tech lead? Nobody has a clear answer, which means sometimes everyone does it (chaos) and sometimes nobody does it (drift).
	
	**The configuration problem.** EF Core Power Tools stores settings in JSON files. Namespaces, nullable reference types, navigation property generation, renaming rules. There are dozens of options. If developers regenerate with different configurations, you get inconsistent output. Same database, different generated code.
	
	**The tooling problem.** Regeneration requires Visual Studio with the extension installed. CI servers don't have Visual Studio. New developers might not have the extension. Remote development setups might not support it. The process that works on one machine doesn't necessarily work on another.
	
	**The noise problem.** Regeneration often produces massive diffs. Property reordering, whitespace changes, attribute additions. Stuff that doesn't represent actual schema changes but clutters up the commit. Developers learn to distrust regeneration diffs, which makes them reluctant to regenerate, which makes the problem worse.
	
	**The timing problem.** Even when everyone knows the process, there's no enforcement. You can commit code that references a column the models don't have, and the build might still pass if nothing actually uses that code path yet. The error surfaces later, in a different context, when the connection to the original schema change is long forgotten.
	
	None of these are individually catastrophic. Together, they add up to a process that works in theory but fails in practice.
	
	---
	
	## The Idea
	
	Here's the thought that eventually became this project: if model generation can be invoked from the command line (and it can, via EF Core Power Tools CLI), then model generation can be part of the build.
	
	Not a separate step you remember to run. Not a manual process with unclear ownership. Just part of what happens when you run `dotnet build`.
	
	The build already knows how to compile your code. It already knows how to restore packages, run analyzers, produce artifacts. Adding "generate EF Core models from the schema" to that list isn't conceptually different from any other build-time code generation.
	
	If the build handles it, the ownership question disappears. The build owns it. If the build handles it with consistent configuration, the drift disappears. Everyone gets the same output. If the build handles it on every machine, the tooling problem disappears. No special extensions required.
	
	This is JD.Efcpt.Build: an MSBuild integration that makes EF Core model generation automatic.
	
	---
	
	## How It Actually Works
	
	The package hooks into your build through MSBuild targets that run before compilation. When you build, it:
	
	1. **Finds your schema source.** Either a SQL Server Database Project (`.sqlproj`) that gets compiled to a DACPAC, or a connection string pointing to a live database.
	
	2. **Computes a fingerprint.** A hash of all the inputs: the DACPAC or schema metadata, the configuration file, the renaming rules, any custom templates. This fingerprint represents "the current state of everything that affects generation."
	
	3. **Compares to the previous fingerprint.** If they match, nothing changed, and generation is skipped. If they differ, something changed, and generation runs.
	
	4. **Generates models.** Using EF Core Power Tools CLI, same as you'd run manually, but automated. Output goes to `obj/efcpt/Generated/` with a `.g.cs` extension.
	
	5. **Adds generated files to compilation.** Automatically. You don't edit your project file or manage includes.
	
	The fingerprinting is what makes this practical. You don't want generation running on every build. That would be slow and developers would hate it. The fingerprint check is fast (XxHash64, designed for exactly this kind of content comparison), so incremental builds have essentially zero overhead. Generation only runs when inputs actually change.
	
	---
	
	## Two Ways to Get Your Schema
	
	Different teams manage database schemas differently, so the package supports two modes.
	
	**DACPAC Mode** is for teams with SQL Server Database Projects. You have a `.sqlproj` that defines your schema in version-controlled SQL files. The package builds this project to produce a DACPAC, then generates models from that DACPAC.
	
	```xml
	<PropertyGroup>
	  <EfcptSqlProj>..\Database\MyDatabase.sqlproj</EfcptSqlProj>
	</PropertyGroup>
	```
	
	This is nice because your schema is code. It lives in source control. Changes go through pull requests. The DACPAC is a build artifact, and models are derived from that artifact deterministically.
	
	**Connection String Mode** is for teams without database projects. Maybe you apply migrations to a dev database and want to scaffold from that. Maybe you're working against a cloud database. Maybe you just don't want to deal with DACPACs.
	
	```xml
	<PropertyGroup>
	  <EfcptConnectionString>$(DB_CONNECTION_STRING)</EfcptConnectionString>
	</PropertyGroup>
	```
	
	The package connects, queries system tables to understand the schema, and generates from that. The fingerprint is computed from the schema metadata, so incremental builds still work. If the schema hasn't changed, generation is skipped.
	
	Both modes use the same configuration files and produce the same kind of output. They just differ in where the schema comes from.
	
	---
	
	## Setting It Up
	
	The minimum setup is almost trivial:
	
	```xml
	<ItemGroup>
	  <PackageReference Include="JD.Efcpt.Build" Version="1.0.0" />
	</ItemGroup>
	```
	
	If you have a `.sqlproj` in your solution and an `efcpt-config.json` in your project directory, that's it. Run `dotnet build` and models appear.
	
	For more control, you add configuration. The `efcpt-config.json` controls generation behavior:
	
	```json
	{
	  "names": {
	    "root-namespace": "MyApp.Data",
	    "dbcontext-name": "ApplicationDbContext"
	  },
	  "code-generation": {
	    "use-nullable-reference-types": true,
	    "enable-on-configuring": false
	  }
	}
	```
	
	The `enable-on-configuring: false` means your DbContext won't have a hardcoded connection string. You configure that in your DI container, where it belongs.
	
	If your database uses naming conventions that don't map cleanly to C#, you add renaming rules:
	
	```json
	[
	  {
	    "SchemaName": "dbo",
	    "Tables": [
	      {
	        "Name": "tbl_Users",
	        "NewName": "User",
	        "Columns": [
	          { "Name": "user_id", "NewName": "Id" }
	        ]
	      }
	    ]
	  }
	]
	```
	
	Now `tbl_Users.user_id` becomes `User.Id`. The database can keep its conventions, your C# code can have its conventions, and the mapping is explicit and version-controlled.
	
	---
	
	## What About Custom Code?
	
	A reasonable concern: "I have computed properties and validation methods on my entities. Won't regeneration overwrite those?"
	
	This is what partial classes are for.
	
	The generated entity is one half:
	
	```csharp
	// obj/efcpt/Generated/User.g.cs
	public partial class User
	{
	    public int Id { get; set; }
	    public string Email { get; set; }
	    public string FirstName { get; set; }
	    public string LastName { get; set; }
	}
	```
	
	Your custom logic is the other half:
	
	```csharp
	// Models/User.cs
	public partial class User
	{
	    public string FullName => $"{FirstName} {LastName}";
	
	    public bool HasValidEmail => Email?.Contains("@") ?? false;
	}
	```
	
	Both compile into a single class. The generated half gets regenerated every build. Your custom half stays exactly as you wrote it.
	
	This separation is actually cleaner than mixing generated and custom code in the same file. You know at a glance what's generated (`.g.cs` in `obj/`) and what's yours (everything else).
	
	---
	
	## CI/CD Without Special Steps
	
	One of the pain points I mentioned earlier was CI/CD. Manual regeneration doesn't work in automated pipelines. You're stuck either committing generated code (merge conflicts) or maintaining custom regeneration scripts (fragile).
	
	With the build handling generation, CI just works:
	
	```yaml
	steps:
	  - uses: actions/checkout@v4
	
	  - name: Setup .NET
	    uses: actions/setup-dotnet@v4
	    with:
	      dotnet-version: '10.0.x'
	
	  - name: Build
	    run: dotnet build --configuration Release
	    env:
	      DB_CONNECTION_STRING: ${{ secrets.DB_CONNECTION_STRING }}
	```
	
	No special steps for EF Core generation. The build handles it. On .NET 10+, the package uses `dotnet dnx` to execute the tool directly from the package feed without requiring installation. On older versions, it uses tool manifests or global tools.
	
	Pull requests that include schema changes automatically include the corresponding model changes, because both happen during the build. Schema and code are validated together.
	
	---
	
	## When Things Go Wrong
	
	Things will go wrong. Here's how you figure out what happened.
	
	**Enable verbose logging:**
	
	```xml
	<PropertyGroup>
	  <EfcptLogVerbosity>detailed</EfcptLogVerbosity>
	</PropertyGroup>
	```
	
	Build output now includes exactly what's happening: which inputs were found, what fingerprint was computed, whether generation ran or was skipped.
	
	**Check the resolved inputs:**
	
	After a build, look at `obj/efcpt/resolved-inputs.json`. This shows exactly what the package found for each input. If something's wrong, you'll see it here.
	
	**Inspect the fingerprint:**
	
	The fingerprint is stored at `obj/efcpt/fingerprint.txt`. If generation is running unexpectedly (or not running when it should), the fingerprint tells you whether inputs changed from the package's perspective.
	
	---
	
	## Who This Is For
	
	I want to be honest about fit.
	
	**This is probably for you if:**
	
	You're doing database-first development and you've experienced the regeneration coordination problem. The "did someone regenerate?" question has come up, and the answer wasn't always clear.
	
	Your schema changes regularly. If you're shipping schema changes weekly, manual regeneration becomes friction.
	
	You want builds that work identically everywhere. Local machines, CI servers, new developer laptops. Everyone should get the same generated code from the same inputs.
	
	**This probably isn't for you if:**
	
	Your schema is essentially static. If schema changes are rare, manual regeneration isn't that painful.
	
	You're using code-first migrations. If migrations are your source of truth, you're solving a different problem.
	
	You're not using EF Core Power Tools already. This package automates EF Core Power Tools; if you're using a different generation approach, this doesn't apply.
	
	---
	
	## The Groan, Addressed
	
	"Where did database first go?"
	
	It's been years since EF Core launched without reverse engineering, and the tooling has caught up. EF Core Power Tools exists. The CLI exists. The capability is there.
	
	But capability isn't the same as workflow. Having the tools isn't the same as having a process that works reliably across a team, across time, across environments.
	
	JD.Efcpt.Build is an attempt to close that gap. To take the capability that exists and make it automatic. To make the build the owner of model generation, so humans don't have to remember to do it.
	
	Your database schema is the source of truth. This package just makes sure your code reflects that truth, every time you build, without manual intervention.
	
	One less thing to coordinate. One less thing to forget. One less thing to go wrong in production because a manual step got skipped.
	
	That's the pitch. Give it a try if it fits your situation.
	
	---
	
	*JD.Efcpt.Build is [open source](https://github.com/JerrettDavis/JD.Efcpt.Build) and available on [NuGet](https://www.nuget.org/packages/JD.Efcpt.Build).*",
	        ContentHtml = "<h1 id="where-did-database-first-go">&quot;Where Did Database First Go?&quot;</h1>
	<p>If you were using Entity Framework when EF Core first dropped, you probably remember the moment you went looking for database-first support and found... nothing.</p>
	<p>EF Core launched as a code-first framework. The Reverse Engineer tooling that EF6 developers relied on (the right-click, point at a database, generate your models workflow) wasn't there. Microsoft's position was essentially &quot;migrations are the future, figure it out.&quot; And if your team had an existing database, or a DBA who actually owned the schema, or compliance requirements that meant the database was the source of truth... well, good luck with that.</p>
	<p>The community's response was immediate and loud. &quot;Where did database first go?&quot; became a recurring theme in GitHub issues, Stack Overflow questions, and the quiet frustration of developers who just wanted to talk to their database without hand-writing a hundred entity classes.</p>
	<p>Eventually, tooling caught up. EF Core Power Tools emerged as the community answer: a Visual Studio extension that brought back the reverse engineering workflow. You could point it at a database or a DACPAC, configure some options, and generate your models. Problem solved, mostly.</p>
	<p>But here's the thing about manual processes: they work fine right up until they don't.</p>
	<hr />
	<h2 id="the-problem-that-keeps-happening">The Problem That Keeps Happening</h2>
	<p>I've spent enough time in codebases with legacy data layers to recognize a pattern. It goes something like this:</p>
	<p>A project starts with good intentions. Someone sets up EF Core Power Tools, generates the initial models, commits everything, and documents the process. &quot;When the schema changes, regenerate the models using this tool with these settings.&quot; Clear enough.</p>
	<p>Then time passes.</p>
	<p>The developer who set it up leaves. The documentation gets stale. Someone regenerates with slightly different settings and commits the result. Someone else forgets to regenerate entirely after a schema change. The models drift. The configuration drifts. Nobody's quite sure what the &quot;correct&quot; regeneration process is anymore, so people just... stop doing it consistently.</p>
	<p>This isn't a dramatic failure. It's a slow erosion. The kind of problem that doesn't announce itself until you're debugging a production issue and realize the entity class doesn't have a column that's been in the database for six months.</p>
	<p>If you've worked in a codebase long enough, you've probably seen some version of this. Maybe you've been the person who discovered the drift. Maybe you've been the person who caused it. (No judgment. We've all been there.)</p>
	<p>The frustrating part is that the fix is always the same: regenerate the models, commit the changes, remind everyone to regenerate after schema changes. And then six months later, you're having the same conversation again.</p>
	<hr />
	<h2 id="why-manual-regeneration-fails">Why Manual Regeneration Fails</h2>
	<p>Let's be specific about what goes wrong, because understanding the failure modes is the first step toward fixing them.</p>
	<p><strong>The ownership problem.</strong> Whose job is it to regenerate after a schema change? The person who changed the schema? The person who owns the data layer? The tech lead? Nobody has a clear answer, which means sometimes everyone does it (chaos) and sometimes nobody does it (drift).</p>
	<p><strong>The configuration problem.</strong> EF Core Power Tools stores settings in JSON files. Namespaces, nullable reference types, navigation property generation, renaming rules. There are dozens of options. If developers regenerate with different configurations, you get inconsistent output. Same database, different generated code.</p>
	<p><strong>The tooling problem.</strong> Regeneration requires Visual Studio with the extension installed. CI servers don't have Visual Studio. New developers might not have the extension. Remote development setups might not support it. The process that works on one machine doesn't necessarily work on another.</p>
	<p><strong>The noise problem.</strong> Regeneration often produces massive diffs. Property reordering, whitespace changes, attribute additions. Stuff that doesn't represent actual schema changes but clutters up the commit. Developers learn to distrust regeneration diffs, which makes them reluctant to regenerate, which makes the problem worse.</p>
	<p><strong>The timing problem.</strong> Even when everyone knows the process, there's no enforcement. You can commit code that references a column the models don't have, and the build might still pass if nothing actually uses that code path yet. The error surfaces later, in a different context, when the connection to the original schema change is long forgotten.</p>
	<p>None of these are individually catastrophic. Together, they add up to a process that works in theory but fails in practice.</p>
	<hr />
	<h2 id="the-idea">The Idea</h2>
	<p>Here's the thought that eventually became this project: if model generation can be invoked from the command line (and it can, via EF Core Power Tools CLI), then model generation can be part of the build.</p>
	<p>Not a separate step you remember to run. Not a manual process with unclear ownership. Just part of what happens when you run <code>dotnet build</code>.</p>
	<p>The build already knows how to compile your code. It already knows how to restore packages, run analyzers, produce artifacts. Adding &quot;generate EF Core models from the schema&quot; to that list isn't conceptually different from any other build-time code generation.</p>
	<p>If the build handles it, the ownership question disappears. The build owns it. If the build handles it with consistent configuration, the drift disappears. Everyone gets the same output. If the build handles it on every machine, the tooling problem disappears. No special extensions required.</p>
	<p>This is JD.Efcpt.Build: an MSBuild integration that makes EF Core model generation automatic.</p>
	<hr />
	<h2 id="how-it-actually-works">How It Actually Works</h2>
	<p>The package hooks into your build through MSBuild targets that run before compilation. When you build, it:</p>
	<ol>
	<li><p><strong>Finds your schema source.</strong> Either a SQL Server Database Project (<code>.sqlproj</code>) that gets compiled to a DACPAC, or a connection string pointing to a live database.</p>
	</li>
	<li><p><strong>Computes a fingerprint.</strong> A hash of all the inputs: the DACPAC or schema metadata, the configuration file, the renaming rules, any custom templates. This fingerprint represents &quot;the current state of everything that affects generation.&quot;</p>
	</li>
	<li><p><strong>Compares to the previous fingerprint.</strong> If they match, nothing changed, and generation is skipped. If they differ, something changed, and generation runs.</p>
	</li>
	<li><p><strong>Generates models.</strong> Using EF Core Power Tools CLI, same as you'd run manually, but automated. Output goes to <code>obj/efcpt/Generated/</code> with a <code>.g.cs</code> extension.</p>
	</li>
	<li><p><strong>Adds generated files to compilation.</strong> Automatically. You don't edit your project file or manage includes.</p>
	</li>
	</ol>
	<p>The fingerprinting is what makes this practical. You don't want generation running on every build. That would be slow and developers would hate it. The fingerprint check is fast (XxHash64, designed for exactly this kind of content comparison), so incremental builds have essentially zero overhead. Generation only runs when inputs actually change.</p>
	<hr />
	<h2 id="two-ways-to-get-your-schema">Two Ways to Get Your Schema</h2>
	<p>Different teams manage database schemas differently, so the package supports two modes.</p>
	<p><strong>DACPAC Mode</strong> is for teams with SQL Server Database Projects. You have a <code>.sqlproj</code> that defines your schema in version-controlled SQL files. The package builds this project to produce a DACPAC, then generates models from that DACPAC.</p>
	<pre><code class="language-xml">
	&lt;/PropertyGroup&gt;
	</code></pre>
	<p>This is nice because your schema is code. It lives in source control. Changes go through pull requests. The DACPAC is a build artifact, and models are derived from that artifact deterministically.</p>
	<p><strong>Connection String Mode</strong> is for teams without database projects. Maybe you apply migrations to a dev database and want to scaffold from that. Maybe you're working against a cloud database. Maybe you just don't want to deal with DACPACs.</p>
	<pre><code class="language-xml">
	&lt;/PropertyGroup&gt;
	</code></pre>
	<p>The package connects, queries system tables to understand the schema, and generates from that. The fingerprint is computed from the schema metadata, so incremental builds still work. If the schema hasn't changed, generation is skipped.</p>
	<p>Both modes use the same configuration files and produce the same kind of output. They just differ in where the schema comes from.</p>
	<hr />
	<h2 id="setting-it-up">Setting It Up</h2>
	<p>The minimum setup is almost trivial:</p>
	<pre><code class="language-xml">
	</code></pre>
	<p>If you have a <code>.sqlproj</code> in your solution and an <code>efcpt-config.json</code> in your project directory, that's it. Run <code>dotnet build</code> and models appear.</p>
	<p>For more control, you add configuration. The <code>efcpt-config.json</code> controls generation behavior:</p>
	<pre><code class="language-json">{
	  &quot;names&quot;: {
	    &quot;root-namespace&quot;: &quot;MyApp.Data&quot;,
	    &quot;dbcontext-name&quot;: &quot;ApplicationDbContext&quot;
	  },
	  &quot;code-generation&quot;: {
	    &quot;use-nullable-reference-types&quot;: true,
	    &quot;enable-on-configuring&quot;: false
	  }
	}
	</code></pre>
	<p>The <code>enable-on-configuring: false</code> means your DbContext won't have a hardcoded connection string. You configure that in your DI container, where it belongs.</p>
	<p>If your database uses naming conventions that don't map cleanly to C#, you add renaming rules:</p>
	<pre><code class="language-json">[
	  {
	    &quot;SchemaName&quot;: &quot;dbo&quot;,
	    &quot;Tables&quot;: [
	      {
	        &quot;Name&quot;: &quot;tbl_Users&quot;,
	        &quot;NewName&quot;: &quot;User&quot;,
	        &quot;Columns&quot;: [
	          { &quot;Name&quot;: &quot;user_id&quot;, &quot;NewName&quot;: &quot;Id&quot; }
	        ]
	      }
	    ]
	  }
	]
	</code></pre>
	<p>Now <code>tbl_Users.user_id</code> becomes <code>User.Id</code>. The database can keep its conventions, your C# code can have its conventions, and the mapping is explicit and version-controlled.</p>
	<hr />
	<h2 id="what-about-custom-code">What About Custom Code?</h2>
	<p>A reasonable concern: &quot;I have computed properties and validation methods on my entities. Won't regeneration overwrite those?&quot;</p>
	<p>This is what partial classes are for.</p>
	<p>The generated entity is one half:</p>
	<pre><code class="language-csharp">// obj/efcpt/Generated/User.g.cs
	public partial class User
	{
	    public int Id { get; set; }
	    public string Email { get; set; }
	    public string FirstName { get; set; }
	    public string LastName { get; set; }
	}
	</code></pre>
	<p>Your custom logic is the other half:</p>
	<pre><code class="language-csharp">// Models/User.cs
	public partial class User
	{
	    public string FullName =&gt; $&quot;{FirstName} {LastName}&quot;;
	
	    public bool HasValidEmail =&gt; Email?.Contains(&quot;@&quot;) ?? false;
	}
	</code></pre>
	<p>Both compile into a single class. The generated half gets regenerated every build. Your custom half stays exactly as you wrote it.</p>
	<p>This separation is actually cleaner than mixing generated and custom code in the same file. You know at a glance what's generated (<code>.g.cs</code> in <code>obj/</code>) and what's yours (everything else).</p>
	<hr />
	<h2 id="cicd-without-special-steps">CI/CD Without Special Steps</h2>
	<p>One of the pain points I mentioned earlier was CI/CD. Manual regeneration doesn't work in automated pipelines. You're stuck either committing generated code (merge conflicts) or maintaining custom regeneration scripts (fragile).</p>
	<p>With the build handling generation, CI just works:</p>
	<pre><code class="language-yaml">steps:
	  - uses: actions/checkout@v4
	
	  - name: Setup .NET
	    uses: actions/setup-dotnet@v4
	    with:
	      dotnet-version: '10.0.x'
	
	  - name: Build
	    run: dotnet build --configuration Release
	    env:
	      DB_CONNECTION_STRING: ${{ secrets.DB_CONNECTION_STRING }}
	</code></pre>
	<p>No special steps for EF Core generation. The build handles it. On .NET 10+, the package uses <code>dotnet dnx</code> to execute the tool directly from the package feed without requiring installation. On older versions, it uses tool manifests or global tools.</p>
	<p>Pull requests that include schema changes automatically include the corresponding model changes, because both happen during the build. Schema and code are validated together.</p>
	<hr />
	<h2 id="when-things-go-wrong">When Things Go Wrong</h2>
	<p>Things will go wrong. Here's how you figure out what happened.</p>
	<p><strong>Enable verbose logging:</strong></p>
	<pre><code class="language-xml">
	&lt;/PropertyGroup&gt;
	</code></pre>
	<p>Build output now includes exactly what's happening: which inputs were found, what fingerprint was computed, whether generation ran or was skipped.</p>
	<p><strong>Check the resolved inputs:</strong></p>
	<p>After a build, look at <code>obj/efcpt/resolved-inputs.json</code>. This shows exactly what the package found for each input. If something's wrong, you'll see it here.</p>
	<p><strong>Inspect the fingerprint:</strong></p>
	<p>The fingerprint is stored at <code>obj/efcpt/fingerprint.txt</code>. If generation is running unexpectedly (or not running when it should), the fingerprint tells you whether inputs changed from the package's perspective.</p>
	<hr />
	<h2 id="who-this-is-for">Who This Is For</h2>
	<p>I want to be honest about fit.</p>
	<p><strong>This is probably for you if:</strong></p>
	<p>You're doing database-first development and you've experienced the regeneration coordination problem. The &quot;did someone regenerate?&quot; question has come up, and the answer wasn't always clear.</p>
	<p>Your schema changes regularly. If you're shipping schema changes weekly, manual regeneration becomes friction.</p>
	<p>You want builds that work identically everywhere. Local machines, CI servers, new developer laptops. Everyone should get the same generated code from the same inputs.</p>
	<p><strong>This probably isn't for you if:</strong></p>
	<p>Your schema is essentially static. If schema changes are rare, manual regeneration isn't that painful.</p>
	<p>You're using code-first migrations. If migrations are your source of truth, you're solving a different problem.</p>
	<p>You're not using EF Core Power Tools already. This package automates EF Core Power Tools; if you're using a different generation approach, this doesn't apply.</p>
	<hr />
	<h2 id="the-groan-addressed">The Groan, Addressed</h2>
	<p>&quot;Where did database first go?&quot;</p>
	<p>It's been years since EF Core launched without reverse engineering, and the tooling has caught up. EF Core Power Tools exists. The CLI exists. The capability is there.</p>
	<p>But capability isn't the same as workflow. Having the tools isn't the same as having a process that works reliably across a team, across time, across environments.</p>
	<p>JD.Efcpt.Build is an attempt to close that gap. To take the capability that exists and make it automatic. To make the build the owner of model generation, so humans don't have to remember to do it.</p>
	<p>Your database schema is the source of truth. This package just makes sure your code reflects that truth, every time you build, without manual intervention.</p>
	<p>One less thing to coordinate. One less thing to forget. One less thing to go wrong in production because a manual step got skipped.</p>
	<p>That's the pitch. Give it a try if it fits your situation.</p>
	<hr />
	<p><em>JD.Efcpt.Build is <a href="https://github.com/JerrettDavis/JD.Efcpt.Build">open source</a> and available on <a href="https://www.nuget.org/packages/JD.Efcpt.Build">NuGet</a>.</em></p>
	",
	        Date = <2025-12-21>,
	        Description = "JD.Efcpt.Build automates EF Core Power Tools scaffolding during the build, keeping database-first models in sync with schema changes without manual regeneration.",
	        Featured = <null>,
	        Id = "jd.efcpt.build",
	        Series = <null>,
	        SeriesOrder = <null>,
	        Source = "syndicated",
	        Stub = "JD.Efcpt.Build automates EF Core Power Tools scaffolding during the build, keeping database-first models in sync with schema changes without manual regeneration.",
	        Tags = {"jd-efcpt-build", "dotnet", "ef-core", "database-first", "msbuild", "code-generation", "tooling", "automation"},
	        Title = "JD.Efcpt.Build",
	        UseToc = True,
	        WordCount = 1953
	    },
	    JdhPro.Tests.E2E.Models.BlogPostDto
	    {
	        CanonicalUrl = "https://jerrettdavis.com/blog/you-dont-hate-abstractions",
	        Categories = {"Programming", "Software Engineering", "Architecture"},
	        Content = "It’s an hour until you’re free for the weekend, and you’re trying to knock out one
	last ticket before you escape into whatever assuredly action-packed plans await you.
	You spot a seemingly harmless task: "Add Middle Initial to User Name Display."
	
	You chuckle. Easy. A palate cleanser. A victory lap.
	You assign the ticket, flip it from `New` to `Active`, and let your IDE warm up
	while you drift into a pleasant daydream about not being here.
	
	But then the search results begin to appear.
	Slowly. Line by line.
	
	And your reverie begins to rot.
	
	> `IUserNameStrategy`, `UserNameContext`, `UserNameDisplayStrategyFactory`,
	> `StandardUserNameDisplayStrategy`, `FormalUserNameDisplayStrategy`,
	> `InformalUserNameDisplayStrategy`, `UserNameDisplayModule`, …
	
	Incredulous, your chuckle hardens into a throat-scraping noise somewhere
	between a laugh and a cry.
	
	"What in the Gang-of-Fuck is happening here," you think, feeling your pulse tick up.
	"Either someone read [Refactoring.Guru](https://refactoring.guru) like it was scripture and decided to
	baptize the codebase in every pattern they learned, or a grizzled enterprise
	veteran escaped some Netflix-adjacent monolith and is trying to upskill in
	TypeScript. Because surely no sane developer would build this much redirection
	for such a trivial feature… right?"
	
	That tiny, spiraling task is a perfect microcosm of a continuous debate across engineering circles: when does abstraction help, and when does it become a hindrance?
	
	---
	
	I recently stumbled across the humorous article [You're Not Building Netflix: Stop Coding Like You Are](https://dev.to/adamthedeveloper/youre-not-building-netflix-stop-coding-like-you-are-1707?ref=dailydev) by Adam — The Developer. But though it resonates in many ways, its broader critique is ultimately misdirected.
	
	Adam opens with a more complete version of the code mocked in my prologue, and uses the verbosity and obscurity of that abstraction pile as the springboard for a rebuke of using enterprise patterns pretty much across the board. While the author does allow for some abstractions, they're limited in application and scope.
	
	The problem isn’t that the complaint is wrong. It’s that it points the finger at the wrong culprit, just about literally missing the forest for the trees.
	
	Abstractions are fundamental and essential. They are the elementary particles of software, the quarks and leptons that bind into the subatomic structures that become the atoms our earliest techno-wizards fused into molecules. Today we combine those same basic elements into the compounds and contraptions made for use by millions. Without abstractions, we are left helpless in the ever-increasing parallel streams of pulsating electrical currents, rushing through specialized, intricately forged rocks that artisan wizards once trapped lightning inside and somehow convinced to think.
	
	But even with all that power at our disposal, the way we use these building blocks matters. Chemistry offers a fitting parallel. Food chemists, for example, have spent decades learning how to repurpose industrial byproducts into stabilizers, textures, preservatives, and anything else that can be quietly slipped into a recipe. Much of this work is impressive and innovative, but some of it is little more than creative waste disposal disguised as convenience: a brilliant hack in the short term and a lingering problem in the long one.
	
	Developers can fall into the same pattern. We learn a new technique or pattern or clever trick and then spend the next year pouring it into every beaker we can find. We are not always discovering better processes. Sometimes we are just repackaging the same product and calling it progress. When that happens, we are not solving problems. We are manufacturing new ones that future maintainers will curse our names over.
	
	A developer must be architect, engineer, mechanic, and driver all at once. It is fine to know how to fix a specific issue, but once that problem is solved, that knowledge should become a building block for solving the next one. If we keep returning to maintain the same solution day after day, then what we built was never a solution at all. It was a slow-burning maintenance burden that we misfiled as "done."
	
	Abstractions exist to reduce complexity, not to multiply it. Their purpose is to lighten the cognitive load, to lift the details off your desk so you can see the shape of the problem in front of you. Terse, repetitive, wire-on-the-floor code that looks like it tumbled out of a [flickering green CRT from 1999 may impress the authors who have stared at machine code long enough to discern hair color from a data stream](https://youtu.be/MvEXkd3O2ow), but it does not serve the broader team or the system that outlives them. Abstractions only do their job when they are aligned with the actual problem being solved, and that brings us to the part many developers skip entirely: modeling your software after the problem you are solving.
	
	## Seeing the Problem Before Solving It
	
	When you build a system, any system, even a disposable script, the first responsibility is understanding why it exists. What problem does it address. Has that problem been solved before. If so, what makes the existing solution insufficient for you now. Understanding that difference is the foundation that everything else must sit on.
	
	I learned this the hard way as a homeowner. My house is old enough to have grounded me if I talked back to it as a teenager. A couple of years ago we went through a near-total remodel. We did some work before and shortly after our daughter was born, but within a year new problems started surfacing. We brought in a structural engineer. The slab foundation was heaving. After some exploration we discovered the culprit: the original cast iron sewage line had split along both the top and bottom, creating pressure changes and settling issues throughout the house.
	
	The fix was not small. We pulled up every inch of flooring. Replaced baseboards. Repaired drywall. Fixed the broken line. Repainted entire sections. Redid trim. Installed piers. Pumped in foundation foam. Cashed in favors. Lost many weekends. And yet, even with all that, it still cost far less than buying an equivalent house in the current market at the current rates.
	
	The lesson is simple. Things are rarely a total loss. Even when a structure looks hopeless, even when someone has effectively set fire to best practices, even when regulations or markets or technologies have shifted dramatically, there are almost always assets worth salvaging inside the wreckage. You should not bulldoze unless you know you have truly exhausted the alternatives.
	
	Before throwing away any system and starting another from scratch, assess what you already have. Understand what is broken, what is sound, and what simply needs reinforcement. Software, like houses, tends to rot in specific places for specific reasons. Understanding those reasons is what separates renovation from reinvention.
	
	## The Nightstand Problem
	
	The same principle applies at a smaller scale. You may already own a perfectly functional house with perfectly functional furniture, yet still want a nightstand you do not currently possess. Your choices are straightforward. You can hope someone has decided to let go of one that happens to meet your criteria. That is the open source gamble. You can buy one, constrained only by budget and whatever definition of quality the manufacturer is committed to that week. Or you can build one yourself, limited only by your skills, imagination, and tolerance for sawdust.
	
	If your goal is personal satisfaction or experimentation, then by all means build the nightstand. But if your goal is to sell or support a product that helps make money, you are no longer just hobby-carpenting. You are operating in the domain of enterprise software.
	
	And when you are building enterprise software, you must view the system from the top down while designing from the bottom up. From the top down, you think about every consumer of your system. In academic terms these are actors. Any system intended to be used, whether by humans or machines, is defined by the interactions between its actors and its responsibilities. Even an autonomous system is both an actor and the architected environment it operates within.
	
	This perspective matters because it forces your abstractions to model the real world rather than some internal taxonomy of clever names. Good abstractions emerge from an understanding of the domain. Bad abstractions emerge from an understanding of a design pattern book.
	
	And if you want maintainability, clarity, and longevity, you always want the first.
	
	## Building from Both Directions
	
	Designing software means working from two directions at once. On one hand, you must understand the behavior your system must exhibit. On the other hand, you must understand the shape of the world it lives in. Systems are not invented whole cloth; they crystallize out of the interactions between intentions and constraints. If you ignore either direction, you end up with something brittle, confused, overbuilt, or perpetually unfinished.
	
	There is nothing sacred about any particular architectural style. Pick Domain-Driven, Clean, Vertical Slice, Hexagonal, Layered, or something entirely different. The choice matters far less than your consistency and your commitment to encapsulating concerns properly. Different problems require different arrangements of the same conceptual ingredients. From high altitude, two domains may look identical. Once you descend toward the details, you often discover that one is a bird and the other is an airplane. The trick is knowing when to zoom out and when to zoom in.
	
	Plenty of developers jump immediately into code, but the outside of the system is always the real beginning. What is it supposed to do. Who uses it. Who does it talk to. Who builds it. Who runs it. Who deploys it. Who monitors it. How do you prove it works. These questions define the problem space, and the problem space determines the boundaries and responsibilities your abstractions must reflect.
	
	Even something as small as a script must obey this reality.
	
	Consider a simple provisioning script. First it reads a certificate from the local filesystem so it can authenticate with a remote host. Next it opens an SFTP connection to a distribution server and retrieves a zip file. Then it extracts the archive to a temporary directory provided by the operating system. Finally it executes whatever installers or configuration commands the archive contains.
	
	On the surface this is straightforward, yet every step is shaped by the environment in which it operates. Tools differ between platforms. Available executables change. File paths and separators vary. Temporary directory locations vary. Even the existence or reliability of SFTP clients varies. None of this means we must implement every possible alternative upfront, but it does mean we should acknowledge the existence of alternatives and avoid designing ourselves into a corner where adding support later requires rewriting the entire script.
	
	This principle scales upward. You may choose to place your application data inside a database, but scattering SQL statements across your codebase is an anti-pattern in nearly any architecture not explicitly about database engines or ORM internals. Unless you are writing an RDBMS, data access is rarely the star of the show. The real substance lives in the application logic that interprets, transforms, regulates, or composes that data. Mixing data access concerns directly into that logic creates friction. Separating them reduces friction, which improves maintainability, which improves confidence, which improves speed.
	
	The guiding question is always the same: does this choice help my system model the problem more clearly, or does it merely model my current implementation?
	If it is the former, great. If it is the latter, you are accumulating technical debt even if the code looks clean.
	
	Abstractions aligned with the domain allow your system to grow gracefully. But abstractions aligned with your tooling force your system to grow awkwardly and inconsistently.
	
	This is the difference between designing from both directions and designing from just one.
	
	## Behavior as the Backbone of Architecture
	
	At some point in every software project, the discussion inevitably turns to architecture. Engineers debate whether they should adopt Domain-Driven Design or Clean Architecture, whether their services ought to be hexagonal, layered, vertical-sliced, modular, or some other fashionable geometric configuration, and whether interfaces belong everywhere or nowhere at all. These conversations are interesting, even entertaining, but they often drift into abstraction for abstraction’s sake. The problem is rarely the patterns themselves; rather, it is that these debates frequently occur in a vacuum, disconnected from the actual behaviors the system must exhibit. Humans love patterns, but software only cares about whether it does the right thing.
	
	The most reliable way to design a system, therefore, is to begin with its behavior. A system exists to do something, and if we do not articulate that something clearly, everything downstream becomes guesswork and improvisation. This is precisely where behavior-driven development demonstrates its value. I explore this more deeply in [BDD: Make the Business Write Your Tests](https://jerrettdavis.com/blog/posts/making-the-business-write-your-tests-with-bdd), but in short, BDD forces us to express the responsibilities of the system in language that is precise, verifiable, and shared by both technical and nontechnical stakeholders. A behavior becomes a specification, a test, a boundary, and a contract all at once.
	
	From an architectural perspective, this shift in thinking is transformative. When we model the largest and most meaningful behaviors first and place an abstraction around them, we create an outer shell that defines the system at a conceptual level. From there, we move inward, breaking behaviors down iteratively into smaller and more specific responsibilities. Each division suggests a natural abstraction, but these abstractions are not arbitrary. They emerge directly from the behavior they represent. They are shaped not by the developer’s preferred patterns but by the needs of the domain itself. This recursive approach ensures that abstractions mirror intent rather than implementation details.
	
	Importantly, this recursion is not fractal. We are not attempting to subdivide reality endlessly. Rather, we refine behaviors only until they are sufficiently well understood to be implemented cleanly. Much as one does not explain quantum chromodynamics to teach someone how to scramble an egg, we do not decompose software beyond what clarity and accuracy require. And while many languages encourage the use of interfaces as the primary mechanism for abstraction, the interface is not the abstraction itself. It is merely a convenient way to enforce a contract. The real abstraction is the conceptual boundary it represents. Whether that boundary is expressed as an interface, a type, a configuration object, or a module is irrelevant as long as the contract is clear and consistent.
	
	This is why starting with abstractions like an `IHost` that orchestrates an `IApplication` works so well. These constructs mirror the system’s highest-level behaviors. Once defined, they allow us to drill inward, step by step, carving out responsibilities until the domain takes shape as a set of interlocking, behavior-aligned components. When abstractions are created this way, they tend to be stable. They align with the problem domain rather than the transient needs of a particular implementation, and therefore they seldom need to change unless the underlying behavior changes.
	
	Frequent modification of an abstraction is a warning sign. A well-formed abstraction typically changes only under three conditions: the business behavior has evolved, an overlooked edge case has surfaced, or the original abstraction contained a conceptual flaw. Outside of those circumstances, the need to repeatedly modify an abstraction usually indicates that its boundaries were drawn incorrectly. When adjusting one behavior forces changes across multiple components, the issue is rarely "too many" or "too few" abstractions in an abstract sense. Instead, it is a failure of alignment. The abstraction does not adequately contain the concerns it is supposed to model, and complexity is leaking out of its container and into the rest of the codebase.
	
	Modern tooling makes this problem even more evident. With the availability of source generators, analyzers, expressive type systems, code scaffolding, and dynamic configuration pipelines, there is increasingly little justification for sprawling boilerplate or repetitive structural code. Boilerplate is not a mark of engineering rigor. It is simply untested and uninteresting glue repeated dozens of times because someone did not take steps to automate it. Good abstractions, by contrast, elevate meaning. They allow the domain to be expressed directly without forcing the developer to wade through noise.
	
	This leads naturally to what I consider the ideal state of modern development: a system that is entirely automated from the moment code touches a repository until the moment it reaches a production-like environment. Compilation, testing, packaging, deployment, orchestration, and infrastructure provisioning should not require human involvement. The only manual step should be expressing intent in the form of new or updated behaviors. Every function that exists within the system should originate as a behavior-driven specification capable of running the entire application inside a controlled test environment, complete with containerized dependencies and UI automation tools such as Playwright. Those same tests should also be able to stub dependencies so the scenarios can run in isolation. When the system itself is treated as the first unit under test, orchestration becomes a priority rather than an afterthought.
	
	Achieving this level of automation depends on stability, and that stability depends on disciplined abstraction. Any element that may vary across environments, including configuration values, credentials, infrastructure, connection details, and policies, must be isolated behind settings and contracts that the application can consume without knowing anything about the environment it runs in. Once this encapsulation is in place, behavior-driven specifications can operate confidently, verifying the correctness of the system from the outside in even while its internal components remain free to evolve.
	
	Finally, it is worth stating explicitly that hand-writing repetitive boilerplate code in a CRUD-heavy application, such as repositories, controllers, mappers, DTOs, validators, or entire edge-to-edge layers, is not admirable craftsmanship. It is busywork. If you have twenty entities with identical structural behavior and you are manually writing twenty sets of nearly identical files, the issue is not insufficient discipline. It is insufficient automation. Whether through source generators, templates, reflection-based pipelines, or dynamic modules, these problems can and should be solved generically. Engineers should focus their manual effort on the places where meaning lives: the domain, the behavior, and the boundaries.
	
	Good abstractions do not eliminate complexity; they contain it. Bad abstractions distribute it. And behavior-driven, problem-first design is how we tell the difference.
	
	## From Story to Spec: Describing Behavior First
	
	To make this concrete, return to our original "Add Middle Initial to User Name Display" ticket. Most teams would handle this with a couple of unit tests directly against whatever `UserNameService` or `UserNameFormatter` happens to exist. The tests would exercise a particular class, call a particular method, and assert on a particular string. That can work, but it starts at the implementation, not at the behavior.
	
	If instead we begin with behavior, the specification sounds more like this:
	
	 When a user has a middle name, show the middle initial between the first and last name.
	 When a user does not have a middle name, omit the gap entirely.
	 When a display style changes (for example, "formal" versus "informal"), the rules about how the middle initial appears should still hold.
	
	That is the contract. It does not mention classes, factories, or strategies. It talks about what the system must do from the outside.
	
	With something like my project [TinyBDD](https://github.com/jerrettdavis/TinyBDD), that kind of behavior becomes executable in a fairly direct way. Using the xUnit adapter, a scenario might look like this:
	
	```csharp
	using TinyBDD.Xunit;
	using Xunit;
	
	[Feature("User name display")]
	public class UserNameDisplayScenarios : TinyBddXunitBase
	{
	    [Scenario("Standard display includes middle initial when present")]
	    [Fact]
	    public async Task MiddleInitialIsRenderedWhenPresent()
	    {
	        await Given("a user with first, middle, and last name", () =>
	                new UserName("Ada", "M", "Lovelace"))
	            .When("formatting the user name for standard display", user =>
	                UserNameDisplay.Standard.Format(user))
	            .Then("the result places the middle initial between first and last", formatted =>
	                formatted == "Ada M. Lovelace")
	            .AssertPassed();
	    }
	
	    [Scenario("Standard display omits missing middle initial")]
	    [Fact]
	    public async Task NoMiddleInitialWhenMissing()
	    {
	        await Given("a user with only first and last name", () =>
	                new UserName("Ada", null, "Lovelace"))
	            .When("formatting the user name for standard display", user =>
	                UserNameDisplay.Standard.Format(user))
	            .Then("no dangling spaces or periods appear", formatted =>
	                formatted == "Ada Lovelace")
	            .AssertPassed();
	    }
	}
	```
	
	In these scenarios, the behavior is the first-class citizen. The test does not care whether you use a `UserNameDisplayStrategyFactory`, a dependency-injected `IUserNameFormatter`, or a static helper hidden in a dusty corner of your codebase. It cares that given a user, when you format their name, you get the right string.
	
	The abstractions are already visible in the code, but only as a side effect of expressing behavior:
	
	 `UserName` represents the domain concept of a person’s name, not a UI or persistence model.
	 `UserNameDisplay.Standard` represents a particular display style that the business cares about.
	 The behavior is encoded in the transition from `UserName` to the formatted string, not in a particular class hierarchy.
	
	Notice what is not present: we do not have separate strategies for every permutation of name structure, locale, and display preference. We have a single coherent abstraction around "displaying a user name in the standard way," and the test drives the rules we actually need.
	
	## Letting Abstractions Fall Out of the Domain
	
	Once you have a behavior-focused spec, the abstractions almost draw themselves. One reasonable implementation might look like this:
	
	```csharp
	public sealed record UserName(
	    string First,
	    string? Middle,
	    string Last);
	
	public interface IUserNameDisplay
	{
	    string Format(UserName name);
	}
	
	public sealed class StandardUserNameDisplay : IUserNameDisplay
	{
	    public string Format(UserName name)
	    {
	        if (!string.IsNullOrWhiteSpace(name.Middle))
	        {
	            return $"{name.First} {name.Middle[0]}. {name.Last}";
	        }
	
	        return $"{name.First} {name.Last}";
	    }
	}
	
	public static class UserNameDisplay
	{
	    public static readonly IUserNameDisplay Standard = new StandardUserNameDisplay();
	}
	```
	
	This is not an argument that every trivial formatting problem deserves an interface and a concrete class. You could inline this logic in a static helper and your tests above would still pass. The point is that the abstraction here is small, meaningful, and directly aligned with the behavior we care about. If later the domain grows to include multiple display styles, cultures, or localization concerns, there is already a clear seam to extend. You can introduce additional `IUserNameDisplay` implementations where and when they are genuinely needed, not because a pattern catalog declared that every noun deserves a factory.
	
	If, however, you discover that adding a new behavior requires touching half the classes in the system, that is a sign you have modeled implementation variants rather than domain concepts. The behavior spec remains constant; the code churn reveals where your abstractions are misaligned.
	
	## Scaling the Same Idea Up to the System Level
	
	So far this is all very local. A name goes in, a formatted string comes out. Real systems have much more interesting behaviors: accepting traffic, orchestrating workflows, integrating with external services, healing from transient failures, deploying safely, and so on.
	
	The same discipline still applies. You can treat the application itself as the unit under test and express its behavior with the same style of specification. A high-level scenario might read something like this:
	
	 Given a configured application host and its dependencies
	 When the host starts
	 Then the public API responds to a health probe
	 And all critical services report healthy
	 And any failing dependency is surfaced clearly rather than silently ignored
	
	As an executable TinyBDD scenario, that might look like:
	
	```csharp
	using TinyBDD.Xunit;
	using Xunit;
	
	[Feature("Application startup and health")]
	public class ApplicationHealthScenarios : TinyBddXunitBase
	{
	    [Scenario("Host starts and exposes a healthy API surface")]
	    [Fact]
	    public async Task HostStartsAndReportsHealthy()
	    {
	        await Given("a test host with default configuration", () =>
	                TestApplicationHost.CreateDefault())
	            .When("the host is started", async host =>
	            {
	                await host.StartAsync();
	                return host;
	            })
	            .Then("the health endpoint returns OK", async host =>
	                await AssertHealthEndpointOk(host, "/health"))
	            .And("all critical health checks pass", async host =>
	                await AssertCriticalChecksPass(host))
	            .AssertPassed();
	    }
	
	    private static Task AssertHealthEndpointOk(TestApplicationHost host, string path)
	    {
	        // This could exercise a real HTTP endpoint against a TestServer or containerized instance.
	        // The assertion lives here, but the behavior is defined in the scenario above.
	        throw new NotImplementedException();
	    }
	
	    private static Task AssertCriticalChecksPass(TestApplicationHost host)
	    {
	        // Could query IHealthCheckPublisher, metrics, logs, or an in-memory probe endpoint.
	        throw new NotImplementedException();
	    }
	}
	```
	
	The implementation details behind `TestApplicationHost` are intentionally omitted here, because they are not the main point. What matters is that at the boundary, we are still describing behavior: the host starts, the API responds, health checks pass. Internally, `TestApplicationHost` can wrap an `IHost`, use Testcontainers, spin up a `WebApplicationFactory`, or compose a full stack in Docker. The abstraction exists to let the behavior remain stable while infrastructure details evolve.
	
	This is the same pattern you used on the small scale with `UserNameDisplay`, only now it operates at the level of the entire application. The outermost abstraction represents the system as it is experienced from the outside. Everything underneath exists to satisfy that experience.
	
	## Declarative Core, Automated Edge
	

Report exceeded GitHub limit of 65535 bytes and has been trimmed

Annotations

Check failure on line 50 in JdhPro.Tests.E2E/StepDefinitions/BlogPostSteps.cs

See this annotation in the file changed.

@github-actions github-actions / Test Results

JdhPro.Tests.E2E.Features.BlogPostDetailPageFeature ► Blog post detail page loads successfully

Failed test found in:
  ./test-results/e2e/artifacts/test-results/e2e-tests.trx
Error:
  Expected title not to be <null> because Post should have a title.
Raw output
Expected title not to be <null> because Post should have a title.
   at FluentAssertions.Execution.LateBoundTestFramework.Throw(String message)
   at FluentAssertions.Execution.DefaultAssertionStrategy.HandleFailure(String message)
   at FluentAssertions.Execution.AssertionChain.FailWith(Func`1 getFailureReason)
   at FluentAssertions.Execution.AssertionChain.FailWith(String message)
   at FluentAssertions.Primitives.ReferenceTypeAssertions`2.NotBeNull(String because, Object[] becauseArgs)
   at JdhPro.Tests.E2E.StepDefinitions.BlogPostSteps.ThenIShouldSeeThePostTitle() in /home/runner/work/JDHPro/JDHPro/JdhPro.Tests.E2E/StepDefinitions/BlogPostSteps.cs:line 50
   at Reqnroll.Bindings.AsyncMethodHelper.ConvertTaskOfT(Task task, Boolean getValue)
   at Reqnroll.Bindings.BindingDelegateInvoker.InvokeDelegateAsync(Delegate bindingDelegate, Object[] invokeArgs, ExecutionContextHolder executionContext)
   at Reqnroll.Bindings.BindingInvoker.InvokeBindingAsync(IBinding binding, IContextManager contextManager, Object[] arguments, ITestTracer testTracer, DurationHolder durationHolder)
   at Reqnroll.Infrastructure.TestExecutionEngine.ExecuteStepMatchAsync(BindingMatch match, Object[] arguments, DurationHolder durationHolder)
   at Reqnroll.Infrastructure.TestExecutionEngine.ExecuteStepMatchAsync(BindingMatch match, Object[] arguments, DurationHolder durationHolder)
   at Reqnroll.Infrastructure.TestExecutionEngine.<>c__DisplayClass57_0.<<ExecuteStepAsync>b__7>d.MoveNext()
--- End of stack trace from previous location ---
   at Reqnroll.Infrastructure.TestExecutionEngine.<>c__DisplayClass57_0.<<ExecuteStepAsync>g__HandleStepExecutionExceptions|0>d.MoveNext()
   at Reqnroll.Infrastructure.TestExecutionEngine.OnAfterLastStepAsync()
   at Reqnroll.TestRunner.CollectScenarioErrorsAsync()
   at JdhPro.Tests.E2E.Features.BlogPostDetailPageFeature.ScenarioCleanupAsync()
   at JdhPro.Tests.E2E.Features.BlogPostDetailPageFeature.BlogPostDetailPageLoadsSuccessfully() in /home/runner/work/JDHPro/JDHPro/JdhPro.Tests.E2E/Features/BlogPost.feature:line 15
--- End of stack trace from previous location ---

Check failure on line 78 in JdhPro.Tests.E2E/StepDefinitions/BlogPostSteps.cs

See this annotation in the file changed.

@github-actions github-actions / Test Results

JdhPro.Tests.E2E.Features.BlogPostDetailPageFeature ► Blog post displays full content

Failed test found in:
  ./test-results/e2e/artifacts/test-results/e2e-tests.trx
Error:
  Expected content "<html><head></head><body></body></html>" to contain "<article" because Post content should be rendered as article.
Raw output
Expected content "<html><head></head><body></body></html>" to contain "<article" because Post content should be rendered as article.
   at FluentAssertions.Execution.LateBoundTestFramework.Throw(String message)
   at FluentAssertions.Execution.DefaultAssertionStrategy.HandleFailure(String message)
   at FluentAssertions.Execution.AssertionChain.FailWith(Func`1 getFailureReason)
   at FluentAssertions.Primitives.StringAssertions`1.Contain(String expected, String because, Object[] becauseArgs)
   at JdhPro.Tests.E2E.StepDefinitions.BlogPostSteps.ThenThePostHtmlContentShouldBeRendered() in /home/runner/work/JDHPro/JDHPro/JdhPro.Tests.E2E/StepDefinitions/BlogPostSteps.cs:line 78
   at Reqnroll.Bindings.AsyncMethodHelper.ConvertTaskOfT(Task task, Boolean getValue)
   at Reqnroll.Bindings.BindingDelegateInvoker.InvokeDelegateAsync(Delegate bindingDelegate, Object[] invokeArgs, ExecutionContextHolder executionContext)
   at Reqnroll.Bindings.BindingInvoker.InvokeBindingAsync(IBinding binding, IContextManager contextManager, Object[] arguments, ITestTracer testTracer, DurationHolder durationHolder)
   at Reqnroll.Infrastructure.TestExecutionEngine.ExecuteStepMatchAsync(BindingMatch match, Object[] arguments, DurationHolder durationHolder)
   at Reqnroll.Infrastructure.TestExecutionEngine.ExecuteStepMatchAsync(BindingMatch match, Object[] arguments, DurationHolder durationHolder)
   at Reqnroll.Infrastructure.TestExecutionEngine.<>c__DisplayClass57_0.<<ExecuteStepAsync>b__7>d.MoveNext()
--- End of stack trace from previous location ---
   at Reqnroll.Infrastructure.TestExecutionEngine.<>c__DisplayClass57_0.<<ExecuteStepAsync>g__HandleStepExecutionExceptions|0>d.MoveNext()
   at Reqnroll.Infrastructure.TestExecutionEngine.OnAfterLastStepAsync()
   at Reqnroll.TestRunner.CollectScenarioErrorsAsync()
   at JdhPro.Tests.E2E.Features.BlogPostDetailPageFeature.ScenarioCleanupAsync()
   at JdhPro.Tests.E2E.Features.BlogPostDetailPageFeature.BlogPostDisplaysFullContent() in /home/runner/work/JDHPro/JDHPro/JdhPro.Tests.E2E/Features/BlogPost.feature:line 23
--- End of stack trace from previous location ---

Check failure on line 105 in JdhPro.Tests.E2E/StepDefinitions/BlogPostSteps.cs

See this annotation in the file changed.

@github-actions github-actions / Test Results

JdhPro.Tests.E2E.Features.BlogPostDetailPageFeature ► Blog post displays metadata

Failed test found in:
  ./test-results/e2e/artifacts/test-results/e2e-tests.trx
Error:
  Expected date not to be <null> because Post should display publish date.
Raw output
Expected date not to be <null> because Post should display publish date.
   at FluentAssertions.Execution.LateBoundTestFramework.Throw(String message)
   at FluentAssertions.Execution.DefaultAssertionStrategy.HandleFailure(String message)
   at FluentAssertions.Execution.AssertionChain.FailWith(Func`1 getFailureReason)
   at FluentAssertions.Execution.AssertionChain.FailWith(String message)
   at FluentAssertions.Primitives.ReferenceTypeAssertions`2.NotBeNull(String because, Object[] becauseArgs)
   at JdhPro.Tests.E2E.StepDefinitions.BlogPostSteps.ThenIShouldSeeThePublishDate() in /home/runner/work/JDHPro/JDHPro/JdhPro.Tests.E2E/StepDefinitions/BlogPostSteps.cs:line 105
   at Reqnroll.Bindings.AsyncMethodHelper.ConvertTaskOfT(Task task, Boolean getValue)
   at Reqnroll.Bindings.BindingDelegateInvoker.InvokeDelegateAsync(Delegate bindingDelegate, Object[] invokeArgs, ExecutionContextHolder executionContext)
   at Reqnroll.Bindings.BindingInvoker.InvokeBindingAsync(IBinding binding, IContextManager contextManager, Object[] arguments, ITestTracer testTracer, DurationHolder durationHolder)
   at Reqnroll.Infrastructure.TestExecutionEngine.ExecuteStepMatchAsync(BindingMatch match, Object[] arguments, DurationHolder durationHolder)
   at Reqnroll.Infrastructure.TestExecutionEngine.ExecuteStepMatchAsync(BindingMatch match, Object[] arguments, DurationHolder durationHolder)
   at Reqnroll.Infrastructure.TestExecutionEngine.<>c__DisplayClass57_0.<<ExecuteStepAsync>b__7>d.MoveNext()
--- End of stack trace from previous location ---
   at Reqnroll.Infrastructure.TestExecutionEngine.<>c__DisplayClass57_0.<<ExecuteStepAsync>g__HandleStepExecutionExceptions|0>d.MoveNext()
   at Reqnroll.Infrastructure.TestExecutionEngine.OnAfterLastStepAsync()
   at Reqnroll.TestRunner.CollectScenarioErrorsAsync()
   at JdhPro.Tests.E2E.Features.BlogPostDetailPageFeature.ScenarioCleanupAsync()
   at JdhPro.Tests.E2E.Features.BlogPostDetailPageFeature.BlogPostDisplaysMetadata() in /home/runner/work/JDHPro/JDHPro/JdhPro.Tests.E2E/Features/BlogPost.feature:line 32
--- End of stack trace from previous location ---

Check failure on line 235 in JdhPro.Tests.E2E/StepDefinitions/BlogPostSteps.cs

See this annotation in the file changed.

@github-actions github-actions / Test Results

JdhPro.Tests.E2E.Features.BlogPostDetailPageFeature ► Blog post displays reading time

Failed test found in:
  ./test-results/e2e/artifacts/test-results/e2e-tests.trx
Error:
  Expected readingTime not to be <null> because Post should display reading time.
Raw output
Expected readingTime not to be <null> because Post should display reading time.
   at FluentAssertions.Execution.LateBoundTestFramework.Throw(String message)
   at FluentAssertions.Execution.DefaultAssertionStrategy.HandleFailure(String message)
   at FluentAssertions.Execution.AssertionChain.FailWith(Func`1 getFailureReason)
   at FluentAssertions.Execution.AssertionChain.FailWith(String message)
   at FluentAssertions.Primitives.ReferenceTypeAssertions`2.NotBeNull(String because, Object[] becauseArgs)
   at JdhPro.Tests.E2E.StepDefinitions.BlogPostSteps.ThenIShouldSeeTheEstimatedReadingTime() in /home/runner/work/JDHPro/JDHPro/JdhPro.Tests.E2E/StepDefinitions/BlogPostSteps.cs:line 235
   at Reqnroll.Bindings.AsyncMethodHelper.ConvertTaskOfT(Task task, Boolean getValue)
   at Reqnroll.Bindings.BindingDelegateInvoker.InvokeDelegateAsync(Delegate bindingDelegate, Object[] invokeArgs, ExecutionContextHolder executionContext)
   at Reqnroll.Bindings.BindingInvoker.InvokeBindingAsync(IBinding binding, IContextManager contextManager, Object[] arguments, ITestTracer testTracer, DurationHolder durationHolder)
   at Reqnroll.Infrastructure.TestExecutionEngine.ExecuteStepMatchAsync(BindingMatch match, Object[] arguments, DurationHolder durationHolder)
   at Reqnroll.Infrastructure.TestExecutionEngine.ExecuteStepMatchAsync(BindingMatch match, Object[] arguments, DurationHolder durationHolder)
   at Reqnroll.Infrastructure.TestExecutionEngine.<>c__DisplayClass57_0.<<ExecuteStepAsync>b__7>d.MoveNext()
--- End of stack trace from previous location ---
   at Reqnroll.Infrastructure.TestExecutionEngine.<>c__DisplayClass57_0.<<ExecuteStepAsync>g__HandleStepExecutionExceptions|0>d.MoveNext()
   at Reqnroll.Infrastructure.TestExecutionEngine.OnAfterLastStepAsync()
   at Reqnroll.TestRunner.CollectScenarioErrorsAsync()
   at JdhPro.Tests.E2E.Features.BlogPostDetailPageFeature.ScenarioCleanupAsync()
   at JdhPro.Tests.E2E.Features.BlogPostDetailPageFeature.BlogPostDisplaysReadingTime() in /home/runner/work/JDHPro/JDHPro/JdhPro.Tests.E2E/Features/BlogPost.feature:line 78
--- End of stack trace from previous location ---

Check failure on line 137 in JdhPro.Tests.E2E/StepDefinitions/BlogPostSteps.cs

See this annotation in the file changed.

@github-actions github-actions / Test Results

JdhPro.Tests.E2E.Features.BlogPostDetailPageFeature ► Blog post has proper meta tags for SEO

Failed test found in:
  ./test-results/e2e/artifacts/test-results/e2e-tests.trx
Error:
  Expected title not to be <null> or empty because Page should have a title, but found "".
Raw output
Expected title not to be <null> or empty because Page should have a title, but found "".
   at FluentAssertions.Execution.LateBoundTestFramework.Throw(String message)
   at FluentAssertions.Execution.DefaultAssertionStrategy.HandleFailure(String message)
   at FluentAssertions.Execution.AssertionChain.FailWith(Func`1 getFailureReason)
   at FluentAssertions.Primitives.StringAssertions`1.NotBeNullOrEmpty(String because, Object[] becauseArgs)
   at JdhPro.Tests.E2E.StepDefinitions.BlogPostSteps.ThenThePageShouldHaveATitleMetaTag() in /home/runner/work/JDHPro/JDHPro/JdhPro.Tests.E2E/StepDefinitions/BlogPostSteps.cs:line 137
   at Reqnroll.Bindings.AsyncMethodHelper.ConvertTaskOfT(Task task, Boolean getValue)
   at Reqnroll.Bindings.BindingDelegateInvoker.InvokeDelegateAsync(Delegate bindingDelegate, Object[] invokeArgs, ExecutionContextHolder executionContext)
   at Reqnroll.Bindings.BindingInvoker.InvokeBindingAsync(IBinding binding, IContextManager contextManager, Object[] arguments, ITestTracer testTracer, DurationHolder durationHolder)
   at Reqnroll.Infrastructure.TestExecutionEngine.ExecuteStepMatchAsync(BindingMatch match, Object[] arguments, DurationHolder durationHolder)
   at Reqnroll.Infrastructure.TestExecutionEngine.ExecuteStepMatchAsync(BindingMatch match, Object[] arguments, DurationHolder durationHolder)
   at Reqnroll.Infrastructure.TestExecutionEngine.<>c__DisplayClass57_0.<<ExecuteStepAsync>b__7>d.MoveNext()
--- End of stack trace from previous location ---
   at Reqnroll.Infrastructure.TestExecutionEngine.<>c__DisplayClass57_0.<<ExecuteStepAsync>g__HandleStepExecutionExceptions|0>d.MoveNext()
   at Reqnroll.Infrastructure.TestExecutionEngine.OnAfterLastStepAsync()
   at Reqnroll.TestRunner.CollectScenarioErrorsAsync()
   at JdhPro.Tests.E2E.Features.BlogPostDetailPageFeature.ScenarioCleanupAsync()
   at JdhPro.Tests.E2E.Features.BlogPostDetailPageFeature.BlogPostHasProperMetaTagsForSEO() in /home/runner/work/JDHPro/JDHPro/JdhPro.Tests.E2E/Features/BlogPost.feature:line 47
--- End of stack trace from previous location ---

Check failure on line 55 in JdhPro.Tests.E2E/Features/BlogPost.feature

See this annotation in the file changed.

@github-actions github-actions / Test Results

JdhPro.Tests.E2E.Features.BlogPostDetailPageFeature ► Blog post has table of contents

Failed test found in:
  ./test-results/e2e/artifacts/test-results/e2e-tests.trx
Error:
  Reqnroll.xUnit.ReqnrollPlugin.XUnitInconclusiveException : Test inconclusive: No matching step definition found for one or more steps.
  using System;
  using Reqnroll;
  
  namespace MyNamespace
  {
      [Binding]
      public class StepDefinitions
      {
          private readonly IReqnrollOutputHelper _outputHelper;
  
          public StepDefinitions(IReqnrollOutputHelper outputHelper)
          {
              _outputHelper = outputHelper;
          }
          [When(@"^I navigate to the blog post detail page$")]
          public void WhenINavigateToTheBlogPostDetailPage()
          {
              throw new PendingStepException();
          }
      }
  }
  
Raw output
Reqnroll.xUnit.ReqnrollPlugin.XUnitInconclusiveException : Test inconclusive: No matching step definition found for one or more steps.
using System;
using Reqnroll;

namespace MyNamespace
{
    [Binding]
    public class StepDefinitions
    {
        private readonly IReqnrollOutputHelper _outputHelper;

        public StepDefinitions(IReqnrollOutputHelper outputHelper)
        {
            _outputHelper = outputHelper;
        }
        [When(@"^I navigate to the blog post detail page$")]
        public void WhenINavigateToTheBlogPostDetailPage()
        {
            throw new PendingStepException();
        }
    }
}

   at Reqnroll.xUnit.ReqnrollPlugin.XUnitRuntimeProvider.TestInconclusive(String message)
   at Reqnroll.ErrorHandling.ErrorProvider.ThrowPendingError(ScenarioExecutionStatus testStatus, String message)
   at Reqnroll.Infrastructure.TestExecutionEngine.OnAfterLastStepAsync()
   at Reqnroll.TestRunner.CollectScenarioErrorsAsync()
   at JdhPro.Tests.E2E.Features.BlogPostDetailPageFeature.ScenarioCleanupAsync()
   at JdhPro.Tests.E2E.Features.BlogPostDetailPageFeature.BlogPostHasTableOfContents() in /home/runner/work/JDHPro/JDHPro/JdhPro.Tests.E2E/Features/BlogPost.feature:line 55
--- End of stack trace from previous location ---

Check failure on line 220 in JdhPro.Tests.E2E/StepDefinitions/BlogPostSteps.cs

See this annotation in the file changed.

@github-actions github-actions / Test Results

JdhPro.Tests.E2E.Features.BlogPostDetailPageFeature ► Blog post URL uses slug

Failed test found in:
  ./test-results/e2e/artifacts/test-results/e2e-tests.trx
Error:
  Expected matchingPost not to be <null> because URL should contain a valid post slug.
Raw output
Expected matchingPost not to be <null> because URL should contain a valid post slug.
   at FluentAssertions.Execution.LateBoundTestFramework.Throw(String message)
   at FluentAssertions.Execution.DefaultAssertionStrategy.HandleFailure(String message)
   at FluentAssertions.Execution.AssertionChain.FailWith(Func`1 getFailureReason)
   at FluentAssertions.Execution.AssertionChain.FailWith(String message)
   at FluentAssertions.Primitives.ReferenceTypeAssertions`2.NotBeNull(String because, Object[] becauseArgs)
   at JdhPro.Tests.E2E.StepDefinitions.BlogPostSteps.ThenTheUrlShouldContainThePostSlug() in /home/runner/work/JDHPro/JDHPro/JdhPro.Tests.E2E/StepDefinitions/BlogPostSteps.cs:line 220
   at Reqnroll.Bindings.AsyncMethodHelper.ConvertTaskOfT(Task task, Boolean getValue)
   at Reqnroll.Bindings.BindingDelegateInvoker.InvokeDelegateAsync(Delegate bindingDelegate, Object[] invokeArgs, ExecutionContextHolder executionContext)
   at Reqnroll.Bindings.BindingInvoker.InvokeBindingAsync(IBinding binding, IContextManager contextManager, Object[] arguments, ITestTracer testTracer, DurationHolder durationHolder)
   at Reqnroll.Infrastructure.TestExecutionEngine.ExecuteStepMatchAsync(BindingMatch match, Object[] arguments, DurationHolder durationHolder)
   at Reqnroll.Infrastructure.TestExecutionEngine.ExecuteStepMatchAsync(BindingMatch match, Object[] arguments, DurationHolder durationHolder)
   at Reqnroll.Infrastructure.TestExecutionEngine.<>c__DisplayClass57_0.<<ExecuteStepAsync>b__7>d.MoveNext()
--- End of stack trace from previous location ---
   at Reqnroll.Infrastructure.TestExecutionEngine.<>c__DisplayClass57_0.<<ExecuteStepAsync>g__HandleStepExecutionExceptions|0>d.MoveNext()
   at Reqnroll.Infrastructure.TestExecutionEngine.OnAfterLastStepAsync()
   at Reqnroll.TestRunner.CollectScenarioErrorsAsync()
   at JdhPro.Tests.E2E.Features.BlogPostDetailPageFeature.ScenarioCleanupAsync()
   at JdhPro.Tests.E2E.Features.BlogPostDetailPageFeature.BlogPostURLUsesSlug() in /home/runner/work/JDHPro/JDHPro/JdhPro.Tests.E2E/Features/BlogPost.feature:line 71
--- End of stack trace from previous location ---

Check failure on line 94 in JdhPro.Tests.E2E/Features/BlogPost.feature

See this annotation in the file changed.

@github-actions github-actions / Test Results

JdhPro.Tests.E2E.Features.BlogPostDetailPageFeature ► Code snippets are properly formatted

Failed test found in:
  ./test-results/e2e/artifacts/test-results/e2e-tests.trx
Error:
  Reqnroll.xUnit.ReqnrollPlugin.XUnitInconclusiveException : Test inconclusive: No matching step definition found for one or more steps.
  using System;
  using Reqnroll;
  
  namespace MyNamespace
  {
      [Binding]
      public class StepDefinitions
      {
          private readonly IReqnrollOutputHelper _outputHelper;
  
          public StepDefinitions(IReqnrollOutputHelper outputHelper)
          {
              _outputHelper = outputHelper;
          }
          [When(@"^I navigate to the blog post detail page$")]
          public void WhenINavigateToTheBlogPostDetailPage()
          {
              throw new PendingStepException();
          }
          
          [Then(@"^code blocks should have syntax highlighting$")]
          public void ThenCodeBlocksShouldHaveSyntaxHighlighting()
          {
              throw new PendingStepException();
          }
      }
  }
  
Raw output
Reqnroll.xUnit.ReqnrollPlugin.XUnitInconclusiveException : Test inconclusive: No matching step definition found for one or more steps.
using System;
using Reqnroll;

namespace MyNamespace
{
    [Binding]
    public class StepDefinitions
    {
        private readonly IReqnrollOutputHelper _outputHelper;

        public StepDefinitions(IReqnrollOutputHelper outputHelper)
        {
            _outputHelper = outputHelper;
        }
        [When(@"^I navigate to the blog post detail page$")]
        public void WhenINavigateToTheBlogPostDetailPage()
        {
            throw new PendingStepException();
        }
        
        [Then(@"^code blocks should have syntax highlighting$")]
        public void ThenCodeBlocksShouldHaveSyntaxHighlighting()
        {
            throw new PendingStepException();
        }
    }
}

   at Reqnroll.xUnit.ReqnrollPlugin.XUnitRuntimeProvider.TestInconclusive(String message)
   at Reqnroll.ErrorHandling.ErrorProvider.ThrowPendingError(ScenarioExecutionStatus testStatus, String message)
   at Reqnroll.Infrastructure.TestExecutionEngine.OnAfterLastStepAsync()
   at Reqnroll.TestRunner.CollectScenarioErrorsAsync()
   at JdhPro.Tests.E2E.Features.BlogPostDetailPageFeature.ScenarioCleanupAsync()
   at JdhPro.Tests.E2E.Features.BlogPostDetailPageFeature.CodeSnippetsAreProperlyFormatted() in /home/runner/work/JDHPro/JDHPro/JdhPro.Tests.E2E/Features/BlogPost.feature:line 94
--- End of stack trace from previous location ---

Check failure on line 202 in JdhPro.Tests.E2E/StepDefinitions/BlogPostSteps.cs

See this annotation in the file changed.

@github-actions github-actions / Test Results

JdhPro.Tests.E2E.Features.BlogPostDetailPageFeature ► Navigate between blog posts

Failed test found in:
  ./test-results/e2e/artifacts/test-results/e2e-tests.trx
Error:
  Expected link not to be <null> because Next Post link should exist.
Raw output
Expected link not to be <null> because Next Post link should exist.
   at FluentAssertions.Execution.LateBoundTestFramework.Throw(String message)
   at FluentAssertions.Execution.DefaultAssertionStrategy.HandleFailure(String message)
   at FluentAssertions.Execution.AssertionChain.FailWith(Func`1 getFailureReason)
   at FluentAssertions.Execution.AssertionChain.FailWith(String message)
   at FluentAssertions.Primitives.ReferenceTypeAssertions`2.NotBeNull(String because, Object[] becauseArgs)
   at JdhPro.Tests.E2E.StepDefinitions.BlogPostSteps.WhenIClickLinkInBlogPost(String linkText) in /home/runner/work/JDHPro/JDHPro/JdhPro.Tests.E2E/StepDefinitions/BlogPostSteps.cs:line 202
   at Reqnroll.Bindings.AsyncMethodHelper.ConvertTaskOfT(Task task, Boolean getValue)
   at Reqnroll.Bindings.BindingDelegateInvoker.InvokeDelegateAsync(Delegate bindingDelegate, Object[] invokeArgs, ExecutionContextHolder executionContext)
   at Reqnroll.Bindings.BindingInvoker.InvokeBindingAsync(IBinding binding, IContextManager contextManager, Object[] arguments, ITestTracer testTracer, DurationHolder durationHolder)
   at Reqnroll.Infrastructure.TestExecutionEngine.ExecuteStepMatchAsync(BindingMatch match, Object[] arguments, DurationHolder durationHolder)
   at Reqnroll.Infrastructure.TestExecutionEngine.ExecuteStepMatchAsync(BindingMatch match, Object[] arguments, DurationHolder durationHolder)
   at Reqnroll.Infrastructure.TestExecutionEngine.<>c__DisplayClass57_0.<<ExecuteStepAsync>b__7>d.MoveNext()
--- End of stack trace from previous location ---
   at Reqnroll.Infrastructure.TestExecutionEngine.<>c__DisplayClass57_0.<<ExecuteStepAsync>g__HandleStepExecutionExceptions|0>d.MoveNext()
   at Reqnroll.Infrastructure.TestExecutionEngine.OnAfterLastStepAsync()
   at Reqnroll.TestRunner.CollectScenarioErrorsAsync()
   at JdhPro.Tests.E2E.Features.BlogPostDetailPageFeature.ScenarioCleanupAsync()
   at JdhPro.Tests.E2E.Features.BlogPostDetailPageFeature.NavigateBetweenBlogPosts() in /home/runner/work/JDHPro/JDHPro/JdhPro.Tests.E2E/Features/BlogPost.feature:line 64
--- End of stack trace from previous location ---

Check failure on line 255 in JdhPro.Tests.E2E/StepDefinitions/BlogPostSteps.cs

See this annotation in the file changed.

@github-actions github-actions / Test Results

JdhPro.Tests.E2E.Features.BlogPostDetailPageFeature ► Related posts are displayed

Failed test found in:
  ./test-results/e2e/artifacts/test-results/e2e-tests.trx
Error:
  Expected relatedPosts not to be <null> because Post should have related posts section.
Raw output
Expected relatedPosts not to be <null> because Post should have related posts section.
   at FluentAssertions.Execution.LateBoundTestFramework.Throw(String message)
   at FluentAssertions.Execution.DefaultAssertionStrategy.HandleFailure(String message)
   at FluentAssertions.Execution.AssertionChain.FailWith(Func`1 getFailureReason)
   at FluentAssertions.Execution.AssertionChain.FailWith(String message)
   at FluentAssertions.Primitives.ReferenceTypeAssertions`2.NotBeNull(String because, Object[] becauseArgs)
   at JdhPro.Tests.E2E.StepDefinitions.BlogPostSteps.ThenIShouldSeeRelatedPostsSection() in /home/runner/work/JDHPro/JDHPro/JdhPro.Tests.E2E/StepDefinitions/BlogPostSteps.cs:line 255
   at Reqnroll.Bindings.AsyncMethodHelper.ConvertTaskOfT(Task task, Boolean getValue)
   at Reqnroll.Bindings.BindingDelegateInvoker.InvokeDelegateAsync(Delegate bindingDelegate, Object[] invokeArgs, ExecutionContextHolder executionContext)
   at Reqnroll.Bindings.BindingInvoker.InvokeBindingAsync(IBinding binding, IContextManager contextManager, Object[] arguments, ITestTracer testTracer, DurationHolder durationHolder)
   at Reqnroll.Infrastructure.TestExecutionEngine.ExecuteStepMatchAsync(BindingMatch match, Object[] arguments, DurationHolder durationHolder)
   at Reqnroll.Infrastructure.TestExecutionEngine.ExecuteStepMatchAsync(BindingMatch match, Object[] arguments, DurationHolder durationHolder)
   at Reqnroll.Infrastructure.TestExecutionEngine.<>c__DisplayClass57_0.<<ExecuteStepAsync>b__7>d.MoveNext()
--- End of stack trace from previous location ---
   at Reqnroll.Infrastructure.TestExecutionEngine.<>c__DisplayClass57_0.<<ExecuteStepAsync>g__HandleStepExecutionExceptions|0>d.MoveNext()
   at Reqnroll.Infrastructure.TestExecutionEngine.OnAfterLastStepAsync()
   at Reqnroll.TestRunner.CollectScenarioErrorsAsync()
   at JdhPro.Tests.E2E.Features.BlogPostDetailPageFeature.ScenarioCleanupAsync()
   at JdhPro.Tests.E2E.Features.BlogPostDetailPageFeature.RelatedPostsAreDisplayed() in /home/runner/work/JDHPro/JDHPro/JdhPro.Tests.E2E/Features/BlogPost.feature:line 85
--- End of stack trace from previous location ---

Check failure on line 119 in JdhPro.Tests.E2E/StepDefinitions/BlogPostSteps.cs

See this annotation in the file changed.

@github-actions github-actions / Test Results

JdhPro.Tests.E2E.Features.BlogPostDetailPageFeature ► Syndicated posts display canonical URL

Failed test found in:
  ./test-results/e2e/artifacts/test-results/e2e-tests.trx
Error:
  Expected canonicalLink not to be <null> because Syndicated post should have a canonical URL link.
Raw output
Expected canonicalLink not to be <null> because Syndicated post should have a canonical URL link.
   at FluentAssertions.Execution.LateBoundTestFramework.Throw(String message)
   at FluentAssertions.Execution.DefaultAssertionStrategy.HandleFailure(String message)
   at FluentAssertions.Execution.AssertionChain.FailWith(Func`1 getFailureReason)
   at FluentAssertions.Execution.AssertionChain.FailWith(String message)
   at FluentAssertions.Primitives.ReferenceTypeAssertions`2.NotBeNull(String because, Object[] becauseArgs)
   at JdhPro.Tests.E2E.StepDefinitions.BlogPostSteps.ThenIShouldSeeACanonicalUrlLink() in /home/runner/work/JDHPro/JDHPro/JdhPro.Tests.E2E/StepDefinitions/BlogPostSteps.cs:line 119
   at Reqnroll.Bindings.AsyncMethodHelper.ConvertTaskOfT(Task task, Boolean getValue)
   at Reqnroll.Bindings.BindingDelegateInvoker.InvokeDelegateAsync(Delegate bindingDelegate, Object[] invokeArgs, ExecutionContextHolder executionContext)
   at Reqnroll.Bindings.BindingInvoker.InvokeBindingAsync(IBinding binding, IContextManager contextManager, Object[] arguments, ITestTracer testTracer, DurationHolder durationHolder)
   at Reqnroll.Infrastructure.TestExecutionEngine.ExecuteStepMatchAsync(BindingMatch match, Object[] arguments, DurationHolder durationHolder)
   at Reqnroll.Infrastructure.TestExecutionEngine.ExecuteStepMatchAsync(BindingMatch match, Object[] arguments, DurationHolder durationHolder)
   at Reqnroll.Infrastructure.TestExecutionEngine.<>c__DisplayClass57_0.<<ExecuteStepAsync>b__7>d.MoveNext()
--- End of stack trace from previous location ---
   at Reqnroll.Infrastructure.TestExecutionEngine.<>c__DisplayClass57_0.<<ExecuteStepAsync>g__HandleStepExecutionExceptions|0>d.MoveNext()
   at Reqnroll.Infrastructure.TestExecutionEngine.OnAfterLastStepAsync()
   at Reqnroll.TestRunner.CollectScenarioErrorsAsync()
   at JdhPro.Tests.E2E.Features.BlogPostDetailPageFeature.ScenarioCleanupAsync()
   at JdhPro.Tests.E2E.Features.BlogPostDetailPageFeature.SyndicatedPostsDisplayCanonicalURL() in /home/runner/work/JDHPro/JDHPro/JdhPro.Tests.E2E/Features/BlogPost.feature:line 39
--- End of stack trace from previous location ---