Core-Driven Architecture: The Best

Today I’m here to talk about what, in my opinion, is the ideal architecture for an application.

Unlike other posts, where I try to be completely objective, here you'll see my personal opinion: why I like it and what benefits it does or doesn't bring, since many of you keep asking which architecture I actually use.


1 - What is Core-Driven Architecture?

 

Please note I’m speaking from the context of professional environments or large-scale applications. If you just want to write a small script or a piece of functionality that does X, I wouldn’t do it this way. For that, just create a script.

 

But let’s cut to the chase. On this channel, we’ve looked at different architectures: MVC, Clean, Vertical Slice, and Hexagonal. While it’s true that I don’t use any single one by the book, I mix a bit of everything so I can work comfortably, which, after all, is what really matters.

 

This architecture will probably look 90% similar to another one, but the reality is that there are so many that it’s pointless to argue about names—I actually made this one up as I wrote the post. What matters is not the name, but the concept.


2 - Separation of Responsibilities in a Core-Driven Architecture

 

The thing I’m strictest about is the separation of responsibilities.

 

By this I mean that in every application, I’ll have different layers, and each layer has a clear responsibility.

 

It’s a mix of Clean, Hexagonal, and layered architecture: the business layer is the most important thing (from Clean Architecture), dependencies are consumed through interfaces (ports and adapters, from Hexagonal), and the code is split into layers whose dependencies point inward (from layered architecture).

 

For example, in an API, you’d have something like this:

[Image: core-driven architecture example]

If you’re working in C#, you can use folders or separate projects within a solution. Personally, I don’t mind which, as long as they’re separated and there’s a clear division.


2.1 - Application Entry Point

 

As you can see, the endpoint is really just a proxy between the call and the use case we’re going to execute. The endpoint’s job is to route and check authorization. Everything to do with the request pipeline and API configuration, like OpenAPI, should be handled here. In summary, only API-related elements belong at this point.

 

So if, instead of an API, you have a consumer in a distributed architecture, the only change is that the action won’t be triggered by an endpoint but by a handler that reads an event, checks that it hasn’t already been processed, and so on.

[Image: core-driven architecture entry point]

Exactly the same thing applies to the UI: if you’re using MVC, maybe the interface itself is what triggers the relevant controller call.

The important thing is to understand that this layer is the entry point from outside into our application and acts as such.
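To make the "thin entry point" idea concrete, here is a minimal sketch of a message-handler entry point in plain C#. The names (`VehicleCreatedEvent`, the in-memory dedup set) are hypothetical; a real handler would read from the broker and keep processed IDs in durable storage:

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

// Hypothetical event payload for a "vehicle created" message.
public record VehicleCreatedEvent(Guid EventId, string Name);

// The entry point is a thin shell: it deduplicates the message
// and delegates all business logic to the use case.
public class VehicleCreatedHandler
{
    private readonly HashSet<Guid> _processed = new(); // stand-in for a real inbox/dedup store
    private readonly Func<VehicleCreatedEvent, Task> _useCase;

    public VehicleCreatedHandler(Func<VehicleCreatedEvent, Task> useCase)
        => _useCase = useCase;

    public async Task<bool> Handle(VehicleCreatedEvent evt)
    {
        if (!_processed.Add(evt.EventId))
            return false; // already processed: skip, don't re-run business logic

        await _useCase(evt);
        return true;
    }
}
```

Note that the handler itself contains no business rules at all; swap it for a controller and nothing in the use case changes.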


2.2 - Use Case Layer

 

The middle layer is the most important one, because it contains the business logic, and that is what we really need to test. This layer performs all the necessary checks and all the actions required by your use case.

[Image: use case layer in core-driven architecture]

For example, if we’re creating customers in a database, we validate all the data, insert them, and, as a final step, publish an event indicating that a new customer has been created. All these actions happen within this use case.

 

To me, it’s important that this layer implements the single responsibility principle. That means each use case is responsible for one action.

One action doesn’t just mean validating or just inserting into the database; it refers to all the business rules required for something to happen. So creating a client will have one use case, and updating a client will have a different one. In C#, this means multiple classes, not one massive do-everything class.
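Sticking with the customer example, the one-class-per-action split might look like this sketch (`CreateCustomer` and `UpdateCustomer` are hypothetical names; the bodies are stubs standing in for the real validation, persistence, and event publishing):

```csharp
using System;
using System.Threading.Tasks;

public record CustomerDto(int Id, string Name);

// One class per action: creating a customer is one use case...
public class CreateCustomer
{
    public Task<CustomerDto> Execute(string name)
    {
        if (string.IsNullOrWhiteSpace(name))
            throw new ArgumentException("Name is required", nameof(name));
        // validation, insert, and event publishing all live here
        return Task.FromResult(new CustomerDto(1, name));
    }
}

// ...and updating a customer is a different use case,
// not another method on a do-everything CustomerService.
public class UpdateCustomer
{
    public Task<CustomerDto> Execute(int id, string newName)
    {
        if (string.IsNullOrWhiteSpace(newName))
            throw new ArgumentException("Name is required", nameof(newName));
        return Task.FromResult(new CustomerDto(id, newName));
    }
}
```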

 

This means that by nature, the API will comply with CQRS, separating our application’s reads from writes.

 

And each use case contains everything it needs to work. For example, if it uses the database, we inject the database access, whether that’s the DbContext or a repository if you use the repository pattern or unit of work. If at the end we send an event, we also inject the interface that dispatches those events:

```csharp
public class AddVehicle(
    IDatabaseRepository databaseRepository,
    IEventNotificator eventNotificator)
{
    public async Task<Result<VehicleDto>> Execute(CreateVehicleRequest request)
    {
        VehicleEntity vehicleEntity = await databaseRepository.AddVehicle(request);
        var dto = vehicleEntity.ToDto();
        await eventNotificator.Notify(dto);
        return dto;
    }
}
```

In this part, I use the same logic that hexagonal architecture uses with ports and adapters.

 

This use case layer is where many people implementing Clean Architecture introduce the mediator pattern. If you’ve read my post about Clean Architecture, you’ll know my opinion of it: personally, I don’t use it because it doesn’t really add anything, especially when used poorly (handlers calling other handlers). So what I do instead is, as I said, have one class per use case/action, and then one class per “group” to wrap them.

 

```csharp
public record class VehiclesUseCases(
    AddVehicle AddVehicle,
    GetVehicle GetVehicle);
```

And even though the code is more coupled, I don’t see it as a problem. It’s a microservice, after all, and there’s no real drawback.

 

As a rule of thumb, I don’t use interfaces in this layer, meaning I inject the concrete classes into the dependency container. The reason is simple: interfaces don’t add value in this layer.


2.3 - External Elements

 

Finally, the last layer is where I define all the application’s external elements. This is also where async/await really earns its place, since we’re communicating with external systems.

[Image: external access layer in core-driven architecture]

In my particular workflow, I usually split this layer into different projects within the same solution to ensure a clear separation. For example, I create a project called Data for everything database-related; whether you use the repository pattern or a DbContext, it goes here, along with your database entities.
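As an illustration of the shape of this layer, here is a hedged sketch of an adapter implementing the `IDatabaseRepository` port from the use case above. An in-memory list stands in for what would be a DbContext or real database in the actual project:

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;

// Request and entity shapes, kept minimal for the sketch.
public class CreateVehicleRequest { public string Make { get; set; } = ""; public string Name { get; set; } = ""; }
public class VehicleEntity { public int Id { get; set; } public string Make { get; set; } = ""; public string Name { get; set; } = ""; }

// The port, defined next to the use cases that consume it.
public interface IDatabaseRepository
{
    Task<VehicleEntity> AddVehicle(CreateVehicleRequest request);
}

// The adapter, living in the Data project. A real one would wrap
// a DbContext; this in-memory version only illustrates the shape.
public class InMemoryDatabaseRepository : IDatabaseRepository
{
    private readonly List<VehicleEntity> _vehicles = new();

    public Task<VehicleEntity> AddVehicle(CreateVehicleRequest request)
    {
        var entity = new VehicleEntity
        {
            Id = _vehicles.Count + 1, // a real database would generate the id
            Make = request.Make,
            Name = request.Name
        };
        _vehicles.Add(entity);
        return Task.FromResult(entity);
    }
}
```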

 

If I use RabbitMQ for event communication, all the RabbitMQ setup and implementation will be located in that particular project.

 

As you can imagine, all access to infrastructure or external services goes here. You can use either projects or folders, depending on how much you're dealing with and your personal preferences or organizational standards.
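The same ports-and-adapters shape applies to events. Below is a minimal stand-in for the notificator port; the real adapter in the RabbitMQ project would hold the connection and publish the serialized message to an exchange, while this one only records what was sent:

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;

// Port consumed by the use cases.
public interface IEventNotificator
{
    Task Notify<T>(T message);
}

// Illustrative adapter: records messages instead of publishing them.
// The real implementation would live in the messaging project and
// wrap the broker client.
public class InMemoryEventNotificator : IEventNotificator
{
    public List<object> Published { get; } = new();

    public Task Notify<T>(T message)
    {
        Published.Add(message!);
        return Task.CompletedTask;
    }
}
```

An adapter like this is also handy as a test double when you don't want to mock the interface.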


2.4 - Dependency Injection

This architecture is heavily based on dependency injection, since we’ll be injecting all elements into higher layers.

 

For example, I inject the use cases into the controller and the database into the use cases. So far, so normal—but what I also do is declare all the elements that need to be injected in the project where they're defined.

 

So within my use cases project, I have a static class with a single public method called AddUseCases, but I also have a private method for each group of elements to register. Here’s the result:

```csharp
public static class UseCasesDependencyInjection
{
    public static IServiceCollection AddUseCases(this IServiceCollection services)
        => services.AddVehicleUseCases();

    private static IServiceCollection AddVehicleUseCases(this IServiceCollection services)
        => services.AddScoped<VehiclesUseCases>()
            .AddScoped<AddVehicle>()
            .AddScoped<GetVehicle>();
}

// then, in Program.cs:
builder.Services
    .AddUseCases() // 👈
    .AddData()
    .AddNotificator();
```

Then in the upper layer (API), we just call AddUseCases.

 

Something else to consider: this setup is simplified to make working with dependencies faster and easier. Five years ago, when I started my website, I created a library, available on GitHub and on NuGet, that lets you declare, in the dependency’s own project, which modules it needs, and checks whether they’re already injected; if not, it fails. The idea is good and it works (at least up to .NET 5), but I no longer think it’s worth it.

Although you could do something like

```csharp
var serviceProvider = new ServiceCollection()
    .ApplyModule(UseCases.DiModule)
    .ApplyModule(Database.DiModule)
    .BuildServiceProvider();

// in UseCases:
.AddScoped<UseCaseX>()
    .RequireModule(Database.DiModule);
```

What I do now is evolve towards simplicity.

[Image: build your project vs use an existing one]


2.5 - Best Practices in Core-Driven Architecture

 

To sum up, I'll include some preferences I have regarding how I build applications.

 

Personally, I’ve used the Result<T> pattern for more than five years, even though it’s only recently become trendy. It gives me an object with two states, success and failure; then, in the API, I map the failure to a ProblemDetails with the correct HTTP code.
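A minimal sketch of what such a `Result<T>` might look like. This is illustrative, not the exact type I use; libraries like FluentResults offer richer versions:

```csharp
using System;

// Minimal Result<T>: either a value or an error, never both.
public class Result<T>
{
    public bool Success { get; }
    public T? Value { get; }
    public string? Error { get; }

    private Result(bool success, T? value, string? error)
        => (Success, Value, Error) = (success, value, error);

    public static Result<T> Ok(T value) => new(true, value, null);
    public static Result<T> Fail(string error) => new(false, default, error);

    // Implicit conversion lets a use case simply `return dto;`.
    public static implicit operator Result<T>(T value) => Ok(value);
}
```

In the API layer, a failed result would be translated into a ProblemDetails response with the appropriate status code, keeping exceptions out of normal control flow.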


Unless the application is very small, I always use “normal” controllers, not minimal APIs, because it’s much better for OpenAPI compatibility. We’ll have a post about that soon.

Use cases will always return a DTO, which can safely leave the application. Inside a use case you can work with entities, but you should never return an entity from one (see my post on the difference between a DTO and an entity). Finally, I put DTOs in a separate project so I can publish them as a NuGet package if needed.
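The `ToDto()` call in the use case above is just a small mapping extension. A sketch, with illustrative field names:

```csharp
// The entity stays inside the application; the DTO is what crosses
// the boundary and can be shipped as a NuGet package.
public class VehicleEntity { public int Id { get; set; } public string Make { get; set; } = ""; public string Name { get; set; } = ""; }
public record VehicleDto(int Id, string Make, string Name);

public static class VehicleMappings
{
    // Extension method used at the end of the use case,
    // so entities never leave the application.
    public static VehicleDto ToDto(this VehicleEntity entity)
        => new(entity.Id, entity.Make, entity.Name);
}
```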


Not all APIs should be Backend For Frontend. In this context, BFF means an API that receives a call and returns all the necessary information. For example, imagine your vehicle API where you create vehicle properties such as make, doors, color, etc.

 

The number of vehicles in the warehouse is part of the inventory service, not the vehicles one. Therefore, if you want to show the number of available vehicles with their names in the UI, you have a few options.

1 - Call the inventory API from within the vehicle API to check how many are available

2 - Create a BFF app that aggregates info from both services (or use federated GraphQL)

3 - Have the UI make both calls
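Option two can be sketched as follows. The two downstream services are reduced to delegates for brevity; in a real BFF they would be typed HTTP clients for the vehicles and inventory APIs, and all the names here are hypothetical:

```csharp
using System;
using System.Threading.Tasks;

public record VehicleAvailabilityDto(string Name, int Available);

// The BFF aggregates both services so the UI makes a single call.
public class VehicleAvailabilityBff(
    Func<int, Task<string>> getVehicleName, // stand-in for the vehicles API client
    Func<int, Task<int>> getStock)          // stand-in for the inventory API client
{
    public async Task<VehicleAvailabilityDto> Get(int vehicleId)
    {
        // each call goes to a different microservice
        var name = await getVehicleName(vehicleId);
        var stock = await getStock(vehicleId);
        return new VehicleAvailabilityDto(name, stock);
    }
}
```

The key point is that neither downstream service learns about the other; the aggregation knowledge lives only in the BFF.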


From my perspective, stock information shouldn’t be part of the vehicles API domain, so the first option shouldn’t be valid. Choosing between options two and three depends on the user experience you want to provide.

 

When we talk about using CQRS and separating reads from writes, the point isn’t which database operations happen under the hood; those are an implementation detail. What matters is the contract for the consumer of your use case: if you call GetVehicle, it should return a vehicle and not modify anything. Just common sense.


3 - Testing in a Core-Driven Architecture

 

I know what you’re thinking: tests aren’t part of an application’s architecture or whatever.

 

But the reality is tests are necessary, so I wanted to include a quick section here. Ideally, we’d do all kinds of tests and cover everything—but that’s not always realistic. However, because of the way we’ve designed the application, it’s really easy to test our use cases, which are the core of our application.


As you’ve seen throughout the post, each use case has a single entry point, which means we’ll only have one method to test. This doesn’t mean we should write just one test—we’ll have one test for each possible outcome of our use case. If you're using exceptions for validation, you should test those exceptions. If you use Result<T> you should test every possibility.

```csharp
public class AddVehicleTests
{
    private class TestState
    {
        public Mock<IDatabaseRepository> DatabaseRepository { get; set; }
        public Mock<IEventNotificator> EventNotificator { get; set; }
        public AddVehicle Subject { get; set; }

        public TestState()
        {
            DatabaseRepository = new Mock<IDatabaseRepository>();
            EventNotificator = new Mock<IEventNotificator>();
            Subject = new AddVehicle(DatabaseRepository.Object, EventNotificator.Object);
        }
    }

    [Fact]
    public async Task WhenVehicleRequestHasCorrectData_thenInserted()
    {
        TestState state = new();
        string make = "opel";
        string name = "vehicle1";
        int id = 1;

        state.DatabaseRepository
            .Setup(x => x.AddVehicle(It.IsAny<CreateVehicleRequest>()))
            .ReturnsAsync(new VehicleEntity() { Id = id, Make = make, Name = name });

        var result = await state.Subject
            .Execute(new CreateVehicleRequest() { Make = make, Name = name });

        Assert.True(result.Success);
        Assert.Equal(make, result.Value.Make);
        Assert.Equal(id, result.Value.Id);
        Assert.Equal(name, result.Value.Name);

        state.EventNotificator.Verify(a => a.Notify(result.Value), Times.Once);
    }
}
```

 

While it’s important to test every output, the most important one is the happy path; in other words, the path the code follows when everything goes right.

 

As you can see, I use Moq as my mocking library, though there are other alternatives.


I also usually create a class that acts as a “base” for the happy path and contains the dependencies that will be used.

 

And then each test describes in the name what it does and what it verifies.

 

This post was translated from Spanish. You can see the original one here.
If there is any problem, you can add a comment below or contact me through the website's contact form.

© copyright 2025 NetMentor | All rights reserved | RSS Feed
