The Standard Architecture of DotNet Applications


0 Introduction

This is The Standard. A collection of decades of experience in the engineering
industry. I authored it for you to find your way in the wide ocean of knowledge. The
Standard is not perfect, and never will be. It reflects the ongoing evolution in the
engineering industry. Whilst it may be written by one person, it is actually the
collection of thoughts from hundreds of engineers that I've had the honor to interact
with throughout my life and learn from.

The Standard holds hundreds of years of collective experience from so many
different engineers. As I travelled the world and worked in so many industries, I've
had the chance to work with many different types of engineers - some of them
were mad scientists who would bother with the smallest details of every single
routine, and others were business-engineers who care more about the end results
than the means to get to those results. From all of them I've learned what makes a
simple engineering guide, one that can light up the way for all other engineers to be
inspired by and hopefully follow. And therefore, I made this Standard hoping for it
to be a compass for all of us engineers to find the best way to engineer solutions
that would hopefully change the world.

This Standard is an appeal to the engineers all over the world, to read through it and
make extracts of their experiences and knowledge to enrich an engineering Standard
worthy of the industry of software. We live today knowing the origins of the earth,
of man and all the animals. We know how hot boiling water is. How long a yard. Our
ships' masters know the precise measurements of latitude and longitude. Yet we have
neither chart nor compass to guide us through the wide sea of code. The time has
come to accord this great craft of ours the same dignity and respect as the other
standards defined by science.

The value of this Standard is immense to those who are still finding their way in
this industry, and even to those who have lost their way, guiding them towards a
better future. But more importantly, The Standard is written for everyone, equally,
to inspire every engineer or engineer-to-be to focus on what matters most about
engineering: its purpose, not its technicalities. I have realized that when engineers
have any form of standard, they start focusing more on what can be accomplished in
our world today. And when a team of engineers follows some form of standard, their
energy and focus become more about what can be accomplished, not how it should
be accomplished.

I collected and then authored this Standard hoping it will eliminate much of the
confusion and focus the efforts of engineers on what matters most: using
technology as a means to higher purposes and establishing its equivalent of goals.
Designing software has come a long way, and it has proven itself to be one of the
most powerful tools a person can have today. This craft deserves a proper way to
be introduced to the world, and to be taught to the youth.

At its essence, The Standard is my interpretation of SOLID principles along with
many other practices and patterns that continue to enrich our designs and
development to achieve truly solid systems. The Standard aims to help every
engineer find guidance in their day-to-day work. But more importantly, it aims to
prepare every engineer for when they need to build rugged systems that can land on
the moon, solve the most complex problems and ensure the survival of humankind
and its evolution.

The Standard is also a work of love for the rest of the world. It is driven and written
with a passion to enhance the engineering experience and to produce efficient,
rugged, configurable, pluggable and reliable systems that can withstand any
challenges or changes, as they occur almost daily in our industry.
1 Brokers
1.0 Introduction

Brokers play the role of a liaison between the business logic and the outside world.
They are wrappers around any external libraries, resources or APIs to satisfy a local
interface for the business to interact with these resources without having to be tightly
coupled with any particular resources or external library implementation.

Brokers in general are meant to be disposable and replaceable - they are built with
the understanding that technology evolves and changes all the time, and therefore,
at some point in the lifecycle of a given application, they shall be replaced with a
modern technology that gets the job done faster.

But Brokers also ensure that your business is pluggable by abstracting away any
specific external resource dependencies from what your software is actually trying
to accomplish.

1.1 On The Map

In any given application, mobile, desktop, web or simply just an API - brokers
usually reside at the "tail" of any app - that's because they are the last point of contact
between our custom code and the outside world.

Whether the outside world in this instance is just simply a local storage in memory,
or an entirely independent system that resides behind an API, they all have to reside
behind the Brokers in any application.

In the following low-level architecture for a given API - Brokers reside between our
business logic and the external resource:
1.2 Characteristics

There are a few simple rules that govern the implementation of any broker - these
rules are:

1.2.0 Implements a Local Interface

Brokers have to satisfy a local contract; they have to implement a local interface to
allow decoupling between their implementation and the services that consume
them.

For instance, given that we have a local contract IStorageBroker that requires an
implementation for any given CRUD operation for a local model Student - the
contract operation would be as follows:

public partial interface IStorageBroker
{
    ValueTask<Student> InsertStudentAsync(Student student);
}

An implementation for a storage broker would be as follows:

public partial class StorageBroker
{
    public DbSet<Student> Students { get; set; }

    public async ValueTask<Student> InsertStudentAsync(Student student)
    {
        EntityEntry<Student> studentEntityEntry =
            await this.Students.AddAsync(student);

        await this.SaveChangesAsync();

        return studentEntityEntry.Entity;
    }
}

A local contract implementation can be replaced at any point in time from utilizing
the Entity Framework, as shown in the previous example, to using a completely
different technology like Dapper, or an entirely different infrastructure like an Oracle
or PostgreSQL database.

1.2.1 No Flow Control

Brokers should not have any form of flow control such as if-statements, while-loops
or switch cases - that's simply because flow-control code is considered to be business
logic, and it fits better in the services layer, where business logic should reside, not
in the brokers.

For instance, a broker method that retrieves a list of students from a database would
look something like this:

public IQueryable<Student> SelectAllStudents() =>
    this.Students.AsQueryable();

A simple fat-arrow function that calls the native Entity Framework DbSet<T> and
returns a local model like Student.

1.2.2 No Exception Handling

Exception handling is somewhat a form of flow control. Brokers are not supposed to
handle any exceptions, but rather let the exception propagate to the broker-neighboring
services where these exceptions are going to be properly mapped and localized.
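To illustrate where that handling belongs, here's a minimal sketch of a broker-neighboring service mapping a native exception into a local exception model, while the broker itself stays free of any try/catch. The names FailedStudentStorageException, StudentService and the minimal IStorageBroker contract here are hypothetical, for illustration only:

```csharp
using System;
using System.Threading.Tasks;

public class Student { public Guid Id { get; set; } }

// Hypothetical local contract the broker satisfies.
public interface IStorageBroker
{
    ValueTask<Student> InsertStudentAsync(Student student);
}

// Hypothetical local exception model owned by the service layer.
public class FailedStudentStorageException : Exception
{
    public FailedStudentStorageException(Exception innerException)
        : base("Failed student storage error occurred, contact support.", innerException)
    { }
}

public class StudentService
{
    private readonly IStorageBroker storageBroker;

    public StudentService(IStorageBroker storageBroker) =>
        this.storageBroker = storageBroker;

    public async ValueTask<Student> AddStudentAsync(Student student)
    {
        try
        {
            // The broker has no try/catch of its own; any native
            // exception propagates up to this neighboring service.
            return await this.storageBroker.InsertStudentAsync(student);
        }
        catch (Exception exception)
        {
            // The service maps the native exception into a local model.
            throw new FailedStudentStorageException(exception);
        }
    }
}
```

This keeps the broker a thin pass-through while the service owns the mapping and localization of failures.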
1.2.3 Own Their Configurations

Brokers are also required to handle their own configurations - they may have a
dependency injection from a configuration object, to retrieve and setup the
configurations for whichever external resource they are integrating with.

For instance, connection strings in database communications are required to be
retrieved and passed in to the database client to establish a successful connection, as
follows:

public partial class StorageBroker : EFxceptionsContext, IStorageBroker
{
    private readonly IConfiguration configuration;

    public StorageBroker(IConfiguration configuration)
    {
        this.configuration = configuration;
        this.Database.Migrate();
    }

    protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
    {
        string connectionString =
            this.configuration.GetConnectionString("DefaultConnection");

        optionsBuilder.UseSqlServer(connectionString);
    }
}

1.2.4 Natives from Primitives

Brokers may construct an external model object based on primitive types passed from
the broker-neighboring services. For instance, in an e-mail notifications broker, the
input parameters for a .Send(...) function require only basic values such as the
subject, content and addresses. Here's an example:

public async ValueTask SendMailAsync(List<string> recipients, string subject, string content)
{
    Message message = BuildMessage(recipients, subject, content);

    await SendEmailMessageAsync(message);
}

The primitive input parameters ensure there are no strong dependencies between
the broker-neighboring services and the external models. Even in situations where
the broker is simply a point of integration between your application and an external
RESTful API, it is highly recommended that you build your own native models
to reflect the same JSON object sent or returned from the API, instead of relying on
NuGet libraries, DLLs or shared projects to achieve the same goal.

1.2.5 Naming Conventions

The contracts for the brokers shall remain as generic as possible to indicate the
overall functionality of a broker, for instance we say IStorageBroker instead
of ISqlStorageBroker to indicate a particular technology or infrastructure.

But in the case of concrete implementations of brokers, it all depends on how many
brokers you have providing similar functionality. In the case of having a single storage
broker, it might be more convenient to maintain the same name as the contract -
in our case here, a concrete implementation of IStorageBroker would
be StorageBroker.

However, if your application supports multiple queues, storages or e-mail service
providers, you might need to start by specifying the overall target of the component.
For instance, an IQueueBroker would have multiple implementations such
as NotificationQueueBroker and OrdersQueueBroker.

But if the concrete implementations target the same model and business value, then
distinguishing by technology might be more befitting. For instance, in the case of
an IStorageBroker, two different concrete implementations would
be SqlStorageBroker and MongoStorageBroker. This case is very common in
situations where costs are reduced by using cheaper infrastructure in lower
(non-production) environments, for instance.

1.2.6 Language

Brokers speak the language of the technologies they support. For instance, in a
storage broker, we say SelectById to match the SQL Select statement and in a queue
broker we say Enqueue to match the language.

If a broker is supporting an API endpoint, then it shall follow the RESTful operations
language, such as POST, GET or PUT. Here's an example:

public async ValueTask<Student> PostStudentAsync(Student student) =>
    await this.PostAsync(RelativeUrl, student);

1.2.7 Up & Sideways

Brokers cannot call other brokers; that's simply because brokers are the first point of
abstraction, and they require no additional abstractions and no additional dependencies
other than a configuration access model.

Brokers also can't have services as dependencies, as the flow in any given system
shall come from the services to the brokers and not the other way around.
Even in situations where a microservice has to subscribe to a queue, for instance,
brokers will pass forward a listener method to process incoming events, but not call
the services that provide the processing logic.

The general rule here, then, is that brokers can only be called by services, and
they can only call external native dependencies.
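A minimal sketch of what passing a listener forward might look like - the IQueueBroker contract and the in-memory QueueBroker below are hypothetical stand-ins for a real queue client, assumed here for illustration only:

```csharp
using System;
using System.Threading.Tasks;

public class Message { public string Content { get; set; } }

// Hypothetical queue broker contract: the service hands a listener in;
// the broker never calls back into the service layer directly.
public interface IQueueBroker
{
    void ListenToQueue(Func<Message, ValueTask> eventHandler);
}

// A minimal in-memory stand-in for a real queue client.
public class QueueBroker : IQueueBroker
{
    private Func<Message, ValueTask> handler;

    public void ListenToQueue(Func<Message, ValueTask> eventHandler) =>
        this.handler = eventHandler;

    // Simulates the underlying technology delivering a message
    // to whichever listener was registered.
    public async ValueTask DeliverAsync(Message message) =>
        await this.handler(message);
}
```

The broker only invokes the delegate it was given; the decision of what processing to perform stays in the service that registered the listener.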

1.3 Organization

Brokers that support multiple entities, such as storage brokers, should leverage partial
classes to break down the responsibilities per entity.

For instance, if we have a storage broker that provides all CRUD operations for
both Student and Teacher models, then the organization of the files should be as
follows:

- IStorageBroker.cs
  - IStorageBroker.Students.cs
  - IStorageBroker.Teachers.cs
- StorageBroker.cs
  - StorageBroker.Students.cs
  - StorageBroker.Teachers.cs

The main purpose of this particular organization, leveraging partial classes, is to
separate the concerns for each entity to an even finer level, which should make the
software much more maintainable.

But the brokers' file and folder naming convention strictly focuses on the plurality of
the entities they support and the singularity of the overall resource being supported.

For instance, we say IStorageBroker.Students.cs, and we also
say IEmailBroker or IQueueBroker.Notifications.cs - singular for the resource and
plural for the entities.
The same concept applies to the folders or namespaces containing these brokers.

For instance, we say:

namespace OtripleS.Web.Api.Brokers.Storages
{
...
}

And we say:

namespace OtripleS.Web.Api.Brokers.Queues
{
...
}

1.4 Broker Types

In most of the applications built today, there are some common brokers that are
usually needed to get an enterprise application up and running - some of these
brokers are Storage, Time, APIs, Logging and Queues.

Some of these brokers interact with existing resources on the system such as time to
allow broker-neighboring services to treat time as a dependency and control how a
particular service would behave based on the value of time at any point in the past,
present or the future.

1.4.0 Entity Brokers

Entity brokers are the brokers providing integration points with external resources
that the system needs to fulfill a business requirement.

For instance, entity brokers include brokers that integrate with storage, providing
capabilities to store or retrieve records from a database. Entity brokers also include
queue brokers, providing a point of integration to push messages to a queue for other
services to consume and process to fulfill their business logic.

Entity brokers can only be called by broker-neighboring services, simply because
they require a level of validation that needs to be performed on the data they receive
or provide before proceeding any further.

1.4.1 Support Brokers

Support brokers are general-purpose brokers; they provide functionality to support
services, but they have no characteristics that differ from one system to another.

A good example of a support broker is the DateTimeBroker - a broker made
specifically to abstract away the business layer's strong dependency on the system
date and time.

Time brokers don't really target any specific entity, and they are almost the same
across many systems out there.
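A minimal sketch of such a broker, assuming a hypothetical IDateTimeBroker contract and method name:

```csharp
using System;

// Hypothetical support broker contract abstracting the system clock,
// so services can treat time as an injectable dependency.
public interface IDateTimeBroker
{
    DateTimeOffset GetCurrentDateTimeOffset();
}

public class DateTimeBroker : IDateTimeBroker
{
    // No flow control, no exception handling:
    // a thin wrapper around the native call.
    public DateTimeOffset GetCurrentDateTimeOffset() =>
        DateTimeOffset.UtcNow;
}
```

With the clock behind a contract, a service's time-dependent behavior can be tested by substituting a fake broker that returns any past, present or future moment.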

Another example of a support broker is the LoggingBroker - it provides data to
logging and monitoring systems to enable the system's engineers to visualize the
overall flow of data across the system, and to be notified in case any issues occur.

Unlike entity brokers, support brokers may be called across the entire business
layer; they may be called by foundation, processing, orchestration, coordination,
management or aggregation services. That's because support brokers are required as
supporting components in the system to provide all the capabilities needed for
services to log their errors, calculate a date or perform any other supporting
functionality.

You can find real-world examples of brokers in the OtripleS project here.
1.5 Implementation

Here's a real-life implementation of a full storage broker for all CRUD operations
for Student entity:

For IStorageBroker.cs:
namespace OtripleS.Web.Api.Brokers.Storage
{
    public partial interface IStorageBroker
    {
    }
}

For StorageBroker.cs:
using System;
using EFxceptions.Identity;
using Microsoft.EntityFrameworkCore;
using Microsoft.Extensions.Configuration;
using OtripleS.Web.Api.Models.Users;

namespace OtripleS.Web.Api.Brokers.Storage
{
    public partial class StorageBroker : EFxceptionsContext, IStorageBroker
    {
        private readonly IConfiguration configuration;

        public StorageBroker(IConfiguration configuration)
        {
            this.configuration = configuration;
            this.Database.Migrate();
        }

        protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
        {
            string connectionString =
                this.configuration.GetConnectionString("DefaultConnection");

            optionsBuilder.UseSqlServer(connectionString);
        }
    }
}

For IStorageBroker.Students.cs:
using System;
using System.Linq;
using System.Threading.Tasks;
using OtripleS.Web.Api.Models.Students;

namespace OtripleS.Web.Api.Brokers.Storage
{
    public partial interface IStorageBroker
    {
        public ValueTask<Student> InsertStudentAsync(Student student);
        public IQueryable<Student> SelectAllStudents();
        public ValueTask<Student> SelectStudentByIdAsync(Guid studentId);
        public ValueTask<Student> UpdateStudentAsync(Student student);
        public ValueTask<Student> DeleteStudentAsync(Student student);
    }
}

For StorageBroker.Students.cs:
using System;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore;
using Microsoft.EntityFrameworkCore.ChangeTracking;
using OtripleS.Web.Api.Models.Students;

namespace OtripleS.Web.Api.Brokers.Storage
{
    public partial class StorageBroker
    {
        public DbSet<Student> Students { get; set; }

        public async ValueTask<Student> InsertStudentAsync(Student student)
        {
            EntityEntry<Student> studentEntityEntry =
                await this.Students.AddAsync(student);

            await this.SaveChangesAsync();

            return studentEntityEntry.Entity;
        }

        public IQueryable<Student> SelectAllStudents() =>
            this.Students.AsQueryable();

        public async ValueTask<Student> SelectStudentByIdAsync(Guid studentId)
        {
            this.ChangeTracker.QueryTrackingBehavior =
                QueryTrackingBehavior.NoTracking;

            return await Students.FindAsync(studentId);
        }

        public async ValueTask<Student> UpdateStudentAsync(Student student)
        {
            EntityEntry<Student> studentEntityEntry =
                this.Students.Update(student);

            await this.SaveChangesAsync();

            return studentEntityEntry.Entity;
        }

        public async ValueTask<Student> DeleteStudentAsync(Student student)
        {
            EntityEntry<Student> studentEntityEntry =
                this.Students.Remove(student);

            await this.SaveChangesAsync();

            return studentEntityEntry.Entity;
        }
    }
}
1.6 Summary

Brokers are the first layer of abstraction between your business logic and the outside
world, but they are not the only layer of abstraction. That is simply because there will
still be a few native models that leak through your brokers to your broker-neighboring
services, which is natural, to avoid doing any mappings outside of the realm of
logic - in our case here, the foundation services.

For instance, in a storage broker, regardless of which ORM you are using, some
native exceptions from your ORM (Entity Framework, for instance) will occur, such
as DbUpdateException or SqlException - in that case we need another layer of
abstraction to play the role of a mapper between these exceptions and our core logic,
to convert them into local exception models.

This responsibility lies in the hands of the broker-neighboring services, which I also
call foundation services. These services are the last point of abstraction before your
core logic, at which point everything becomes nothing but local models and contracts.

1.7 FAQs

Over the course of time, there have been some common questions raised by the
engineers I had the opportunity to work with throughout my career - since some
of these questions recurred on several occasions, I thought it might be useful to
aggregate all of them here for everyone to learn about some other perspectives
around brokers.

1.7.0 Is the brokers pattern the same as the repository pattern?

Not exactly; at least from an operational standpoint, brokers seem to be more generic
than repositories. Repositories usually target storage-like operations, mainly towards
databases, but brokers can be an integration point with any external dependency,
such as e-mail services, queues or other APIs.

A pattern more similar to brokers is the Unit of Work pattern, which mainly focuses
on the overall operation without having to tie the definition or the name to any
particular operation.

All of these patterns in general try to achieve the same SOLID goals, which are
separation of concerns, dependency injection and single responsibility.

But because SOLID are principles and not exact guidelines, it's expected to see all
different kinds of implementations and patterns trying to achieve those principles.

1.7.1 Why can't the brokers implement a contract for methods that return an
interface instead of a concrete model?

That would be an ideal situation, but that would also require brokers to do a
conversion or mapping between the native models returned from the external
resource SDKs or APIs and the internal model that adheres to the local contract.

Doing that on the broker level will require pushing business logic into that realm,
which is outside of the purpose of that component completely.

Brokers do not get unit tested because they have no business logic in them, they may
be a part of an acceptance or an integration test, but certainly not a part of unit level
tests - simply because they don't contain any business logic in them.

1.7.2 If brokers were truly a layer of abstraction from the business logic, how come
we allow external exceptions to leak through them onto the services layer?

Brokers are only the first layer of abstraction, but not the only one - the broker-neighboring
services are responsible for converting the native exceptions occurring
in a broker into a more local exception model that can be handled and processed
internally within the business logic realm.

Full pure local code starts to occur on the processing, orchestration, coordination and
aggregation layers where all the exceptions, all the returned models and all operations
are localized to the system.

1.7.3 Why do we use partial classes for brokers who handle multiple entities?

Since brokers are required to own their own configurations, it made more sense to
partialize when possible to avoid reconfiguring every storage broker for each entity.

This is a feature of C# specifically as a language, but it should be possible to
implement through inheritance in other programming languages.
2 Services
2.0 Introduction

Services in general are the containers of all the business logic in any given software - they are
the core component of any system and the main component that makes one system different
from another.

Our main goal with services is to keep them completely agnostic of specific technologies
or external dependencies.

Any business layer is more compliant with The Standard if it can be plugged into any other
dependencies and exposure technologies with the least amount of integration efforts possible.

2.0.0 Services Operations

When we say business logic, we mainly refer to three main categories of operations, which are
validation, processing and integration.

Let's talk about these categories.


2.0.0.0 Validations

Validations focus on ensuring that incoming or outgoing data matches a particular set of
rules, which can be structural, logical or external validations, in that exact order of
priority. We will go into detail about this in the upcoming sections.
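The ordering above can be sketched as follows - the StudentValidator and InvalidStudentException names here are hypothetical, and the external step is only indicated in a comment:

```csharp
using System;

public class Student
{
    public Guid Id { get; set; }
    public DateTimeOffset EnrollmentDate { get; set; }
}

public class InvalidStudentException : Exception
{
    public InvalidStudentException(string message) : base(message) { }
}

public static class StudentValidator
{
    public static void ValidateStudent(Student student, DateTimeOffset now)
    {
        // Structural: the data is shaped correctly and required values exist.
        if (student is null)
            throw new InvalidStudentException("Student is null.");

        if (student.Id == Guid.Empty)
            throw new InvalidStudentException("Student id is required.");

        // Logical: the values make sense relative to each other or to the clock.
        if (student.EnrollmentDate > now)
            throw new InvalidStudentException("Enrollment date cannot be in the future.");

        // External validation (e.g. verifying the record exists in storage)
        // would come last, only after the cheaper checks pass.
    }
}
```

Running the structural checks first means the cheapest failures are caught before any logical comparison, and no external resource is touched for obviously invalid data.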

2.0.0.1 Processing

Processing mainly focuses on the flow control, mapping and computation needed to
satisfy a business need - the processing operations specifically are what distinguish
one service from another, and in general one piece of software from another.

2.0.0.2 Integration

Finally, the integration process is mainly focused on retrieving or pushing data from or to any
integrated system dependencies.

Every one of these aspects will be discussed in detail in the upcoming chapters, but the
main thing that should be understood about services is that they should be built with the
intent to be pluggable and configurable, so they are easily integrated with any technology
from a dependency standpoint and also easily plugged into any exposure functionality
from an API perspective.

2.0.1 Services Types

Services have several types based on where they stand in any given architecture. They
fall under three main categories: validators, orchestrators and aggregators.
2.0.1.0 Validators

Validator services are mainly the broker-neighboring services or foundation services.

These services' main responsibility is to add a validation layer on top of the existing primitive
operations such as the CRUD operations to ensure incoming and outgoing data is validated
structurally, logically and externally before sending the data in or out of the system.

2.0.1.1 Orchestrators

Orchestrator services are the core of the business logic layer, they can be processors,
orchestrators, coordinators or management services depending on the type of their
dependencies.

Orchestrator services mainly focus on combining multiple primitive operations, or
multiple higher-order business logic operations, to achieve an even higher goal.

Orchestrator services are the decision makers within any architecture; they are the
owners of the flow control in any system, and they are the main component that makes
one application or piece of software different from another.

Orchestrator services are also meant to be built and live longer than any other type of services
in the system.

2.0.1.2 Aggregators

Aggregator services' main responsibility is to tie the outcomes of multiple processing,
orchestration, coordination or management services together to expose one single API
for any given API controller or UI component to interact with the rest of the system.

Aggregators are the gatekeepers of the business logic layer; they ensure the data
exposure components (like API controllers) have only one point of contact to interact
with the rest of the system.

Aggregators in general don't really care about the order in which they call the
operations attached to them, but sometimes it becomes a necessity to execute operations
in a particular order, such as creating a student record before assigning a library card
to them.

We will discuss each and every type of these services in detail in the next chapters.

2.0.2 Overall Rules

There are several rules that govern the overall architecture and design of services in any system.

These rules ensure the overall readability, maintainability, configurability of the system - in that
particular order.

2.0.2.0 Do or Delegate

Every service should either do the work or delegate the work, but not both.

For instance, a processing service should delegate the work of persisting data to a foundation
service and not try to do that work by itself.
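A minimal sketch of this rule - the StudentProcessingService and IStudentService names here are hypothetical, and persistence is reduced to an interface call for illustration:

```csharp
using System;
using System.Threading.Tasks;

public class Student { public Guid Id { get; set; } }

// Hypothetical foundation service contract that owns persistence.
public interface IStudentService
{
    ValueTask<Student> AddStudentAsync(Student student);
}

// A hypothetical processing service: it does its own work (assigning
// an id when one is missing) but delegates persistence entirely.
public class StudentProcessingService
{
    private readonly IStudentService studentService;

    public StudentProcessingService(IStudentService studentService) =>
        this.studentService = studentService;

    public async ValueTask<Student> EnsureStudentExistsAsync(Student student)
    {
        // Do: this service's own higher-order work.
        student.Id = student.Id == Guid.Empty ? Guid.NewGuid() : student.Id;

        // Delegate: persisting is the foundation service's job, not ours.
        return await this.studentService.AddStudentAsync(student);
    }
}
```

The processing service never touches a broker itself; if the storage technology changes, only the foundation service and its broker are affected.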

2.0.2.1 Two-Three (Florance Pattern)

For Orchestrator services, their dependencies of services (not brokers) should be limited to 2 or
3 but not 1 and not 4 or more.

A dependency on only one service denies the very definition of orchestration; that's
because orchestration by definition is the combination of multiple different operations
from different sources to achieve a higher order of business logic.

This pattern violates the Florance Pattern.

This pattern follows the symmetry of the Florance Pattern.

The Florance pattern also ensures the balance and symmetry of the overall architecture as well.

For instance, you can't orchestrate between a foundation and a processing service; it
causes a form of imbalance in your architecture, and an uneasy disturbance in trying
to combine one unified statement with the language each service speaks based on its
level and type.

The only type of service allowed to violate this rule is the aggregator, where the
combination and the order of services or their calls doesn't have any real impact.

We will be discussing the Florance pattern a bit further in detail in the upcoming sections of
The Standard.
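A minimal sketch of an orchestrator honoring the two-three rule, with two hypothetical processing-level dependencies of the same level (the names below are for illustration only):

```csharp
using System;
using System.Threading.Tasks;

public class Student { public Guid Id { get; set; } }
public class LibraryCard { public Guid StudentId { get; set; } }

public interface IStudentProcessingService
{
    ValueTask<Student> RegisterStudentAsync(Student student);
}

public interface ILibraryCardProcessingService
{
    ValueTask<LibraryCard> AddLibraryCardAsync(LibraryCard card);
}

// Two processing-level dependencies, both at the same level: not one
// (nothing to orchestrate) and not four or more (too much to own).
public class StudentOrchestrationService
{
    private readonly IStudentProcessingService studentProcessingService;
    private readonly ILibraryCardProcessingService libraryCardProcessingService;

    public StudentOrchestrationService(
        IStudentProcessingService studentProcessingService,
        ILibraryCardProcessingService libraryCardProcessingService)
    {
        this.studentProcessingService = studentProcessingService;
        this.libraryCardProcessingService = libraryCardProcessingService;
    }

    public async ValueTask<Student> RegisterStudentAsync(Student student)
    {
        Student registeredStudent =
            await this.studentProcessingService.RegisterStudentAsync(student);

        await this.libraryCardProcessingService.AddLibraryCardAsync(
            new LibraryCard { StudentId = registeredStudent.Id });

        return registeredStudent;
    }
}
```

Because both dependencies sit at the processing level, the orchestrator speaks one consistent language; mixing a foundation service in here would break that symmetry.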

2.0.2.2 Single Exposure Point

API controllers, UI components or any other form of data exposure from the system
should have one single point of contact with the business logic layer.

For instance, an API controller that offers endpoints for persisting and retrieving
student data should not have multiple integrations with multiple services, but rather
one service that offers all of these features.

Sometimes, a single orchestration, coordination or management service does not offer
everything related to a particular entity, in which case an aggregator service is
necessary to combine all of these features into one service ready to be integrated with
by an exposure technology.
2.0.2.3 Same or Primitives I/O Model

For all services, they have to maintain a single contract in terms of their return and input types,
except if they were primitives.

For instance, a service that provides any kind of operations for an entity type Student
should not return any other entity type from any of its methods.

You may return an aggregation of the same entity, whether it's custom or native, such
as List<Student> or an AggregatedStudents model, or a primitive type like a student
count, or a boolean indicating whether a student exists or not - but not any other
non-primitive or non-aggregating contract.

For input parameters a similar rule applies - any service may receive an input parameter of the
same contract or a virtual aggregation contract or a primitive type but not any other contract,
that simply violates the rule.

This rule enforces the focus of any service, maintaining its responsibility over a single
entity and all its related operations.
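A hypothetical contract sketch of what this rule allows and disallows - the names and the in-memory implementation below are for illustration only:

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

public class Student { public Guid Id { get; set; } }

// Every method deals in the Student entity, an aggregation of it, or a primitive.
public interface IStudentService
{
    // Same entity in, same entity out.
    ValueTask<Student> AddStudentAsync(Student student);

    // An aggregation of the same entity is allowed.
    ValueTask<List<Student>> RetrieveAllStudentsAsync();

    // Primitives are allowed: a count...
    ValueTask<int> RetrieveStudentCountAsync();

    // ...or a flag, keyed by a primitive identifier.
    ValueTask<bool> CheckStudentExistsAsync(Guid studentId);

    // NOT allowed: ValueTask<Teacher> RetrieveTeacherAsync(Guid teacherId);
    // returning a different entity contract would violate this rule.
}

// Trivial in-memory implementation, only to make the sketch concrete.
public class InMemoryStudentService : IStudentService
{
    private readonly List<Student> students = new List<Student>();

    public ValueTask<Student> AddStudentAsync(Student student)
    {
        this.students.Add(student);
        return new ValueTask<Student>(student);
    }

    public ValueTask<List<Student>> RetrieveAllStudentsAsync() =>
        new ValueTask<List<Student>>(new List<Student>(this.students));

    public ValueTask<int> RetrieveStudentCountAsync() =>
        new ValueTask<int>(this.students.Count);

    public ValueTask<bool> CheckStudentExistsAsync(Guid studentId) =>
        new ValueTask<bool>(this.students.Exists(s => s.Id == studentId));
}
```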

Once a service returns a different contract, it violates its own naming convention -
like a StudentOrchestrationService returning List<Teacher> - and it starts falling into
the trap of being called by other services from completely different data pipelines.

For primitive input parameters that belong to a different entity model, and are not
necessarily a reference on the main entity, it begs the question of whether to orchestrate
between two processing or foundation services to maintain a unified model without
breaking the pure-contracting rule.

If the combination of multiple different contracts in an orchestration service is
required, then a new unified virtual model has to be the new unique contract for the
orchestration service, with mappings implemented underneath, at the concrete level of
that service, to maintain compatibility and integration safety.

2.0.2.4 Every Service for Itself

Every service is responsible for validating its inputs and outputs; you should not rely
on services up or downstream to validate your data.
This is a defensive programming mechanism to ensure that, in the case of swapping
implementations behind contracts, the responsibility of any given service wouldn't be
affected if down- or upstream services decided to pass on their validations for any
reason.

Within any monolithic, microservice or serverless architecture-based system, every
service is built with the intent that it would split off of the system at some point, and
become the last point of contact before integrating with some external resource broker.

For instance, in the following architecture, services are mapping parts of an
input Student model into a LibraryCard model. Here's a visual of the models:

Student:

public class Student
{
    public Guid Id { get; set; }
    public string Name { get; set; }
}

LibraryCard:

public class LibraryCard
{
    public Guid Id { get; set; }
    public Guid StudentId { get; set; }
}

Now, assume that our orchestrator service StudentOrchestrationService is ensuring
every new student that gets registered will need to have a library card, so our logic
may look as follows:

public async ValueTask<Student> RegisterStudentAsync(Student student)
{
    Student registeredStudent =
        await this.studentProcessingService.RegisterStudentAsync(student);

    await AssignStudentLibraryCardAsync(student);

    return registeredStudent;
}

private async ValueTask<LibraryCard> AssignStudentLibraryCardAsync(Student student)
{
    LibraryCard studentLibraryCard = MapToLibraryCard(student);

    return await this.libraryCardProcessingService.AddLibraryCardAsync(studentLibraryCard);
}

private LibraryCard MapToLibraryCard(Student student)
{
    return new LibraryCard
    {
        Id = Guid.NewGuid(),
        StudentId = student.Id
    };
}

As you can see above, a valid student Id is required to ensure a successful mapping to
a LibraryCard - and since the mapping is the orchestrator's responsibility, we are required to
ensure that the input student and its Id are in good shape before proceeding with the orchestration
process.
2.1 Foundation Services (Broker-Neighboring)
2.1.0 Introduction

Foundation services are the first point of contact between your business logic and the
brokers.

In general, the broker-neighboring services are a hybrid of business logic and an
abstraction layer for the processing operations where the higher-order business logic
happens, which we will talk about further when we start exploring the processing
services in the next section.

Broker-neighboring services' main responsibility is to ensure the incoming and
outgoing data through the system is validated and vetted structurally, logically and
externally.

You can also think of broker-neighboring services as a layer of validation on top of
the primitive operations the brokers already offer.

For instance, if a storage broker is offering InsertStudentAsync(Student student) as
a method, then the broker-neighboring service will offer something as follows:

public async ValueTask<Student> AddStudentAsync(Student student)
{
    ValidateStudent(student);

    return await this.storageBroker.InsertStudentAsync(student);
}

This makes broker-neighboring services nothing more than an extra layer of
validation on top of the existing primitive operations brokers already offer.
2.1.1 On The Map

The broker-neighboring services reside between your brokers and the rest of your
application. On their other side may live higher-order business logic services -
processing, orchestration, coordination, aggregation or management services - or
simply a controller, a UI component or any other data exposure technology.

2.1.2 Characteristics

Foundation or broker-neighboring services in general have very specific
characteristics that strictly govern their development and integration.

Foundation services in general focus more on validations than anything else - simply
because that's their purpose: to ensure all incoming and outgoing data through the
system is in a good state for the system to process it safely without any issues.

Here are the characteristics and rules that govern broker-neighboring services:

2.1.2.0 Pure-Primitive

Broker-neighboring services are not allowed to combine multiple primitive
operations to achieve a higher-order business logic operation.
For instance, broker-neighboring services cannot offer an upsert function that
combines a Select operation with an Update or Insert operation based on the
outcome, to ensure an entity exists and is up to date in any storage.

Instead, they offer a validation and exception handling (and mapping) wrapper around
the dependency calls. Here's an example:

public ValueTask<Student> AddStudentAsync(Student student) =>
TryCatch(async () =>
{
    ValidateStudent(student);

    return await this.storageBroker.InsertStudentAsync(student);
});

In the above method, you can see the ValidateStudent function call wrapped inside
a TryCatch block. The TryCatch block is what I call the Exception Noise Cancellation
pattern, which we will discuss shortly in this very section.

But the validation function ensures each and every property in the incoming data is
validated before passing it forward to the primitive broker operation, which is
the InsertStudentAsync in this very instance.

2.1.2.1 Single Entity Integration

Services strongly ensure the single responsibility principle is implemented by not
integrating with any other entity brokers except for the one they support.

This rule doesn't necessarily apply to support brokers
like DateTimeBroker or LoggingBroker since they don't specifically target any
particular business entity and they are almost generic across the entire system.
For instance, a StudentService may integrate with a StorageBroker as long as it
targets only the functionality offered by the partial class in
the StorageBroker.Students.cs file.

Foundation services should not integrate with more than one entity broker of any
kind, simply because that would increase the complexity of validation and orchestration
beyond the main purpose of the service, which is simply validation.
We push this responsibility further to the orchestration-type services to handle it.

2.1.2.2 Business Language

Broker-neighboring services speak a primitive business language for their operations.
For instance, while a broker may provide a method with the
name InsertStudentAsync - the equivalent of that on the service layer would
be AddStudentAsync.

In general, most of the CRUD operations shall be converted from a storage language
to a business language, and the same goes for non-storage operations such as queues;
for instance, we say PostQueueMessage but on the business layer we shall
say EnqueueMessage.

Since CRUD operations are the most common ones in every system, our mapping to
these CRUD operations would be as follows:

Brokers     Services
Insert      Add
Select      Retrieve
Update      Modify
Delete      Remove
As we move forward towards higher-order business logic services, the language of
the methods being used will lean more towards a business language rather than a
technology language, as we will see in the upcoming sections.
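As a sketch of that naming convention, consider the following pair of contracts. The exact method sets and the minimal Student entity here are illustrative assumptions, not a prescribed API:

```csharp
using System;
using System.Threading.Tasks;

public class Student
{
    public Guid Id { get; set; }
}

// Hypothetical broker contract: speaks storage (technology) language.
public interface IStorageBroker
{
    ValueTask<Student> InsertStudentAsync(Student student);
    ValueTask<Student> SelectStudentByIdAsync(Guid studentId);
    ValueTask<Student> UpdateStudentAsync(Student student);
    ValueTask<Student> DeleteStudentAsync(Student student);
}

// Corresponding foundation service contract: speaks primitive business language.
public interface IStudentService
{
    ValueTask<Student> AddStudentAsync(Student student);
    ValueTask<Student> RetrieveStudentByIdAsync(Guid studentId);
    ValueTask<Student> ModifyStudentAsync(Student student);
    ValueTask<Student> RemoveStudentAsync(Student student);
}
```

Notice the one-to-one mapping: the broker names mirror the storage technology while the service names describe business intent.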

2.1.3 Responsibilities

Broker-neighboring services play two very important roles in any system. The first
and most important role is to offer a layer of validation on top of the existing
primitive operations a broker already offers, to ensure incoming and outgoing data is
valid to be processed or persisted by the system. The second role is to play the role
of a mapper of all other native models and contracts that may be needed to complete
any given operation while interfacing with a broker. Foundation services are the last
point of abstraction between the core business logic of any system and the rest of the
world. Let's discuss these roles in detail.

2.1.3.0 Validation

Foundation services are required to ensure incoming and outgoing data from and to
the system is in a good state - they play the role of a gatekeeper between the system
and the outside world, ensuring the data that goes through is structurally, logically
and externally valid before any further operations are performed by upstream services.
The order of validations here is very intentional. Structural validations are the
cheapest of all three types. They ensure a particular attribute, or a piece of data in
general, doesn't have a default value if it's required. Next come the logical
validations, where attributes are compared to other attributes within the same entity or
any other. Logical validations can also include a comparison with a
constant value, like ensuring a student's enrollment age is no less than 5 years of
age. Both structural and logical validations come before the external ones. As we said, that's
simply because we don't want to pay the cost of communicating with an external
resource, including the latency tax, if our request is not in a good shape first. For instance,
we shouldn't try to post some Student object to an external API if the object is null.
Or if the Student model is corrupted or invalid logically in any way, shape or form.
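That ordering can be sketched in a minimal, self-contained example. The StudentValidator class, its particular rule set and the use of ArgumentException (instead of the local exception models introduced later in this section) are illustrative assumptions:

```csharp
using System;

public class Student
{
    public Guid Id { get; set; }
    public DateTimeOffset DateOfBirth { get; set; }
}

public static class StudentValidator
{
    // 1. Structural: cheapest check - required values must not be at their defaults.
    public static void ValidateStructurally(Student student)
    {
        if (student is null || student.Id == Guid.Empty)
            throw new ArgumentException("Invalid Student: Id is required.");
    }

    // 2. Logical: compare attributes against other attributes or constants.
    public static void ValidateLogically(Student student)
    {
        if (student.DateOfBirth > DateTimeOffset.UtcNow.AddYears(-5))
            throw new ArgumentException("Invalid Student: must be at least 5 years old.");
    }
}

public static class Program
{
    public static void Main()
    {
        var student = new Student
        {
            Id = Guid.NewGuid(),
            DateOfBirth = new DateTimeOffset(2010, 1, 1, 0, 0, 0, TimeSpan.Zero)
        };

        // Structural checks run first; logical checks only run on structurally
        // valid data; only then would an external (broker) call be made.
        StudentValidator.ValidateStructurally(student);
        StudentValidator.ValidateLogically(student);
        Console.WriteLine("student is valid");
    }
}
```

The external validation step is deliberately omitted here; the point of the sketch is that the cheap checks throw before any latency tax is ever paid.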
2.1.3.0.0 Structural Validations

Validations come in three different layers. The first of these layers is the structural
validations, which ensure certain properties on any given model or primitive type are
not in an invalid structural state.

For instance, a property of type String should not be empty, null or white space.
Another example would be an input parameter of an int type: it should not be at
its default state, which is 0, when entering an age for instance.

The structural validations ensure the data is in a good shape before moving forward
with any further validations - for instance, we can't possibly validate a student has
the minimum number of characters in their names if their first name is structurally
invalid.

Structural validations play the role of identifying the required properties on any
given model, and while a lot of technologies offer the validation annotations, plugins
or libraries to globally enforce data validation rules, I choose to perform the
validation programmatically and manually to gain more control of what would be
required and what wouldn't in a TDD fashion.

The issue with some of the current implementations of structural and logical
validations on data models is that they can be very easily changed under the radar
without any unit tests firing any alarms. Check this example for instance:

public class Student
{
    [Required]
    public string Name { get; set; }
}
The above example can be very enticing at a glance from an engineering standpoint.
All you have to do is decorate your model attribute with a magical annotation, and
all of a sudden your data is being validated.

The problem here is that this pattern combines two or more different responsibilities
in the same model. Models are supposed to be just a representation of
objects in reality - nothing more and nothing less. Some engineers call them anemic
models, which focuses the responsibility of every single model on only representing the
attributes of the real-world object it's trying to simulate, without any additional details.

But the annotated models now inject business logic into their very definitions.
This business logic may or may not be needed across all the services, brokers or exposing
components that use them.

Structural validations on models may seem like extra work that can be avoided with
magical decorations. But when trying to diverge even slightly from these
validations into more customized ones, a new anti-pattern emerges:
custom annotations that may or may not be detectable through unit tests.

Let's talk about how to test a structural validation routine:

2.1.3.0.0.0 Testing Structural Validations

Because I truly believe in the importance of TDD, I am going to start showing the
implementation of structural validations by writing a failing test for it first.

Let's assume we have a student model, with the following details:

public class Student
{
    public Guid Id { get; set; }
}
We want to validate that the student Id is not a structurally invalid Id - such as an
empty Guid - therefore we would write a unit test in the following fashion:

[Fact]
public async Task ShouldThrowValidationExceptionOnRegisterWhenIdIsInvalidAndLogItAsync()
{
    // given
    Student randomStudent = CreateRandomStudent();
    Student inputStudent = randomStudent;
    inputStudent.Id = Guid.Empty;

    var invalidStudentException = new InvalidStudentException(
        parameterName: nameof(Student.Id),
        parameterValue: inputStudent.Id);

    var expectedStudentValidationException =
        new StudentValidationException(invalidStudentException);

    // when
    ValueTask<Student> registerStudentTask =
        this.studentService.RegisterStudentAsync(inputStudent);

    // then
    await Assert.ThrowsAsync<StudentValidationException>(() =>
        registerStudentTask.AsTask());

    this.loggingBrokerMock.Verify(broker =>
        broker.LogError(It.Is(
            SameExceptionAs(expectedStudentValidationException))),
                Times.Once);

    this.storageBrokerMock.Verify(broker =>
        broker.InsertStudentAsync(It.IsAny<Student>()),
            Times.Never);

    this.dateTimeBrokerMock.VerifyNoOtherCalls();
    this.loggingBrokerMock.VerifyNoOtherCalls();
    this.storageBrokerMock.VerifyNoOtherCalls();
}

In the above test, we created a random student object, then assigned an invalid Id
value of Guid.Empty to the student Id.

When the structural validation logic in our foundation service examines
the Id property, it should throw an exception properly describing the validation issue
in our student model. In this case we throw an InvalidStudentException.

The exception is required to briefly describe the whats, wheres and whys of the
validation operation. In our case here, the what would be the validation issue
occurring, the where would be the Student service and the why would be the property
value.

Here's how an InvalidStudentException would look:

public class InvalidStudentException : Exception
{
    public InvalidStudentException(string parameterName, object parameterValue)
        : base($"Invalid Student, " +
            $"ParameterName: {parameterName}, " +
            $"ParameterValue: {parameterValue}.")
    { }
}

The above unit test, however, requires our InvalidStudentException to be wrapped
up in a more generic system-level exception, which is StudentValidationException.
These exceptions are what I call outer-exceptions; they encapsulate all the different
validation situations regardless of their category and communicate the error to
upstream services or controllers so they can map it to the proper error code for the
consumer of these services.
Our StudentValidationException would be implemented as follows:

public class StudentValidationException : Exception
{
    public StudentValidationException(Exception innerException)
        : base("Invalid input, please check your input and then try again.",
            innerException)
    { }
}

The message in the outer-validation above indicates that the issue is in the input, and
therefore it requires the input submitter to try again as there are no actions required
from the system side to be adjusted.

2.1.3.0.0.1 Implementing Structural Validations

Now, let's look at the other side of the validation process, which is the
implementation. Structural validations always come before every other type
of validation. That's simply because structural validations are the cheapest from an
execution and asymptotic-time perspective. For instance, it's much cheaper to
validate that an Id is structurally invalid than to send an API call across just to get the
exact same answer, plus the cost of latency. This all adds up when millions of
requests per second start flowing in. Structural and logical validations in general live
in their own partial class. For instance, if our service is
called StudentService.cs, then a new file should be created with the
name StudentService.Validations.cs to encapsulate and visually abstract away the
validation rules that ensure clean data is coming in and going out. Here's how an Id
validation would look:

StudentService.Validations.cs
private void ValidateStudent(Student student)
{
switch(student)
{
case {} when IsInvalid(student.Id):
throw new InvalidStudentException(
parameterName: nameof(Student.Id),
parameterValue: student.Id);
}
}

private static bool IsInvalid(Guid id) => id == Guid.Empty;

We have implemented a method to validate the entire student object, with a
compilation of all the rules we need to set up to validate the student input object
structurally and logically. The most important thing to notice about the above code
snippet is the encapsulation of any finer details away from the main goal of
a particular method.

That's the reason why we decided to implement a private static method IsInvalid: to
abstract away the details of what determines whether a property of type Guid is invalid.
And as we move further in the implementation, we are going to have to implement
multiple overloads of the same method to validate other value types structurally and
logically.
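Such overloads might look as follows. This is a sketch: the String and DateTimeOffset rules are illustrative assumptions about what "invalid" means for those types, and the methods are made public here only so the sketch is runnable (in the service they would stay private):

```csharp
using System;

public static class Validations
{
    // Each overload answers one question: is this value at an invalid/default state?
    public static bool IsInvalid(Guid id) => id == Guid.Empty;

    public static bool IsInvalid(string text) => String.IsNullOrWhiteSpace(text);

    public static bool IsInvalid(DateTimeOffset date) => date == default;

    public static void Main()
    {
        Console.WriteLine(Validations.IsInvalid(Guid.Empty));            // True
        Console.WriteLine(Validations.IsInvalid("   "));                 // True
        Console.WriteLine(Validations.IsInvalid(DateTimeOffset.UtcNow)); // False
    }
}
```

The overload resolution lets the rule set in ValidateStudent read uniformly, regardless of the property type being checked.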

The purpose of the ValidateStudent method is to simply set up the rules and take
action by throwing an exception if any of these rules are violated. There's always an
opportunity to aggregate the violation errors rather than throwing too early at the
first structural or logical validation issue detected.

Now, with the implementation above, we need to call that method to structurally and
logically validate our input. Let's make that call in our RegisterStudentAsync method
as follows:

StudentService.cs
public ValueTask<Student> RegisterStudentAsync(Student student) =>
TryCatch(async () =>
{
ValidateStudent(student);
return await this.storageBroker.InsertStudentAsync(student);
});

At a glance, you will notice that our method here doesn't seem to handle any type
of exceptions at the logic level. That's because all the exception noise is
abstracted away in a method called TryCatch.

TryCatch is a concept I invented to allow engineers to focus on what matters
most, based on which aspect of the service they are looking at, without having to take
any shortcuts with the exception handling to make the code a bit more readable.

TryCatch methods in general live in another partial class, and an entirely new file
called StudentService.Exceptions.cs - which is where all exception handling and
error reporting happens as I will show you in the following example.

Let's take a look at what a TryCatch method looks like:

StudentService.Exceptions.cs
private delegate ValueTask<Student> ReturningStudentFunction();

private async ValueTask<Student> TryCatch(ReturningStudentFunction returningStudentFunction)
{
    try
    {
        return await returningStudentFunction();
    }
    catch (InvalidStudentException invalidStudentInputException)
    {
        throw CreateAndLogValidationException(invalidStudentInputException);
    }
}

private StudentValidationException CreateAndLogValidationException(Exception exception)
{
    var studentValidationException = new StudentValidationException(exception);
    this.loggingBroker.LogError(studentValidationException);

    return studentValidationException;
}

The TryCatch exception noise-cancellation pattern beautifully takes in any function
that returns a particular type as a delegate, and handles any exceptions thrown off of
that function or its dependencies.

The main responsibility of a TryCatch function is to wrap up a service's inner
exceptions with outer exceptions to ease up the reaction of external consumers of
that service into only one of three categories, which are Service Exceptions,
Validation Exceptions and Dependency Exceptions. There are sub-types of these
exceptions, such as Dependency Validation Exceptions, but these usually fall under
the Validation Exception category, as we will discuss in upcoming sections of The
Standard.

In a TryCatch method, we can add as many inner and external exceptions as we want
and map them into local exceptions so that upstream services don't have a strong
dependency on any particular libraries or external resource models, which we will
talk about in detail when we move on to the Mapping responsibility of broker-
neighboring (foundation) services.

2.1.3.0.1 Logical Validations

Logical validations come second in order, after structural validations. Their main
responsibility, by definition, is to validate whether a structurally valid piece
of data is logically valid. For instance, a date of birth for a student could be
structurally valid by having a value of 1/1/1800, but logically, a student that is over
200 years of age is an impossibility.
The most common logical validations are validations for audit fields such
as CreatedBy and UpdatedBy - it's logically impossible that a new record can be
inserted with two different values for the authors of that new record - simply because
data can only be inserted by one person at a time.

Let's talk about how we can test-drive and implement logical validations:

2.1.3.0.1.0 Testing Logical Validations

In the common case of testing logical validations for audit fields, we want to throw
a validation exception that the UpdatedBy value is invalid simply because it doesn't
match the CreatedBy field.

Let's assume our Student model looks as follows:

public class Student
{
    public Guid CreatedBy { get; set; }
    public Guid UpdatedBy { get; set; }
}

Our test to validate these values logically would be as follows:

[Fact]
public async Task ShouldThrowValidationExceptionOnRegisterIfUpdatedByNotSameAsCreatedByAndLogItAsync()
{
    // given
    Student randomStudent = CreateRandomStudent();
    Student inputStudent = randomStudent;
    inputStudent.UpdatedBy = Guid.NewGuid();

    var invalidStudentException = new InvalidStudentException(
        parameterName: nameof(Student.UpdatedBy),
        parameterValue: inputStudent.UpdatedBy);

    var expectedStudentValidationException =
        new StudentValidationException(invalidStudentException);

    // when
    ValueTask<Student> registerStudentTask =
        this.studentService.RegisterStudentAsync(inputStudent);

    // then
    await Assert.ThrowsAsync<StudentValidationException>(() =>
        registerStudentTask.AsTask());

    this.loggingBrokerMock.Verify(broker =>
        broker.LogError(It.Is(
            SameExceptionAs(expectedStudentValidationException))),
                Times.Once);

    this.storageBrokerMock.Verify(broker =>
        broker.InsertStudentAsync(It.IsAny<Student>()),
            Times.Never);

    this.loggingBrokerMock.VerifyNoOtherCalls();
    this.dateTimeBrokerMock.VerifyNoOtherCalls();
    this.storageBrokerMock.VerifyNoOtherCalls();
}

In the above test, we have changed the value of the UpdatedBy field to ensure it
completely differs from the CreatedBy field - now we expect
an InvalidStudentException with UpdatedBy as the reason for this validation
exception to occur.

Let's go ahead and write an implementation for this failing test.

2.1.3.0.1.1 Implementing Logical Validations

Just like we did in the structural validations section, we are going to add more rules
to our validation switch case as follows:
StudentService.Validations.cs
private void ValidateStudent(Student student)
{
switch(student)
{
case {} when IsNotSame(student.CreatedBy, student.UpdatedBy):
throw new InvalidStudentException(
parameterName: nameof(Student.UpdatedBy),
parameterValue: student.UpdatedBy);
}
}

private static bool IsNotSame(Guid firstId, Guid secondId) => firstId != secondId;

Everything else in
both StudentService.cs and StudentService.Exceptions.cs continues to be exactly
the same as we've done above in the structural validations.

Logical validation exceptions, just like any other exceptions that may occur, are
usually non-critical. However, it all depends on your business case to determine
whether a particular logical, structural or even dependency validation is critical
or not. This is when you might need to create a special class of exceptions, something
like InvalidStudentCriticalException, and then log it accordingly.

2.1.3.0.2 Dependency Validations

The last type of validations that are usually performed by foundation services is
dependency validations. I define dependency validations as any form of validation
that requires calling an external resource to validate whether a foundation service
should proceed with processing incoming data or halt with an exception.

A good example of dependency validations is when we call a broker to retrieve a
particular entity by its id. If the entity returned is not found, or the API broker returns
a NotFound error, the foundation service is then required to wrap that error in
a DependencyValidationException and halt all following processes.

Dependency validations can occur because you called an external resource and it
returned an error, or returned a value that warrants raising an error. For instance, an
API call might return a 404 code, and that's interpreted as an exception if the input
was supposed to correspond to an existing object.

But a dependency validation exception can also occur if the returned value, even from
a successful call, does not match the expectation - such as an empty list returned
from an API call when trying to insert a new coach for a team. If there is no team,
there can be no coach, for instance. The foundation service in this case will be
required to raise a local exception to explain the issue, something
like NoTeamMembersFoundException or something of that nature.
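As a sketch of that empty-list scenario (the EnsureTeamHasMembers helper, the use of a plain List&lt;string&gt; for team members, and the exception message are illustrative assumptions, not part of the book's sample project):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class NoTeamMembersFoundException : Exception
{
    public NoTeamMembersFoundException()
        : base("No team members found; a coach cannot be added to an empty team.") { }
}

public static class Program
{
    // A successful call can still warrant a dependency validation exception
    // when the returned value defies the expectation (here: an empty list).
    public static void EnsureTeamHasMembers(List<string> teamMembers)
    {
        if (teamMembers?.Any() != true)
            throw new NoTeamMembersFoundException();
    }

    public static void Main()
    {
        try
        {
            EnsureTeamHasMembers(new List<string>());
        }
        catch (NoTeamMembersFoundException exception)
        {
            Console.WriteLine(exception.Message);
        }
    }
}
```

In a real foundation service this local exception would then be wrapped in a dependency validation outer exception, exactly like the DuplicateKeyException case that follows.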

A more common example is when a particular input entity is using the same id as an
existing entity in the system. In a relational database world, a duplicate key exception
would be thrown. In a RESTful API scenario, programmatically applying the same
concept also achieves the same goal for API validations, assuming the granularity of
the system being called weakens the referential integrity of the overall system data.

There are situations where the faulty response can be expressed in a fashion other
than exceptions, but we shall touch on that topic in more advanced chapters of this
Standard.

Let's write a failing test to verify that we throw
a DependencyValidationException if the Student model already exists in the storage,
with the storage broker throwing a DuplicateKeyException as a native result of the
operation.

2.1.3.0.2.0 Testing Dependency Validations

Let's assume our student model uses an Id with the type Guid as follows:
public class Student
{
    public Guid Id { get; set; }
    public string Name { get; set; }
}

Our unit test to validate that a DependencyValidation exception would be thrown in
a DuplicateKey situation would be as follows:

[Fact]
public async Task ShouldThrowDependencyValidationExceptionOnRegisterIfStudentAlreadyExistsAndLogItAsync()
{
    // given
    Student someStudent = CreateRandomStudent();
    string randomMessage = GetRandomMessage();
    string exceptionMessage = randomMessage;
    var duplicateKeyException = new DuplicateKeyException(exceptionMessage);

    var alreadyExistsStudentException =
        new AlreadyExistsStudentException(duplicateKeyException);

    var expectedStudentDependencyValidationException =
        new StudentDependencyValidationException(alreadyExistsStudentException);

    this.storageBrokerMock.Setup(broker =>
        broker.InsertStudentAsync(It.IsAny<Student>()))
            .ThrowsAsync(duplicateKeyException);

    // when
    ValueTask<Student> registerStudentTask =
        this.studentService.RegisterStudentAsync(someStudent);

    // then
    await Assert.ThrowsAsync<StudentDependencyValidationException>(() =>
        registerStudentTask.AsTask());

    this.storageBrokerMock.Verify(broker =>
        broker.InsertStudentAsync(It.IsAny<Student>()),
            Times.Once);

    this.loggingBrokerMock.Verify(broker =>
        broker.LogError(It.Is(
            SameExceptionAs(expectedStudentDependencyValidationException))),
                Times.Once);

    this.storageBrokerMock.VerifyNoOtherCalls();
    this.loggingBrokerMock.VerifyNoOtherCalls();
}

In the above test, we validate that we wrap a native DuplicateKeyException in a local
model tailored to the specific case, which is
the AlreadyExistsStudentException in our example here. Then we wrap that again
with a generic category exception model, which is
the StudentDependencyValidationException.

There are a couple of rules that govern the construction of dependency validations,
which are as follows:

- Rule 1: If a dependency validation is handling another dependency validation
from a downstream service, then the inner exception of the downstream
exception should be the same for the dependency validation at the current
level.

In other words, if some StudentService is throwing
a StudentDependencyValidationException to an upstream service such
as StudentProcessingService, then it's important that
the StudentProcessingDependencyValidationException contains the same inner
exception as the StudentDependencyValidationException. That's because once these
exceptions are mapped into exposure components, such as API controllers or UI
components, the original validation message needs to propagate through the system
and be presented to the end user no matter where it originated.

Additionally, maintaining the original inner exception guarantees the ability to
communicate different error messages through API endpoints. For
instance, AlreadyExistsStudentException can be communicated
as Conflict or 409 at the API controller level - this differs from another dependency
validation exception such as InvalidStudentReferenceException, which would be
communicated as a FailedDependency error or 424.
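A sketch of how an exposure component might perform that mapping. The bare exception shells and the MapToStatusCode helper are simplified assumptions made only to show the inner-exception inspection, not the controller machinery itself:

```csharp
using System;

public class AlreadyExistsStudentException : Exception { }
public class InvalidStudentReferenceException : Exception { }

public class StudentDependencyValidationException : Exception
{
    public StudentDependencyValidationException(Exception innerException)
        : base("Student dependency validation error occurred.", innerException) { }
}

public static class Program
{
    // An exposure component (e.g. an API controller) can inspect the preserved
    // inner exception to pick the proper HTTP status code.
    public static int MapToStatusCode(StudentDependencyValidationException exception) =>
        exception.InnerException switch
        {
            AlreadyExistsStudentException => 409,    // Conflict
            InvalidStudentReferenceException => 424, // Failed Dependency
            _ => 400                                 // Bad Request fallback
        };

    public static void Main()
    {
        var conflict = new StudentDependencyValidationException(
            new AlreadyExistsStudentException());

        Console.WriteLine(MapToStatusCode(conflict)); // 409
    }
}
```

This only works because Rule 1 preserved the inner exception all the way up; had any layer replaced it, the controller could no longer tell a Conflict from a Failed Dependency.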

- Rule 2: If a dependency validation exception is handling a non-dependency
validation exception, it should take that exception as its inner exception and
nothing else.

This rule ensures that only the local validation exception is propagated, not its
native exception from a storage system, an API or any other external
dependency.

That is the case here with our AlreadyExistsStudentException and
its StudentDependencyValidationException - the native exception is completely
hidden away from sight, and the mapping of that native exception and its inner
message is what's being communicated to the end user. This gives engineers the
power to control what's being communicated from the other end of their system
instead of letting the native message (which is subject to change) propagate to the
end-users.

2.1.3.0.2.1 Implementing Dependency Validations

Depending on where the validation error originates, the implementation of
dependency validations may or may not contain any business logic. As we previously
mentioned, if the error is originating from the external resource (which is the case
here), then all we have to do is wrap that error in a local exception, then
categorize it with an external exception under dependency validation.

To ensure the aforementioned test passes, we are going to need a few models. For
the AlreadyExistsStudentException the implementation would be as follows:

public class AlreadyExistsStudentException : Exception
{
    public AlreadyExistsStudentException(Exception innerException)
        : base("Student with the same Id already exists.", innerException)
    { }
}

We also need the StudentDependencyValidationException, which should be as
follows:

public class StudentDependencyValidationException : Exception
{
    public StudentDependencyValidationException(Exception innerException)
        : base("Student dependency validation error occurred, please try again.",
            innerException)
    { }
}

Now, let's go to the implementation side, let's start with the exception handling logic:

StudentService.Exceptions.cs
private delegate ValueTask<Student> ReturningStudentFunction();

private async ValueTask<Student> TryCatch(ReturningStudentFunction returningStudentFunction)
{
    try
    {
        return await returningStudentFunction();
    }
    ...
    catch (DuplicateKeyException duplicateKeyException)
    {
        var alreadyExistsStudentException =
            new AlreadyExistsStudentException(duplicateKeyException);

        throw CreateAndLogDependencyValidationException(alreadyExistsStudentException);
    }
}

...

private StudentDependencyValidationException CreateAndLogDependencyValidationException(
    Exception exception)
{
    var studentDependencyValidationException =
        new StudentDependencyValidationException(exception);

    this.loggingBroker.LogError(studentDependencyValidationException);

    return studentDependencyValidationException;
}

We created the local inner exception in the catch block of our exception handling
process to allow the reusability of our dependency validation exception method for
other situations that require that same level of external exceptions.

Everything else stays the same for the referencing of the TryCatch method in
the StudentService.cs file.

2.1.3.1 Mapping

The second responsibility of a foundation service is to play the role of a mapper, both
ways, between local models and non-local models. For instance, if you are leveraging
an email service that provides its own SDK to integrate with, and your brokers are
already wrapping and exposing the APIs for that service, your foundation service is
required to map the inputs and outputs of the broker methods into local models. The
same applies, even more commonly, to native non-local exceptions such as the
ones we mentioned above in the dependency validation situation - and the same aspect
applies to plain dependency errors or service errors, as we will discuss shortly.

2.1.3.1.0 Non-Local Models

It's very common for modern applications to require integration with external
services at some point. These services can be local to the overall architecture or
distributed system where the application lives, or they can be a 3rd party provider
such as some of the popular email services. External service providers invest a lot of
effort in developing fluent APIs, SDKs and libraries in every common programming
language to make it easy for engineers to integrate their applications with that
3rd party service. For instance, let's assume a third party email service provider
offers the following API through their SDKs:

public interface IEmailServiceProvider
{
    ValueTask<EmailMessage> SendEmailAsync(EmailMessage message);
}

Let's consider the model EmailMessage a native model - it comes with the email
service provider SDK. Your brokers might offer a wrapper around this API by
building a contract to abstract away the functionality, but they can't do much about
the native models that are passed into or returned out of that functionality. Therefore
our broker's interface would look something like this:

public interface IEmailBroker
{
    ValueTask<EmailMessage> SendEmailMessageAsync(EmailMessage message);
}

Then the implementation would look something like this:


public class EmailBroker : IEmailBroker
{
    private readonly IEmailServiceProvider emailServiceProvider;

    public async ValueTask<EmailMessage> SendEmailMessageAsync(EmailMessage message) =>
        await this.emailServiceProvider.SendEmailAsync(message);
}

As we said before, the brokers here have done their part of the abstraction by pushing
the actual implementation and the dependencies of the native EmailServiceProvider
away from our foundation services. But that's only 50% of the job; the abstraction
isn't fully complete until there is no trace of the native EmailMessage model. This is
where the foundation services come in, to perform a test-driven mapping operation
between the native non-local models and your application's local models. Therefore
it's very common to see a mapping function in a foundation service abstracting the
native model away from the rest of your business layer services.

Your foundation service will then be required to support a new local model - let's call
it Email. Your local model's properties may be identical to the external
model EmailMessage - especially on a primitive data type level. But the new model
would be the one and only contract between your pure business logic layer
(processing, orchestration, coordination and management services) and your hybrid
logic layer, like the foundation services. Here's a code snippet for this operation:

public async ValueTask<Email> SendEmailMessageAsync(Email email)
{
    EmailMessage inputEmailMessage = MapToEmailMessage(email);

    EmailMessage sentEmailMessage =
        await this.emailBroker.SendEmailMessageAsync(inputEmailMessage);

    return MapToEmail(sentEmailMessage);
}
Whether the returned message has a status, or you would like to return the input
message as a sign of a successful operation, both practices are valid in my
Standard. It all depends on what makes more sense for the operation you are trying to
execute. The code snippet above is an ideal scenario, where your code stays
true to the value passed in as well as the value returned, with all the necessary
mapping included.
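The two mapping methods referenced in the snippet could be as simple as the following sketch; the Subject and Body properties are assumptions about the shape of the hypothetical Email and EmailMessage models:

```csharp
// Illustrative mapping methods between the local Email model and
// the native EmailMessage model; property names are assumptions.
private static EmailMessage MapToEmailMessage(Email email) =>
    new EmailMessage
    {
        Subject = email.Subject,
        Body = email.Body
    };

private static Email MapToEmail(EmailMessage emailMessage) =>
    new Email
    {
        Subject = emailMessage.Subject,
        Body = emailMessage.Body
    };
```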

2.1.3.1.1 Exceptions Mappings

Just like the non-local models, exceptions that are produced by an external
API - such as the EntityFramework DbUpdateException or any other - have to be
mapped into local exception models. Handling these non-local exceptions this early,
before they enter the pure-business layer components, prevents any potential tight
coupling or dependency on any external model. It is very common for exceptions to
be handled differently based on the type of exception and how we want to deal with
it internally in the system. For instance, if we are handling
a UserNotFoundException thrown by Microsoft Graph, we might not necessarily
want to exit the entire procedure; we might want to continue by adding the user to
some other storage for future Graph submittal processing. External APIs should not
influence whether your internal operation halts or not, and therefore handling
exceptions at the Foundation layer is the guarantee that this influence is limited to
the external-resource handling area of our application and has no impact whatsoever
on our core business processes. The following illustration should draw the picture a
bit clearer from that perspective:
Here are some common scenarios for mapping native or inner local exceptions to outer
exceptions:

Exception                             | Wrap Inner Exception With        | Wrap With                            | Log Level
NullStudentException                  | -                                | StudentValidationException           | Error
InvalidStudentException               | -                                | StudentValidationException           | Error
SqlException                          | -                                | StudentDependencyException           | Critical
NotFoundStudentException              | -                                | StudentValidationException           | Error
DuplicateKeyException                 | AlreadyExistsStudentException    | StudentDependencyValidationException | Error
ForeignKeyConstraintConflictException | InvalidStudentReferenceException | StudentDependencyValidationException | Error
DbUpdateConcurrencyException          | LockedStudentException           | StudentDependencyValidationException | Error
DbUpdateException                     | -                                | StudentDependencyException           | Error
Exception                             | -                                | StudentServiceException              | Error
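As a sketch of how one row of this table translates into code, the SqlException mapping might look as follows; LogCritical matches the Critical log level in the table, and the inline handling shape (rather than a dedicated CreateAndLog helper) is an assumption for brevity:

```csharp
catch (SqlException sqlException)
{
    var studentDependencyException =
        new StudentDependencyException(sqlException);

    // Critical log level per the mapping table above.
    this.loggingBroker.LogCritical(studentDependencyException);

    throw studentDependencyException;
}
```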
2.2 Processing Services (Higher-Order Business Logic)
2.2.0 Introduction

Processing services are the layer where a higher order of business logic is
implemented. They may combine (or orchestrate) two primitive-level functions from
their corresponding foundation service to introduce a newer functionality. They may
also call one primitive function and change the outcome with a little bit of added
business logic. And sometimes processing services are there as a pass-through, to
introduce balance to the overall architecture.

Processing services are optional, depending on your business need - in a simple
CRUD operations API, processing services and all other categories of services
beyond that point cease to exist, as there is no need for a higher order of business
logic at that point.

Here's an example of what a Processing service function would look like:

public ValueTask<Student> UpsertStudentAsync(Student student) =>
    TryCatch(async () =>
    {
        ValidateStudent(student);

        IQueryable<Student> allStudents =
            this.studentService.RetrieveAllStudents();

        bool isStudentExists = allStudents.Any(retrievedStudent =>
            retrievedStudent.Id == student.Id);

        return isStudentExists switch
        {
            false => await this.studentService.RegisterStudentAsync(student),
            _ => await this.studentService.ModifyStudentAsync(student)
        };
    });

Processing services make Foundation services nothing but a layer of validation on
top of the existing primitive operations. This means that Processing services'
functions are beyond primitive, and they only deal with local models, as we will
discuss in the upcoming sections.

2.2.1 On The Map

When used, Processing services live between foundation services and the rest of the
application. They may not call Entity or Business brokers, but they may call Utility
brokers such as logging brokers, time brokers and any other brokers that offer
supporting functionality that is not specific to any particular business logic. Here's a
visual of where processing services are located on the map of our architecture:

On the right side of a Processing service lie all the non-local models and
functionality, whether it's through the brokers, or the models that the foundation
service is trying to map into local models. On the left side of Processing services is
pure local functionality, models and architecture. Starting from the Processing
services themselves, there should be no trace of any native or non-local models in
the system.
2.2.2 Characteristics

Processing services in general are combiners of multiple primitive-level functions to
produce a higher-order business logic. But they have more characteristics than
just that; let's talk about those here.

2.2.2.0 Language

The language used in processing services defines the level of complexity and the
capabilities they offer. Usually, processing services combine two or more primitive
operations from the foundation layer to create new value.

2.2.2.0.0 Functions Language

At a glance, Processing services' language changes from primitive operations such
as AddStudent or RemoveStudent to EnsureStudentExists or UpsertStudent. They
usually offer more advanced business-logic operations to support higher-order
functionality. Here are some examples of the most common combinations a processing
service may offer:

Processing Operation     | Primitive Functions
EnsureStudentExistsAsync | RetrieveAllStudents + AddStudentAsync
UpsertStudentAsync       | RetrieveStudentById + AddStudentAsync + ModifyStudentAsync
VerifyStudentExists      | RetrieveAllStudents
TryRemoveStudentAsync    | RetrieveStudentById + RemoveStudentByIdAsync

As you can see, the combination of primitive functions that processing services
perform may also include adding an additional layer of logic on top of the existing
primitive operation. For instance, VerifyStudentExists takes advantage of
the RetrieveAllStudents primitive function, then adds boolean logic to verify that
the student returned by an Id from a query actually exists or not before returning
a boolean.

2.2.2.0.1 Pass-Through

Processing services may borrow some of the terminology a foundation service
uses. For instance, in a pass-through scenario, a processing service function will be as
simple as AddStudentAsync. We will discuss the architecture-balancing scenarios
later in this chapter. Unlike Foundation services, Processing services are required to
have the identifier Processing in their names. For instance, we
say StudentProcessingService.

2.2.2.0.2 Class-Level Language

More importantly, Processing services must include the name of the entity that is
supported by their corresponding Foundation service. For instance, if a Processing
service is dependent on a TeacherService, then the Processing service's name must
be TeacherProcessingService.

2.2.2.1 Dependencies

Processing services can only have two types of dependencies: a corresponding
Foundation service, or a Utility broker. That's simply because Processing services are
nothing but an extra, higher-order level of business logic, orchestrated by combined
primitive operations at the Foundation level. Processing services can also use Utility
brokers such as TimeBroker or LoggingBroker to support their reporting aspect. But
they shall never interact with an Entity or Business broker.

2.2.2.2 One-Foundation

Processing services can interact with one and only one Foundation service. In fact,
without a foundation service there can never be a Processing layer. And just like we
mentioned above about the language and naming, Processing services take on the
exact same entity name as their Foundation dependency. For instance, a processing
service that handles higher-order business logic for students will communicate with
nothing but its foundation layer, which would be StudentService in this case. That
means that processing services will have one and only one service as a dependency
in their construction or initiation, as follows:

public class StudentProcessingService
{
    private readonly IStudentService studentService;

    public StudentProcessingService(IStudentService studentService) =>
        this.studentService = studentService;
}

However, processing services may require dependencies on multiple utility brokers
such as DateTimeBroker or LoggingBroker, etc.
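A constructor honoring these rules might look like the following sketch, assuming ILoggingBroker and IDateTimeBroker utility broker contracts exist in the system:

```csharp
public class StudentProcessingService : IStudentProcessingService
{
    private readonly IStudentService studentService;
    private readonly ILoggingBroker loggingBroker;
    private readonly IDateTimeBroker dateTimeBroker;

    // One and only one foundation service as a dependency,
    // plus any number of utility brokers.
    public StudentProcessingService(
        IStudentService studentService,
        ILoggingBroker loggingBroker,
        IDateTimeBroker dateTimeBroker)
    {
        this.studentService = studentService;
        this.loggingBroker = loggingBroker;
        this.dateTimeBroker = dateTimeBroker;
    }
}
```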

2.2.2.3 Used-Data-Only Validations

Unlike Foundation layer services, Processing services only validate what they need
from their input. For instance, if a Processing service is required to validate that a
student entity exists, and its input model just happens to be an entire Student entity,
it will only validate that the entity is not null and that the Id of that entity is valid. The
rest of the entity is outside the concern of the Processing service. Processing services
delegate full validations to the layer of services concerned with them, which is
the Foundation layer. Here's an example:

public ValueTask<Student> UpsertStudentAsync(Student student) =>
    TryCatch(async () =>
    {
        ValidateStudent(student);

        IQueryable<Student> allStudents =
            this.studentService.RetrieveAllStudents();

        bool isStudentExists = allStudents.Any(retrievedStudent =>
            retrievedStudent.Id == student.Id);

        return isStudentExists switch
        {
            false => await this.studentService.RegisterStudentAsync(student),
            _ => await this.studentService.ModifyStudentAsync(student)
        };
    });

Processing services are also not very concerned about outgoing validations, except
for what they are going to use within the same routine. For instance, if a Processing
service retrieves a model, and it is going to pass this model to another
primitive-level function on the Foundation layer, the Processing service is
required to validate that the retrieved model is valid depending on which attributes
of the model it uses. For pass-through scenarios, however, processing services
delegate the outgoing validation to the foundation layer.

2.2.3 Responsibilities

Processing services' main responsibility is to provide higher-order business logic.
This happens along with the regular signature mapping and various use-only
validations, which we will discuss in detail in this section.

2.2.3.0 Higher-Order Logic

Higher-order business logic is comprised of functions that are above primitive. For
instance, the AddStudentAsync function is a primitive function that does one thing and
one thing only. But higher-order logic is when we provide a function that
changes the outcome of a single primitive function, like VerifyStudentExists, which
returns a boolean value instead of the entire Student object; or a combination
of multiple primitive functions, such as EnsureStudentExistsAsync, which is a
function that will only add a given Student model if and only if the aforementioned
object doesn't already exist in storage. Here are some examples:
2.2.3.0.0 Shifters

The shifter pattern in higher-order business logic is when the outcome of a particular
primitive function is changed from one value to another - ideally a primitive type such
as a bool or an int, not a completely different type, as that would violate the purity
principle. For instance, in a shifter pattern, we want to verify whether a student exists
or not. We don't really want the entire object, just whether it exists in a particular
system or not. This seems like a case where we only need to interact with one
and only one foundation service, and we are shifting the value of the outcome to
something else - which fits perfectly in the realm of the processing services.
Here's an example:

public ValueTask<bool> VerifyStudentExists(Guid studentId) =>
    TryCatch(async () =>
    {
        IQueryable<Student> allStudents =
            this.studentService.RetrieveAllStudents();

        ValidateStudents(allStudents);

        return allStudents.Any(student => student.Id == studentId);
    });

In the snippet above, we provided higher-order business logic by returning a
boolean value indicating whether a particular student with a given Id exists in the
system or not. There are cases where your orchestration layer of services isn't really
concerned with all the details of a particular entity, but just with knowing whether it
exists or not as part of a larger business logic, or what we call orchestration.

Here's another popular example of the processing services' shifting pattern:

public int RetrieveStudentsCount() =>
    TryCatch(() =>
    {
        IQueryable<Student> allStudents =
            this.studentService.RetrieveAllStudents();

        ValidateStudents(allStudents);

        return allStudents.Count();
    });

In the example above, we provided a function to retrieve the count of all students in
a given system. It's up to the designers of the system to determine whether to interpret
a null value retrieved for all students as an unexpected exception case, or to return
a 0, depending on how they manage the outcome. In our case here, we validate the
outgoing data as much as the incoming, especially if it's going to be used within the
processing function, to ensure further failures do not occur for upstream services.

2.2.3.0.1 Combinations

The combination of multiple primitive functions from the foundation layer to achieve
a higher-order business logic is one of the main responsibilities of a processing
service. As we mentioned before, one of the most popular examples is ensuring
a particular student model exists, as follows:

public ValueTask<Student> EnsureStudentExistsAsync(Student student) =>
    TryCatch(async () =>
    {
        ValidateStudent(student);

        IQueryable<Student> allStudents =
            this.studentService.RetrieveAllStudents();

        Student maybeStudent = allStudents.FirstOrDefault(retrievedStudent =>
            retrievedStudent.Id == student.Id);

        return maybeStudent switch
        {
            { } => maybeStudent,
            _ => await this.studentService.AddStudentAsync(student)
        };
    });

In the code snippet above, we combined RetrieveAll with AddAsync to achieve a
higher-order business logic operation. The EnsureAsync operation needs to
verify that an entity exists first before trying to persist it. The terminology
around these higher-order business logic routines is very important. Its importance lies
mainly in controlling the expectations of the outcome and the inner functionality. But
it also ensures that fewer cognitive resources are required from engineers to understand
the underlying capabilities of a particular routine. The conventional language used
in all of these services also ensures that redundant capability will not be created
mistakenly. For instance, an engineering team without any form of standard might
create TryAddStudentAsync while already having an existing functionality such
as EnsureStudentExistsAsync, which does exactly the same thing. The convention
here, combined with the limitation on the size of capabilities a particular service may
have, ensures redundant work shall never occur on any occasion. There are many
different combinations that can produce higher-order business logic. For instance,
we may need to implement a functionality that ensures a student is removed. We
use EnsureStudentRemovedByIdAsync to combine a RetrieveById and
a RemoveById in the same routine. It all depends on what level of capabilities an
upstream service may need to implement such a functionality.
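A sketch of that EnsureStudentRemovedByIdAsync combination might look as follows, assuming RetrieveStudentByIdAsync returns null when no match is found:

```csharp
public ValueTask<Student> EnsureStudentRemovedByIdAsync(Guid studentId) =>
    TryCatch(async () =>
    {
        ValidateStudentId(studentId);

        Student maybeStudent =
            await this.studentService.RetrieveStudentByIdAsync(studentId);

        // Remove only if the student actually exists; otherwise
        // return the null result as-is.
        return maybeStudent switch
        {
            { } => await this.studentService.RemoveStudentByIdAsync(studentId),
            _ => maybeStudent
        };
    });
```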

2.2.3.1 Signature Mapping

Although processing services operate fully on local models and local contracts,
they are still required to map foundation-level services' models to their own local
models. For instance, if a foundation service is
throwing a StudentValidationException, then processing services will map that
exception to a StudentProcessingDependencyValidationException. Let's talk about
mapping in this section.

2.2.3.1.0 Non-Exception Local Models

In general, processing services are required to map any incoming or outgoing objects
with a specific model to their own. But that rule doesn't always apply to non-exception
models. For instance, if a StudentProcessingService is operating based on
a Student model, and there's no need for a special model for this service, then the
processing service may be permitted to use the exact same model from the foundation
layer.

2.2.3.1.1 Exception Models

When it comes to processing services handling exceptions from the foundation layer,
it is important to understand that exceptions in our Standard are more expressive in
their naming conventions and their role than any other model. Exceptions here
define the what, where and why every single time they are thrown. For instance, an
exception called StudentProcessingServiceException indicates the entity of
the exception, which is the Student entity. Then it indicates the location of the
exception, which is the StudentProcessing service. And lastly it indicates the reason
for that exception, which is ServiceException, indicating that an internal error to the
service, not of a validation or dependency nature, happened. Just like the
foundation layer, processing services will apply the following mappings to exceptions
occurring from their dependencies:

Exception                            | Wrap Inner Exception With | Wrap With                                      | Log Level
StudentDependencyValidationException | Any inner exception       | StudentProcessingDependencyValidationException | Error
StudentValidationException           | Any inner exception       | StudentProcessingDependencyValidationException | Error
StudentDependencyException           | -                         | StudentProcessingDependencyException           | Error
StudentServiceException              | -                         | StudentProcessingDependencyException           | Error
Exception                            | -                         | StudentProcessingServiceException              | Error
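A processing-layer TryCatch implementing the mappings in the table above might be sketched as follows; the CreateAndLog helper methods are assumed to follow the same pattern shown earlier for the foundation layer:

```csharp
private async ValueTask<Student> TryCatch(
    ReturningStudentFunction returningStudentFunction)
{
    try
    {
        return await returningStudentFunction();
    }
    catch (StudentValidationException studentValidationException)
    {
        throw CreateAndLogDependencyValidationException(studentValidationException);
    }
    catch (StudentDependencyValidationException studentDependencyValidationException)
    {
        throw CreateAndLogDependencyValidationException(
            studentDependencyValidationException);
    }
    catch (StudentDependencyException studentDependencyException)
    {
        throw CreateAndLogDependencyException(studentDependencyException);
    }
    catch (StudentServiceException studentServiceException)
    {
        throw CreateAndLogDependencyException(studentServiceException);
    }
    catch (Exception exception)
    {
        // Any unhandled exception becomes a processing service exception.
        throw CreateAndLogServiceException(exception);
    }
}
```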
2.3 Orchestration Services (Complex Higher-Order Logic)
2.3.0 Introduction

Orchestration services are the combinators of multiple foundation or processing services
used to perform a complex logical operation. An orchestration service's main responsibility
is to perform a multi-entity logical operation and delegate the dependencies of said
operation to downstream processing or foundation services, encapsulating operations
that require two or three business entities.

public ValueTask<LibraryCard> CreateStudentLibraryCardAsync(LibraryCard libraryCard) =>
    TryCatch(async () =>
    {
        ValidateLibraryCard(libraryCard);

        await this.studentProcessingService.VerifyEnrolledStudentExistsAsync(
            studentId: libraryCard.StudentId);

        return await this.libraryCardProcessingService.CreateLibraryCardAsync(libraryCard);
    });

In the above example, the LibraryCardOrchestrationService calls both
the StudentProcessingService and LibraryCardProcessingService to perform a
complex operation. First, it verifies that the student we are creating a library card for
exists and is enrolled. Second, it creates the library card.

The operation of creating a library card for any given student cannot be performed by simply
calling the library card service. That's because the library card service (processing or
foundation) does not have access to all the details about the student. Therefore, combination
logic needed to be implemented here to ensure a proper flow is in place.

It's important to understand that orchestration services are only required if and only if we need
to combine multiple entities' operations, primitive or higher-order. In some architectures,
orchestration services might not even exist - simply because some microservices might only
be responsible for applying validation logic and persisting and retrieving data from
storage, no more and no less.

2.3.1 On The Map

Orchestration services are one of the core business logic components in any system. They are
positioned between single-entity services (such as processing or foundation) and advanced logic
services such as coordination services, aggregation services, or simply exposers such as
controllers, web components or anything else. Here's a high-level overview of where
orchestration services may live:

As shown above, orchestration services have quite a few dependencies and consumers. They
are the core engine of any software. On the right-hand side, you can see the dependencies an
orchestration service may have. Since a processing service is optional, based on whether a
higher-order business logic is needed or not, orchestration services can combine multiple
foundation services as well.

The existence of an orchestration service warrants the existence of a processing service. But
that's not always the case. There are situations where all an orchestration service needs to
finalize a business flow is to interact with primitive-level functionality.
From a consumer standpoint, however, an orchestration service could have several consumers.
These consumers could range from coordination services (orchestrators of orchestrators) to
aggregation services, or simply an exposer. Exposers are like controllers, view services, UI
components, or simply another foundation or processing service in the case of putting messages
back on a queue - which we will discuss further in our Standard.

2.3.2 Characteristics

In general, orchestration services are concerned with combining single-entity primitive or
higher-order business logic operations to execute a successful flow. But you can also think of
them as the glue that ties multiple single-entity operations together.

2.3.2.0 Language

Just like processing services, the language used in orchestration services defines the level of
complexity and the capabilities they offer. Usually, orchestration services combine two or more
primitive or higher-order operations from multiple single-entity services to execute a successful
operation.

2.3.2.0.0 Functions Language

Orchestration services have a very common characteristic when it comes to the language of their
functions. Orchestration services are holistic in most of the language of their functions; you will
see functions such as NotifyAllAdmins, where the service pulls all users with an admin type and
then calls a notification service to notify each and every one of them.

It becomes very obvious that orchestration services offer functionality that inches closer and
closer to a business language than a primitive technical operation. You may see an almost
identical expression in a non-technical business requirement that matches one-for-one a function
name in an orchestration service. The same pattern continues as one goes to higher and more
advanced categories of services within that realm of business logic.

2.3.2.0.1 Pass-Through

Orchestration services can also be a pass-through for some operations. For instance, an
orchestration service could allow an AddStudentAsync to be propagated through the service to
unify the source of interactions with the system at the exposers' level. In that case,
orchestration services will use the very same terminology a processing or a foundation service
may use to propagate the operation.
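A pass-through orchestration function might be sketched as simply as the following, propagating the call to a processing dependency under the very same name:

```csharp
// Pass-through: same terminology as the downstream processing service,
// simply propagated to unify interactions at the exposers' level.
public ValueTask<Student> AddStudentAsync(Student student) =>
    this.studentProcessingService.AddStudentAsync(student);
```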

2.3.2.0.2 Class-Level Language

Orchestration services mainly combine multiple operations to support a particular entity. So, if
the main entity of support is Student, and the rest of the entities are just there to support an
operation mainly targeting a Student entity, then the name of the orchestration service would
be StudentOrchestrationService.

This level of enforcement of naming conventions ensures that any orchestration service is
staying focused on a single entity responsibility with respect to multiple other supporting
entities.

For instance, if creating a library card requires ensuring that the student referenced in that
library card is enrolled in a school, then an orchestration service will be named after its
main entity, which is LibraryCard in this case. Our orchestration service name would then
be LibraryCardOrchestrationService.

The opposite is also true. If enrolling a student in a school has accompanying operations, such
as creating a library card, then in this case a StudentOrchestrationService must be created
with the main purpose of creating a Student, followed by all other related entities once the
aforementioned succeeds.

The same idea applies to all exceptions created in an orchestration service, such
as StudentOrchestrationValidationException and StudentOrchestrationDependencyException
and so on.

2.3.2.1 Dependencies

As we mentioned above, orchestration services may have a slightly larger range of dependency
types than processing and foundation services. This is only due to the optionality of
processing services. Therefore, orchestration services may have dependencies that range from
foundation services to processing services, and optionally - and usually - logging or other
utility brokers.
2.3.2.1.0 Dependency Balance (Florance Pattern)

There's a very important rule that governs the consistency and balance of orchestration services,
which is the Florance Pattern. The rule dictates that any orchestration service may not combine
dependencies from different categories of operation.

What that means is that an orchestration service cannot have a foundation service and a
processing service combined together. The dependencies have to be either all processing
services or all foundation services. That rule doesn't apply to utility broker dependencies,
however.

Here's an example of an unbalanced orchestration service dependencies:

An additional processing service is required to give a pass-through to a lower-level foundation
service to balance the architecture - applying the Florance Pattern for symmetry would turn our
architecture into the following:

Applying the Florance Pattern might be very costly at the beginning, as it includes creating an
entirely new processing service (or multiple) just to balance the architecture. But its benefits
outweigh the effort spent, from a maintainability, readability and pluggability perspective.

2.3.2.1.1 Two-Three

The Two-Three rule is a complexity control rule. It dictates that an orchestration service
may not have more than three, or fewer than two, processing or foundation services to run the
orchestration. This rule, however, doesn't apply to utility brokers. An orchestration service
may have a DateTimeBroker or a LoggingBroker without any issues. But an orchestration
service may not have an entity broker, such as a StorageBroker or a QueueBroker, which
feeds directly into the core business layer of any service.
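A constructor honoring the Two-Three rule might be sketched as follows: two processing dependencies plus a utility broker, which does not count against the rule (the interface names here are assumptions):

```csharp
public class LibraryCardOrchestrationService : ILibraryCardOrchestrationService
{
    // Two entity dependencies, within the Two-Three limit; all from
    // the same category (processing), honoring the Florance Pattern.
    private readonly IStudentProcessingService studentProcessingService;
    private readonly ILibraryCardProcessingService libraryCardProcessingService;

    // Utility brokers are exempt from the Two-Three rule.
    private readonly ILoggingBroker loggingBroker;

    public LibraryCardOrchestrationService(
        IStudentProcessingService studentProcessingService,
        ILibraryCardProcessingService libraryCardProcessingService,
        ILoggingBroker loggingBroker)
    {
        this.studentProcessingService = studentProcessingService;
        this.libraryCardProcessingService = libraryCardProcessingService;
        this.loggingBroker = loggingBroker;
    }
}
```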

The Two-Three rule may require a layer of normalization of the categorical business function
that is required to be accomplished. Let's talk about the different mechanisms of normalizing
orchestration services.

2.3.2.1.1.0 Full-Normalization

Oftentimes, there are situations where the current architecture of a given orchestration
service ends up with one orchestration service that has three dependencies, and a new entity
processing or foundation service is required to complete an existing process.

For instance, let's say we have a StudentContactOrchestrationService, and that service has
dependencies that provide primitive-level functionality for Address, Email and Phone for each
student. Here's a visualization of that state:

Now, a new requirement comes in to add a student SocialMedia entity, to gather more contact
information about how to reach a certain student. We can go into full-normalization mode
simply by finding the common ground that equally splits the contact information entities. For
instance, we can split regular contact information from digital contact information, as
in Address and Phone versus Email and SocialMedia. This way we split four dependencies
into two for each of their own orchestration services, as follows:
As you can see in the figure above, we modified the
existing StudentContactOrchestrationService into
a StudentRegularContactOrchestrationService, then removed its dependency on
the EmailService.

Additionally, we created a new StudentDigitalContactOrchestrationService with two
dependencies: the existing EmailService in addition to the new SocialMediaService. Now
that the normalization is over, we need an advanced-order business logic layer, like a
coordination service, to provide student contact information to upstream consumers.

2.3.2.1.1.1 Semi-Normalization

Normalization isn't always as straightforward as the example above, especially in situations
where a core entity has to exist before creating or writing additional information for entities
related to that very entity.

For instance, let's say we have a StudentRegistrationOrchestrationService which relies
on StudentProcessingService, LibraryCardProcessingService
and BookProcessingService, as follows:
But now, we need a new service to handle students' immunization records,
an ImmunizationProcessingService. We need all four services, but we already have
a StudentRegistrationOrchestrationService that has three dependencies. At this point, a
semi-normalization is required to re-balance the architecture, honor the Two-Three rule
and, eventually, control the complexity.
In this case, a further normalization or split is required to re-balance the architecture. We
need to think conceptually about a common ground between the primitive entities in a student
registration process. A student's requirements span identity, health and materials. We can, in
this scenario, combine LibraryCard and Book under the same orchestration service, as books
and libraries are closely related. So we have StudentLibraryOrchestrationService, and
for the other service we would have StudentHealthOrchestrationService, as follows:
To complete the registration flow with a new model, a coordination service is required to pass
in the advanced business logic that combines all of these entities together. But more importantly,
you will notice that each orchestration service now has a redundant dependency
on StudentProcessingService, ensuring a student record exists without creating a virtual
dependency on any other orchestration service.
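That split, with the redundant StudentProcessingService dependency made explicit in both orchestration services, can be sketched as follows. As before, this is an illustrative TypeScript sketch rather than The Standard's C#, and all method shapes are assumptions:

```typescript
// Core entity service: both orchestration services depend on it directly,
// so neither implicitly assumes the other has created the student record.
class StudentProcessingService {
  private students = new Set<string>();
  ensureStudentExists(studentId: string) { this.students.add(studentId); }
  studentExists(studentId: string) { return this.students.has(studentId); }
}

class LibraryCardProcessingService { issueCard(studentId: string) { return `card-${studentId}`; } }
class BookProcessingService { assignBooks(studentId: string) { return [`book-${studentId}`]; } }
class ImmunizationProcessingService { recordShots(studentId: string) { return `records-${studentId}`; } }

// Materials side: three dependencies, including the core student service.
class StudentLibraryOrchestrationService {
  constructor(
    private studentService: StudentProcessingService,
    private libraryCardService: LibraryCardProcessingService,
    private bookService: BookProcessingService) {}

  registerLibraryMaterials(studentId: string) {
    this.studentService.ensureStudentExists(studentId); // explicit, not virtual
    return {
      card: this.libraryCardService.issueCard(studentId),
      books: this.bookService.assignBooks(studentId),
    };
  }
}

// Health side: two dependencies, again including the core student service.
class StudentHealthOrchestrationService {
  constructor(
    private studentService: StudentProcessingService,
    private immunizationService: ImmunizationProcessingService) {}

  registerHealthRecords(studentId: string) {
    this.studentService.ensureStudentExists(studentId); // explicit, not virtual
    return this.immunizationService.recordShots(studentId);
  }
}
```

Because each orchestration service calls StudentProcessingService itself, either one can run first; neither silently relies on the other having created the student record.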

Virtual dependencies are very tricky. A virtual dependency is a hidden connection between two
services of any category where one service implicitly assumes that a particular entity will be
created and present. Virtual dependencies are very dangerous and threaten the true autonomy of
any service. Detecting virtual dependencies at an early stage of the design and development
process can be a daunting but necessary task to ensure a clean, Standardized architecture is in
place.
Just as a model change may require migrations, additional logic and validations, a new
requirement for an entirely new entity might require restructuring an existing architecture or
extending it into a new version, depending on the stage at which the system receives these new
requirements.

It may be very enticing to just add an additional dependency to an existing orchestration service,
but that's where the system starts to diverge from The Standard, and that's when the system
starts down the road of becoming an unmaintainable legacy system. But more importantly, it's when
the engineers involved in designing and developing the system are challenged against their
very principles and craftsmanship.

2.3.2.1.1.2 No-Normalization

There are scenarios where any level of normalization is a challenge to achieve. While I believe
that everything, everywhere, is somehow connected, sometimes it might be incomprehensible
for the mind to group multiple services together under one orchestration service.

Because it's quite hard for my mind to come up with an example of multiple entities that have
no connection to each other, as I truly believe such a thing couldn't exist, I'm going to rely
on some fictional entities to visualize the problem. So let's assume there
are an AService and a BService orchestrated together with an XService. The existence
of XService is important to ensure that both A and B can be created with an assurance that a
core entity X does exist.

Now, let's say a new service, CService, is required to be added to the mix to complete the
existing flow. So, now we have four different dependencies under one orchestration service,
and a split is mandatory. Since there's no relationship whatsoever between A, B and C, a No-
Normalization approach becomes the only option to realize a new design as follows:

Each one of the above primitive services will be orchestrated with the core service X, then
gathered again under a coordination service. This case is the worst-case scenario, where
normalization of any size is impossible. Note that the author of this Standard couldn't come up
with a realistic example here, unlike everywhere else, which shows how rare it is to run into
that situation; so let a No-Normalization approach be your very last solution if you truly run
out of options.
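Under those assumptions, the No-Normalization layout can be sketched like this; every name here is fictional by design, and the sketch is in TypeScript for illustration rather than The Standard's C#:

```typescript
// Core entity service shared by every orchestration service.
class XService { ensureXExists() { return true; } }

// Three unrelated primitive services: no common ground to normalize on.
class AService { createA() { return "A"; } }
class BService { createB() { return "B"; } }
class CService { createC() { return "C"; } }

// Each primitive service is orchestrated with the core XService only.
class AXOrchestrationService {
  constructor(private xService: XService, private aService: AService) {}
  process() { this.xService.ensureXExists(); return this.aService.createA(); }
}
class BXOrchestrationService {
  constructor(private xService: XService, private bService: BService) {}
  process() { this.xService.ensureXExists(); return this.bService.createB(); }
}
class CXOrchestrationService {
  constructor(private xService: XService, private cService: CService) {}
  process() { this.xService.ensureXExists(); return this.cService.createC(); }
}

// The coordination service gathers the unrelated flows back together,
// itself honoring the Two-Three rule with three dependencies.
class ABCCoordinationService {
  constructor(
    private ax: AXOrchestrationService,
    private bx: BXOrchestrationService,
    private cx: CXOrchestrationService) {}

  processAll() { return [this.ax.process(), this.bx.process(), this.cx.process()]; }
}
```

Every orchestration service carries exactly two dependencies, each pairing one unrelated entity with the core X, and only the coordination service sees all three flows.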
The Software Engineering Manifesto
These are some principles that I've lived by and learned working with the best engineers in
the best businesses around the world, and from building systems for everything from one client
to a billion users, from startups to big corporations. Here's my software engineering manifesto:

1. Don’t jump to the first solution you’re familiar with, or the one that just “works” – try to
find a modern way to solve older problems, this way you grow and solve problems at
the same time.
2. Don’t ever think there’s such a thing as perfect software; there’s no such thing.
There are, however, relatively satisfying experiences that slowly decline as
technology advances and new tools are implemented for better experiences, and
your responsibility as an engineer is to keep your software up to date to provide
better experiences and higher values.
3. Your software should work like an appliance, plug and play not just for the end users,
but also for the engineers that will maintain your software, they should just be able to
pull in your code, build it and run!
4. Write every line of code as if this is the last time you touch that code, because in a
lot of projects, it’s very likely that you won’t get the chance to come back and rewrite
your code.
5. Know when to become a mad scientist and when to become a cowboy, not every
problem requires full optimization.
6. Building software for a client you don’t know or have never met is like tailoring a suit
for someone you’ve never seen: it doesn’t work! Get to know your clients on a personal
level, understand what they really need their product to be, not what you think it
should be.
7. Documentation was meant for describing the purpose of your code and what it does,
undocumented code is like a spaceship without a manual, don’t blame others for not
following your patterns, breaking your code or rewriting the whole system if you don’t
document your code.
8. Take pride and ownership of your product, don’t wait for someone to come and tell
you how to make it better; that should be something you regularly do to keep your
software up to date with the best technologies and practices.
9. Writing software is an opportunity for you to show others how your brain works; don’t
come off as lazy or stupid when that’s not who you really are.
10. Software engineering is a lifestyle to take on existing problems in life more
intelligently and efficiently, it’s not a job, a gig or a craft, it’s a lifestyle so if you’ve
taken that path, own up to it.
11. True engineers aren’t just good at integrating distributed systems and building APIs,
true engineers are also good at integrating with like-minded engineers and efficient at
utilizing everyone’s intelligence to build better software.
12. Pair programming is a dance between two highly intelligent individuals, make your
pair feel like they are dancing with a star, because you are!
13. Engineering isn’t just about building software, it’s about intelligently taking on
requirements for new projects, managing time, priorities and building intelligent
relationships with your teammates and even better communications with external
teams.
14. Continuously learn, listen, communicate and integrate, don’t be an occasional
learner, be a consistent day-by-day learner, evolving your skills should be just as
important as your basic needs.
15. Software engineering is an opportunity for humanity to solve its most complex
problems to make a better world for all of us, we can’t get there if we don’t
continuously adapt to changes, be agile and continuously learn, the industry is in a
continuous state of inexperience, push more to teach others, learn from others and
be the best for everyone around you.
16. It doesn’t matter how small or big the project you’re working on is, every project is an
opportunity for you to evolve, learn and modernize, even if you were building a
calculator just for fun!
17. Don’t ask for permission to get something right, just get it right.
18. Don’t try to solve problems with one hand tied behind your back; ask yourself what it
is that you’re not seeing, see the full spectrum, the problem isn’t always in the code
that’s in front of your eyes, it could be the requirement itself or the description of it,
don’t be a robot, visualize the entire process and think of the person using your
system as well as the person that’s going to maintain it.
19. As Reed Hastings says: don’t tolerate brilliant jerks, the cost to the team is much
larger than the profit of the product.
20. As Robert Martin says: “leave the code better than you found it.” And I say leave the
team, the product, the software and the business better than you found them, that’s just
what engineers do, make things smarter, faster and better!
21. Take second opinions on your work, ask around with an open mind and heart, learn
from others, communicate and be respectful, two minds are better than one, the
more the merrier!
22. The best mind to solve problems is the mind of the person who doesn’t have issues
with anyone, transparent and communicative about how they feel and what they
think, don’t bottle things up, don’t hold grudges, don’t hate or envy, these are all
things that’ll cloud your mind from seeing the best solutions to the hardest problems
and from being a better engineer.
23. As the agile manifesto dictates, people over process, and as others say, people over
profits; a better team with better relationships will eventually produce better software
and reflect better practices. Work towards engineering better team relationships and a
stronger appetite for learning and growing together; don’t try to be the best in your
team, but rather the best for your team.
24. Darwin says: “survival isn’t for the strongest or the smartest, but for those who are
most able to adapt”, in a forever changing industry like ours adaptability should be
your top priority, there’s nothing good about rambling about a technology or a pattern
you used in 1995, it just says you couldn’t adapt to newer, better ways and that
you’re obsolete.
25. Don’t let the technology limitations dictate your vision of how your product should be,
bend the technology to follow your vision not the other way around.
26. Every project is an opportunity for you to:

 Make someone’s life better (both end users and engineers).
 Start new friendships; yes, software engineering is a social practice in a very special
intellectual sphere.
 Learn something new and try something different.
 Teach others and share your experience.

These principles are going to grow as I grow, build more software and get to learn from and
interact with more teams. I encourage you to build your own, seek wisdom in your life and
evolve!

Engineer your way out of all the problems you have, because engineering is what you do for
a living! It’s what you’re good at, stay sharp and keep learning!

With love,

Hassan
