The Standard Architecture of DotNet Applications
This Standard is an appeal to engineers all over the world to read through it and
contribute their experiences and knowledge, to enrich an engineering Standard
worthy of the software industry. We live today knowing the origins of the earth,
of man and all the animals. We know how hot boiling water is, and how long a yard
is. Our ships' masters know the precise measurements of latitude and longitude.
Yet we have neither chart nor compass to guide us through the wide sea of code.
The time has come to accord this great craft of ours the same dignity and respect
as the other standards defined by science.
The value of this Standard is immense to those who are still finding their way in
this industry, and even to those who have lost their way, guiding them towards a
better future. But more importantly, The Standard is written for everyone, equally,
to inspire every engineer or engineer-to-be to focus on what matters most about
engineering: its purpose, not its technicalities. I have realized that when engineers
have any form of standard, they start focusing more on what can be accomplished in
our world today. And when a team of engineers follows some form of standard, their
energy and focus shift towards what can be accomplished, not how it should be
accomplished.
I collected and then authored this Standard hoping it will eliminate much of the
confusion and focus the efforts of engineers on what matters most: using technology
as a means to higher purposes, and establishing its equivalent of goals. Designing
software has come a long way, and it has proven itself to be one of the most
powerful tools a person could have today. This craft deserves a proper way to be
introduced to the world, and to be taught to the youth.
The Standard is also a work of love for the rest of the world. It is driven and
written with a passion to enhance the engineering experience and to produce
efficient, rugged, configurable, pluggable and reliable systems that can withstand
any challenges or changes, as they occur almost on a daily basis in our industry.
1 Brokers
1.0 Introduction
Brokers play the role of a liaison between the business logic and the outside world.
They are wrappers around any external libraries, resources or APIs, satisfying a
local interface so the business can interact with these resources without being
tightly coupled to any particular resource or external library implementation.
Brokers in general are meant to be disposable and replaceable. They are built with
the understanding that technology evolves and changes all the time, and therefore,
at some point in the lifecycle of a given application, they shall be replaced with
a modern technology that gets the job done faster.
But Brokers also ensure that your business is pluggable by abstracting away any
specific external resource dependencies from what your software is actually trying
to accomplish.
In any given application, mobile, desktop, web or simply just an API - brokers
usually reside at the "tail" of any app - that's because they are the last point of contact
between our custom code and the outside world.
Whether the outside world in this instance is just simply a local storage in memory,
or an entirely independent system that resides behind an API, they all have to reside
behind the Brokers in any application.
In the following low-level architecture for a given API - Brokers reside between our
business logic and the external resource:
1.2 Characteristics
There are a few simple rules that govern the implementation of any broker. These
rules are:
Brokers have to satisfy a local contract. They have to implement a local interface
to allow the decoupling between their implementation and the services that consume
them.
For instance, given that we have a local contract IStorageBroker that requires an
implementation for any given CRUD operation for a local model Student - the
contract operation would be as follows:
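A minimal sketch of such a contract; the method names mirror the CRUD operations described above and match the implementation shown later in this chapter:

```csharp
using System;
using System.Linq;
using System.Threading.Tasks;

public partial interface IStorageBroker
{
    ValueTask<Student> InsertStudentAsync(Student student);
    IQueryable<Student> SelectAllStudents();
    ValueTask<Student> SelectStudentByIdAsync(Guid studentId);
    ValueTask<Student> UpdateStudentAsync(Student student);
    ValueTask<Student> DeleteStudentAsync(Student student);
}
```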
A local contract implementation can be replaced at any point in time, from utilizing
Entity Framework as shown in the previous example, to using a completely different
technology like Dapper, or an entirely different infrastructure like an Oracle
or PostgreSQL database.
Brokers should not have any form of flow control, such as if statements, while loops
or switch cases. That's simply because flow-control code is considered business
logic, and it fits better in the services layer, where business logic should reside,
not in the brokers.
For instance, a broker method that retrieves a list of students from a database would
look something like this:
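A sketch of such a method, assuming an Entity Framework DbContext-based broker with a Students DbSet, as shown later in the implementation section:

```csharp
public IQueryable<Student> SelectAllStudents() => this.Students;
```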
A simple fat-arrow function that calls the native Entity Framework DbSet<T> and
returns a local model like Student.
Brokers are also required to handle their own configurations. They may have a
configuration object injected, to retrieve and set up the configurations for
whichever external resource they are integrating with.
Brokers may construct an external model object based on primitive types passed from
the broker-neighboring services.
For instance, in an e-mail notifications broker, the input parameters for
a .Send(...) function require only basic values such as the subject, content
and address. Here's an example:
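A sketch of such a broker method; the SendMailAsync name, the Message model, the BuildMessage helper and the emailClient dependency are illustrative assumptions, not a specific library's API:

```csharp
public async ValueTask SendMailAsync(string subject, string content, string address)
{
    // Construct the external message model from primitive inputs,
    // so services never depend on the e-mail library's own types.
    Message message = BuildMessage(subject, content, address);

    await this.emailClient.SendMessageAsync(message);
}
```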
The primitive input parameters ensure there are no strong dependencies between the
broker-neighboring services and the external models. Even in situations where the
broker is simply a point of integration between your application and an external
RESTful API, it's highly recommended that you build your own native models to
reflect the JSON objects sent to or returned from the API, instead of relying on
NuGet libraries, DLLs or shared projects to achieve the same goal.
The contracts for the brokers shall remain as generic as possible to indicate the
overall functionality of a broker. For instance, we say IStorageBroker instead
of ISqlStorageBroker, to avoid indicating a particular technology or infrastructure.
But if multiple concrete implementations target the same model and business value,
then naming after the technology might be more befitting. For instance,
an IStorageBroker could have two different concrete implementations,
SqlStorageBroker and MongoStorageBroker. This case is common in situations where
infrastructure costs are reduced in lower-than-production environments, for
instance.
1.2.6 Language
Brokers speak the language of the technologies they support. For instance, in a
storage broker, we say SelectById to match the SQL Select statement and in a queue
broker we say Enqueue to match the language.
If a broker is supporting an API endpoint, then it shall follow the RESTful
operations language, such as POST, GET or PUT. Here's an example:
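A sketch of an API broker method following the RESTful language; the apiClient dependency, the PostContentAsync call and the relative URL are illustrative assumptions rather than a specific client library's API:

```csharp
public async ValueTask<Student> PostStudentAsync(Student student) =>
    await this.apiClient.PostContentAsync("api/students", student);
```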
Brokers cannot call other brokers. That's simply because brokers are the first point
of abstraction; they require no additional abstractions and no additional
dependencies other than a configuration access model.
Brokers also can't have services as dependencies, as the flow in any given system
shall come from the services to the brokers, and not the other way around.
Even in situations where a microservice has to subscribe to a queue for instance,
brokers will pass forward a listener method to process incoming events, but not call
the services that provide the processing logic.
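For instance, a queue broker might expose a method like the following sketch, where the listener delegate is supplied by a service; the queueClient.Subscribe call and the MapToStudent helper are hypothetical stand-ins, not a specific queue SDK's API:

```csharp
// A service passes its handler forward; the broker wires it up to the
// underlying queue technology without ever calling back into the services.
public void ListenToStudentEvents(Func<Student, ValueTask> studentEventHandler) =>
    this.queueClient.Subscribe(async message =>
        await studentEventHandler(MapToStudent(message)));
```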
The general rule here, then, is that brokers can only be called by services, and
they can only call external native dependencies.
1.3 Organization
Brokers that support multiple entities, such as storage brokers, should leverage
partial classes to break down the responsibilities per entity.
For instance, if we have a storage broker that provides all CRUD operations for
both Student and Teacher models, then the organization of the files should be as
follows:
- IStorageBroker.cs
  - IStorageBroker.Students.cs
  - IStorageBroker.Teachers.cs
- StorageBroker.cs
  - StorageBroker.Students.cs
  - StorageBroker.Teachers.cs
The brokers' files and folders naming convention strictly focuses on the plurality
of the entities they support and the singularity of the overall resource being
supported. For instance, for a storage broker we say:
namespace OtripleS.Web.Api.Brokers.Storages
{
    ...
}

And we say:

namespace OtripleS.Web.Api.Brokers.Queues
{
    ...
}
1.4 Common Brokers
In most of the applications built today, there are some common brokers that are
usually needed to get an enterprise application up and running - some of these are
Storage, Time, API, Logging and Queue brokers.
Some of these brokers interact with existing resources on the system such as time to
allow broker-neighboring services to treat time as a dependency and control how a
particular service would behave based on the value of time at any point in the past,
present or the future.
1.4.0 Entity Brokers
Entity brokers are the brokers providing integration points with external resources
that the system needs to fulfill a business requirement.
For instance, entity brokers are brokers that integrate with storage, providing
capabilities to store or retrieve records from a database.
Entity brokers also include queue brokers, providing a point of integration to push
messages to a queue for other services to consume and process to fulfill their
business logic.
1.4.1 Support Brokers
Support brokers are general-purpose brokers. They provide functionality to support
services, but they have no characteristic that makes them different from one system
to another.
Time brokers don't really target any specific entity, and they are almost the same
across many systems out there.
Unlike entity brokers, support brokers may be called across the entire business
layer. They may be called on foundation, processing, orchestration, coordination,
management or aggregation services. That's because support brokers such as logging
brokers are required as a supporting component in the system, providing the
capabilities needed for services to log their errors, calculate a date or perform
any other supporting functionality.
You can find real-world examples of brokers in the OtripleS project here.
1.5 Implementation
Here's a real-life implementation of a full storage broker for all CRUD operations
for Student entity:
For IStorageBroker.cs:
namespace OtripleS.Web.Api.Brokers.Storage
{
    public partial interface IStorageBroker
    {
    }
}
For StorageBroker.cs:
using System;
using EFxceptions.Identity;
using Microsoft.EntityFrameworkCore;
using Microsoft.Extensions.Configuration;
using OtripleS.Web.Api.Models.Users;

namespace OtripleS.Web.Api.Brokers.Storage
{
    public partial class StorageBroker : EFxceptionsContext, IStorageBroker
    {
        private readonly IConfiguration configuration;

        public StorageBroker(IConfiguration configuration) =>
            this.configuration = configuration;

        protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder) =>
            optionsBuilder.UseSqlServer(
                this.configuration.GetConnectionString("DefaultConnection"));
    }
}
For IStorageBroker.Students.cs:
using System;
using System.Linq;
using System.Threading.Tasks;
using OtripleS.Web.Api.Models.Students;

namespace OtripleS.Web.Api.Brokers.Storage
{
    public partial interface IStorageBroker
    {
        ValueTask<Student> InsertStudentAsync(Student student);
        IQueryable<Student> SelectAllStudents();
        ValueTask<Student> SelectStudentByIdAsync(Guid studentId);
        ValueTask<Student> UpdateStudentAsync(Student student);
        ValueTask<Student> DeleteStudentAsync(Student student);
    }
}
For StorageBroker.Students.cs:
using System;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore;
using Microsoft.EntityFrameworkCore.ChangeTracking;
using OtripleS.Web.Api.Models.Students;

namespace OtripleS.Web.Api.Brokers.Storage
{
    public partial class StorageBroker
    {
        public DbSet<Student> Students { get; set; }

        public async ValueTask<Student> InsertStudentAsync(Student student)
        {
            EntityEntry<Student> studentEntityEntry =
                await this.Students.AddAsync(student);

            await this.SaveChangesAsync();

            return studentEntityEntry.Entity;
        }

        public IQueryable<Student> SelectAllStudents() => this.Students;

        public async ValueTask<Student> SelectStudentByIdAsync(Guid studentId) =>
            await this.Students.FindAsync(studentId);

        public async ValueTask<Student> UpdateStudentAsync(Student student)
        {
            EntityEntry<Student> studentEntityEntry =
                this.Students.Update(student);

            await this.SaveChangesAsync();

            return studentEntityEntry.Entity;
        }

        public async ValueTask<Student> DeleteStudentAsync(Student student)
        {
            EntityEntry<Student> studentEntityEntry =
                this.Students.Remove(student);

            await this.SaveChangesAsync();

            return studentEntityEntry.Entity;
        }
    }
}
1.6 Summary
Brokers are the first layer of abstraction between your business logic and the outside
world, but they are not the only layer of abstraction. simply because there will still
be few native models that leak through your brokers to your broker-neighboring
services which is natural to avoid doing any mappings outside of the realm of logic,
in our case here the foundation services.
For instance, in a storage broker, regardless of which ORM you are using, some
native exceptions from your ORM (Entity Framework, for instance) will occur, such
as DbUpdateException or SqlException. In that case, we need another layer of
abstraction to play the role of a mapper between these exceptions and our core
logic, converting them into local exception models.
This responsibility lies in the hands of the broker-neighboring services, which I
also call foundation services. These services are the last point of abstraction
before your core logic, in which everything becomes nothing but local models and
contracts.
1.7 FAQs
Over the course of time, there have been some common questions asked by the
engineers that I have had the opportunity to work with throughout my career. Since
some of these questions recurred on several occasions, I thought it might be useful
to aggregate them here for everyone to learn about other perspectives around
brokers.
1.7.0 Are brokers the same as the repository pattern?
Not exactly. At least from an operational standpoint, brokers seem to be more
generic than repositories.
Repositories usually target storage-like operations, mainly towards databases. But
brokers can be an integration point with any external dependency, such as e-mail
services, queues or other APIs.
A more similar pattern to brokers is the Unit of Work pattern; it mainly focuses on
the overall operation without tying the definition or the name to any particular
operation.
All of these patterns in general try to achieve the same SOLID principles, namely
separation of concerns, dependency injection and single responsibility.
But because SOLID are principles and not exact guidelines, it's expected to see all
different kinds of implementations and patterns to achieve that principle.
1.7.1 Why can't the brokers implement a contract for methods that return an
interface instead of a concrete model?
That would be an ideal situation, but that would also require brokers to do a
conversion or mapping between the native models returned from the external
resource SDKs or APIs and the internal model that adheres to the local contract.
Doing that on the broker level will require pushing business logic into that realm,
which is outside of the purpose of that component completely.
Brokers do not get unit tested, simply because they contain no business logic. They
may be a part of acceptance or integration tests, but certainly not unit-level
tests.
1.7.2 If brokers were truly a layer of abstraction from the business logic, how come
we allow external exceptions to leak through them onto the services layer?
Brokers are only the first layer of abstraction, but not the only one. The
broker-neighboring services are responsible for converting the native exceptions
occurring in a broker into more local exception models that can be handled and
processed internally within the business logic realm.
Full pure local code starts to occur on the processing, orchestration, coordination and
aggregation layers where all the exceptions, all the returned models and all operations
are localized to the system.
1.7.3 Why do we use partial classes for brokers that handle multiple entities?
Since brokers are required to own their own configurations, it made more sense to
partialize when possible to avoid reconfiguring every storage broker for each entity.
2 Services
2.0 Introduction
Services in general are the containers of all the business logic in any given
software. They are the core component of any system, and the main component that
makes one system different from another.
Our main goal with services is to keep them completely agnostic of specific
technologies or external dependencies.
Any business layer is more compliant with The Standard if it can be plugged into any other
dependencies and exposure technologies with the least amount of integration efforts possible.
When we say business logic, we mainly refer to three main categories of operations, which are
validation, processing and integration.
2.0.0.0 Validations
Validations focus on ensuring that the incoming or outgoing data match a particular
set of rules, which can be structural, logical or external validations, in that
exact order of priority. We will go into detail about this in the upcoming sections.
2.0.0.1 Processing
Processing mainly focuses on the flow control, mapping and computation needed to
satisfy a business need. The processing operations specifically are what distinguish
one service from another, and in general, one software product from another.
2.0.0.2 Integration
Finally, the integration process is mainly focused on retrieving or pushing data from or to any
integrated system dependencies.
Each one of these aspects will be discussed in detail in the upcoming chapter, but
the main thing that should be understood about services is that they should be built
with the intent to be pluggable and configurable, so they are easily integrated with
any technology from a dependency standpoint, and easily plugged into any exposure
functionality from an API perspective.
But services have several types based on where they stand in any given architecture.
They fall under three main categories: validators, orchestrators and aggregators.
2.0.1.0 Validators
These services' main responsibility is to add a validation layer on top of the existing primitive
operations such as the CRUD operations to ensure incoming and outgoing data is validated
structurally, logically and externally before sending the data in or out of the system.
2.0.1.1 Orchestrators
Orchestrator services are the core of the business logic layer, they can be processors,
orchestrators, coordinators or management services depending on the type of their
dependencies.
Orchestrator services are the decision makers within any architecture. They are the
owners of the flow control in any system, and the main component that makes one
application or software product different from another.
Orchestrator services are also meant to be built and live longer than any other type of services
in the system.
2.0.1.2 Aggregators
Aggregators are the gatekeepers of the business logic layer, they ensure the data exposure
components (like API controllers) are interacting with only one point of contact to interact with
the rest of the system.
Aggregators in general don't really care about the order in which they call the
operations attached to them, but sometimes it becomes a necessity to execute a
particular operation first, such as creating a student record before assigning a
library card to them.
We will discuss each and every type of these services in detail in the next chapters.
There are several rules that govern the overall architecture and design of services in any system.
These rules ensure the overall readability, maintainability, configurability of the system - in that
particular order.
2.0.2.0 Do or Delegate
Every service should either do the work or delegate the work but not both.
For instance, a processing service should delegate the work of persisting data to a foundation
service and not try to do that work by itself.
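A sketch of the delegation idea; the processing-level UpsertStudentAsync method and the RetrieveMatchingStudent helper below are illustrative names, not from the source:

```csharp
public ValueTask<Student> UpsertStudentAsync(Student student) =>
    TryCatch(async () =>
    {
        // The processing service decides which operation applies...
        Student maybeStudent = RetrieveMatchingStudent(student);

        // ...but delegates the actual persistence work to the
        // foundation service instead of doing it itself.
        return maybeStudent switch
        {
            null => await this.studentService.AddStudentAsync(student),
            _ => await this.studentService.ModifyStudentAsync(student)
        };
    });
```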
2.0.2.1 Florance Pattern
For orchestrator services, their dependencies of services (not brokers) should be
limited to two or three, but not one, and not four or more.
A dependency on just one service denies the very definition of orchestration. That's
because orchestration, by definition, is the combination of multiple different
operations from different sources to achieve a higher order of business logic.
The Florance pattern also ensures the balance and symmetry of the overall
architecture. For instance, you can't orchestrate between a foundation and a
processing service; doing so causes a form of imbalance in your architecture, and an
uneasy disturbance in trying to combine one unified statement with the language each
service speaks based on its level and type.
The only type of services allowed to violate this rule is the aggregators, where the
combination and the order of services or their calls don't have any real impact.
We will be discussing the Florance pattern a bit further in detail in the upcoming sections of
The Standard.
API controllers, UI components or any other form of data exposure from the system
should have one single point of contact with the business logic layer.
For instance, an API controller that offers endpoints for persisting and retrieving
student data should not have multiple integrations with multiple services, but
rather one service that offers all of these features.
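A sketch of the single point of contact, assuming a hypothetical ASP.NET Core controller and an IStudentAggregationService name that are illustrative, not from the source:

```csharp
public class StudentsController : ControllerBase
{
    // One service dependency: the controller never talks to storage,
    // processing or orchestration services directly.
    private readonly IStudentAggregationService studentAggregationService;

    public StudentsController(IStudentAggregationService studentAggregationService) =>
        this.studentAggregationService = studentAggregationService;
}
```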
All services have to maintain a single contract in terms of their return and input
types, unless they are primitives.
For instance, a service that provides any kind of operation for an entity
type Student should not return any other entity type from any of its methods.
You may return an aggregation of the same entity, whether it's custom or native,
such as List<Student> or an AggregatedStudents model, or a primitive type such as a
count of students, or a boolean indicating whether a student exists, but not any
other non-primitive or non-aggregating contract.
For input parameters, a similar rule applies: any service may receive an input
parameter of the same contract, a virtual aggregation contract or a primitive type,
but not any other contract; that simply violates the rule.
This rule enforces the focus of any service, maintaining its responsibility over a
single entity and all its related operations.
Once a service returns a different contract, it violates its own naming convention,
like a StudentOrchestrationService returning List<Teacher>, and it starts falling
into the trap of being called by other services from completely different data
pipelines.
For primitive input parameters that belong to a different entity model, and that are
not necessarily a reference on the main entity, it begs the question of whether to
orchestrate between two processing or foundation services to maintain a unified
model without breaking the pure-contracting rule.
Every service is responsible for validating its inputs and outputs. You should not
rely on services upstream or downstream to validate your data.
This is a defensive programming mechanism that ensures that, in the case of swapping
implementations behind contracts, the responsibility of any given service wouldn't
be affected if downstream or upstream services decided to pass on their validations
for any reason.
Student

public class Student
{
    public Guid Id { get; set; }
    public string Name { get; set; }
}

LibraryCard

public class LibraryCard
{
    public Guid Id { get; set; }
    public Guid StudentId { get; set; }
}

An orchestration method that registers a student and then assigns a library card
would look as follows:

public async ValueTask<Student> RegisterAndAssignLibraryCardAsync(Student student)
{
    ValidateStudent(student);

    Student registeredStudent =
        await this.studentProcessingService.RegisterStudentAsync(student);

    await AssignStudentLibraryCardAsync(student);

    return registeredStudent;
}

private async ValueTask<LibraryCard> AssignStudentLibraryCardAsync(Student student)
{
    var studentLibraryCard = new LibraryCard
    {
        Id = Guid.NewGuid(),
        StudentId = student.Id
    };

    return await
        this.libraryCardProcessingService.AddLibraryCardAsync(studentLibraryCard);
}
As you can see above, a valid student id is required to ensure a successful mapping
to a LibraryCard. Since the mapping is the orchestrator's responsibility, we are
required to ensure that the input student and its id are in good shape before
proceeding with the orchestration process.
2.1 Foundation Services (Broker-Neighboring)
2.1.0 Introduction
Foundation services are the first point of contact between your business logic and the
brokers.
The broker-neighboring services reside between your brokers and the rest of your
application. On their other side, higher-order business logic (processing,
orchestration, coordination, aggregation or management services) may live, or simply
a controller, a UI component or any other data exposure technology.
2.1.2 Characteristics
Foundation services in general focus more on validations than anything else - simply
because that's their purpose: to ensure all incoming and outgoing data through the
system is in a good state for the system to process it safely without any issues.
2.1.2.0 Pure-Primitive
Foundation services offer a validation and exception handling (and mapping) wrapper
around the dependency calls. Here's an example:
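A sketch of such a wrapper, consistent with the ValidateStudent and TryCatch calls discussed below; the TryCatch helper is assumed to be defined in the service's exceptions partial:

```csharp
public ValueTask<Student> AddStudentAsync(Student student) =>
    TryCatch(async () =>
    {
        // Validate first, then delegate to the primitive broker operation.
        ValidateStudent(student);

        return await this.storageBroker.InsertStudentAsync(student);
    });
```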
In the above method, you can see the ValidateStudent function call wrapped inside
a TryCatch block. The TryCatch block is what I call the Exception Noise Cancellation
pattern, which we will discuss soon in this very section.
The validation function ensures each and every property in the incoming data is
validated before passing it forward to the primitive broker operation, which
is InsertStudentAsync in this very instance.
Foundation services should not integrate with more than one entity broker of any
kind, simply because doing so increases the complexity of validation and
orchestration, which goes beyond the main purpose of the service: simply validation.
We push this responsibility further up to the orchestration-type services.
In general, most of the CRUD operations shall be converted from a storage language
to a business language, and the same goes for non-storage operations such as queues.
For instance, we say PostQueueMessage at the broker level, but on the business layer
we say EnqueueMessage.
Since the CRUD operations are the most common ones in every system, our mapping to
these operations is as follows:
Brokers  | Services
---------|---------
Insert   | Add
Select   | Retrieve
Update   | Modify
Delete   | Remove
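For instance, a foundation service method following this mapping might look like the sketch below, mirroring the broker contract shown earlier; the ValidateStudentId helper is an illustrative assumption:

```csharp
public ValueTask<Student> RetrieveStudentByIdAsync(Guid studentId) =>
    TryCatch(async () =>
    {
        ValidateStudentId(studentId);

        // "Retrieve" on the service side maps to "Select" on the broker side.
        return await this.storageBroker.SelectStudentByIdAsync(studentId);
    });
```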
As we move towards higher-order business logic services, the language of the methods
being used will lean more towards a business language rather than a technology
language, as we will see in the upcoming sections.
2.1.3 Responsibilities
Broker-neighboring services play two very important roles in any system. The first
and most important role is to offer a layer of validation on top of the existing
primitive operations a broker already offers, to ensure incoming and outgoing data
is valid to be processed or persisted by the system. The second role is to act as a
mapper for all other native models and contracts that may be needed to complete any
given operation while interfacing with a broker. Foundation services are the last
point of abstraction between the core business logic of any system and the rest of
the world. Let's discuss these roles in detail.
2.1.3.0 Validation
Foundation services are required to ensure incoming and outgoing data from and to
the system are in a good state - they play the role of a gatekeeper between the system
and the outside world to ensure the data that goes through is structurally, logically
and externally valid before performing any further operations by upstream services.
The order of validations here is very intentional. Structural validations are the
cheapest of all three types; they ensure a particular attribute or piece of data in
general doesn't have a default value if it's required. Next come the logical
validations, where attributes are compared to other attributes within the same
entity or any other. Logical validations can also include a comparison with a
constant value, like requiring a student's enrollment age to be no less than 5
years.
Both structural and logical validations come before the external ones. As we said,
that's simply because we don't want to pay the cost of communicating with an
external resource, including the latency tax, if our request is not in good shape
first. For instance, we shouldn't try to post a Student object to an external API if
the object is null, or if the Student model is corrupted or logically invalid in any
way, shape or form.
2.1.3.0.0 Structural Validations
Validations have three different layers. The first of these layers is the structural
validations, which ensure certain properties on any given model or primitive type
are not in an invalid structural state.
For instance, a property of type String should not be empty, null or white space.
Another example would be an input parameter of an int type: it should not be at its
default state, which is 0, when entering an age, for instance.
The structural validations ensure the data is in a good shape before moving forward
with any further validations - for instance, we can't possibly validate a student has
the minimum number of characters in their names if their first name is structurally
invalid.
Structural validations play the role of identifying the required properties on any
given model, and while a lot of technologies offer the validation annotations, plugins
or libraries to globally enforce data validation rules, I choose to perform the
validation programmatically and manually to gain more control of what would be
required and what wouldn't in a TDD fashion.
The issue with some of the current implementations of structural and logical
validations on data models is that they can be very easily changed under the radar
without any unit tests firing any alarms. Check this example, for instance:
public class Student
{
    [Required]
    public string Name { get; set; }
}
The above example can be very enticing at a glance from an engineering standpoint.
All you have to do is decorate your model attribute with a magical annotation, and
all of a sudden your data is being validated.
The problem here is that this pattern combines two or more different
responsibilities in the same model. Models are supposed to be just a representation
of objects in reality - nothing more and nothing less. Some engineers call them
anemic models, which focuses the responsibility of every single model on only
representing the attributes of the real-world object it's trying to simulate,
without any additional details. But the annotated models now inject business logic
into their very definitions. This business logic may or may not be needed across all
the services, brokers or exposing components that use them.
Structural validations on models may seem like extra work that can be avoided with
magical decorations. But in the case of trying to diverge slightly from these
validations into more customized ones, a new anti-pattern emerges, like custom
annotations that may or may not be detectable through unit tests.
Because I truly believe in the importance of TDD, I am going to start showing the
implementation of structural validations by writing a failing test for it first.
[Fact]
public async void ShouldThrowValidationExceptionOnRegisterWhenIdIsInvalidAndLogItAsync()
{
    // given
    Student randomStudent = CreateRandomStudent();
    Student inputStudent = randomStudent;
    inputStudent.Id = Guid.Empty;

    var invalidStudentException = new InvalidStudentException(
        parameterName: nameof(Student.Id),
        parameterValue: inputStudent.Id);

    var expectedStudentValidationException =
        new StudentValidationException(invalidStudentException);

    // when
    ValueTask<Student> registerStudentTask =
        this.studentService.RegisterStudentAsync(inputStudent);

    // then
    await Assert.ThrowsAsync<StudentValidationException>(() =>
        registerStudentTask.AsTask());

    this.loggingBrokerMock.Verify(broker =>
        broker.LogError(It.Is(SameExceptionAs(expectedStudentValidationException))),
            Times.Once);

    this.storageBrokerMock.Verify(broker =>
        broker.InsertStudentAsync(It.IsAny<Student>()),
            Times.Never);

    this.dateTimeBrokerMock.VerifyNoOtherCalls();
    this.loggingBrokerMock.VerifyNoOtherCalls();
    this.storageBrokerMock.VerifyNoOtherCalls();
}
In the above test, we created a random student object, then assigned an invalid Id
value of Guid.Empty to the student Id.
The exception is required to briefly describe the whats, wheres and whys of the
validation operation. In our case here, the what is the validation issue occurring,
the where is the Student service, and the why is the property value.
The message in the outer validation above indicates that the issue is in the input,
and therefore it requires the input submitter to try again, as there are no actions
required from the system side.
Now, let's look at the other side of the validation process: the implementation.
Structural validations always come before every other type of validation. That's
simply because structural validations are the cheapest from an execution and
asymptotic-time perspective. For instance, it's much cheaper to validate that an Id
is structurally invalid than to send an API call across the network to get the exact
same answer, plus the cost of latency. This all adds up when millions of requests
per second start flowing in.
Structural and logical validations in general live in their own partial class. For
instance, if our service is called StudentService.cs, then a new file should be
created with the name StudentService.Validations.cs to encapsulate and visually
abstract away the validation rules, ensuring clean data is coming in and going out.
Here's how an Id validation would look:
StudentService.Validations.cs
private void ValidateStudent(Student student)
{
switch(student)
{
case {} when IsInvalid(student.Id):
throw new InvalidStudentException(
parameterName: nameof(Student.Id),
parameterValue: student.Id);
}
}
That's the reason why we decided to implement a private static method IsInvalid:
to abstract away the details of what determines whether a property of type Guid
is invalid. As we move further into the implementation, we are going to implement
multiple overloads of the same method to validate other value types structurally
and logically.
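As a sketch of where those overloads are headed (only the Guid case is confirmed by the text; the string and DateTimeOffset overloads are assumed examples, and in the real service these would be private static members of the Validations partial class, made public here only so the snippet stands alone):

```csharp
using System;

public static class StudentValidations
{
    // A default (all-zero) Guid means the caller never set the Id.
    public static bool IsInvalid(Guid id) => id == Guid.Empty;

    // Assumed overload for required text properties: null or whitespace is invalid.
    public static bool IsInvalid(string text) => string.IsNullOrWhiteSpace(text);

    // Assumed overload for required date properties: a default value is invalid.
    public static bool IsInvalid(DateTimeOffset date) => date == default;
}
```

Each overload keeps the validation rule itself out of the switch statement, so the rules read declaratively.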
The purpose of the ValidateStudent method is simply to set up the rules and take
an action, by throwing an exception, if any of those rules are violated. There's
always an opportunity to aggregate the violation errors rather than throwing too
early at the first sign of a structural or logical validation issue.
Now, with the implementation above, we need to call that method to structurally
and logically validate our input. Let's make that call in our RegisterStudentAsync
method as follows:
StudentService.cs
public ValueTask<Student> RegisterStudentAsync(Student student) =>
TryCatch(async () =>
{
ValidateStudent(student);
return await this.storageBroker.InsertStudentAsync(student);
});
At a glance, you will notice that our method here doesn't seem to handle any type
of exceptions at the logic level. That's because all the exception noise is
abstracted away in a method called TryCatch.
TryCatch methods in general live in another partial class, in an entirely new file
called StudentService.Exceptions.cs, which is where all exception handling and
error reporting happens, as I will show you in the following example.
StudentService.Exceptions.cs
private delegate ValueTask<Student> ReturningStudentFunction();

private StudentValidationException CreateAndLogValidationException(Exception exception)
{
    var studentValidationException = new StudentValidationException(exception);
    this.loggingBroker.LogError(studentValidationException);

    return studentValidationException;
}
In a TryCatch method, we can add as many inner and external exceptions as we want
and map them into local exceptions, so that upstream services don't take a strong
dependency on any particular library or external resource model. We will talk
about this in detail when we move on to the Mapping responsibility of broker-
neighboring (foundation) services.
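To make the shape of a TryCatch method concrete, here's a minimal, self-contained sketch. The model, broker and exception types are trimmed stand-ins for the ones discussed above, the input check is inlined instead of calling ValidateStudent, and a real service would add one catch block per exception category it maps:

```csharp
using System;
using System.Threading.Tasks;

public class Student { public Guid Id { get; set; } }

public class InvalidStudentException : Exception { }

public class StudentValidationException : Exception
{
    public StudentValidationException(Exception innerException)
        : base("Invalid input, contact support.", innerException) { }
}

public interface ILoggingBroker { void LogError(Exception exception); }

public partial class StudentService
{
    private readonly ILoggingBroker loggingBroker;

    public StudentService(ILoggingBroker loggingBroker) =>
        this.loggingBroker = loggingBroker;

    private delegate ValueTask<Student> ReturningStudentFunction();

    public ValueTask<Student> RegisterStudentAsync(Student student) =>
    TryCatch(async () =>
    {
        // Structural validation: throw the inner exception on bad input.
        if (student.Id == Guid.Empty)
            throw new InvalidStudentException();

        return await new ValueTask<Student>(student);
    });

    // All service calls funnel through here; inner exceptions are logged
    // and re-thrown wrapped in local, categorized outer exceptions.
    private async ValueTask<Student> TryCatch(ReturningStudentFunction returningStudentFunction)
    {
        try
        {
            return await returningStudentFunction();
        }
        catch (InvalidStudentException invalidStudentException)
        {
            throw CreateAndLogValidationException(invalidStudentException);
        }
    }

    private StudentValidationException CreateAndLogValidationException(Exception exception)
    {
        var studentValidationException = new StudentValidationException(exception);
        this.loggingBroker.LogError(studentValidationException);

        return studentValidationException;
    }
}
```

The public method stays a one-liner; every new failure mode becomes another catch block rather than more noise in the business logic.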
Logical validations come second in order after structural validations. Their main
responsibility, by definition, is to validate whether a structurally valid piece
of data is also logically valid. For instance, a date of birth for a student could
be structurally valid with a value of 1/1/1800, but logically, a student that is
over 200 years of age is an impossibility.
The most common logical validations are validations for audit fields such
as CreatedBy and UpdatedBy - it's logically impossible that a new record can be
inserted with two different values for the authors of that new record - simply because
data can only be inserted by one person at a time.
Let's talk about how we can test-drive and implement logical validations:
In the common case of testing logical validations for audit fields, we want to throw
a validation exception that the UpdatedBy value is invalid simply because it doesn't
match the CreatedBy field.
[Fact]
public async Task ShouldThrowValidationExceptionOnRegisterIfUpdatedByNotSameAsCreatedByAndLogItAsync()
{
    // given
    Student randomStudent = CreateRandomStudent();
    Student inputStudent = randomStudent;
    inputStudent.UpdatedBy = Guid.NewGuid();

    var invalidStudentException = new InvalidStudentException(
        parameterName: nameof(Student.UpdatedBy),
        parameterValue: inputStudent.UpdatedBy);

    var expectedStudentValidationException =
        new StudentValidationException(invalidStudentException);

    // when
    ValueTask<Student> registerStudentTask =
        this.studentService.RegisterStudentAsync(inputStudent);

    // then
    await Assert.ThrowsAsync<StudentValidationException>(() =>
        registerStudentTask.AsTask());

    this.loggingBrokerMock.Verify(broker =>
        broker.LogError(It.Is(
            SameExceptionAs(expectedStudentValidationException))),
                Times.Once);

    this.storageBrokerMock.Verify(broker =>
        broker.InsertStudentAsync(It.IsAny<Student>()),
            Times.Never);

    this.loggingBrokerMock.VerifyNoOtherCalls();
    this.dateTimeBrokerMock.VerifyNoOtherCalls();
    this.storageBrokerMock.VerifyNoOtherCalls();
}
In the above test, we have changed the value of the UpdatedBy field to ensure it
completely differs from the CreatedBy field. We now expect
an InvalidStudentException that points at the UpdatedBy property as the reason
for this validation exception to occur.
Just like we did in the structural validations section, we are going to add more rules
to our validation switch case as follows:
StudentService.Validations.cs
private void ValidateStudent(Student student)
{
switch(student)
{
case {} when IsNotSame(student.CreatedBy, student.UpdatedBy):
throw new InvalidStudentException(
parameterName: nameof(Student.UpdatedBy),
parameterValue: student.UpdatedBy);
}
}
Everything else in
both StudentService.cs and StudentService.Exceptions.cs continues to be exactly
the same as we've done above in the structural validations.
Logical validation exceptions, just like any other exceptions that may occur, are
usually non-critical. However, it all depends on your business case whether a
particular logical, structural or even dependency validation is critical or not.
That is when you might need to create a special class of exceptions, something
like InvalidStudentCriticalException, and log it accordingly.
The last type of validations that are usually performed by foundation services is
dependency validations. I define dependency validations as any form of validation
that requires calling an external resource to validate whether a foundation service
should proceed with processing incoming data or halt with an exception.
Dependency validations can occur because you called an external resource and it
returned an error, or returned a value that warrants raising an error. For
instance, an API call might return a 404 code, which is interpreted as an
exception if the input was supposed to correspond to an existing object.
But a dependency validation exception can also occur when a call is successful,
yet the returned value does not match the expectation, such as an empty list
returned from an API call when trying to insert a new coach of a team: if there
is no team, there can be no coach. The foundation service in this case will be
required to raise a local exception to explain the issue, something
like NoTeamMembersFoundException or something of that nature.
A more common example is when a particular input entity uses the same id as an
existing entity in the system. In a relational database world, a duplicate key
exception would be thrown. In a RESTful API scenario, programmatically applying
the same concept achieves the same goal for API validations, assuming the
granularity of the system being called weakens the referential integrity of the
overall system data.
There are situations where the faulty response can be expressed in a fashion other
than exceptions, but we shall touch on that topic in more advanced chapters of
this Standard.
Let's assume our student model uses an Id with the type Guid as follows:
public class Student
{
public Guid Id {get; set;}
public string Name {get; set;}
}
[Fact]
public async Task ShouldThrowDependencyValidationExceptionOnRegisterIfStudentAlreadyExistsAndLogItAsync()
{
    // given
    Student someStudent = CreateRandomStudent();
    string randomMessage = GetRandomMessage();
    string exceptionMessage = randomMessage;

    var duplicateKeyException =
        new DuplicateKeyException(exceptionMessage);

    var alreadyExistsStudentException =
        new AlreadyExistsStudentException(duplicateKeyException);

    var expectedStudentDependencyValidationException =
        new StudentDependencyValidationException(alreadyExistsStudentException);

    this.storageBrokerMock.Setup(broker =>
        broker.InsertStudentAsync(It.IsAny<Student>()))
            .ThrowsAsync(duplicateKeyException);

    // when
    ValueTask<Student> registerStudentTask =
        this.studentService.RegisterStudentAsync(someStudent);

    // then
    await Assert.ThrowsAsync<StudentDependencyValidationException>(() =>
        registerStudentTask.AsTask());

    this.storageBrokerMock.Verify(broker =>
        broker.InsertStudentAsync(It.IsAny<Student>()),
            Times.Once);

    this.loggingBrokerMock.Verify(broker =>
        broker.LogError(It.Is(
            SameExceptionAs(expectedStudentDependencyValidationException))),
                Times.Once);

    this.storageBrokerMock.VerifyNoOtherCalls();
    this.loggingBrokerMock.VerifyNoOtherCalls();
}
This rule ensures that only the local validation exception is propagated, not its
native counterpart from a storage system, an API or any other external dependency.
That is the case here with our AlreadyExistsStudentException and
its StudentDependencyValidationException: the native exception is completely
hidden away from sight, and the mapping of that native exception and its inner
message is what's communicated to the end user. This gives engineers the power to
control what's being communicated from the other end of their system instead of
letting the native message (which is subject to change) propagate to the
end-users.
To ensure the aforementioned test passes, we are going to need a few models. For
the AlreadyExistsStudentException, the implementation would be as follows:
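A minimal sketch of what that model might look like; the exact message text is an assumption, and only the wrapping of the inner exception is essential:

```csharp
using System;

// Local exception that wraps the native DuplicateKeyException so upstream
// layers never see the storage technology's own error type.
public class AlreadyExistsStudentException : Exception
{
    public AlreadyExistsStudentException(Exception innerException)
        : base("Student with the same id already exists.", innerException)
    { }
}
```

The constructor takes only the inner exception, keeping the caller's hands off the message text.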
Now, let's move to the implementation side, starting with the exception handling
logic:
StudentService.Exceptions.cs
private delegate ValueTask<Student> ReturningStudentFunction();
...

private StudentDependencyValidationException CreateAndLogDependencyValidationException(Exception exception)
{
    var studentDependencyValidationException =
        new StudentDependencyValidationException(exception);

    this.loggingBroker.LogError(studentDependencyValidationException);

    return studentDependencyValidationException;
}
We created the local inner exception in the catch block of our exception handling
process to allow the reusability of our dependency validation exception method
for other situations that require that same level of external exception handling.
Everything else stays the same for the referencing of the TryCatch method in
the StudentService.cs file.
2.1.3.1 Mapping
The second responsibility of a foundation service is to play the role of a mapper,
both ways, between local models and non-local models. For instance, if you are
leveraging an email service that provides its own SDKs to integrate with, and your
brokers are already wrapping and exposing the APIs for that service, your
foundation service is required to map the inputs and outputs of the broker methods
into local models. The same applies, even more commonly, to native non-local
exceptions such as the ones we mentioned above in the dependency validation
situation, and the same aspect applies to dependency errors and service errors,
as we will discuss shortly.
It's very common for modern applications to require integration with external
services at some point. These services can be local to the overall architecture
or distributed system where the application lives, or they can be a 3rd-party
provider such as some of the popular email services. External service providers
invest a lot of effort in developing fluent APIs, SDKs and libraries in every
common programming language to make it easy for engineers to integrate their
applications with that 3rd-party service. For instance, let's assume a third-party
email service provider offers the following API through their SDKs:
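For illustration, suppose the SDK exposes something like the following; the class and method names here are hypothetical, and the stub simply echoes the message instead of calling a real provider:

```csharp
using System.Threading.Tasks;

// Hypothetical native model shipped inside the provider's SDK.
public class EmailMessage
{
    public string From { get; set; }
    public string To { get; set; }
    public string Subject { get; set; }
    public string Content { get; set; }
}

// Hypothetical entry point of the provider's SDK.
public class EmailServiceProvider
{
    public async ValueTask<EmailMessage> SendEmailMessageAsync(EmailMessage message)
    {
        // A real SDK would call the provider's API here; this stub just echoes.
        return await Task.FromResult(message);
    }
}
```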
Let's consider the model EmailMessage a native model; it comes with the email
service provider's SDK. Your brokers might offer a wrapper around this API by
building a contract to abstract away the functionality, but they can't do much
about the native models that are passed into or returned out of that
functionality. Therefore, our broker's interface would look something like this:
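Something like the following, where the contract abstracts the provider but the native EmailMessage still appears in the signature; the interface and method names are assumed, and a stub model is included so the snippet compiles on its own:

```csharp
using System.Threading.Tasks;

// Native SDK model, repeated here only to make the snippet self-contained.
public class EmailMessage
{
    public string To { get; set; }
    public string Subject { get; set; }
    public string Content { get; set; }
}

// The broker contract: it abstracts the provider's SDK, but the native
// EmailMessage model still leaks through its inputs and outputs.
public partial interface IEmailBroker
{
    ValueTask<EmailMessage> SendEmailMessageAsync(EmailMessage message);
}
```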
As we said before, the brokers here have done their part of the abstraction by
pushing the actual implementation and the dependencies of the
native EmailServiceProvider away from our foundation services. But that's only
50% of the job; the abstraction isn't fully complete until there are no traces
of the native EmailMessage model. This is where foundation services come in to
do a test-driven operation of mapping between the native non-local models and
your application's local models. Therefore, it's very possible to see a mapping
function in a foundation service that abstracts away the native model from the
rest of your business-layer services.
Your foundation service will then be required to support a new local model; let's
call it Email. Your local model's properties may be identical to the external
model EmailMessage, especially on a primitive data-type level. But the new model
would be the one and only contract between your pure business logic layer
(processing, orchestration, coordination and management services) and your hybrid
logic layer, such as the foundation services. Here's a code snippet for this
operation:
public async ValueTask<Email> SendEmailAsync(Email email)
{
    EmailMessage inputEmailMessage = MapToEmailMessage(email);

    EmailMessage sentEmailMessage =
        await this.emailBroker.SendEmailMessageAsync(inputEmailMessage);

    return MapToEmail(sentEmailMessage);
}
Whether the returned message has a status, or you prefer to return the input
message as a sign of a successful operation, both practices are valid in my
Standard. It all depends on what makes more sense for the operation you are
trying to execute. The code snippet above is an ideal scenario, where your code
tries to stay true to the value passed in as well as the value returned, with all
the necessary mapping included.
Just like the non-local models, exceptions that are produced by an external API,
like Entity Framework's DbUpdateException or any other, have to be mapped into
local exception models. Handling these non-local exceptions early, before entering
the pure-business layer components, prevents any potential tight coupling or
dependency on an external model. It is very common for exceptions to be handled
differently based on the type of exception and how we want to deal with it
internally in the system. For instance, if we are handling
a UserNotFoundException thrown from using Microsoft Graph, we might not
necessarily want to exit the entire procedure; we might want to continue by adding
the user to some other storage for future Graph submittal processing.
External APIs should not influence whether your internal operation should halt or
not. Handling exceptions at the Foundation layer is therefore the guarantee that
this influence is limited to the external-resource handling area of our
application and has no impact whatsoever on our core business processes. The
following illustration should draw the picture a bit more clearly from that
perspective:
Here are some common scenarios for mapping native or inner local exceptions to
outer exceptions:

| Exception | Wrap Inner Exception With | Wrap With | Log Level |
| --- | --- | --- | --- |
| NullStudentException | - | StudentValidationException | Error |
| InvalidStudentException | - | StudentValidationException | Error |
| SqlException | - | StudentDependencyException | Critical |
| NotFoundStudentException | - | StudentValidationException | Error |
| DuplicateKeyException | AlreadyExistsStudentException | StudentDependencyValidationException | Error |
| ForeignKeyConstraintConflictException | InvalidStudentReferenceException | StudentDependencyValidationException | Error |
| DbUpdateConcurrencyException | LockedStudentException | StudentDependencyValidationException | Error |
| DbUpdateException | - | StudentDependencyException | Error |
| Exception | - | StudentServiceException | Error |
2.2 Processing Services (Higher-Order
Business Logic)
2.2.0 Introduction
Processing services are the layer where a higher order of business logic is
implemented. They may combine (or orchestrate) two primitive-level functions from
their corresponding foundation service to introduce newer functionality. They may
also call one primitive function and change the outcome with a little bit of added
business logic. And sometimes processing services exist purely as a pass-through
to introduce balance to the overall architecture.
IQueryable<Student> allStudents =
this.studentService.RetrieveAllStudents();
When used, Processing services live between foundation services and the rest of
the application. They may not call Entity or Business brokers, but they may call
Utility brokers such as logging brokers, time brokers and any other brokers that
offer supporting functionality not specific to any particular business logic.
Here's a visual of where processing services are located on the map of our
architecture:
On the right side of a Processing service lies all the non-local models and
functionality, whether it's through the brokers, or the models that the foundation
service is trying to map into local models. On the left side of Processing services is
pure local functionality, models and architecture. Starting from the Processing
services themselves, there should be no trace or track of any native or non-local
models in the system.
2.2.2 Characteristics
2.2.2.0 Language
The language used in processing services defines the level of complexity and the
capabilities it offers. Usually, processing services combine two or more primitive
operations from the foundation layer to create a new value.
| Processing Operation | Foundation Operation(s) |
| --- | --- |
| UpsertStudentAsync | RetrieveStudentById + AddStudentAsync + ModifyStudentAsync |
| VerifyStudentExists | RetrieveAllStudents |
As you can see, the combination of primitive functions that processing services
perform might also include adding an additional layer of logic on top of the
existing primitive operation. For instance, VerifyStudentExists takes advantage
of the RetrieveAllStudents primitive function, then adds boolean logic to verify
whether the student returned by an Id from a query actually exists, before
returning a boolean.
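Here's a trimmed sketch of what such a function might look like; model and interface stubs are included so it stands alone, and the real service would also validate its input and route the call through its TryCatch:

```csharp
using System;
using System.Linq;

public class Student { public Guid Id { get; set; } }

public interface IStudentService
{
    IQueryable<Student> RetrieveAllStudents();
}

public class StudentProcessingService
{
    private readonly IStudentService studentService;

    public StudentProcessingService(IStudentService studentService) =>
        this.studentService = studentService;

    // Shifts the outcome of a primitive retrieval into a simple boolean.
    public bool VerifyStudentExists(Guid studentId)
    {
        IQueryable<Student> allStudents =
            this.studentService.RetrieveAllStudents();

        return allStudents.Any(student => student.Id == studentId);
    }
}
```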
2.2.2.0.1 Pass-Through
Processing services may borrow some of the terminology a foundation service uses.
For instance, in a pass-through scenario, a processing service will be as simple
as AddStudentAsync. We will discuss the architecture-balancing scenarios later in
this chapter. Unlike Foundation services, Processing services are required to have
the identifier Processing in their names. For instance, we
say StudentProcessingService. More importantly, Processing services must include
the name of the entity that is supported by their corresponding Foundation
service. For instance, if a Processing service depends on a TeacherService, then
the Processing service name must be TeacherProcessingService.
2.2.2.1 Dependencies
2.2.2.2 One-Foundation
Processing services can interact with one and only one Foundation service. In
fact, without a foundation service there can never be a Processing layer. And
just like we mentioned above about the language and naming, Processing services
take on the exact same entity name as their Foundation dependency. For instance,
a processing service that handles higher-order business logic for students will
communicate with nothing but its foundation layer, which would
be StudentService in this case. That means that processing services will have one
and only one service as a dependency in their construction or initiation, as
follows:
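A sketch of that single-service construction; the interface names are assumed, and a utility broker such as logging is also permitted alongside the one foundation dependency:

```csharp
using System;
using System.Linq;

public class Student { public Guid Id { get; set; } }

// The one and only business-service dependency of the processing service.
public interface IStudentService
{
    IQueryable<Student> RetrieveAllStudents();
}

// Utility brokers are allowed alongside the single foundation dependency.
public interface ILoggingBroker
{
    void LogError(Exception exception);
}

public class StudentProcessingService
{
    private readonly IStudentService studentService;
    private readonly ILoggingBroker loggingBroker;

    public StudentProcessingService(
        IStudentService studentService,
        ILoggingBroker loggingBroker)
    {
        this.studentService = studentService;
        this.loggingBroker = loggingBroker;
    }
}
```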
Unlike Foundation layer services, Processing services only validate what they need
from their input. For instance, if a Processing service is required to validate
that a student entity exists, and its input model just happens to be an entire
Student entity, it will only validate that the entity is not null and that the Id
of that entity is valid. The rest of the entity is outside the concern of the
Processing service. Processing services delegate full validations to the layer of
services concerned with them, which is the Foundation layer. Here's an example:
IQueryable<Student> allStudents =
    this.studentService.RetrieveAllStudents();

bool isStudentExists = allStudents.Any(retrievedStudent =>
    retrievedStudent.Id == student.Id);
Processing services are also not very concerned with outgoing validations, except
for what they are going to use within the same routine. For instance, if a
Processing service retrieves a model, and it is going to pass this model to
another primitive-level function on the Foundation layer, the Processing service
will be required to validate that the retrieved model is valid depending on which
attributes of the model it uses. For pass-through scenarios, however, processing
services will delegate the outgoing validation to the foundation layer.
2.2.3 Responsibilities
Higher-order business logic consists of functions that operate above the primitive
level. For instance, the AddStudentAsync function is a primitive function that
does one thing and one thing only. But higher-order logic is when we provide a
function that changes the outcome of a single primitive function,
like VerifyStudentExists, which returns a boolean value instead of the entire
Student object, or a combination of multiple primitive functions, such
as EnsureStudentExistsAsync, which will only add a given Student model if and
only if the aforementioned object doesn't already exist in storage. Here are some
examples:
2.2.3.0.0 Shifters
The shifter pattern in higher-order business logic is when the outcome of a
particular primitive function is shifted from one value to another, ideally to a
primitive type such as a bool or int, not a completely different type, as that
would violate the purity principle. For instance, in a shifter pattern, we want
to verify whether a student exists or not. We don't really want the entire object,
just whether it exists in a particular system. This seems like a case where we
only need to interact with one and only one foundation service, and we are
shifting the value of the outcome to something else, which fits perfectly in the
realm of processing services. Here's an example:
public int RetrieveStudentCount() =>
TryCatch(() =>
{
    IQueryable<Student> allStudents =
        this.studentService.RetrieveAllStudents();

    ValidateStudents(allStudents);

    return allStudents.Count();
});
In the example above, we provided a function to retrieve the count of all students
in a given system. It's up to the designers of the system to determine whether to
interpret a null value retrieved for all students as an unexpected exception case,
or to return 0, depending on how they manage the outcome. In our case here, we
validate the outgoing data as much as the incoming, especially if it's going to
be used within the processing function, to ensure further failures do not occur
for upstream services.
2.2.3.0.1 Combinations
The combination of multiple primitive functions from the foundation layer to
achieve higher-order business logic is one of the main responsibilities of a
processing service. As we mentioned before, one of the most popular examples is
ensuring a particular student model exists, as follows:
IQueryable<Student> allStudents =
this.studentService.RetrieveAllStudents();
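The retrieval above is the first half of such a combination. A self-contained sketch of the full EnsureStudentExistsAsync idea might look like this; the stub types are assumptions, and the real version would also validate its input and wrap the body in TryCatch:

```csharp
using System;
using System.Linq;
using System.Threading.Tasks;

public class Student { public Guid Id { get; set; } }

public interface IStudentService
{
    IQueryable<Student> RetrieveAllStudents();
    ValueTask<Student> AddStudentAsync(Student student);
}

public class StudentProcessingService
{
    private readonly IStudentService studentService;

    public StudentProcessingService(IStudentService studentService) =>
        this.studentService = studentService;

    // Combination: add the student if and only if it doesn't already exist.
    public async ValueTask<Student> EnsureStudentExistsAsync(Student student)
    {
        IQueryable<Student> allStudents =
            this.studentService.RetrieveAllStudents();

        Student maybeStudent = allStudents.FirstOrDefault(retrievedStudent =>
            retrievedStudent.Id == student.Id);

        return maybeStudent ?? await this.studentService.AddStudentAsync(student);
    }
}
```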
Although processing services operate fully on local models and local contracts,
they are still required to map foundation-level services' models to their own
local models. For instance, if a foundation service throws
a StudentValidationException, then processing services will map that exception
to a StudentProcessingDependencyValidationException. Let's talk about mapping in
this section.
In general, processing services are required to map any incoming or outgoing objects
with a specific model to its own. But that rule doesn't always apply to non-exception
models. For instance, if a StudentProcessingService is operating based on
a Student model, and there's no need for a special model for this service, then the
processing service may be permitted to use the exact same model from the foundation
layer.
When it comes to processing services handling exceptions from the foundation
layer, it is important to understand that exceptions in our Standard are more
expressive in their naming conventions and their role than any other model.
Exceptions here define the what, where and why every single time they are thrown.
For instance, an exception called StudentProcessingServiceException indicates the
entity of the exception, which is the Student entity. Then it indicates the
location of the exception, which is the StudentProcessing service. And lastly, it
indicates the reason for that exception, which is ServiceException, indicating an
internal error to the service that is not of a validation or dependency nature.
Just like the foundation layer, processing services will apply the following
mapping to exceptions occurring from their dependencies:
| Exception | Wrap Inner Exception With | Wrap With | Log Level |
| --- | --- | --- | --- |
| StudentDependencyValidationException | Any inner exception | StudentProcessingDependencyValidationException | Error |
| StudentValidationException | Any inner exception | StudentProcessingDependencyValidationException | Error |
| StudentDependencyException | - | StudentProcessingDependencyException | Error |
| StudentServiceException | - | StudentProcessingDependencyException | Error |
| Exception | - | StudentProcessingServiceException | Error |
2.3 Orchestration Services (Complex Higher
Order Logic)
2.3.0 Introduction
Orchestration services are the combinators of multiple foundation or processing services to
perform a complex logical operation. Their main responsibility is to carry out a multi-entity
logical operation and delegate the dependencies of said operation to downstream processing or
foundation services. In particular, orchestration services encapsulate operations that require
two or three business entities.
public ValueTask<LibraryCard> CreateStudentLibraryCardAsync(LibraryCard libraryCard) =>
TryCatch(async () =>
{
    await this.studentProcessingService.VerifyEnrolledStudentExistsAsync(
        studentId: libraryCard.StudentId);

    return await this.libraryCardProcessingService.CreateLibraryCardAsync(libraryCard);
});
The operation of creating a library card for any given student cannot be performed by simply
calling the library card service. That's because the library card service (processing or
foundation) does not have access to all the details about the student. Therefore, a combination
logic needed to be implemented here to ensure a proper flow is in place.
It's important to understand that orchestration services are required if and only if we need
to combine multiple entities' operations, whether primitive or higher-order. In some
architectures, orchestration services might not even exist. That's simply because some
microservices might only be responsible for applying validation logic and persisting and
retrieving data from storage, no more, no less.
Orchestration services are one of the core business logic components in any system. They are
positioned between single entity services (such as processing or foundation) and advanced logic
services such as coordination services, aggregation services or just simply exposers such as
controllers, web components or anything else. Here's a high level overview of where
orchestration services may live:
As shown above, orchestration services have quite a few dependencies and consumers. They are
the core engine of any software. On the right-hand side, you can see the dependencies an
orchestration service may have. Since a processing service is optional, based on whether
higher-order business logic is needed or not, orchestration services can also combine multiple
foundation services.
The existence of an orchestration service warrants the existence of a processing service. But
that's not always the case. There are situations where all an orchestration service needs to
finalize a business flow is to interact with primitive-level functionality.
From a consumer standpoint, however, an orchestration service could have several consumers.
These consumers could range from coordination services (orchestrators of orchestrators), to
aggregation services, to simply an exposer. Exposers are things like controllers, view
services, UI components, or simply another foundation or processing service in the case of
putting messages back on a queue, which we will discuss further in our Standard.
2.3.2 Characteristics
2.3.2.0 Language
Just like processing services, the language used in orchestration services defines the level
of complexity and the capabilities they offer. Usually, orchestration services combine two or
more primitive or higher-order operations from multiple single-entity services to execute a
successful operation.
Orchestration services have a very common characteristic when it comes to the language of
their functions. Orchestration services are holistic in most of the language of their
functions; you will see functions such as NotifyAllAdmins, where the service pulls all users
with an admin type and then calls a notification service to notify each and every one of them.
It becomes very obvious that orchestration services offer functionality that inches closer and
closer to a business language than to a primitive technical operation. You may see an almost
identical expression in a non-technical business requirement that matches one-for-one a
function name in an orchestration service. The same pattern continues as one moves to higher
and more advanced categories of services within that realm of business logic.
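As an illustrative sketch of that NotifyAllAdmins idea (all the type and member names below are assumptions, not taken from the text), the orchestration pulls one entity's data and delegates the per-user work to another service:

```csharp
using System;
using System.Linq;
using System.Threading.Tasks;

public enum UserType { Guest, Admin }

public class User
{
    public Guid Id { get; set; }
    public UserType Type { get; set; }
}

public interface IUserProcessingService
{
    IQueryable<User> RetrieveAllUsers();
}

public interface INotificationProcessingService
{
    ValueTask NotifyUserAsync(User user);
}

public class NotificationOrchestrationService
{
    private readonly IUserProcessingService userProcessingService;
    private readonly INotificationProcessingService notificationProcessingService;

    public NotificationOrchestrationService(
        IUserProcessingService userProcessingService,
        INotificationProcessingService notificationProcessingService)
    {
        this.userProcessingService = userProcessingService;
        this.notificationProcessingService = notificationProcessingService;
    }

    // Pulls every admin-type user, then delegates notifying each one to the
    // notification processing service.
    public async ValueTask NotifyAllAdminsAsync()
    {
        IQueryable<User> allUsers = this.userProcessingService.RetrieveAllUsers();

        IQueryable<User> admins =
            allUsers.Where(user => user.Type == UserType.Admin);

        foreach (User admin in admins)
        {
            await this.notificationProcessingService.NotifyUserAsync(admin);
        }
    }
}
```

Note how the function name reads like the business requirement itself, while the two processing dependencies stay within the same category, as the Florance Pattern below requires.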
2.3.2.0.1 Pass-Through
Orchestration services can also be a pass-through for some operations. For instance, an
orchestration service could allow an AddStudentAsync to be propagated through the service, to
unify the source of interactions with the system at the exposers level. In that case,
orchestration services will use the very same terminology a processing or a foundation service
may use to propagate the operation.
Orchestration services mainly combine multiple operations to support a particular entity. So,
if the main entity of support is Student, and the rest of the entities merely support an
operation mainly targeting the Student entity, then the name of the orchestration service
would be StudentOrchestrationService.
This level of enforcement of naming conventions ensures that any orchestration service is
staying focused on a single entity responsibility with respect to multiple other supporting
entities.
For instance, if creating a library card requires ensuring that the student referenced in that
library card is enrolled in a school, then the orchestration service will be named after its
main entity, which is LibraryCard in this case. Our orchestration service name would then
be LibraryCardOrchestrationService.
The opposite is also true. If enrolling a student in a school has accompanying operations,
such as creating a library card, then a StudentOrchestrationService must be created, with the
main purpose of creating a Student and then all other related entities once the aforementioned
operation succeeds.
The same idea applies to all exceptions created in an orchestration service, such
as StudentOrchestrationValidationException and StudentOrchestrationDependencyException and
so on.
2.3.2.1 Dependencies
As we mentioned above, orchestration services may have a somewhat larger range of dependency
types than processing and foundation services. This is only due to the optionality of
processing services. Therefore, orchestration services may have dependencies ranging over
foundation services or processing services, plus, optionally and usually, logging or other
utility brokers.
2.3.2.1.0 Dependency Balance (Florance Pattern)
There's a very important rule that governs the consistency and balance of orchestration services:
the Florance Pattern. The rule dictates that an orchestration service may not combine
dependencies from different categories of operation.
What that means is that an orchestration service cannot have foundation and processing
services combined together. The dependencies have to be either all processing services or all
foundation services. That rule doesn't apply to utility broker dependencies, however.
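As a sketch, with empty placeholder interfaces standing in for real services, the Florance Pattern allows the first constructor below but forbids the mix shown in the comment:

```csharp
// Sketch only: empty placeholder interfaces, not real APIs.
public interface IStudentService { }                 // foundation
public interface ILibraryCardService { }             // foundation
public interface ILibraryCardProcessingService { }   // processing
public interface ILoggingBroker { }                  // utility broker

public class StudentOrchestrationService
{
    // Allowed: every core dependency is a foundation service;
    // the logging broker is a utility broker, exempt from the rule.
    public StudentOrchestrationService(
        IStudentService studentService,
        ILibraryCardService libraryCardService,
        ILoggingBroker loggingBroker)
    { }

    // NOT allowed (Florance Pattern violation): mixing categories, e.g.
    // a constructor taking IStudentService (foundation) together with
    // ILibraryCardProcessingService (processing).
}
```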
2.3.2.1.1 Two-Three
The Two-Three rule is a complexity-control rule. It dictates that an orchestration service
may not have more than three or fewer than two processing or foundation services to run the
orchestration. This rule, however, doesn't apply to utility brokers. An orchestration service
may have a DateTimeBroker or a LoggingBroker without any issues. But an orchestration
service may not have an entity broker, such as a StorageBroker or a QueueBroker, which
feeds directly into the core business layer of any service.
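A minimal sketch of the Two-Three rule, again with hypothetical placeholder interfaces:

```csharp
// Sketch only: empty placeholder interfaces, not real APIs.
public interface IStudentProcessingService { }      // core dependency #1
public interface ILibraryCardProcessingService { }  // core dependency #2
public interface IDateTimeBroker { }                // utility broker (not counted)

public class StudentOrchestrationService
{
    // Allowed: two core service dependencies, within the two-to-three range.
    // The DateTimeBroker is a utility broker and doesn't count toward the limit.
    public StudentOrchestrationService(
        IStudentProcessingService studentProcessingService,
        ILibraryCardProcessingService libraryCardProcessingService,
        IDateTimeBroker dateTimeBroker)
    { }

    // NOT allowed: a fourth core service (exceeds three), a single core
    // service (below two), or any entity broker such as a StorageBroker,
    // which must feed foundation services only, never orchestrations.
}
```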
The Two-Three rule may require a layer of normalization of the categorical business function
that needs to be accomplished. Let's talk about the different mechanisms for normalizing
orchestration services.
2.3.2.1.1.0 Full-Normalization
Oftentimes, there are situations where the current architecture of a given orchestration
service ends up with one orchestration service that has three dependencies, and a new entity
processing or foundation service is required to complete an existing process.
For instance, let's say we have a StudentContactOrchestrationService and that service has
dependencies that provide primitive-level functionality for Address, Email and Phone for each
student. Here's a visualization of that state:
Now, a new requirement comes in to add a student SocialMedia entity to gather more contact
information about how to reach a certain student. We can go into full-normalization mode
simply by finding the common ground that equally splits the contact information entities. For
instance, we can split regular contact information versus digital contact information, as
in Address and Phone versus Email and SocialMedia. This way we split four dependencies
into two each for their own orchestration services as follows:
As you can see in the figure above, we modified the
existing StudentContactOrchestrationService into StudentRegularContactOrchestrationService,
then we removed one of its dependencies, the EmailService.
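The split described above might be sketched as follows, assuming hypothetical foundation service interfaces for each contact entity:

```csharp
// Sketch only: empty placeholder interfaces, not real APIs.
public interface IAddressService { }
public interface IPhoneService { }
public interface IEmailService { }
public interface ISocialMediaService { }

// Regular contact information: Address and Phone.
public class StudentRegularContactOrchestrationService
{
    public StudentRegularContactOrchestrationService(
        IAddressService addressService,
        IPhoneService phoneService)
    { }
}

// Digital contact information: Email and SocialMedia.
public class StudentDigitalContactOrchestrationService
{
    public StudentDigitalContactOrchestrationService(
        IEmailService emailService,
        ISocialMediaService socialMediaService)
    { }
}
```

Each resulting orchestration service stays within the Two-Three range while the overall contact capability is preserved across the two of them.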
2.3.2.1.1.1 Semi-Normalization
Virtual dependencies are very tricky. A virtual dependency is a hidden connection between two services of any
category, where one service implicitly assumes that a particular entity will be created and
present. Virtual dependencies are very dangerous and threaten the true autonomy of any service.
Detecting virtual dependencies at an early stage of the design and development process can be a
daunting but necessary task to ensure a clean, Standardized architecture is in place.
Just as model changes may require migrations, additional logic and validations, a
new requirement for an entirely new additional entity might require restructuring an
existing architecture or extending it to a new version, depending on which stage the system is
in when receiving these new requirements.
It may be very enticing to just add an additional dependency to an existing orchestration service
- but that's where the system starts to diverge from The Standard, and that's when the system
starts down the road of becoming an unmaintainable legacy system. But more importantly, it's when
the engineers involved in designing and developing the system are challenged against their
very principles and craftsmanship.
2.3.2.1.1.2 No-Normalization
There are scenarios where any level of normalization is a challenge to achieve. While I believe
that everything, everywhere, is somehow connected, sometimes it might be incomprehensible
for the mind to group multiple services together under one orchestration service.
Because it's quite hard for my mind to come up with an example of multiple entities that have
no connection to each other, as I truly believe such entities couldn't exist, I'm going to rely on some
fictional entities to visualize the problem. So let's assume there
are AService and BService orchestrated together with an XService. The existence
of XService is important to ensure that both A and B can be created with an assurance that a
core entity X does exist.
Now, let's say a new service CService is required to be added to the mix to complete the
existing flow. So, now we have four different dependencies under one orchestration service,
and a split is mandatory. Since there's no relationship whatsoever between A, B and C, a
No-Normalization approach becomes the only option to realize a new design as follows:
Each one of the above primitive services will be orchestrated with a core service X, then gathered
again under a coordination service. The case above is the worst-case scenario, where
normalization of any size is impossible. Note that the author of this Standard couldn't come up
with a realistic example, unlike for the other patterns, which shows how rare it is to run into that situation,
so let a No-Normalization approach be your very last resort if you truly run out of options.
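Using the fictional A, B, C and X services from above, the No-Normalization shape might be sketched as follows (all interfaces are hypothetical placeholders):

```csharp
// Sketch only: empty placeholder interfaces for the fictional entities.
public interface IAService { }
public interface IBService { }
public interface ICService { }
public interface IXService { }

// Each primitive service is orchestrated with the core X service...
public class AOrchestrationService
{
    public AOrchestrationService(IAService aService, IXService xService) { }
}

public class BOrchestrationService
{
    public BOrchestrationService(IBService bService, IXService xService) { }
}

public class COrchestrationService
{
    public COrchestrationService(ICService cService, IXService xService) { }
}

// ...then all three are gathered again under a coordination service.
public class ABCCoordinationService
{
    public ABCCoordinationService(
        AOrchestrationService aOrchestration,
        BOrchestrationService bOrchestration,
        COrchestrationService cOrchestration)
    { }
}
```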
The Software Engineering Manifesto
These are some principles that I’ve lived by and learned while working with the best engineers in
the best businesses around the world, and from building systems for everyone from one client to a
billion users, from startups to big corporations. Here’s my software engineering manifesto:
1. Don’t jump to the first solution you’re familiar with, or the one that just “works” – try to
find a modern way to solve older problems, this way you grow and solve problems at
the same time.
2. Don’t ever think there’s such a thing as perfect software, there’s no such thing.
There are, however, relatively satisfying experiences that slowly decline as
technology advances and new tools are implemented for better experiences, and
your responsibility as an engineer is to keep your software up to date to provide
better experiences and higher values.
3. Your software should work like an appliance, plug and play not just for the end users,
but also for the engineers that will maintain your software, they should just be able to
pull in your code, build it and run!
4. Write every line of code as if this is the last time you touch that code, because in a
lot of projects, it’s very likely that you won’t get the chance to come back and rewrite
your code.
5. Know when to become a mad scientist and when to become a cowboy, not every
problem requires full optimization.
6. Building software for a client you don’t know or have never met is like tailoring a suit for
someone you’ve never seen, it doesn’t work! Get to know your clients on a personal
level, and understand what they really need their product to be, not what you think it
should be.
7. Documentation was meant for describing the purpose of your code and what it does,
undocumented code is like a spaceship without a manual, don’t blame others for not
following your patterns, breaking your code or rewriting the whole system if you don’t
document your code.
8. Take pride and ownership of your product, don’t wait for someone to come and tell
you how to make it better, that should be something you regularly do to keep your
software up to date with the best technologies and practices.
9. Writing software is an opportunity for you to show others how your brain works, don’t
come off lazy or stupid when it’s not who you really are.
10. Software engineering is a lifestyle to take on existing problems in life more
intelligently and efficiently, it’s not a job, a gig or a craft, it’s a lifestyle so if you’ve
taken that path, own up to it.
11. True engineers aren’t just good at integrating distributed systems and building APIs,
true engineers are also good at integrating with like-minded engineers and efficient at
utilizing everyone’s intelligence to build better software.
12. Pair programming is a dance between two highly intelligent individuals, make your
pair feel like they are dancing with a star, because you are!
13. Engineering isn’t just about building software, it’s about intelligently taking on
requirements for new projects, managing time, priorities and building intelligent
relationships with your teammates and even better communications with external
teams.
14. Continuously learn, listen, communicate and integrate, don’t be an occasional
learner, be a consistent day-by-day learner, evolving your skills should be just as
important as your basic needs.
15. Software engineering is an opportunity for humanity to solve its most complex
problems to make a better world for all of us, we can’t get there if we don’t
continuously adapt to changes, be agile and continuously learn, the industry is in a
continuous state of inexperience, push more to teach others, learn from others and
be the best for everyone around you.
16. It doesn’t matter how small or big the project you’re working on is, every project is an
opportunity for you to evolve, learn and modernize, even if you were building a
calculator just for fun!
17. Don’t ask for permission to get something right, just get it right.
18. Don’t try to solve problems with one hand tied behind your back, ask yourself what is
it that you’re not seeing, see the full spectrum, the problem isn’t always in the code
that’s in front of your eyes, it could be the requirement itself or the description of it,
don’t be a robot, visualize the entire process and think of the person using your
system as well as the person that’s going to maintain it.
19. As Reed Hastings says: don’t tolerate brilliant jerks, the cost to the team is much
larger than the profit of the product.
20. As Robert Martin says: “leave the code better than you found it.” And I say leave the
team, product and software and the business better than you found it, that’s just what
engineers do, make things smarter, faster and better!
21. Take second opinions on your work, ask around with an open mind and heart, learn
from others, communicate and be respectful, two minds are better than one, the
more the merrier!
22. The best mind to solve problems is the mind of the person who doesn’t have issues
with anyone, transparent and communicative about how they feel and what they
think, don’t bottle things up, don’t hold grudges, don’t hate or envy, these are all
things that’ll cloud your mind from seeing the best solutions to the hardest problems
and from being a better engineer.
23. As the agile manifesto dictates, people over process, and as others say people over
profits, a better team with better relationships will eventually produce better software
and reflect better practices, work towards engineering better team relationships and
stronger appetite for learning and growing together, don’t try to be the best in your
team, but rather the best for your team.
24. Darwin says: “survival isn’t for the strongest or the smartest, but for those who are
most able to adapt”, in a forever changing industry like ours adaptability should be
your top priority, there’s nothing good about rambling about a technology or a pattern
you used in 1995, it just says you couldn’t adapt to newer, better ways and that
you’re obsolete.
25. Don’t let the technology limitations dictate your vision of how your product should be,
bend the technology to follow your vision not the other way around.
26. Every project is an opportunity for you to:
These principles are going to grow as I grow and build more software and get to learn and
interact with more teams, I encourage you to build your own, seek wisdom in your life and
evolve!
Engineer your way out of all the problems you have, because engineering is what you do for
a living! It’s what you’re good at, stay sharp and keep learning!
With love,
Hassan