MAY/JUN 2022
codemag.com - THE LEADING INDEPENDENT DEVELOPER MAGAZINE - US $8.95 Can $11.95

Laravel, Cosmos, Azure, Blockchain, PHP

On the cover: Document Databases with Marten • Face Recognition with Python • Visiting the World of Cosmos DB • Managing Data with PHP and Laravel
TABLE OF CONTENTS

Features

8  Blockchain
   Sahil clarifies a few points about blockchain, bitcoin, and the future.
   Sahil Malik

14 Create Maintainable Minimal Web APIs
   Paul shows you the benefits of a Router class and minimizing your Web API calls.
   Paul D. Sheriff

22 Change-Tracking Mixed-State Graphs in EF Core
   Julie says that integration testing is the key to tracking changes in EF Core. Learn what you need to know to get up and running with it.
   Julie Lerman

26 Fast Application Persistence with Marten
   You don't want to wait—and you don't want your users to wait—while your application completely rebuilds itself after a simple query. Jeremy shows you how Marten can store whole documents as a single element in your database and deliver the whole thing in one quick gesture.
   Jeremy D. Miller

34 Using Cosmos DB in .NET Core
   Cosmos is a document storage tool for database developers. You'll want to consider what Shawn reveals about this spiffy tool before you build your next app.
   Shawn Wildermuth

40 Building MVC Applications in PHP Laravel: Part 1
   Bilal is determined to make you a better coder. This time, he embarks on a deep dive into the M of Model View Controller applications in PHP Laravel.
   Bilal Haidar

50 Implementing Face Recognition Using Deep Learning and Support Vector Machines
   If you have a fancy new computer or phone, you might already be using facial recognition. Wei-Meng explains how this exciting technology is at once simpler than you think and crazy complicated—and super cool!
   Wei-Meng Lee

66 Distributed Caching in ASP.NET 6 Using Redis in Azure
   Whether you realize it or not, you've been benefiting from caching for years. Joydip tells you how and why to use caching to your best advantage.
   Joydip Kanjilal

Columns

74 CODA: It was 30 Years Ago This May…
   John remembers the good old days of FoxPro, and revisits some of the lessons learned from the Fox community.
   John V. Petersen

Departments

6  Editorial
45 Advertisers Index
73 Code Compilers

US subscriptions are US $29.99 for one year. Subscriptions outside the US pay $50.99 USD. Payments should be made in US dollars drawn on a US bank. American Express, MasterCard, Visa, and Discover credit cards are accepted. Bill Me option is available only for US subscriptions. Back issues are available. For subscription information, send e-mail to subscriptions@codemag.com or contact Customer Service at 832-717-4445 ext. 9. Subscribe online at www.codemag.com

CODE Component Developer Magazine (ISSN # 1547-5166) is published bimonthly by EPS Software Corporation, 6605 Cypresswood Drive, Suite 425, Spring, TX 77379 U.S.A. POSTMASTER: Send address changes to CODE Component Developer Magazine, 6605 Cypresswood Drive, Suite 425, Spring, TX 77379 U.S.A.


EDITORIAL

Artistic Collaboration

In my last editorial, "The Computer is my Paintbrush," I talked about the thrill I still get building applications after 30+ years in this business. There are two parts of this process that bring me joy. The first part is the process of creation. Taking an idea from thought to code to a working application is simply amazing to me. The second thrilling part of this process is observing the use of these applications. Sometimes I'm the user, and in many more cases, others receive the benefit from my applications.

That editorial was written from a singular point of view: MY point of view. What I mean by this is that I discussed writing applications as a SOLO developer. What I failed to mention is that there's another style of work that can bring joy, and that style of work involves working with others to build great things. What's this other style of work? Collaboration! Let's talk about collaboration.

When I started my career, I was a "Lone Wolf" coder. I was generally the solo programmer on staff and responsible for everything code related. It didn't take me too long to realize that this was a limiting factor in my progression as a developer. I soon began to seek out other developers. This was the era of CompuServe, so I took to some of the developer forums there to see how other developers worked. This definitely scratched an itch but was not totally fulfilling.

It wasn't until I got a job working for a company called The Juiceman that I learned just how valuable collaboration was. When I started at The Juiceman, I collaborated with some of the best developers I'd ever crossed paths with. Two of them, Eric Ranft and Mark Davis, are still friends to this day and have gone on to do great things (check out John Petersen's column to see what Eric went on to do).

After my time at The Juiceman, I moved to a company called Pinnacle Publishing. It was at Pinnacle where I established a friendship with Erik Ruthruff. Erik and I have collaborated on numerous projects for nearly three decades now. These projects include courseware development, building tools for managing conferences, and working on this magazine. I'm forever grateful for our collaboration and friendship. It's amazing how much a single person can affect your life.

Soon after leaving Pinnacle, I returned to my life as a Lone Wolf, this time as a consultant. This is a bit of a misnomer. There isn't really such a thing as a Lone Wolf consultant because consultants, by definition, work with others. During my years as a consultant, I had the opportunity to meet and work with lots of other developers. It was also during this time that I got the conference speaker bug. It was at a conference that I met another long-term collaborator and now best friend, John Petersen. John and I met after I gave a session and we hit it off right away.

Soon after that conference, I started writing a Visual FoxPro book and I was trying to assemble a team of writers, as I didn't want to write a whole book by myself. I KNEW that collaboration would be the only way to succeed in writing a book. It didn't take long before I pitched this idea to John to come on board to write a chapter or two. Long story short, John became a full-fledged contributor to the book and we've been working together in one capacity or another ever since.

I should also mention that during my book writing phase, I met Melanie Spiller (my kick-butt editor on this magazine) via another publisher. That collaboration didn't work out at the time (it was me, not Melanie, for the record), but it has definitely worked well over the last 10 years for sure. So sometimes collaboration takes time to work.

Finally, I must mention one of the most intimate collaborators I've worked with for the last 15+ years: Greg Lawrence. Greg and I became friends when he worked at an ISP doing network stuff and a bit of HTML/JavaScript. It was this work that gave me the inkling that Greg might make a good developer. I took a chance on him and hired him as my first employee. My inkling was correct, and Greg did become a great developer, with whom I've been lucky enough to work on some killer applications.

I want to point out one thing that is valuable in this particular collaboration: the learning. Not Greg's learning but mine. Over many years, Greg has taught me a lot when it comes to teaching development skills, as well as how to build software. His skills as a developer have helped make my skills better. The apprentice is now the master, as they say in the Star Wars universe.

As many of you already know, I am a huge fan of the movie business and in particular, the process of movie making. The thing that I have learned from that industry is just how collaborative it is. No good movie gets made without collaboration, and I feel the same about software development.

Collaboration can deliver real results, and sometimes the best collaborations come from just a single sentence or comment. For nearly two decades, I served in the role of lead architect/engineer for a midwest credit card company. During this time, I provided many of the tools we built our applications with, including frameworks, libraries, tools, and documentation. I recall one day when Dan Zerfas, the VP of development (and still my friend), suggested that I wrap all these disparate tools into a common shell. When he spoke those words, I recognized a flash of brilliance. I went to work building a tool called DPSI Shell. DPSI is the acronym of my company name. This tool is still in use today—over 15 years. All this from one statement from a valued collaborator. The power of collaboration has never been more evident than when I recall this story.

If I can leave you with one thought: Keep your ears, heart, and mind open to collaboration. You never know where good ideas may come from.

Rod Paddock
CUSTOM SOFTWARE DEVELOPMENT • STAFFING • TRAINING/MENTORING • SECURITY

MORE THAN JUST A MAGAZINE!

Does your development team lack skills or time to complete all your business-critical software projects? CODE Consulting has top-tier developers available with in-depth experience in .NET, web development, desktop development (WPF), Blazor, Azure, mobile apps, IoT and more.

Contact us today for a complimentary one hour tech consultation. No strings. No commitment. Just CODE.

codemag.com/code
832-717-4445 ext. 9 • info@codemag.com
ONLINE QUICK ID 2205021

Blockchain

Sahil Malik
www.winsmarts.com
@sahilmalik

Sahil Malik is a Microsoft MVP, INETA speaker, a .NET author, consultant, and trainer. Sahil loves interacting with fellow geeks in real time. His talks and trainings are full of humor and practical nuggets. His areas of expertise are cross-platform mobile app development, Microsoft anything, and security and identity.

When I took up IT as a career, the most appealing thing to me was that I would never get bored in this field. I mean, which other field has fun names, like "bugs" for defects? And every couple of years, everything changes. After many years in this field, this still holds true. I keep seeing groundbreaking changes that promise to change our society. Think about the change the Internet has had on our lives. Think about the change storage capacity has had on our lives. Now that so much can be recorded and searched, computers know more about us than we know about us. Scary. Think about the amount of change that phone in your pocket has brought forth. Think about how different the COVID crisis would have been if we couldn't work from home, or didn't have information at our fingertips. Or how wars are fought and broadcast on social media.

One such technology that has flown under the radar for many years, and has become very important recently, is blockchain. I thought we should chat about it.

What Is Blockchain
Believe it or not, this concept, although all the rage today, was first discussed all the way back in 1982. In a dissertation entitled "Computer Systems Established, Maintained, and Trusted by Mutually Suspicious Groups," David Chaum first proposed the idea of a blockchain-like protocol.

The easiest way to think of blockchain is as a distributed database that's shared among the nodes of a computer network. Like any database, it stores information in a digital format. The only difference is that this database doesn't reside on one computer, or even a computer in Azure or AWS; this database is distributed among all the nodes of the computer network that participate in that blockchain. These computers can be your laptop, a server somewhere—really anything.

You've probably heard of Bitcoin. Bitcoin is an example of a decentralized digital currency. And the Bitcoin system is a collection of nodes that run Bitcoin code and that store data on its blockchain. If you and I exchange some Bitcoin, this transaction gets written to this distributed database. And because it's decentralized, there's no single party controlling it. This means, for all practical purposes, that given its size, it cannot be censored, tampered with, or taken over by a single party. This is quite in contrast with, say, a database sitting in the cloud. Some hacker could tamper with that database, or a central authority could censor it. Or some disgruntled employee or party could take control of it and shut it down or tamper with it. Bitcoin chooses to use blockchain to safeguard its decentralized currency, but there are many other uses of blockchain, and many uses that we're still figuring out.

What would blockchain mean for proving your identity? This is exactly the spirit behind decentralized identities in the Microsoft Identity Platform. Imagine the ramifications where you, the consumer, hold your identity and choose to share it with whomever you desire.

What would it mean for social media? Information could be shared in a P2P form, and, for better or worse, would be effectively uncensorable.

This sure makes a lot of governments and people in positions of power quite nervous. Like I said, just as the Internet has had a profound impact on our society, so will blockchain.

Why the name "blockchain?" A typical database has data in rows and columns. In-memory databases can store objects, similar to JSON. But blockchain collects information together in groups called blocks. These blocks have a defined storage capacity, and when filled, are closed, given a timestamp, and linked to the previously filled block. This creates a chain of blocks, and therefore we call it a blockchain. This is illustrated in Figure 1.

This also contributes to the irreversible nature of blockchain data. These chains are replicated across various nodes, and changing the history on all these nodes is practically impossible.

Think of blockchain as an immutable ledger that cannot be edited, deleted, or destroyed. An alternate name for blockchain is distributed ledger technology, or DLT.

Figure 1: How blockchain stores data
How Does Blockchain Work?
Blockchain is designed for digital information to be recorded and distributed, but not edited. When a new transaction is entered on the blockchain, the transaction is transmitted to a peer-to-peer network of computers. This network of computers then solves equations to confirm the validity of the transaction. When the transaction is confirmed as a valid transaction, it's entered into blocks. Each block contains data, the hash of the block, and the hash of the previous block it's linked to. As blocks fill, they're chained together, creating a long history of transactions that are permanent.

The details of the data depend on the kind of blockchain. For instance, a payment network, such as Bitcoin, could store the sender, receiver, and the amount of the transaction.

The hash acts as a fingerprint for your block. It verifies the uniqueness and validity of the block. Changing the block causes the hash to change, which means that changing the block data effectively removes it from the chain, making spurious data easy to identify.

Similarly, the hash of the previous block creates a chain of blocks. This means that, like a journaling database or a ledger system, you can't simply edit one link in the chain and expect the chain not to detect the problem.

The first block in a blockchain doesn't point to any previous block, and it's called the genesis block.
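To make this concrete, here's a minimal C# sketch of hash chaining (my own illustration of the idea, not Bitcoin's actual block format): each block's hash covers its data plus the previous block's hash, so altering any block invalidates every block after it.

using System;
using System.Security.Cryptography;
using System.Text;

public record Block(string Data, string PreviousHash)
{
    // The hash covers the data AND the previous hash,
    // which is what links the blocks into a chain.
    public string Hash { get; } = Convert.ToHexString(
        SHA256.HashData(Encoding.UTF8.GetBytes(Data + PreviousHash)));
}

public static class ChainDemo
{
    public static void Main()
    {
        var genesis = new Block("genesis", "");  // no previous block
        var b1 = new Block("Alice pays Bob 5", genesis.Hash);
        var b2 = new Block("Bob pays Carol 2", b1.Hash);

        // Tamper with b1's data: its hash changes, so b2's
        // PreviousHash no longer matches and the chain is broken.
        var tampered = new Block("Alice pays Bob 500", genesis.Hash);
        Console.WriteLine(b2.PreviousHash == tampered.Hash);  // False
    }
}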
Now, you may be thinking that new computers, such as that fancy M1 Ultra Mac, are pretty fast. And Bitcoin is about 324GB of data. So why can't I just completely recalculate the entire blockchain's hashes and make my invalid blockchain valid? You could. But there's an additional layer of protection called "proof of work." This effectively slows down the creation of a new block. For instance, in Bitcoin, it takes 10 minutes to calculate the proof of work for each block and add it to the blockchain. This means that editing the entire Bitcoin blockchain is impractical. To put numbers to it, the Bitcoin block size is around 1MB and the Bitcoin blockchain is 324GB, which translates to more than six years to tamper with the entire Bitcoin blockchain at its current size.
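A toy version of proof of work (again my own sketch; Bitcoin's real difficulty adjustment is far more involved) shows why this slowdown works: you must grind through nonces until the hash meets an arbitrary target, yet anyone can verify the winning nonce with a single hash.

using System;
using System.Security.Cryptography;
using System.Text;

public static class ProofOfWorkDemo
{
    // Find a nonce so that SHA-256(data + nonce) starts with
    // 'difficulty' zero hex digits: costly to find, cheap to verify.
    public static long MineNonce(string data, int difficulty)
    {
        string target = new string('0', difficulty);
        for (long nonce = 0; ; nonce++)
        {
            string hash = Convert.ToHexString(
                SHA256.HashData(Encoding.UTF8.GetBytes(data + nonce)));
            if (hash.StartsWith(target)) return nonce;
        }
    }

    public static void Main()
    {
        // Each extra zero digit multiplies the expected work by 16.
        Console.WriteLine(MineNonce("Alice pays Bob 5", 5));
    }
}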
Yet another mechanism to keep things secure is the fact that a blockchain is distributed. This means that anyone who joins the blockchain gets a full copy of the entire blockchain. So, if someone creates a new block, that block is sent to everyone on the network, and everyone verifies it as a legit block belonging to the existing blockchain. This is then added to the existing blockchain on everyone's network, thereby creating a consensus among everyone.

To tamper with a blockchain, you'd have to recalculate the entire blockchain, get over the proof of work hurdle, and tamper with this blockchain on at least 50% of the P2P network. As you can imagine, this is nearly impossible to do on a mature network such as Bitcoin. Again, to give it some numbers, the Bitcoin network has close to 15,000 nodes; you can see an up-to-date number at https://bitnodes.io/.

Applications for Blockchain
At its heart, blockchain is a distributed ledger database, designed to record and distribute, but not tamper. This means that any application needing wide dissemination of data that can be trusted is a great application for it. Here are some examples of blockchain being used today.

Cryptocurrencies
Imagine that you're in a group of four friends and you frequently exchange money. Maybe you go out for dinner and one of you foots the bill, for instance. The other three owe that person some money. How does this work?

Well, you maintain a record, or a ledger, of who owes whom how much. Then, there is mutual trust. You have trust, and records, and this is how you can continue to loan each other money.

You already know that blockchain can give you an untamperable ledger database. But really, anyone can write anything to it. What prevents me from writing, randomly, that Alice owes you money? This is where trust comes in. For that, you add cryptography to it and you get trust. This is the foundation of cryptocurrencies. Anyone can write to the ledger around who owes whom how much, but it isn't until the data is signed with a private key that the transaction is trustable. And the public key allows anyone to verify that the transaction is legit. The signature, in fact, requires your private key, the entry ID, and the contents of the message, so the signature is distinct for each transaction.
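The signing mechanics described here are ordinary public-key signatures. A hedged C# sketch using .NET's built-in ECDsa (Bitcoin's actual secp256k1 scheme differs in the details): the private key signs the entry, and the public key alone verifies it.

using System;
using System.Security.Cryptography;
using System.Text;

public static class LedgerSignatureDemo
{
    public static void Main()
    {
        using ECDsa signer = ECDsa.Create();  // private + public key pair

        // Include an entry ID in the payload so the signature is
        // distinct for each transaction, even if the text repeats.
        byte[] entry = Encoding.UTF8.GetBytes("entry-42|Alice owes Bob $20");
        byte[] signature = signer.SignData(entry, HashAlgorithmName.SHA256);

        // Anyone holding only the PUBLIC key can verify the entry.
        using ECDsa verifier = ECDsa.Create();
        verifier.ImportSubjectPublicKeyInfo(
            signer.ExportSubjectPublicKeyInfo(), out _);

        Console.WriteLine(verifier.VerifyData(
            entry, signature, HashAlgorithmName.SHA256));  // True

        // A tampered entry fails verification.
        byte[] forged = Encoding.UTF8.GetBytes("entry-42|Alice owes Bob $2000");
        Console.WriteLine(verifier.VerifyData(
            forged, signature, HashAlgorithmName.SHA256));  // False
    }
}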

There are many cryptocurrencies, such as Bitcoin, Ethereum, Ripple, Litecoin, and more. In fact, there are more than 1600 cryptocurrencies, each with their own unique characteristics. They're all implementations of blockchain.

On top of that, this virtual currency is tied to a real currency to start with. So, let's say, I buy $100 worth of SahilCoin. I know this sounds funny, but I could set up a new coin by that name and we could start using it between friends <wink wink>. Now, let's say that I can't transact more than $100 worth of SahilCoin. This means that I can't keep generating signed transactions on the ledger once I exceed my allotted value of $100. Within that limit, I can continue to settle transactions on this SahilCoin network. I don't need banks or wire transfer fees or intermediaries. Even so, the money has real value. In fact, currencies such as Bitcoin can be converted back and forth to real cash, with the convenience of an ATM. Yes, there are physical ATMs where you can buy and sell Bitcoin for cash. Of course, you can do so virtually as well.

This is where things get interesting and what differentiates one coin from another. When an entry is made to the blockchain, theoretically speaking, everyone should be able to verify and agree to that entry. But in reality, there are transmission delays on a P2P network. Different cryptocurrencies use different protocols to balance between speed and reliability. Additionally, cryptocurrencies such as Ethereum allow the deployment of smart contracts and decentralized applications, which are bits of code that can execute and release cryptocurrency when certain conditions are met. Both Bitcoin and Ethereum currently use proof of work as their consensus protocol, but Ethereum will soon move to proof of stake, which will allow it to be much more scalable, secure, and much more energy efficient.
In fact, this is one of the major criticisms of Bitcoin. When any transaction is done on Bitcoin, and it spreads through the P2P network and is verified and recorded, all those computers are consuming lots and lots of energy. It's estimated that Bitcoin today consumes 131TWh of energy annually, enough to power Ukraine and Egypt. Much of this energy is created from non-renewable sources, which produces a terrible amount of greenhouse gases. On top of that, the massive computation requirements mean that you need to have powerful hardware, which gets outdated very quickly, so it produces tons of e-waste. In fact, Tesla started allowing customers to buy cars using Bitcoin, and then reversed their stance due to environmental concerns.

Even though the name says "coin," there is no need for a physical coin involved here. It's not backed or issued by central banks or governments; it's backed by blockchain, and an equivalent cash transaction value that fluctuates based on demand. The currency amount on Bitcoin tokens is kept using public and private keys. The public key is sort of your bank account number, and can serve as an address that you use to send or receive Bitcoin. The private key is something you must never share. It's what you use to authorize Bitcoin transmissions. A Bitcoin wallet, on the other hand, is a secure digital store of information that keeps track of the ownership of your coins.
Smart Contracts
Smart contracts are tiny computer programs that live on a blockchain and can disburse a payment once the conditions of the contract are met. For instance, if there's a crowdfunding effort involved, money can be collected from many people and be disbursed when minimums are met. Or you can have a collective auction. Or perhaps an inheritance. Think of it as a digital vending machine.

Smart contracts on the Ethereum blockchain require you to write the contract in a "smart contract language," which has its own very simple syntax, and you need to have enough ETH to deploy your contract. Then you have to pay "gas," which is a unit of computational effort to execute operations on the Ethereum network, to deploy your smart contract. You pay these gas fees in ETH as well.

So as long as you can back your smart contract with ETH and gas money, and write the logic in a simple language and stick it on the Ethereum blockchain, you're in business.

Just read that sentence again. Now you see why I picked this field for work?
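Real contracts are deployed in a contract language such as Solidity, but the "digital vending machine" logic itself is simple. Here's a plain C# sketch of the crowdfunding example above (an illustration of the concept only, not deployable contract code):

using System;
using System.Collections.Generic;
using System.Linq;

// A toy crowdfunding "contract": pledges accumulate, and settlement
// either pays the beneficiary (minimum met) or refunds every backer.
public class CrowdfundContract
{
    private readonly Dictionary<string, decimal> pledges = new();
    public decimal Minimum { get; init; }
    public string Beneficiary { get; init; } = "";

    public void Pledge(string backer, decimal amount) =>
        pledges[backer] = pledges.GetValueOrDefault(backer) + amount;

    // On a real blockchain, every node runs this same logic, so no
    // single party can skip the refunds or release the funds early.
    public void Settle()
    {
        if (pledges.Values.Sum() >= Minimum)
            Console.WriteLine($"Disburse {pledges.Values.Sum()} to {Beneficiary}");
        else
            foreach (var (backer, amount) in pledges)
                Console.WriteLine($"Refund {amount} to {backer}");
    }
}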
NFTs
For such a long time, we've been used to physical objects that are hard to replicate. There is the Mona Lisa, for instance. I'm sure copies exist, but there's only one original, and there's a way to protect it and detect counterfeits. This is so much harder in the digital world.

Being digital creates so many issues. It serves as a disincentive to produce original work, because people just copy it. Just think of all the memes floating around. There's no payment mechanism to the original creator as their creation goes viral. And most of all, the recipient has a hard time trusting what they see and whether they can trust it to be original. For instance, a doctored video of the president saying silly stuff on social media could have serious ramifications. Wouldn't it be nice if the White House could somehow stamp something as original and untampered with, and, just like you verify a site using SSL, you could verify the originality of content? This is exactly the problem NFTs solve.

NFTs stands for non-fungible tokens. In our digital world, people copy everything. All it takes is a bunch of keystrokes, and your digital artwork is now mine with a screenshot. An NFT is simply any binary data put on the blockchain. Given the characteristics of a blockchain—the fact that it's immutable and independently verifiable—now you can prove the originality of any work. This has pretty significant ramifications for ownership rights.

NFTs don't have to be just images. They can be anything. They can be, for instance, an audio file, a video, or any other kind of digital artwork. They can be domain names, concert tickets, objects in the metaverse, really, anything that is digital or can have a digital representation. Do you remember a game called FarmVille, where you could buy/sell stuff? McDonald's just bought some real estate in the metaverse. Strange times we live in.

McDonald's just bought real estate in FarmVille.

You can even buy real estate—yes, a house you can live in—using NFTs. Remember, you're proving ownership. What's your house's current ownership? Who has the deed? Well, how is that ownership proof not digital? Of course, there are still issues to be worked out, such as governments recognizing Bitcoin as official currency or NFTs as a valid equivalent of a deed. But you can link Bitcoin to dollars and NFTs to deeds.

Really, anything can be sold as an NFT. Jack Dorsey, the founder of Twitter, sold his first tweet as an NFT for 2.9 million USD (https://v.cent.co/tweet/20). You can see this in Figure 2. Honestly, I'd be willing to sell mine for a lot less.

Figure 2: The $2.9 million tweet
This NFT's ownership is now sinaEstavi's, and sinaEstavi can, in the future, choose to sell it to someone else. The transaction to sell the NFT itself can be stored on the blockchain, which proves ownership transfer, again, in a verifiable, non-tamperable manner. Of course, if sinaEstavi can't find any buyers, this NFT becomes worthless, much like objects in the real world. After all, what's so special about the Mona Lisa?

You may be thinking that typical blocks in blockchain are small. For instance, the Bitcoin block size is just 1MB. In a world where your phone takes 108MP images, how do you efficiently store them on blockchain? The answer is that the entry on the blockchain contains a unique fingerprint (hash), the token name and symbol, a pointer to the previous block, and a link to the file on IPFS. IPFS stands for interplanetary file system. And it looks like ipfs://somelongweirdstring. Usually when you try to get a file, it looks like https://location. That's called location-based addressing.

Here in IPFS, you're using ipfs://hash and providing a hash of the content you're interested in. This is content-based addressing. This link points you to both the file and the metadata of the file at ipfs://somelongweirdstring/metadata.json. There are Python and Node packages that let you decipher this metadata easily, although you can also just visit https://ipfs.io. When you request an IPFS file, anyone on the blockchain network who has a copy of that file will return the file to you, and given that it's built on blockchain, you can use the hash to ensure that the file hasn't been tampered with.
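Content addressing is easy to simulate. In this C# sketch (simplified: real IPFS uses multihash-encoded CIDs, not raw SHA-256 hex strings), the lookup key is derived from the bytes themselves, so whichever node serves the file, the requester can prove it's untampered:

using System;
using System.Collections.Generic;
using System.Security.Cryptography;
using System.Text;

public static class ContentAddressingDemo
{
    // Stand-in for the P2P network: content indexed by its own hash.
    private static readonly Dictionary<string, byte[]> network = new();

    public static string Store(byte[] content)
    {
        string address = Convert.ToHexString(SHA256.HashData(content));
        network[address] = content;
        return address;  // "ipfs://<hash>" in spirit
    }

    public static byte[] Fetch(string address)
    {
        byte[] content = network[address];
        // Any untrusted node could have served these bytes;
        // re-hashing them proves they match the requested address.
        if (Convert.ToHexString(SHA256.HashData(content)) != address)
            throw new InvalidOperationException("Content was tampered with.");
        return content;
    }

    public static void Main()
    {
        string address = Store(Encoding.UTF8.GetBytes("a 108MP photo"));
        Console.WriteLine(Encoding.UTF8.GetString(Fetch(address)));
    }
}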
Also, the metadata can point to a location that is a simple HTTPS URL or any other mechanism that stores data off the blockchain. This is called off-chain storage, which allows you to have a hybrid of blockchain and old-style technologies and gain the best of both worlds.

One interesting thing about IPFS is that the actual file doesn't need to be stored by everyone. In fact, the creators of IPFS have created a blockchain called Filecoin, and if you have extra space on your hard disk and some Internet bandwidth to share, you can become an IPFS node and start serving files to the world, and yes, earn Filecoin in return. Filecoin can then be traded for other cryptocurrencies, or for fiat currencies, which you can then use at your local fast-food restaurant. Isn't this amazing? You set up an IPFS node and start making passive income. You can track the price of Filecoin at https://www.coindesk.com/price/filecoin/.

This is perhaps why it's also called the "interplanetary" file system. Imagine if humans moved to Mars and wanted to load https://somesite.com. A roundtrip to Earth would take forever. But the Martian humans could effectively replicate the file on the IPFS locally. Finally, we can surf for memes on Mars.

NFTs only represent ownership. Think of it as digital bragging rights. But anyone can download the NFT and even copy it. Except you can always differentiate the original from the copy. NFTs, therefore, also allow concepts such as the buyer owning the original, but the creator owning the copyright or reproduction rights.

In a sense, NFTs are nothing but a smart contract. The contract stores unique properties of the item and keeps track of current and previous owners.
You can program this contract to pay the owner if it changes hands. Or, for that matter, pay all previous owners, or just the creator, every time it changes hands, much like a royalty.

Here you are, reading this article. Imagine if this article were an NFT, and every reader who read it sent a royalty of 25 satoshi (the smallest unit of Bitcoin) back to the author. One bitcoin has 100 million satoshis, and 25 satoshi roughly translates to one cent. This transaction could be done via a smart contract, with no middlemen, transaction fees, or even considerable delays. But this kind of microtransaction has historically been nearly impossible to replicate using the conventional banking system. Can I write a blog post where every reader pays me 0.0001 cent? Not in the traditional way. With NFTs or crypto, this is possible. And you do see examples of crypto tip jars already on so many sites. What prevents us from taking the next step and unlocking smart contracts on content? I am sure it will happen, if it isn't already.

I have no misgivings about the capability and risks of this technology. This transaction could cross borders, avoid the banking system altogether, even circumvent governments. It makes a lot of people in power very nervous. Sure, this has positives and negatives, and for sure, like any technology, has potential for misuse or great use.

This technology, I am sure, is being used to transport illicit money, and governments are trying hard to catch up to it, but there are some really good uses for it too. Imagine someone trying to flee their country because their country is at war. Carrying money as crypto is as simple as memorizing a phrase. The same goes for information. For instance, in 2017, the Turkish government blocked access to Wikipedia as anti-government. People just put Turkish Wikipedia on IPFS, and good luck blocking that. Another example is https://d.tube/, which is just like YouTube, but built on blockchain. This means no ads, no censorship, and you can't delete videos.

Decentralized Identities
Identity is an interesting problem. Many years ago, it was a username/password. We then started coming up with better mechanisms for proving that you had the right password. Over time, we realized that it was not only hard to keep this secure, but it was also inflexible. So we created standards like OpenID Connect, where we delegate or federate the process of proving an identity to a well-known identity provider, like Twitter, Facebook, Google, Microsoft, etc.

It's still not ideal. Why should Facebook or Google or Microsoft hold all my data? Remember, an identity is tied to your profile. Let's say you go to a doctor and have to prove who you are. Today, you show a driver's license or similar form of identification. I've always feared what would happen if I lost my identification when travelling and TSA wouldn't let me on the plane. Could I log into my Azure AD account to prove who I am? It's certainly enough for my employer, so why isn't it enough for TSA or my doctor?

Well, it may be enough, but that requires both of you trusting Microsoft to hold your profile, and the government and its regulations allowing Microsoft to hold every citizen's personal information.
Yeah, not gonna happen.

This is where decentralized identities come in. Put simply, they are identity information that you control, backed by an attestation authority. And you control what bit of information you share with whom. For instance, when visiting a doctor, they don't need to know your salary. Or while going through TSA, they don't need to know about that wart that itches weird. This is where you can hold your distributed identity in a wallet, and you share what you deem worth sharing, yet no central authority holds all your profile information. You do.

Other uses
The blockchain technology is nascent, but a number of future uses and implementations will occur. How about voting? What's more important than being able to trust our voting system? How about notaries? How about tracking goods and shippable items that aren't tied to a single vendor's tracking system? Perhaps medical records that you, the patient, control and share as and when necessary, in whatever manner necessary, with your doctor? Vaccination records, taxation records, so much more. Like many other technologies, the technology runs far ahead and then the regulators catch up. We're entering an interesting time when governments are launching cryptocurrency equivalents of their fiat currencies. What does that mean for the traditional banking system?

Ethereum Blockchain in JavaScript
Sometimes it's important to understand the mechanics and reasoning behind any new technology before we roll up our sleeves and see it in action. The good news is that, like almost anything else, a lot of the hard work has been done for you in reusable libraries.

Let's see a quick example of listening to transactions happening on the Ethereum blockchain. I'm going to do this in JavaScript, although equivalent libraries exist for other languages as well.

For JavaScript, there exists a well-published JavaScript API called web3.js at https://web3js.readthedocs.io/. This API can be used in both HTML and NodeJS. I'll use it in HTML. Go ahead and set up a simple HTML page, and reference this script. This can be seen in Listing 1.

Listing 1: Our starter page with web3.js

<!doctype html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Ethereum demo</title>
<script
src="https://cdn.jsdelivr.net/npm/web3@latest/dist/web3.min.js">
</script>
</head>
<body>
</body>
</html>

In order to use web3.js, the first thing you need to do is initialize an instance of Web3. Ethereum libraries communicate through RPC calls. You could host one on your computer if you were running a NodeJS process on localhost at some port, or you could use a free, hosted version on Cloudflare. Because I'm running this on an HTML page, I'll use Cloudflare.

const web3 = new Web3("https://cloudflare-eth.com");

I want to initialize two variables: one that holds the latest block number I receive from Ethereum, and the other that acts as a time interval where I listen for new blocks mined on the blockchain every five seconds.
let latestBlockNumber = -1;
let timeInterval = 5000;

Next, create a basic definition of a function that will get called every five seconds, and it will receive a block from Ethereum.

async function checkCurrentBlock() {
  const currentBlockNumber =
    await web3.eth.getBlockNumber();
  setTimeout(checkCurrentBlock, timeInterval);
}

Let's finish up this logic by writing some processing logic. I'm simply writing out the block I receive from Ethereum. This can be seen in Listing 2.

Listing 2: Logic to process Ethereum blocks

async function checkCurrentBlock() {
  const currentBlockNumber =
    await web3.eth.getBlockNumber();
  console.log(
    "Current blockchain top: " + currentBlockNumber,
    " latest Block Number: " + latestBlockNumber)
  while (latestBlockNumber == -1 ||
      currentBlockNumber > latestBlockNumber) {
    await processBlock(
      latestBlockNumber == -1 ?
      currentBlockNumber : latestBlockNumber + 1);
  }
  setTimeout(checkCurrentBlock, timeInterval);
}

async function processBlock(blockNumber) {
  console.log("Processing block: " + blockNumber)
  latestBlockNumber = blockNumber;
}

Go ahead and run this HTML page by double-clicking on it. If you've loaded this up in a reasonably modern browser, it should show you a blank page. Open developer tools, and you should see Ethereum blocks scrolling. This can be seen in Figure 3.

Figure 3: The latest blocks on Ethereum

SPONSORED SIDEBAR: Need FREE Project Advice? CODE Can Help!
Get no strings, free advice on new or existing software development projects. CODE Consulting experts have experience in cloud, Web, desktop, mobile, microservices, containers, and DevOps projects. Schedule your free hour of CODE call with our expert consultants today. For more information, visit www.codemag.com/consulting or email us at info@codemag.com.
Summary
We're gradually but surely entering a new phase of computing. Web 3.0 technologies such as blockchain are going to have a profound impact on everything we do. Think how life was before the Internet. We had cold-war USSR, complete with an iron curtain, and a whole country with millions of people living in an alternate reality. Then came the Internet, information was free, and people saw things that were hard to believe but were real.

I remember, as a kid, when I first saw photographs of the USA. I had a very hard time believing a place like that existed on this planet. The only reason I trusted the information shown to me is because I trusted that a glossy photograph I was holding in my hand could not be faked. At least, using the technology around me at that time, it was not possible to create a real-world replication on a photograph so perfect, on glossy paper, that I could hold in my hand. I trusted that information, even though I didn't believe it. I was five years old, and you must keep in mind that India was quite socialist back then. Information was very controlled; forget the Internet—we had no phones, and everything on TV and radio was extremely censored and controlled. Those memories were etched in my mind even without reinforcement for many years. Sometimes, rarely, I'd catch a glimpse of it on TV. But it wasn't until many years later that I got to see all this and touch it, first hand. Honestly, I still have a hard time believing a lot of it. Maybe this is a dream and I'm in a metaverse. Who knows?

Governments are trying hard to nail down information, as are corporations. But information wants to be free. The more you control it, the more people seek it. And it isn't just about information. Can you imagine surviving today without the Internet? As romantic as being "off the grid" sounds, how many of us actually do it, or even could do it?

With incredible connectivity, ample storage, and cheap computing power, this revolution is just starting. I can't wait to see how it will change our world, in some ways better, in some ways worse.

Like any tool—a hammer, a car, or nuclear energy—mastering it and using it willfully will make the difference between riding the car, or the car riding us.

Where will you be on this drive? I hope in the driver's seat.

Sahil Malik
ONLINE QUICK ID 2205031

Create Maintainable Minimal Web APIs

It's very easy to get started using Minimal Web APIs in .NET 6, but as the number of routes grows, your Program.cs file can easily become overwhelming to maintain. Instead of keeping all your app.Map*() methods in the Program.cs file, you should create a Router class to separate your groups of app.Map*() methods into. For example, if you have a set of CRUD routes for working with products and another set for working with customers, create a ProductRouter class and a CustomerRouter class. In this article, you're going to see how to move each group of Web APIs into its own router class to provide a much more consistent and maintainable way to create Minimal Web API calls.

Paul D. Sheriff
www.pdsa.com

Paul has been in the IT industry over 35 years. In that time, he has successfully assisted hundreds of companies architect software applications to solve their toughest business problems. Paul has been a teacher and mentor through various mediums such as video courses, blogs, articles, and speaking engagements at user groups and conferences around the world. Paul has many courses in the www.pluralsight.com library (http://www.pluralsight.com/author/paul-sheriff) on topics ranging from .NET 6, LINQ, JavaScript, Angular, MVC, WPF, ADO.NET, jQuery, and Bootstrap. Contact Paul at psheriff@pdsa.com.

A Sample Minimal Web API
Let's look at a simple Minimal Web API system that works with product data. You normally have a Product class with basic properties such as ProductID, Name, Color, and ListPrice, as shown in the following code snippet.

public partial class Product {
  public int ProductID { get; set; }
  public string Name { get; set; }
  public string Color { get; set; }
  public decimal ListPrice { get; set; }
}

In the Program.cs file, you write an app.MapGet() method to return a set of Product objects. In this example, I'm using hard-coded Product objects, whereas in a real application, you'd most likely use the Entity Framework to retrieve these from a database table.

// Get a collection of data
app.MapGet("/product", () => {
  return Results.Ok(new List<Product> {
    new Product {
      ProductID = 706,
      Name = "HL Road Frame - Red, 58",
      Color = "Red", ListPrice = 1500.00m
    },
    new Product {
      ProductID = 707,
      Name = "Sport-100 Helmet, Red",
      Color = "Red", ListPrice = 34.99m
    }
  });
});

You then have the rest of your app.Map*() methods that retrieve a single product, post a new product, update an existing product, and delete a product, as shown in Listing 1. For a great primer on Minimal APIs, check out Shawn Wildermuth's article entitled "Minimal APIs in .NET 6" in the last issue of CODE Magazine at https://codemag.com/Article/2201081/Minimal-APIs-in-.NET-6.

This is a lot of code in the Program.cs file for just working with products. You can imagine how that code grows as you add the same CRUD logic for customers, employees, books, or whatever other tables you have in your database. Let's now look at how to make this code more manageable.

Create a Web API Project
To get the most out of this article, I suggest that you follow along with the steps as I outline them. You need to install .NET 6 on your computer, which you can get at https://dotnet.microsoft.com/en-us/download/dotnet/6.0. You also need either VS Code (https://code.visualstudio.com/download) or Visual Studio (https://visualstudio.microsoft.com/downloads). If you wish to use VS Code for your editor and your application creation, use the following section for guidance. If you wish to use Visual Studio 2022, skip to the next section for guidance.

Using VS Code
Open VS Code in the top-level folder where you normally create your projects (for example D:\MyVSProjects). Select Terminal > New Terminal from the menu. In the terminal window in VS Code, create a .NET 6 Web API app using the following dotnet command:

dotnet new webapi -minimal -n AdvWorksAPI

Select File > Open Folder… from the menu, navigate to the new AdvWorksAPI folder created by the above command, and click the Select Folder button.

Add Required Assets
At the bottom right-hand corner of Visual Studio Code, you should see a warning bar appear (Figure 1). This tells you that you need to add some required assets. Click the Yes button.

Figure 1: Be sure to answer Yes when prompted to add required assets.

This warning box can take a minute to appear; either be patient and wait for it, or you can run a build task by selecting Terminal > Run Build Task… > build from the menu bar.

Save the Workspace
Click File > Save Workspace As… and give it the name AdvWorksAPI. Click the Save button to store this new workspace file on disk.

Using Visual Studio 2022
If you prefer to use Visual Studio 2022 for your development, start an instance of Visual Studio 2022 and select Create a new project from the first screen. Select the ASP.NET Core Web API project template and click the Next button. Set the Project name field to AdvWorksAPI and the Location field to the folder where you generally create your projects, and click the Next button. From the Framework dropdown list (Figure 2) choose .NET 6.0 (Long-term support). From the Authentication type dropdown list choose None. Uncheck the Use controllers (uncheck to use minimal APIs) field and click the Create button.

Figure 2: Use Visual Studio 2022 to create a new ASP.NET Core Web API project.


Listing 1: A very basic set of Minimal Web APIs

// GET a single row of data
app.MapGet("/product/{id:int}", (int id) =>
{
  // Simulate returning data from the
  // data store with the specified ID
  return Results.Ok(new Product {
    ProductID = id,
    Name = "New Bicycle",
    Color = "Red",
    ListPrice = 1500.00m
  });
});

// INSERT new data
app.MapPost("/product", (Product prod) =>
{
  // TODO: Insert into data store

  // Return the new object
  return Results.Created(
    $"/product/{prod.ProductID}", prod);
});

// UPDATE existing data
app.MapPut("/product/{id:int}", (int id,
  Product entity) =>
{
  IResult ret;

  // TODO: Look up data by the id
  Product current = Get(id);

  if (current != null) {
    // Update the entity with new data
    current.Name = entity.Name;
    current.Color = entity.Color;
    current.ListPrice = entity.ListPrice;

    // TODO: Update data store

    // Return the updated entity
    ret = Results.Ok(current);
  }
  else {
    ret = Results.NotFound();
  }

  return ret;
});

// DELETE a single row
app.MapDelete("/product/{id:int}",
  (int id) =>
{
  IResult ret;

  // TODO: Look up data by the id
  Product entity = Get(id);
  if (entity != null) {
    // TODO: Delete data from the data store

    // Return NoContent
    ret = Results.NoContent();
  }
  else {
    ret = Results.NotFound();
  }

  return ret;
});

Try It Out
Whether you've used VS Code or Visual Studio 2022, press F5 to build the Web API project and launch a browser. If you get a dialog box that asks if you should trust the IIS Express certificate, select Yes. In the Security Warning dialog that appears next, select Yes. If you get an error related to privacy and/or HTTPS, open the \Properties\launchSettings.json file and remove the "https://..." from the applicationURL property.

If you're using Visual Studio 2022, you should be presented with a Swagger home page (Figure 3) that allows you to immediately try out the weather forecast API that's included as a sample in the Program.cs file. If you're using VS Code, you get a 404 error that the page cannot be found. Type in the URL http://localhost:nnnn/swagger/index.html (where nnnn is your port number) and the Swagger home page (Figure 3) should be displayed.

Figure 3: The Swagger home page allows you to try out your API calls.

Create Entity Classes Folder
It's always a best practice to group similar classes into a namespace, a folder, and/or a class library project. Right-mouse click on the AdvWorksAPI project and add a new folder named EntityClasses. Create the Product.cs class in this new folder and add the following code to this new file. Feel free to create a separate class library to contain all your entity classes if you wish.

#nullable disable

namespace AdvWorksAPI {
  public partial class Product {
    public int ProductID { get; set; }
    public string Name { get; set; }
    public string Color { get; set; }
    public decimal ListPrice { get; set; }
  }
}

If you're wondering about the #nullable disable directive at the top of the file: .NET 6 projects enable nullable reference types by default, so non-nullable string properties must either be initialized (for example, in the constructor) or be declared as nullable strings. If you don't wish to use this behavior, include this directive at the top of your file.
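If you'd rather leave nullable reference checking turned on, two common alternatives (my suggestion; the article itself uses the directive) are to declare the strings as nullable or to give them non-null defaults:

// Option 1: declare the property as a nullable string.
public string? Name { get; set; }

// Option 2: initialize with a non-null default instead.
public string Name { get; set; } = string.Empty;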


Create Router Base Class
Any time you're going to create a set of classes that perform the same basic functionality, it's a great idea to create either an interface or a base class to identify those methods and properties that should be in each class. For each of the router classes, you should have a public property called UrlFragment that identifies the first part of the path to your API. In the example shown in Listing 1, the value /product was repeated many times in each of the app.Map*() methods. This is the value that you're going to put into the UrlFragment property. If you have another router class, Customer for example, you place the value /customer into this UrlFragment property.

At some point, you might wish to log messages or errors as your Web API methods are called. Include a protected property named Logger of the type ILogger in this base class. The property is to be injected into either the constructor of your router classes or injected into just those methods that need it.

A single public method, AddRoutes(), is needed in order to initialize the routes for each router class. This method is called from the Program.cs file to initialize the routes you previously created in the Program.cs file. You're going to see the use of this method as you work your way through this article.

Right-click on the AdvWorksAPI project and add a new folder named Components. Add a new file named RouterBase.cs and add the code shown in the following code snippet.

#nullable disable

namespace AdvWorksAPI {
  public class RouterBase {
    public string UrlFragment;
    protected ILogger Logger;

    public virtual void AddRoutes(
      WebApplication app) {
    }
  }
}

Create Product Router Class
As mentioned before, it's a good idea to group similar classes together into a folder and/or a class library. Add a folder under which all your router classes are created. Right-click on the AdvWorksAPI project and add a new folder named RouterClasses. Add a new file named ProductRouter.cs in this new folder and add the following code into this new file:

namespace AdvWorksAPI {
  public class ProductRouter : RouterBase {
    public ProductRouter() {
      UrlFragment = "product";
    }
  }
}

You can see that this ProductRouter class inherits from the RouterBase class. It sets the UrlFragment property to the value "product" because that's going to be used for the endpoint for all your mapping methods. Setting this property once helps you eliminate repeated code and gives you one place to change your route name should you desire.
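To see how the pattern scales, here's what a hypothetical CustomerRouter might look like (my own sketch following the same convention; the article builds out only the product router):

namespace AdvWorksAPI {
  public class CustomerRouter : RouterBase {
    public CustomerRouter() {
      // All customer endpoints hang off /customer.
      UrlFragment = "customer";
    }

    // Override AddRoutes() and add Get()/Post() methods here,
    // just as you're about to do in ProductRouter.
  }
}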
Get All Products
Add a protected virtual method to the ProductRouter class named GetAll() to return a collection of Product objects, as shown in Listing 2. I'm using a hard-coded collection of Product objects here just so you can see the Minimal API in action without having to worry about connecting to a database.

Listing 2: The GetAll() method returns a collection of product objects.

/// <summary>
/// Get a collection of Product objects
/// </summary>
/// <returns>A list of Product objects</returns>
protected virtual List<Product> GetAll() {
  return new List<Product> {
    new Product {
      ProductID = 706,
      Name = "HL Road Frame - Red, 58",
      Color = "Red",
      ListPrice = 1500.0000m
    },
    new Product {
      ProductID = 707,
      Name = "Sport-100 Helmet, Red",
      Color = "Red",
      ListPrice = 34.9900m
    },
    new Product {
      ProductID = 708,
      Name = "Sport-100 Helmet, Black",
      Color = "Black",
      ListPrice = 34.9900m
    },
    new Product {
      ProductID = 709,
      Name = "Mountain Bike Socks, M",
      Color = "White",
      ListPrice = 9.5000m
    },
    new Product {
      ProductID = 710,
      Name = "Mountain Bike Socks, L",
      Color = "White",
      ListPrice = 9.5000m
    }
  };
}

Create a Get() Method to Return the IResult
Next, create a method named Get() that returns an IResult object because that's what's expected from a Minimal API. The Get() method uses the Results.Ok() method to return a status code of 200, signifying that the method was successful. The list of Product objects is returned to the calling program, wrapped within this result object.

/// <summary>
/// GET a collection of data
/// </summary>
/// <returns>An IResult object</returns>
protected virtual IResult Get() {
  return Results.Ok(GetAll());
}

Create Method to Add Product Routes
You need to inform the Web API engine that this Get() method is an endpoint. To accomplish this, override the AddRoutes() method in the ProductRouter class as shown in the following code snippet:


/// <summary>
/// Add routes
/// </summary>
/// <param name="app">A WebApplication object</param>
public override void AddRoutes(WebApplication app)
{
  app.MapGet($"/{UrlFragment}", () => Get());
}

The AddRoutes() method calls the app.MapGet() method using the WebApplication app variable passed in from the Program.cs file. The first parameter to the MapGet() method is the route name the user sends the request to, such as http://localhost:nnnn/product or http://localhost:nnnn/customer.

Use string interpolation to build this endpoint from the UrlFragment property. The second parameter to app.MapGet() is the method to produce some result. This method returns the IResult object from the Get() method you just wrote. If an IResult isn't returned from the method, the app.MapGet() method automatically wraps the return value into a Results.Ok() object.

Modify Program.cs to Call the AddRoutes() Method
Now that you've created your ProductRouter class, it's time to try it out. Open the Program.cs file and remove all variables and methods related to the weather forecast API. Toward the bottom of the file, just above the app.Run() method call, add the code to instantiate the ProductRouter class and call the AddRoutes() method. Make sure to pass in the instance of the WebApplication object contained in the app variable.

//*********************************************
// Add Product Routes
//*********************************************
new ProductRouter().AddRoutes(app);
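Each new router adds just one more line here. If you'd prefer Program.cs to never change at all, one possible extension (my own sketch, not part of the article) is to discover every RouterBase subclass with reflection; this compiles under the project's implicit usings:

// Find all RouterBase-derived classes in this assembly
// and let each one register its own routes.
foreach (var type in typeof(RouterBase).Assembly.GetTypes()
    .Where(t => t.IsSubclassOf(typeof(RouterBase)) && !t.IsAbstract))
{
    var router = (RouterBase)Activator.CreateInstance(type)!;
    router.AddRoutes(app);
}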

Try It Out
Run the application and, on the Swagger home page, click on the Get button for the /product path. Click on the Try it Out button and click on the Execute button. You should see the list of products you created in the Get() method appear as JSON in the Response body section of the Swagger page, as shown in Figure 4.

Congratulations! You're now on your way to creating a more maintainable approach to Minimal API development. Let's add more functionality in the ProductRouter class to give you a complete CRUD API.

Figure 4: The product route now appears on the Swagger page.

Get a Single Product
In addition to retrieving all products, you're most likely going to need to retrieve a single product. Add an overload of the Get() method to the ProductRouter class that accepts a single integer value named id. Use this id variable to search in the Product collection for where the id value matches one of the ProductID property values. The Product object that's located is returned from this method wrapped within the Results.Ok() object. If the id value isn't found, a Results.NotFound() is returned, which is reported as a 404 Not Found status code back to the calling program.

/// <summary>
/// GET a single row of data
/// </summary>
/// <returns>An IResult object</returns>
protected virtual IResult Get(int id) {
  // Locate a single row of data
  Product? current = GetAll()
    .Find(p => p.ProductID == id);
  if (current != null) {
    return Results.Ok(current);
  }
  else {
    return Results.NotFound();
  }
}


Modify AddRoutes() Method

Now that you have the new method to retrieve a single product, you need to add this new route. Locate the AddRoutes() method and add a call to app.MapGet(), as shown in the code below. The first parameter is built using the UrlFragment property, followed by a forward slash (/), and then the variable name and data type enclosed within curly braces, so it looks like {id:int}. Because you're using string interpolation, you must escape the curly braces by adding an extra opening and closing brace around this value.

public override void AddRoutes(WebApplication app)
{
  app.MapGet($"/{UrlFragment}", () => Get());
  app.MapGet($"/{UrlFragment}/{{id:int}}",
    (int id) => Get(id));
}
Try It Out

Run the application and, on the Swagger home page, click on the Get button for the /product/{id} path. Click on the Try it Out button, type the number 706 into the id field (Figure 5), and click on the Execute button. You should see the first product from the product collection appear as JSON in the Response body section of the Swagger page. Type an invalid number into the id field, such as 999, and you should see a 404 Not Found status code returned.

Figure 5: Swagger allows you to enter an ID to call the Get(id) method.

Insert a Product

Now that you have a way to read data from your Web API, it's time to write code to insert, update, and delete data. Write a method in the ProductRouter class named Post() to which you pass a Product object to insert into your data store. Obviously, you don't have a data store in this simple example, but I'm sure you can easily extrapolate how this would work when using the Entity Framework. To simulate this process, calculate the maximum ProductID used in the collection, add one to this value, and assign this to the ProductID property of the Product object passed in. This is like what happens if you have an identity property on the ProductID field in a SQL Server Product table.

From a POST method, it's standard practice to return a 201 Created status code when a new value is added to the data store. Use the Results.Created() method, passing in two parameters, to return this 201 status code. The first parameter is the endpoint that can be used to retrieve the newly created object, for example, /product/711. The second parameter is the actual entity with any data that has been changed by performing the insert into the data store.

/// <summary>
/// INSERT new data
/// </summary>
/// <returns>An IResult object</returns>
protected virtual IResult Post(Product entity) {
  // Generate a new ID
  entity.ProductID = GetAll()
    .Max(p => p.ProductID) + 1;

  // TODO: Insert into data store

  // Return the new object created
  return Results.Created(
    $"/{UrlFragment}/{entity.ProductID}", entity);
}

Just as you did with the previous two methods you created, call the appropriate app.Map*() method to register this new endpoint. Locate the AddRoutes() method and add a call to the app.MapPost() method, as shown in the following code snippet.

public override void AddRoutes(WebApplication app)
{
  app.MapGet($"/{UrlFragment}", () => Get());
  app.MapGet($"/{UrlFragment}/{{id:int}}",
    (int id) => Get(id));
  app.MapPost($"/{UrlFragment}",
    (Product entity) => Post(entity));
}

Try It Out

Run the application and, on the Swagger home page, click on the POST button for the /product path. Click on the Try it Out button and you should see a Request body field appear with some basic JSON for you to enter some data into. Type in the following JSON:

{
  "productID": 0,
  "name": "A New Product",
  "color": "White",
  "listPrice": 20
}

Click on the Execute button and you should see the Response body appear with the following JSON:

{
  "productID": 711,
  "name": "A New Product",
  "color": "White",
  "listPrice": 20
}

This method simulates adding data to a database, such as SQL Server, and the productID property is assigned a new value from the database. The complete object is then passed back to you along with the HTTP status code of 201: Created.
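You can exercise the same endpoint from code as well. The following is a minimal sketch of a console-style client; the base address and port are assumptions (match them to whatever URL your application reports at startup), and PostAsJsonAsync comes from the System.Net.Http.Json extensions included with .NET.

using System.Net.Http.Json;

// Hypothetical base address; adjust to your launchSettings
var client = new HttpClient
{
  BaseAddress = new Uri("http://localhost:5000")
};

var response = await client.PostAsJsonAsync("/product",
  new { name = "A New Product", color = "White", listPrice = 20 });

// Expect 201 Created with a Location header such as /product/711
Console.WriteLine(response.StatusCode);
Console.WriteLine(response.Headers.Location);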

Update a Product


The public interface for updating an entity through a Web API method is to pass in the ID of the object to update along with the object itself. Add a method to the ProductRouter class named Put() that accepts an integer id variable and a Product entity variable, as shown in Listing 3.

Listing 3: The Put() method updates an existing product with the new data passed in.

/// <summary>
/// UPDATE existing data
/// </summary>
/// <returns>An IResult object</returns>
protected virtual IResult Put(int id, Product entity) {
  IResult ret;

  // Locate a single row of data
  Product? current = GetAll()
    .Find(p => p.ProductID == id);

  if (current != null) {
    // TODO: Update the entity
    current.Name = entity.Name;
    current.Color = entity.Color;
    current.ListPrice = entity.ListPrice;

    // TODO: Update the data store

    // Return the updated entity
    ret = Results.Ok(current);
  }
  else {
    ret = Results.NotFound();
  }

  return ret;
}

This method first looks up the Product object in the Products collection. If the product is found, a Product object is returned into the current variable. If the product is not found, a null value is assigned to the current variable. If the current variable isn't null, update the properties of the current variable with the values passed in via the entity variable. This process is, of course, just a single line of code when using the Entity Framework, but for our purposes here, you need the three lines of code to move the data from one object to another. You then return the updated entity in the Results.Ok() method. If the current variable is null, inform the calling program that the object was not located by returning the value from the Results.NotFound() method, which returns the 404 Not Found.

As you have done previously, locate the AddRoutes() method and add a call to the app.MapPut() method. The first parameter to MapPut() is exactly the same as for the MapGet() method. In the lambda expression, two parameters are accepted into this anonymous function: the id and the entity. The integer value is expected to come from the URL line, and the Product object is expected to be received from the body of the API call.

public override void AddRoutes(WebApplication app)
{
  app.MapGet($"/{UrlFragment}", () => Get());
  app.MapGet($"/{UrlFragment}/{{id:int}}",
    (int id) => Get(id));
  app.MapPost($"/{UrlFragment}",
    (Product entity) => Post(entity));
  app.MapPut($"/{UrlFragment}/{{id:int}}",
    (int id, Product entity) => Put(id, entity));
}

Try It Out

Run the application and, on the Swagger home page, click on the PUT button for the /product/{id} path. Click on the Try it Out button and enter a number such as 710 into the id field. In the Request body field, modify the JSON to look like the following.

{
  "productID": 710,
  "name": "A Changed Product",
  "color": "Red",
  "listPrice": 1500.00
}

Click on the Execute button and you should see the Response body appear with the same JSON you entered above. If you then modify the id field to be a bad product ID, such as 999, and click the Execute button, you should see the 404 status code returned.

Delete a Product

The last method to write to complete the CRUD logic is one to delete a product. Add a method named Delete() that accepts an integer value, id, to the ProductRouter class, as shown in Listing 4. This method first looks up the Product object in the Products collection using the value passed to the id variable. If the product is found, a Product object is returned into the current variable. If the product is not found, a null value is assigned to the current variable. If the current variable isn't null, remove the object from the collection, which, in this case, simulates removing the product row from the data store.

Listing 4: The Delete() method removes a single product from the data store.

/// <summary>
/// DELETE a single row
/// </summary>
/// <returns>An IResult object</returns>
protected virtual IResult Delete(int id) {
  IResult ret;

  // Locate a single row of data
  Product? current = GetAll()
    .Find(p => p.ProductID == id);

  if (current != null) {
    // TODO: Delete data from the data store
    GetAll().Remove(current);

    // Return NoContent
    ret = Results.NoContent();
  }
  else {
    ret = Results.NotFound();
  }

  return ret;
}

The most common HTTP status to return in response to a successful call to the DELETE verb is a 204 No Content. You can return this status code with a call to Results.NoContent(). If the product object is not found because the ID passed in doesn't exist in the data store, a 404 Not Found status is returned by calling the Results.NotFound() method.

To register this new route, locate the AddRoutes() method and add a call to the app.MapDelete() method. The first parameter to MapDelete() is the same as for the MapGet() and MapPut() methods.


app.MapGet($"/{UrlFragment}", () => Get()); code returned is 204 No Content. Change the id field to an in-
app.MapGet($"/{UrlFragment}/{{id:int}}", valid product ID such as 999 and click on the Execute button.
(int id) => Get(id)); You should now see the 404 Not Found status code returned.
app.MapPost($"/{UrlFragment}",
(Product entity) => Post(entity));
app.MapPut($"/{UrlFragment}/{{id:int}}", Add Logging to Product Router
(int id, Product entity) => Put(id, entity)); An ILogger property was added to the RouterBase class at
app.MapDelete($"/{UrlFragment}/{{id:int}}", the beginning of this article. Let’s now use that property to
(int id) => Delete(id)); perform some logging to the console window from the Get()
} method in the ProductRouter.cs file. Open the ProductRouter.
cs file and use dependency injection to inject an ILogger object
Try It Out into the constructor, as shown in the following code snippet.
Run the application and on the Swagger home page, click on
the DELETE button for the /product/{id} path. Click on the public ProductRouter(
Try it Out button and enter a number such as 710 into the ILogger<ProductRouter> logger) {
id field. Click on the Execute button and you should see the UrlFragment = "product";
Logger = logger;
}
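It's worth noting what the generic ILogger<ProductRouter> parameter buys you: the container resolves a logger whose category is the fully qualified class name, so log output identifies which router wrote each entry. A rough sketch of the console output you can expect (the exact category text depends on your namespace, so treat this as an approximation):

// Logger.LogInformation("Getting all products") produces
// console output along these lines:
//
//   info: AdvWorksAPI.ProductRouter[0]
//         Getting all products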
Locate the Get() method and call the Logger.LogInformation() method to log a message to any log listeners you set up.

protected virtual IResult Get() {
  // Write a log entry
  Logger.LogInformation("Getting all products");

  // REST OF THE CODE HERE
}

Add Logging as a Service

Open the Program.cs file and, just after the code that creates the builder, WebApplication.CreateBuilder(args), add the following lines of code to add logging to the list of services that can be injected.

// Add Logging
builder.Logging.ClearProviders();
builder.Logging.AddConsole();

Just after these calls to register logging as a service, add code to make your ProductRouter class a service as well.

// Add "Router" classes as a service
builder.Services.AddScoped<RouterBase,
  ProductRouter>();

Dependency injection doesn't just happen because you add the ILogger<ProductRouter> to the constructor of the ProductRouter class. It happens when you let the ASP.NET engine take care of creating all instances of services. This means that you can't create an instance of the ProductRouter class like you did earlier in this article. Go to the bottom of the Program.cs file and remove the lines of code where you created a new ProductRouter; remove the call to app.Run() as well.

//*********************************************
// Add Product Routes
//*********************************************
new ProductRouter().AddRoutes(app);

app.Run()


To ensure that your ProductRouter class, or any other classes that inherit from your RouterBase class, can participate in dependency injection, you need to create a service scope by calling the app.Services.CreateScope() method, as shown in Listing 5. Wrapped within a using block, retrieve a list of services that are of the type RouterBase. Loop through each of those services and invoke each one's AddRoutes() method. Finally, call the app.Run() method to ensure that the entire set of registered endpoints are all running within the same application scope.

Listing 5: Create a loop to invoke all Router Classes' AddRoutes() method.

//*************************************
// Add Routes from all "Router Classes"
//*************************************
using (var scope = app.Services.CreateScope())
{
  // Build collection of all RouterBase classes
  var services = scope.ServiceProvider.GetServices<RouterBase>();

  // Loop through each RouterBase class
  foreach (var item in services)
  {
    // Invoke the AddRoutes() method to add the routes
    item.AddRoutes(app);
  }

  // Make sure this is called within the application scope
  app.Run();
}
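As a refresher on the shape this loop relies on, a RouterBase class along the following lines is all that's required. This is a sketch, not the exact class (the real definition appeared earlier in this article and may differ in details): it exposes the UrlFragment and Logger properties used by the router classes and declares AddRoutes() for derived classes to override.

public abstract class RouterBase
{
  // Route prefix such as "product" or "customer"
  public string UrlFragment { get; set; }

  // Logger injected into derived router classes
  public ILogger Logger { get; set; }

  // Each derived router registers its own endpoints
  public abstract void AddRoutes(WebApplication app);
}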

Try It Out

Run the application and, on the Swagger home page, click on the Get button for the /product path. Click on the Try it Out button, then click on the Execute button. You should then see the list of products you created in the Get() method appear as JSON in the Response body section of the Swagger page. Look in the Console window and you should see that the message "Getting all products" has appeared.

Create a Customer Router Class

The RouterBase and ProductRouter classes provide a nice design pattern that you can use to build any Router class for any other CRUD logic you require. Let's create a Customer class and a CustomerRouter class to work with customer data. Because of the generic code you just wrote to invoke the AddRoutes() method, all you have to do is create a new router class that inherits from the RouterBase class, and all new routes will be automatically registered when the application starts.

Create a Customer Class

Right mouse-click on the EntityClasses folder and add a new file named Customer.cs. Into this file, add the following code to create a new Customer class with several properties.

#nullable disable
namespace AdvWorksAPI {
  public partial class Customer {
    public int CustomerID { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public string CompanyName { get; set; }
    public string EmailAddress { get; set; }
  }
}

Create a Customer Router Class

Expand the RouterClasses folder, copy the ProductRouter.cs file, and paste it right back into the RouterClasses folder. Rename the file as CustomerRouter.cs. Open the CustomerRouter.cs file and perform a case-sensitive search and replace within this document, changing all instances of Product with Customer. Next, perform a case-sensitive search and replace and change all instances of product with customer. Scroll down and locate the GetAll() method and fix up the hard-coded data to look like that shown in Listing 6.

Listing 6: Create a hard-coded collection of customer objects.

private List<Customer> GetAll() {
  return new List<Customer> {
    new Customer {
      CustomerID = 1,
      FirstName = "Orlando",
      LastName = "Gee",
      CompanyName = "A Bike Store",
      EmailAddress = "orlando0@adventure-works.com",
    },
    new Customer {
      CustomerID = 2,
      FirstName = "Keith",
      LastName = "Harris",
      CompanyName = "Progressive Sports",
      EmailAddress = "keith0@adventure-works.com",
    },
    new Customer {
      CustomerID = 3,
      FirstName = "Donna",
      LastName = "Carreras",
      CompanyName = "Advanced Bike Components",
      EmailAddress = "donna0@adventure-works.com",
    },
    new Customer {
      CustomerID = 4,
      FirstName = "Janet",
      LastName = "Gates",
      CompanyName = "Modular Cycle Systems",
      EmailAddress = "janet1@adventure-works.com",
    },
    new Customer {
      CustomerID = 5,
      FirstName = "Lucy",
      LastName = "Harrington",
      CompanyName = "Metropolitan Sports Supply",
      EmailAddress = "lucy0@adventure-works.com",
    }
  };
}

Next, scroll down and locate the Put() method and, within the if (current != null) statement, modify the lines of code that are showing errors. The new lines of code should look like the following:

current.FirstName = entity.FirstName;
current.LastName = entity.LastName;
current.CompanyName = entity.CompanyName;
current.EmailAddress = entity.EmailAddress;

Finally, register this new CustomerRouter class as a service by opening the Program.cs file and modifying the code towards the top to look like the following:

// Add your "Router" classes as services
builder.Services.AddScoped<RouterBase,
  ProductRouter>();
builder.Services.AddScoped<RouterBase,
  CustomerRouter>();

Try It Out

Run the application and, on the Swagger home page, all the customer and product endpoints are now displayed, as shown in Figure 6. Go ahead and try out any of the customer routes to verify that they work as expected.

Figure 6: Just by adding a new RouterBase class as a service, all endpoints are registered.

Getting the Sample Code

You can download the sample code for this article by visiting www.CODEMag.com under the issue and article, or by visiting www.pdsa.com/downloads. Select "Articles" from the Category dropdown. Then select "Create Maintainable Minimal Web APIs" from the Item dropdown.

Summary

In this article, you learned how to make Minimal API applications more maintainable. By using a good base class, called RouterBase, you have a good start of a design pattern for your Web API CRUD logic. By moving all your routes into separate classes, you prevent your Program.cs file from growing out of control. Because you inherit from the RouterBase class, you write code one time in the Program.cs file to instantiate all router classes and register their routes. Creating additional router classes for objects such as employees, cars, sales orders, etc. is as simple as copying the ProductRouter class, searching for all instances of "product", and replacing with "employee," "car," etc.

 Paul D. Sheriff


ONLINE QUICK ID 2205041

Change-Tracking Mixed-State Graphs in EF Core
Real life relationships can be hard and sometimes, in EF Core, they can be hard as well. EF Core’s change tracker has very specific
behavior with respect to related data but it may not always be what you expect. I want to review some of these behaviors so
you have a bit of guidance at hand, although I always recommend that you do some integration testing to be sure that your

EF Core code does what you're anticipating. There are a number of tools at hand to help you out. In fact, you could discover the behavior without ever calling SaveChanges because the key to the behavior is in the change tracker itself. Whatever SQL it executes for you is simply a manifestation of the knowledge stored in the change tracker. However, I still need a database to perform queries, so I'll use the SQLite provider. Why not InMemory? Because some of its persistence behaviors are different from a real database. For example, the InMemory provider updates key properties for new objects when they're tracked, whereas for many databases, those keys aren't available until after the database generates key values for you.

In a previous CODE Magazine article called Tapping into EF Core's Pipeline (https://www.codemag.com/Article/2103051), one of the taps I wrote about was the ChangeTracker.DebugView introduced in EF Core 5. I'll use that to explore the change tracker as I walk through a number of persistence scenarios with related data.

Starting with a Simple One-to-Many

For this example, I'll adopt the small book publishing house data model from my recently released Pluralsight course, EF Core 6 Fundamentals. This publisher only publishes books written by one author; therefore, I have a straightforward one-to-many relationship between author and book. One author can have many books, but a book can only ever have one author. My initial classes are defined in the most common way, where Author has a list of Books and the Book type has both a navigation property back to Author along with an AuthorId foreign key.

How your classes are designed can impact behavior. This sample is an explicit choice for a stake in the ground of what to expect from the change tracker.

public class Author
{
  public int AuthorId { get; set; }
  public string FirstName { get; set; }
  public string LastName { get; set; }
  public List<Book> Books { get; set; } =
    new List<Book>();
}
public class Book
{
  public int BookId { get; set; }
  public string Title { get; set; }
  public Author Author { get; set; }
  public int AuthorId { get; set; }
}

My context is configured to expose DbSets for Author and Book, configure my SQLite database, and seed some author and book data. If you want to try this out, the full code is available in the download for this article and in a repository at github.com/Julielerman/CodeMagEFC6Relationships.

There is so much behavior to explore with this one setup. But it's also interesting to experiment with different combinations of navigation properties and foreign keys. For example, if Book had AuthorId but not an Author navigation property, some behavior would be different. It's also possible to minimize the classes and define relationships in the Fluent API mappings; for example, you could remove the Books property from Author, and the Author and AuthorId properties from Book, and still have a mapped relationship.

And then even more behavior differences are introduced with nullability. For example, Book.AuthorId is a straightforward integer which, by default, is non-nullable. There's nothing here to prevent you from leaving AuthorId's value at 0. However, the default mappings infer the non-nullable AuthorId to mean that, in the database, a Book must have an Author and therefore AuthorId can't be 0. Your code must control that rule to avoid database inconsistencies (and database errors).

My goal here is to show you that there are so many variations to persist data just on this one specific setup, and to leave you with the knowledge and tools to determine what to expect from your own domain and data models.

Persisting When Objects are Tracked

Whether or not the change tracker is already aware of the related data affects how it treats that data. Let's look at a few scenarios where a new book is added to an author's collection of books while these objects are being tracked by an in-scope DbContext.

In this first scenario, I've used an instance of PubContext to retrieve an author from the database with a FirstOrDefault query. The context stays in scope and is tracking the author. I then create a new book and add it to the author's Books list. Then, instead of calling SaveChanges, I'm calling ChangeTracker.DetectChanges to get the change tracker to update its understanding of the entities it's tracking. SaveChanges internally calls DetectChanges, so I'm just using it explicitly and avoiding an unneeded interaction with the database. Then I use ChangeTracker's DebugView.ShortView to get a simple look at what the context thinks about its entities.

void AddBookToExistingAuthorTracked()
{
  using var context = new PubContext();
  var author = context.Authors.FirstOrDefault();
  var book = new Book("A Great Book!");
  author.Books.Add(book);
  context.ChangeTracker.DetectChanges();
  var dv = context.ChangeTracker
    .DebugView.ShortView;
}

This is such a straightforward scenario that I'm not surprised by the contents of the DebugView.

Author {AuthorId: 1} Unchanged
Book {BookId: -2147482647} Added FK {AuthorId: 1}

The Author is Unchanged and the Book is marked as Added. It knows that its AuthorId foreign key is 1, and it has a temporary key value that will be fixed up after it gets the new value from the database.

Keep in mind that this temporary key is known only by the context. If you were to debug the book object directly, its BookId is still 0. If you were to call SaveChanges, the newly generated value of BookId would be returned from the database and BookId would get updated in the object and in the context's details.

Now, as a matter of witnessing a change in EF Core's behavior, I'll modify the code to do something that will result in a problematic side effect. This is an example of the kind of mistake that's easily made with EF Core if you aren't aware of these nuances.

We know the true state of these objects: The Author is Unchanged and the Book is Added.

What if IntelliSense prompted me with the Update method and I thought "ahhh, I'm updating this author with a new book, so I'll call that method first"? So I've added a call to Update in my logic.

author.Books.Add(book);
context.Authors.Update(author);
context.ChangeTracker.DetectChanges();

The context has determined that because I used the Update method, the author's state should now be set to Modified. It doesn't care that I didn't change any values. It's simply responding to my instruction to Update. And if that object is the root of a graph, as it is in this case, it will apply that Modified state to every object in that graph with one exception: Objects with no key values (like the Book) are, by default, always marked Added. So now the DebugView shows:

Author {AuthorId: 1} Modified
Book {BookId: -2147482647} Added FK {AuthorId: 1}

On SaveChanges, that means you'll get an unneeded command sent to the database to update all of the properties of the author row. This may not seem like a problem in a demo, but could result in performance issues in production. Additionally, if you're using row versioning in this table for auditing purposes, this action will result in misinformation because the user didn't really update the author. EF Core was mistakenly told that they did.

There's something else interesting to show you about the effect of DetectChanges on graphs. Because I'm now using an explicit DbContext method—Update—the ChangeTracker is immediately aware of the author object being passed in. But only the author, which is the root object, not the entire graph.

To be clear: If I call the method with DetectChanges commented out:

context.Update(author);
//context.ChangeTracker.DetectChanges();
var dv = context.ChangeTracker.DebugView.ShortView;

The ChangeTracker is only aware of the Author and not the new Book.

Author {AuthorId: 1} Modified

DetectChanges is critical for ensuring that changes to the graph are comprehended and, again, calling SaveChanges would have rectified that. But this gives you greater insight into the workings of the ChangeTracker API.

In the long run, because the context was still tracking the objects, that call to Update wasn't even necessary. But when I called it anyway, it had a side-effect of forcing EF Core to update the Author that had never been edited.

Same Mixed State Graph, Disconnected

What if the context weren't tracking the objects? For example, when you're writing an ASP.NET Core app and handling data coming in from a request, you're dealing with a new context on each request.

In an ASP.NET Core API controller method, the REST method transforms JSON coming in via a request and automatically transforms it to the expected object. To emulate that, I've created the resulting object, named existingAuthor: an Author graph with an existing Author whose AuthorId is 2, and a new Book.

var existingAuthor = new Author("Ruth", "Ozeki")
  { AuthorId = 2 };
existingAuthor.Books.Add(
  new Book("A Tale for the Time Being"));

An important attribute to note about this graph is that the incoming book not only has no BookId, but it also doesn't have an AuthorId. I've created it this way because it's quite possible that incoming data is set up this way and I want to be able to handle that scenario.

Let's explore what happens with the various options for getting a context to be aware of the state of this graph so I can get the author's new book into the database.

Adding the Mixed State Graph

First, of course, I'll need a new PubContext. Then, well, I want to add this graph to the context, right? Let's try that.

using var context = new PubContext();
context.Authors.Add(existingAuthor);

After calling DetectChanges, the DebugView shows me that the ChangeTracker thinks that the Author and the Book are both new and need to be added to the database!

Author {AuthorId: 2} Added
Book {BookId: -2147482647} Added FK {AuthorId: 2}


But wait! The context was smart enough to know that an object with no key value should be added, so why doesn't it also assume that an object with a key that has a value must already exist in the database? Well, there are too many reasons that this assumption could fail. EF Core is simply following your instructions: You called Authors.Add, so you must therefore have wanted to Add (i.e., insert) the author!

There's something else interesting to note. The book's AuthorId is now 2. Recall that the book came in without any AuthorId property. Because I passed the entire graph into the context, when I called DetectChanges, EF Core figured out that because of the relationship, the Book.AuthorId should use the key value from the Author object.

The Add method is a problem because I can't insert that author into the database. That will create an error in the database.

Using Update with the Mixed State Graph

What's my next option? Well, another conclusion might be that the author has changed because they have a new book. This reasoning might lead me to the Update method.

context.Authors.Update(existingAuthor);

This isn't the correct path either. You saw this problem earlier. Explicitly calling Update causes every object in the graph to be marked as Modified (except for any that don't have a key value). Again, the Author would be marked Modified and the Book marked Added, and you'll get a needless command to update all of the Author's properties sent to the database.

On the other hand, if you know that the Author was updated, or you're not concerned about the extra database trip or about audit data, Update would be a safe bet.

Focusing on the Graph's Added Object

You learned (above) that without DetectChanges, the Update (and Add and Remove) methods only acknowledge the root of the graph. So what if I pass the book into the context.Add method instead of the author?

var book = existingAuthor.Books[0];
context.Add(book);
//context.ChangeTracker.DetectChanges();

Because the tracker is only aware of the book, it isn't able to read the Author's key property and apply it to Book.AuthorId as you saw it do earlier. The DebugView shows that AuthorId is still 0:

Book {BookId: -2147482647} Added FK {AuthorId: 0}

What if I added the DetectChanges back into the logic? Well, there's a surprise. That doesn't work either! Book.AuthorId is still 0.

The fact that DetectChanges doesn't fix the foreign key also means that calling SaveChanges—which calls DetectChanges—causes the resulting database command to fail because my design is such that a Book must have an Author. An AuthorId value of 0 causes a foreign key constraint error in the database. Notice that the failure is in the database. EF Core won't protect you from making this mistake, which means that, again, you need to ensure that your code enforces your rules.

Did you find it strange that pushing the graph into the context using the author object pulled in the entire graph, but pushing it in via the book object didn't? Consider these objects more closely. The book was a member of the Author.Books property. When I attached the author, the context was able to traverse into the book object. However, even though the Book class has an Author navigation property, that property wasn't populated with the author object. So when I called context.Add(book), the context wasn't able to detect the author object.

All of these details are hard to keep in your head, even if you knew them once or twice before! I always create integration tests to make sure I haven't forgotten a behavior.

Taking More Control over the Graph's State

There's a way to make this pattern work, however: by explicitly setting the foreign key property, because you do have easy access to it. DetectChanges is redundant here because the Add method sets the state of the Book immediately. Of course, SaveChanges will call that anyway, but again, it's an important behavior to be aware of.

var book = existingAuthor.Books[0];
book.AuthorId = existingAuthor.AuthorId;
context.Add(book);
//context.ChangeTracker.DetectChanges();

One thing I like about just setting the foreign key is that I'm not relying on "magic" to have success with my persistence logic.

Tracking Single Entities with the Entry Method

Here's another place you may be surprised by how EF Core reacts to our incoming graph.

Given that I'm a fan of explicit logic, I'm also a fan of the very explicit DbContext.Entry method. The beauty of the Entry method is that it only pays attention to the root of whatever graph you use as its parameter. It's the only clean way to separate an entity from a graph with EF Core and, because of this strict behavior, you don't have to make guesses about what will happen within a graph.

Yet, it still may surprise you with the mixed state graph. When I use Entry to start tracking the book in my graph:

var book = existingAuthor.Books[0];
context.Entry(book).State = EntityState.Added;

the Entry method ignores the author that's connected to that book. As expected, it sets the state of that book to Added. But because the ChangeTracker is now unaware of the Author object, it can't read existingAuthor.AuthorId to set the book's foreign key property and therefore, the book's AuthorId is still 0.

Book {BookId: -2147482647} Added FK {AuthorId: 0}

As you just learned above, your code needs to take responsibility for the foreign key. Therefore, I'll just set it myself before calling the Entry method:

var book = existingAuthor.Books[0];
book.AuthorId = existingAuthor.AuthorId;
context.Entry(book).State = EntityState.Added;

Now there's no question about AuthorId being 2.

Book {BookId: -2147482647} Added FK {AuthorId: 2}


Yes, there's an extra line of code here, but it makes my logic dependable and gives me confidence. My integration tests give me a lot more confidence, though!

The Handy Attach Method

So far, I've shown you quite a few unsuccessful or "assisted" ways to get that new book inserted into the database when it's part of a mixed state graph. I hope you appreciate this better understanding of not only what to avoid, but also why.

Is there a "best" way to do this or just an easy and dependable pattern? I think the Entry method, along with the explicit code to ensure that the FK is set, is dependable and memorable. As I said, I do prefer the explicit logic. It makes me feel more in control of my code's behavior and less dependent on under-the-covers magic.

There is one other dependable pattern, which is the Attach method. I'll pass my graph into Authors.Attach.

context.Authors.Attach(existingAuthor);
context.ChangeTracker.DetectChanges();

Attach tells the ChangeTracker to just start tracking the graph but not to bother setting EntityState. The "one true rule" you've seen repeatedly now for Add and Update also applies to Attach: Any objects in the graph that have no key value are marked as Added. Now the existingAuthor is connected and being tracked but has the default state of Unchanged. The shiny new book with no BookId becomes Added.

Author {AuthorId: 2} Unchanged
Book {BookId: -2147482647} Added FK {AuthorId: 2}

In this case, Attach is an excellent way to correctly set the state of both objects as well as the Book's AuthorId foreign key property. SaveChanges will only send one command, the correct command: an INSERT for the new book.

This works perfectly for this scenario. You know that the Author is unchanged and you're also relying on the rule that any new objects will be handled correctly.

Dealing with Unknown State

I've pointed out a few times now that there's a problem with calling Update on entities that haven't been edited. They get sent to the database with an UPDATE command, which can be a wasted call and possibly have a bad effect on performance.

However, there's an interesting use for Update: when you have objects coming in and you have no idea if they need to be added, updated, or ignored. Notice that I'm leaving "deleted" out of that list. You must supply some way of detecting whether an object is to be deleted. In typical controller methods, including those generated by the Visual Studio templates, you do have explicit methods for inserting, updating, and deleting, so there's no mystery.

What if your logic is different from these controller methods and a request passes in an Author object with—or maybe even without—attached objects? I'll use the same code that represents an existing Author to demonstrate:

var existingAuthor = new Author("Ruth", "Ozeki")
  { AuthorId = 2 };

Here's where your own logic can be written to glean some attributes. For example, Author has an AuthorId value of 2. If your business logic is such that you know that any object with an ID value present must have come from the database, then you can at least determine that this object is either unchanged or has been edited.

In this case, if you're given no other clues, updating this Author covers your bases. If any of the data was changed—for example, maybe the user fixed a typo in the Author's last name—the Update will be sure to get that into the database. If none of the data was changed, but you have no idea if that's the case, suddenly that "unnecessary update" may be seen as not a terrible thing. That depends on what kind of performance expectations you have. Maybe the unnecessary updates are too taxing and you've determined that it's smarter to first grab that record from the database and compare its values to the incoming Author before deciding to ignore it or update it. The controller template that uses EF Core with Actions does make a quick trip to the database, but it does that to verify that the row indeed exists in the database before calling for an update and potentially triggering a database error for a nonexistent row.

The bottom line is driven by how much magic you will accept and what type of load there is on your application. From there, you can use your knowledge of EF Core's behavior to choose your path.
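To make that concrete, here's one way the decision logic can look in code. This is a sketch of the rule just described (treating any nonzero ID as an existing row), using a hypothetical incomingAuthor object, not a pattern quoted from the article itself:

if (incomingAuthor.AuthorId == 0)
{
  // No key value yet: treat the object as brand new
  context.Authors.Add(incomingAuthor);
}
else
{
  // Has a key value: assume it exists and push an update
  context.Authors.Update(incomingAuthor);
}
await context.SaveChangesAsync();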
So Many Variants to Explore

Keep in mind that although you've learned about a lot of the ChangeTracker behavior in response to various ways of tracking this graph, I've focused on a particular scenario: a mixed state graph where one object came from the database and wasn't modified and the other object was new. I also used pretty standard class definitions. My Author class has a list of books and the Book class has both the Author navigation property and a non-nullable AuthorId foreign key property. And with that, you saw many different effects of persisting these objects, whether they were being tracked from retrieval to saving, or they were disconnected from their original context, as with a Web application.

If you start tweaking other factors, such as removing navigation properties or FK properties or changing the nullability of the foreign key, you'll have another batch of resulting behavior to be aware of.

Remember that I chose to only use DetectChanges directly and not SaveChanges. That's a nice way of testing things out without bothering with the database. Some of those scenarios where it was necessary to call DetectChanges to get the expected behavior will be solved by calling SaveChanges.

As I stated earlier, I can never keep all of these behaviors in my head. I depend on integration tests to make sure things are working as I expect them to. And because I'm not using a database in any of the examples above, you can write tests without a database provider, not even the InMemory provider. You can just build assertions about the state of the entities within the ChangeTracker.
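For example, here's a minimal sketch of such a test, asserting against the change tracker alone. xUnit is my assumption here (any test framework works), and it reuses the PubContext, Author, and Book types from this article:

[Fact]
public void Add_marks_existing_author_and_new_book_as_Added()
{
  using var context = new PubContext();
  var author = new Author("Ruth", "Ozeki") { AuthorId = 2 };
  author.Books.Add(new Book("A Tale for the Time Being"));

  // Same scenario as the "Adding the Mixed State Graph" section
  context.Authors.Add(author);

  // Assert directly against the tracked entries; no SaveChanges needed
  Assert.Equal(EntityState.Added, context.Entry(author).State);
  Assert.Equal(EntityState.Added, context.Entry(author.Books[0]).State);
}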

 Julie Lerman

Julie Lerman (@julielerman, thedatafarm.com/contact) is a Microsoft Regional Director, Docker Captain, and a long-time Microsoft MVP who now counts her years as a coder in decades. She makes her living as a coach and consultant to software teams around the world. You can find Julie presenting on Entity Framework, Domain-Driven Design and other topics at user groups and conferences around the world. Julie blogs at thedatafarm.com/blog, is the author of the highly acclaimed "Programming Entity Framework" books, and many popular videos on Pluralsight.com.


ONLINE QUICK ID 2205051

Fast Application Persistence with Marten

Most of the applications I've written in my career have involved some sort of database for persisted application state. Until recently, that's usually meant using a relational database engine (RDBMS), like SQL Server, paired with some sort of object relational mapping (ORM) tool, like Entity Framework Core or Dapper. More recently, though, it seems that many teams are
switching to lower ceremony "NoSQL" document databases. Here's an important definition: A document-oriented database, or document store, is a computer program and data storage system designed for storing, retrieving, and managing document-oriented information, also known as semi-structured data.

Jeremy D. Miller
jeremydmiller@yahoo.com
www.jeremydmiller.com
@jeremydmiller

Jeremy Miller is the Senior Director of Software Architecture at MedeAnalytics. Jeremy began his software career writing "Shadow IT" applications to automate his tedious engineering documentation, then wandered into software development because it looked like more fun. Jeremy is heavily involved in open-source .NET development as the lead developer of Marten, Lamar, Alba, and other projects in the JasperFx family. Jeremy occasionally manages to write about various software topics at www.jeremydmiller.com.

Enter the open-source Marten library (https://martendb.io) that allows .NET developers to use the rock-solid Postgresql database engine as a document database and event store. The other authors of Marten and I chose Postgresql specifically because of its unique JSONB storage type, where raw JSON data is stored in an efficient binary representation (see https://www.postgresql.org/docs/current/datatype-json.html for more information). From the .NET side, Marten leverages the robust JSON serialization libraries in .NET, like Newtonsoft.Json or the more recent System.Text.Json library, to effectively read and write objects to and from database storage through JSON serialization.

Leaving Marten's event store functionality aside for another time, let's dive into the document database features as soon as you have Postgresql running locally.

Running Postgresql Locally in Docker

First off, you need a Postgresql database. My preference these days is to just run local development databases in Docker containers so that it's easy to spin up and tear down development databases at will as I switch between codebases. To that end, here's the Docker compose file we use for Marten itself that gives you an empty Postgresql database called marten_testing:

version: '3'
services:
  postgresql:
    image: "ionx/postgres-plv8:12.2"
    ports:
      - "5432:5432"
    environment:
      POSTGRES_PASSWORD: postgres
      POSTGRES_USER: postgres
      POSTGRES_DB: marten_testing
      NAMEDATALEN: 100

As long as you have Docker Desktop installed on your development box, you'll be able to quickly spin up a new Postgresql database by using this command in the command line application of your choice:

docker compose up -d

Note that you'll need to call that with the terminal location at the same directory that holds your docker-compose.yml file. Likewise, when you're done working with that database, you can shut it down and completely remove the running Docker container with:

docker compose down

QuickStart with Marten

Now, assuming that you have a new project where you want to use Marten, add a NuGet reference to the main Marten library:

dotnet add package Marten

In any application targeting Marten, you need to have a single instance of the DocumentStore class that "knows" how to translate objects back and forth to the underlying database and acts as the main entry point to all Marten-backed persistence. I'll get to more advanced usage quickly in this article, but for right now, spin up a new DocumentStore with all of Marten's default behaviors with this syntax:

// Step 1, build a DocumentStore
var connectionString = "your connection string";
using var store = DocumentStore
  .For(connectionString);

Now that you have a Marten document store ready to go as you try to build a fictional issue tracking system, let's back up and write a document type to represent an issue and its constituent tasks like this one:

public class Issue
{
  public Guid Id { get; set; }
  public string Title { get; set; }
  public string Description { get; set; }
  public bool IsOpen { get; set; }
  public DateTimeOffset Opened { get; set; }

  public IList<IssueTask> Tasks { get; set; }
    = new List<IssueTask>();
}

In that class, the child IssueTask type is this:

public class IssueTask
{
  public string Title { get; set; }
  public string Description { get; set; }
  public DateTimeOffset? Started { get; set; }
  public DateTimeOffset Finished { get; set; }
}

Now that you have a Marten DocumentStore and your Issue document type, let's write some code to persist an issue:

var issue = new Issue
{
  Title = "Bad Problem",
  IsOpen = true,
  Description = "Need help fast!",
  Opened = DateTimeOffset.UtcNow,
  Tasks = { new(title:
    "Investigate", description:
    "Do some troubleshooting") }
};
// start a new IDocumentSession
using var session = store.LightweightSession();
session.Store(issue);
await session.SaveChangesAsync()
  .ConfigureAwait(false);

Let's talk about that code above:

• I built a new Issue object with a title and description, plus marked it as being open. I also added an initial task within the Issue.
• I created a new IDocumentSession object ("session" in the code up above) that you'll use to both query a Marten database and persist changes. The IDocumentSession both implements the unit of work pattern to govern logical transaction boundaries and represents a single connection to the underlying database, thus making it important to ensure that the session is disposed to release the underlying database connection when you're done with the session.
• I explicitly told Marten that the new Issue document should be persisted as an "upsert" operation.
• I committed the one pending document change with the call to SaveChangesAsync().

It may be more interesting, so let's talk about what I did not have to do in any of the code above.

I didn't have to write any explicit mapping of the Issue document type to any kind of Postgresql table structure. Marten stores the document data as serialized JSON, so there isn't a lot of code-intensive mapping configuration like you'd frequently hit with Object Relational Mappers like Entity Framework Core or the older NHibernate.

There was no need to first perform any kind of database schema migration to set up the underlying Postgresql database schema. Using Marten's default "development friendly" configuration that you used to construct the DocumentStore up above, Marten quietly builds the necessary database tables and functions to store the Issue documents behind the scenes the first time you try to write or read Issue documents. As a developer, you can focus on just writing functionality and let Marten deal with the grunt work of building and modifying database schema objects. Again, compare that experience with Marten to the effort you have to make with Object Relational Mapper tools to craft database migration scripts.

Nowhere in the code did I have to assign an identity (primary key) to the new Issue document. Marten's default assumption is that a public property (or field) named Id is the identity for a document type. Because Issue.Id is of type GUID, Marten automatically assigns a sequential GUID for new documents passed into the IDocumentSession.Store() method that don't already have an established identity. In this case, Marten happily sets the value of Id onto the new Issue document in the course of the Store() method.

To illustrate the identity behavior, let's immediately turn around and load a new copy of the new Issue document with this code:

// Now let's reload that issue
var issue2 = await session
  .LoadAsync<Issue>(issue.Id)
  .ConfigureAwait(false);

So far, you've seen nothing that would be difficult to reproduce on your own. After all, you're just saving and loading data by its primary key, right? To show why you're better off using Marten than writing your own little document store, let's move on quickly to see some of Marten's support for querying documents in the database.

In the past when I've described Marten to other developers, they frequently say "but you can't query within the JSON data itself though, right?" Fortunately, Marten has robust support for LINQ querying that happily queries within the stored JSON data. As an example, let's say that you want to query for the last 10 open issues with a LINQ query:

var openIssues = await session
  .Query<Issue>()
  .Where(x => x.IsOpen)
  .OrderByDescending(x => x.Opened)
  .Take(10)
  .ToListAsync().ConfigureAwait(false);

As I'll show later in this article, it's not only possible to query from within the structured JSON data, but you can also add computed indexes in Marten that work within the stored JSON data.
public string FirstName { get; set; }
Nowhere in the code did I have to assign an identity (pri- public string LastName { get; set; }
mary key) to the new Issue document. Marten’s default as- public string Role { get; set; }
sumption is that a public property (or field) named Id is the }
identity for a document type. Because Issue.Id is of type
GUID, Marten automatically assigns a sequential GUID for Now, I’d like all of the Issue documents to refer to both an
new documents passed into the IDocumentSession.Store() assigned user and to the original user who created the is-
method that don’t already have an established identity. In sue. I’ll add a pair of new properties to the Issue document:
this case, Marten happily sets the value of Id onto the new
Issue document in the course of the Store() method. public class Issue
{
To illustrate the identity behavior, let’s immediately turn public Guid Id { get; set; }
around and load a new copy of the new Issue document with
this code: public Guid? AssigneeId { get; set; }
public Guid? OriginatorId { get; set; }
// Now let's reload that issue
var issue2 = await session // Other properties
.LoadAsync<Issue>(issue.Id) }
.ConfigureAwait(false);
To create foreign keys from the Issue document type to the new
So far, you’ve seen nothing that would be difficult to repro- User document type, I need to revisit the DocumentStore boot-
duce on your own. After all, you’re just saving and loading strapping from before and use this code to configure Marten:



var connectionString = "your connection string";
using var store = DocumentStore.For(opts =>
{
  opts.Connection(connectionString);

  // Set up the foreign key relationships
  opts.Schema.For<Issue>()
    .ForeignKey<User>(x => x.AssigneeId)
    .ForeignKey<User>(x => x.OriginatorId);
});

The introduction of the new User document type and the foreign key relationships from Issue to User will require changes to the underlying database, but not to worry, because Marten detects that for you and happily makes the necessary database changes for you the first time you read or write Issue documents.

Foreign key relationships with Marten will work exactly as you'd expect, if you have any experience with relational databases, as shown in this code:

var issue = new Issue
{
  // reference a non-existent User
  AssigneeId = Guid.NewGuid()
};

session.Store(issue);

// This call will fail!
await session.SaveChangesAsync()
  .ConfigureAwait(false);

Maybe more interesting is the ability in Marten to fetch related documents when querying within one document type. For example, let's say that you're building a Web service where you'll be making the same query for the 10 most recent open issues, but this time, you also need to query for the related User documents for the people assigned to these issues.

You could use two separate queries, like this:

var openIssues = await session.Query<Issue>()
  .Where(x => x.IsOpen)
  .OrderByDescending(x => x.Opened)
  .Take(10)
  .ToListAsync().ConfigureAwait(false);

// Find the related User documents
var userIds = openIssues
  .Where(x => x.AssigneeId.HasValue)
  .Select(x => x.AssigneeId.Value)
  .Distinct()
  .ToArray();

var users = await session
  .LoadManyAsync<User>(userIds)
  .ConfigureAwait(false);

The general rule of thumb for better performance using Marten is to reduce the number of round trips between the application and database server, so let's use Marten's Include() functionality to fetch the related User documents within the same round trip to the database, like this:

// Marten will fill this dictionary for us
var users = new Dictionary<Guid, User>();

var openIssues = await session.Query<Issue>()
  .Where(x => x.IsOpen)
  .OrderByDescending(x => x.Opened)
  .Take(10)
  // Marten specific Linq extension
  .Include(x => x.AssigneeId, users)
  .ToListAsync().ConfigureAwait(false);

In the query above, Marten stores the related User documents in the users dictionary by the User.Id.

As an aside, the Include() operator is specific to Marten (other .NET tools have similar capabilities, and Marten's support was itself inspired by RavenDb's equivalent feature). When using Marten, it's important to consider whether or not any generalized abstraction that you place around Marten to avoid vendor lock-in may eliminate the ability to use the very advanced features of Marten that will make your system perform well.

Why a Document Database?

From my own experience, document databases can sometimes enable much better developer productivity by eliminating so much of the code ceremony that is forced upon you by the RDBMS + ORM combination. Because there's less effort necessary to map your object model in code to the underlying storage, it's far easier to iterate or evolve your object model over time compared to the more traditional relational database approach. Document databases are especially effective with complex, hierarchical data structures that can often be a poor fit in relational models. In addition, document databases excel with polymorphic collections that frequently bedevil ORMs.

Unit of Work Transactions with Marten

As stated earlier, the Marten IDocumentSession is an implementation of the unit of work pattern. According to the original statement by Martin Fowler, the unit of work: Maintains a list of objects affected by a business transaction and coordinates the writing out of changes and the resolution of concurrency problems.

Let's jump right into a contrived example that shows an IDocumentSession variable named session used to create and commit a single database transaction that deletes some Issue documents, stores changes to User documents, and stores a brand-new issue in one single transaction:

session.Delete<User>(oldUserId);

session
  .DeleteWhere<Issue>(
    x => x.OriginatorId == fakeUserId);

// store some User documents
session.Store(newAdmin, reporter);

// store a new Issue
session.Store(new Issue
{
  Title = "Help!"
});

await session.SaveChangesAsync()
  .ConfigureAwait(false);

Hopefully, that looks very straightforward, but there are a couple of valuable things to note that set Marten apart from some other alternative document databases:

• Marten is happily able to process updates to multiple types of documents in one transaction.
• By virtue of being on top of Postgresql, Marten has ACID-compliant transactional integrity where data is always consistent, as opposed to the BASE model of
many other true NoSQL databases where there is said to be "eventual consistency" between the data writes and database queries.

The last point is an important differentiator from other document database approaches and arguably the main reason that Marten exists today, as it was specifically written to replace a true, standalone document database with weak data consistency that was performing poorly in a large production system.

Integration with ASP.NET Core

In real usage, you're most likely going to be using Marten within a .NET application that uses the generic host builder. To that end, recent versions of Marten fully embrace an idiomatic .NET approach to configuring Marten, like this sample from a .NET 6 Web application:

var builder = WebApplication
  .CreateBuilder(args);
builder.Host.ConfigureServices(services =>
{
  var connectionString = builder
    .Configuration
    .GetConnectionString("marten");
  services.AddMarten(opts =>
  {
    opts.Connection(connectionString);

    opts.Schema.For<Issue>()
      .ForeignKey<User>(x => x.AssigneeId)
      .ForeignKey<User>(x => x.OriginatorId);
  });
});

// Other configuration

The call to AddMarten() above adds service registrations to the application's Dependency Injection container for:

• IDocumentStore as a singleton
• IDocumentSession as "scoped," such that you can expect to have a unique session for each HTTP request
• IQuerySession (a read-only subset of IDocumentSession) as "scoped"

To show that integration, let's say that you want to create a simple Web service endpoint to create a new Issue with this input body:

public class NewIssue
{
  public Guid UserId { get; set; }
  public string Title { get; set; }
  public string Description { get; set; }
}

And this controller code:

public class CreateUserController
  : ControllerBase
{
  [HttpPost("/issues/new")]
  public Task PostNewIssue(
    [FromBody] NewIssue body,
    [FromServices] IDocumentSession session)
  {
    var issue = new Issue
    {
      Title = body.Title,
      Description = body.Description,
      OriginatorId = body.UserId,
      IsOpen = true,
      Opened = DateTimeOffset.UtcNow
    };

    session.Store(issue);

    return session.SaveChangesAsync();
  }
}

Because the IDocumentSession is registered as "scoped," I know that ASP.NET Core itself will be responsible for calling Dispose() on the active session for the HTTP request.

Tracing and Logging

Marten is absolutely meant for "grown up" software development, so we've taken very seriously the role of tracing and logging throughout the Marten codebase. If you bootstrap Marten within a .NET Core application with the AddMarten() method, Marten will be logging all database calls and database errors through the generic .NET ILogger interface.

That's great and all, but now, you might ask, how about automatically tagging documents that are persisted through Marten with the timestamp, the current user, and the correlation ID or trace identifier of the current activity (in the case of the issue-tracking Web application, this will be the trace identifier for the HTTP request)? The concept of last modified timestamps is a default behavior in Marten, so that's already taken care of. To add correlation ID and current user name tracking to the document storage, I'm going to break into the Marten configuration and turn on those metadata fields for all documents like so:

var builder = WebApplication.CreateBuilder(args);
builder.Host.ConfigureServices(services =>
{
  services.AddMarten(opts =>
  {
    // Other configuration

    // Turn on extra metadata fields for
    // correlation id and last modified by
    // (user name) tracking
    opts.Policies.ForAllDocuments(m =>
    {
      m.Metadata.CorrelationId.Enabled = true;
      m.Metadata.LastModifiedBy.Enabled = true;
    });
  });
});


Making that configuration change tells Marten that the table for each document type now needs an extra column for tracking the correlation ID and last modified by values for each document update. Yet again, Marten now "knows" about the extra metadata fields on each document storage table and automatically adds these columns to any existing tables on the first usage of each specific document type.

The actual values will be assigned from the corresponding IDocumentSession.CorrelationId and IDocumentSession.LastModifiedBy values. To tie all of this together and apply the right values for the currently logged in user and correlation identifier of the session, I'm going to create an implementation of the Marten ISessionFactory interface, as shown in Listing 1.

Listing 1: A custom session factory to incorporate tracing

public class TracedSessionFactory : ISessionFactory
{
    private readonly IDocumentStore _store;
    private readonly HttpContextAccessor _accessor;

    public TracedSessionFactory(
        IDocumentStore store,
        HttpContextAccessor accessor)
    {
        _store = store;
        _accessor = accessor;
    }

    public IQuerySession QuerySession()
        => _store.QuerySession();

    public IDocumentSession OpenSession()
    {
        var session = _store.LightweightSession();
        session.CorrelationId = _accessor
            .HttpContext?
            .TraceIdentifier;

        session.LastModifiedBy = _accessor
            .HttpContext?.User?.Identity?.Name;

        return session;
    }
}

Lastly, to use the new ISessionFactory, I'll make that active in this code with the BuildSessionsWith<T>() method chained from AddMarten():

var builder = WebApplication
    .CreateBuilder(args);
builder.Host.ConfigureServices(services =>
{
    services.AddMarten(opts =>
    {
        // Marten configuration
    })
    // Register our custom session factory
    .BuildSessionsWith<TracedSessionFactory>();
});

Optimizing ASP.NET Core Performance with Marten

Inside the issue tracking system, you probably have a simple view somewhere that just shows all the open issues for a given user. Behind that feature, let's say that you've got a simple Web service to get a summary of the open issues. In this case, you only care about the issue title and the actual issue ID to help build links on the client side. That gives you this small DTO for the Web service output:

public class IssueView
{
    public string Title { get; set; }
    public Guid IssueId { get; set; }
}

Next, let's author the simplest possible conceptual controller method to implement the new Web service endpoint for open issues by user:

[HttpGet("/issues/open/user/{userId}")]
public async Task<IReadOnlyList<IssueView>> GetOpenIssues(
    Guid userId,
    [FromServices] IQuerySession session)
{
    var issues = await session.Query<Issue>()
        .Where(x => x.AssigneeId == userId
            && x.IsOpen)
        .OrderBy(x => x.Opened)
        .ToListAsync().ConfigureAwait(false);

    // Transform data
    return issues.Select(x => new IssueView
    {
        Title = x.Title,
        IssueId = x.Id
    }).ToList();
}

Honestly, that's probably good enough for most cases, but let's go through some of the facilities in Marten to potentially make that Web service run faster.

Assuming that the issue tracker is going to be a very successful piece of software helping its users support a problematic set of products, you should assume that the Issue document storage table grows very, very large. For optimizing the Web service method above, the obvious first place to start is applying some kind of index against the Issue document to make querying on the AssigneeId property faster. In a previous example, you'd added a foreign key relationship between the Issue.AssigneeId and the User document. When you did that, Marten automatically created a Postgresql index against the Issue.AssigneeId property in addition to the foreign key constraint. If you really wanted to, you can fine-tune that index as shown below:

services.AddMarten(opts =>
{
    // Other Marten configuration
    opts.Schema.For<Issue>()
        // Override the index generated for
        // AssigneeId to use the hash
        // method instead of the
        // default btree
        .ForeignKey<User>(
            x => x.AssigneeId,
            indexConfiguration: idx =>
            {
                idx.Method = IndexMethod.hash;
            })

        .ForeignKey<User>(x => x.OriginatorId);
});

However, if you hadn't already added the foreign key relationship through Marten, you could instead use a computed index, like so:

services.AddMarten(opts =>
{
    // Other Marten configuration
    opts.Schema.For<Issue>()
        // This is a computed index on the
        // Issue.AssigneeId property
        .Index(x => x.AssigneeId);
})

The computed index is indexing within the stored JSONB data within Postgresql and doesn't require any other kind of duplicated field in the table structure. At the cost of somewhat slower writes, indexing the AssigneeId property makes the LINQ query against the Issue document storage in the controller code above faster.

Next up, let's eliminate the need to deserialize the Issue document data and do the in-memory mapping to the IssueView structure. You can simply do a LINQ Select() transform like this:

[HttpGet("/issues/open/user/{userId}")]
public Task<IReadOnlyList<IssueView>>
    GetOpenIssues(
        Guid userId,
        [FromServices] IQuerySession session)
{
    return session.Query<Issue>()
        .Where(x => x.AssigneeId == userId
            && x.IsOpen)
        .OrderBy(x => x.Opened)
        .Select(x => new IssueView
        {
            IssueId = x.Id,
            Title = x.Title
        })
        .ToListAsync();
}

It's important to note here that the transformation from Issue to IssueView completely happens within Postgresql itself. That's helping you return a lot less data over the wire between Postgresql and the Web server while also cutting out an intermediate step of serializing the raw data to the heavier Issue objects.

Now, we .NET developers tend to take LINQ for granted, but after having spent five or six years authoring and helping to support the LINQ provider code within Marten, I can tell you that there's a lot of stuff happening within your average LINQ query:

• The .NET runtime must build the Expression structure representing the LINQ query.
• In Marten's case (and this is also true with Entity Framework Core), the Relinq library is used to pre-process the LINQ expression into an intermediate model.
• A series of custom visitors are sent down the intermediate model and more of the Expression structure to figure out how to build a matching SQL command for the underlying storage engine.
• There's quite a bit of string manipulation to get to the SQL command.
• Finally, the SQL command is executed and the results are processed into the form that was specified in the original LINQ query.

Does reading that list kind of make you a little tired? It does me. The point here is that LINQ querying comes with some significant performance overhead. That being said, I'll argue until I'm blue in the face that LINQ is one of the very best features of .NET and a positive differentiator for .NET versus other platforms.

Fortunately, Marten has a feature we call "compiled queries" that lets you have all the good parts of LINQ without incurring the performance overhead. Let's take the LINQ query above and move that to a compiled query class called OpenIssuesByUser, as shown in Listing 2.

Listing 2: Compiled query usage for open issues by user id

public class OpenIssuesByUser : ICompiledListQuery<Issue, IssueView>
{
    public OpenIssuesByUser(Guid userId)
    {
        UserId = userId;
    }

    public Expression<Func<IMartenQueryable<Issue>, IEnumerable<IssueView>>> QueryIs()
    {
        return q => q
            .Where(x => x.AssigneeId == UserId && x.IsOpen)
            .OrderBy(x => x.Opened)
            .Select(x => new IssueView
            {
                IssueId = x.Id,
                Title = x.Title
            });
    }

    public Guid UserId { get; set; }
}

Moving to the compiled query turns the controller method into this code:
[HttpGet("/issues/open/user/{userId}")]
public Task<IEnumerable<IssueView>>
    GetOpenIssues(
        Guid userId,
        [FromServices] IQuerySession session)
    => session.QueryAsync(
        new OpenIssuesByUser(userId));

With the compiled query, Marten is generating and compiling code at runtime that already "knows" exactly how to issue a SQL command to Postgresql for the query and also how to turn those results into exactly the form that the original LINQ query defined. In a system with quite a bit of traffic, that last change potentially improves performance and scalability overall by cutting out quite a bit of object allocation and CPU-bound overhead from the repetitive LINQ parsing. It's apt to think of the compiled query feature in Marten as "stored procedures for LINQ queries."

So far, you've made the database run the query more efficiently by applying a database index to the Issue.AssigneeId property you're querying against, you've eliminated some unnecessary serialization and in-memory transformation by using a LINQ Select() transform, and you've jumped to a compiled query approach that eliminates some of the overhead of LINQ parsing.

There's still one big piece of performance overhead to eliminate. In that method above, Marten is fetching the exact JSON that you need to send to the Web service client, but first deserializing that JSON to an enumerable of IssueView objects. Then ASP.NET Core turns right around and serializes those objects right back to the exact same JSON and sends that data down the HTTP response body. Fear not! Marten has a facility to bypass the unnecessary deserialize/serialize dance.

First though, you need to install the small Marten.AspNetCore NuGet:

dotnet add package Marten.AspNetCore

That NuGet is going to add a couple of extension methods, including the IQuerySession.WriteArray() method you can use to copy, byte by byte, the JSON data being queried from Postgresql right down to the HTTP response body without ever incurring any unnecessary deserialization/serialization overhead or even wasting CPU cycles on creating a JSON string object in memory.

public class OpenIssueController : ControllerBase
{
    [HttpGet("/issues/open/user/{userId}")]
    public Task GetOpenIssues(
        Guid userId,
        [FromServices] IQuerySession session)
        => session.WriteArray(
            new OpenIssuesByUser(userId),
            HttpContext);
}

Besides trying to be as performant as possible, the WriteArray() method above also takes care of HTTP niceties for you, like setting the content-type and content-length headers with the proper values.

There are far more features in Marten than what I've shown here that can help you wring out more performance in your system, but hopefully this is a good start in showing how robust Marten has become.

Why Marten?

The Marten library has been in production systems since the fall of 2016, but both it and Postgresql have advanced greatly since then. Marten provides all of the developer productivity advantages of a document database but does so without sacrificing transactional integrity. Being just a library on top of the Postgresql database, Marten can be used on all the major cloud hosting options, as well as running locally for developers inside of Docker containers or direct installations. Postgresql itself is a very cost-effective database option with wide community support.

Marten itself has a strong community on GitHub (https://github.com/jasperfx/marten) and the Gitter chat room where you can interact with other users or the Marten core team (https://gitter.im/JasperFx/marten).

Marten started as a document database library with a small, simple event store feature bolted on the side. Fast forward a few years and Marten is becoming a full-featured event sourcing solution with a document database thrown in, too. In the follow-up article to this one, I'd like to do a deep dive on Marten's "event sourcing in a box" feature set.

Jeremy D. Miller

Upsert

As the name suggests, an "upsert" operation means that the database itself determines if it needs to insert a brand-new row or update an existing row. Postgresql has a nifty syntax for doing efficient upserts that we use in Marten (www.postgresqltutorial.com/postgresql-upsert/).

Although Marten supports explicit Insert() and Update() operations, most people just choose to use the Store() upsert command for its simplicity.

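To make the sidebar's description concrete, Postgresql's upsert syntax looks roughly like the following. The table and column names here are illustrative only, not Marten's actual generated SQL:

-- Insert the row, or update it if a row with that id already exists
INSERT INTO users (id, user_name)
VALUES ('some-guid', 'jeremy')
ON CONFLICT (id)
DO UPDATE SET user_name = EXCLUDED.user_name;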


ONLINE QUICK ID 2205061

Using Cosmos DB in .NET Core Projects

If you're a .NET developer, like me, you've likely been used to storing your data as relational data even in cases when it wasn't the most logical way to store state. Changing our thinking about relational and non-relational stores has been going on for some time now. If you're building Azure hosted projects and have a need for document-based storage, Cosmos DB is a great way to gain high-availability and redundancy. In this article, I'll show you what Cosmos DB is and how you can use the SDK to store, search, and update your own documents in the cloud.

Shawn Wildermuth
shawn@wildermuth.com
wildermuth.com
@ShawnWildermuth

Shawn Wildermuth has been tinkering with computers and software since he got a Vic-20 back in the early '80s. As a Microsoft MVP since 2003, he's also involved with Microsoft as an ASP.NET Insider and ClientDev Insider. He's the author of over twenty Pluralsight courses, written eight books, an international conference speaker, and one of the Wilder Minds. You can reach him at his blog at http://wildermuth.com. He's also making his first, feature-length documentary about software developers today called "Hello World: The Film." You can see more about it at http://helloworldfilm.com.

What Is Cosmos DB?

In a world where NoSQL databases are a dime a dozen, Cosmos DB is a different beast. Although at its core, it's just a document database, Cosmos DB is a hosted data platform for solving problems of scale and availability. Ultimately, it's a document database as a service that supports low latency, high availability, and geolocation. With features like SLA-backed availability and enterprise-level security, small and large businesses can rely on the Azure deployed service.

Cosmos DB is accessible through a variety of APIs and language integration. In general, you can use the following ways to interact with Cosmos DB:

• SQL API (via libraries)
• MongoDB wrapper
• Cassandra wrapper
• Gremlin wrapper
• Table API
• Entity Framework Core provider

If you're already using MongoDB, Cassandra, or Gremlin, you can use Cosmos DB as a drop-in replacement via these APIs. Essentially, Cosmos DB has compatible APIs to support using a connection to Cosmos DB.

Cosmos DB also supports a Table API, which can be a good replacement for Azure Table Storage. If you're thinking that you came here to replace your relational database (e.g., SQL Server, Postgres, etc.), that's not really what it's about. Let's talk about NoSQL versus relational data stores first. It's easy to think about data storage as relational databases first because that's likely many developers' first experience with storing data.

Although many developers (including .NET developers) think about the world in terms of objects, relational data stores think about data in a different way. Mechanisms like Object Relational Mappers (ORM) have tried to hide this difference from developers. The basics of relational databases like schema, constraints, keys, transactions, and isolation level are often lost for the sake of quickly getting up to speed and getting projects completed. This has left many developers holding onto relational databases as their one and only way to store data.

Using Cosmos DB with .NET Core

Although Cosmos DB provides several mechanisms to connect to the service (listed above), this article focuses on accessing the service with a .NET Core project. The .NET Core Cosmos DB library supports documents to be stored in Cosmos DB.

What do I mean by documents? If you're coming to Cosmos DB from a traditional relational database, you're used to thinking about data in a two-dimensional matrix (that is, a table). Tables store data in rows made up of columns that are typically (but not always) primitives (such as strings, numbers, etc.). In order to include complex objects, tables are related to each other through foreign keys, as seen in Figure 1.

Figure 1: Typical relational model

In document stores, the data is stored as a single entity. Typically, documents are atomic, but because they can store more complex objects, the type of data you can store is more expressive. For many solutions, document databases make more sense. This isn't a matter of one model being better than the other, rather that for some situations, document databases make more sense.

It's thought that because of the object orientation of many languages we use, document databases make more sense; but that's a bad reason to use a document database. Instead, you should look at the use of the data. When you're storing something like customers and orders, relational could make more sense, as those relationships are important to enforce in the database server. Being able to reason about the kinds of data stored often makes relational stores more logical.

On the other hand, some data is reasonably atomic. Something like a stock transaction isn't something that you need to break apart and reason over. In this case, something like a document makes more sense. In some solutions, mixing the two makes a ton of sense. For example, earlier in my career, I was responsible for storing medical papers and making them searchable. For the data that was important to quickly index, I built a relational store. Under that relational store, I stored the research papers themselves as
documents so they could be retrieved as atomic objects. The benefit was that the document store wasn't tuned for writing but was tuned for reading. Because the papers rarely changed, it made more sense to marry the two.

A lot has changed in document stores since then. Now, you can not only store your documents, but you can index them for searchability and scale them out like never before. That's where Cosmos DB comes in. Let's dig in.

The Emulator

Azure's Cosmos DB is a hosted service. This service is meant to be used so you can gain from the sheer scale that Cosmos DB works with. This means scaling up to high loads as well as geographically locating your data close to where it's going to be used. That's all well and good, but for this exercise, you should use a local emulator to do your primary development. To get the emulator, visit https://aka.ms/cosmosdb-emulator and install the emulator. Once installed, it opens up a webpage to show the emulator working. If this doesn't happen, look for the icon in the system tray and pick "Open Data Explorer," as seen in Figure 2.

Currently, the emulator only works on Windows, but you can connect to it from Mac environments (see https://shawn.ink/cosmosdb-on-mac for more information).

The main Web page of the emulator shows you the connec-
tion information you can use to connect to the Cosmos DB
instance. For this article, I’ll be using the connection string,
as seen in Figure 3.

I'll come back to the Explorer tab in this UI as soon as I get into the code.

Enough setup. Let's connect to Cosmos DB with ASP.NET Core.

Getting Started
As I stated earlier, there are multiple ways to access Cosmos
DB, but for this article, I’m focusing on the Azure.Cosmos
NuGet package. The first thing is to add the package to your
project, as seen in Figure 4. Note that as of the writing of
this article, the v4 of this package is in preview, so you’ll
need to check Include prerelease to see the latest version
of the package.

Figure 2: Azure Cosmos DB emulator

Figure 3: Connection string for Cosmos DB

Figure 4: Azure.Cosmos package
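If you prefer the command line over the NuGet UI, the same package can be added with the dotnet CLI; the --prerelease switch pulls the preview build mentioned above:

dotnet add package Azure.Cosmos --prerelease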



Now that you've included the package, you'll need that connection string from Figure 3. Add it to the ConnectionStrings settings in appsettings.json in your project:

{
  "Logging": {
    "LogLevel": {
      "Default": "Information",
      "Microsoft.AspNetCore": "Warning"
    }
  },
  "AllowedHosts": "*",
  "ConnectionStrings": {
    "Cosmos": "YOUR CONNECTION STRING"
  }
}

Of course, you can store this in any way you see fit, but for my purposes, I'll include it here. You're now ready to start working with it. For this example, I'm going to use a repository pattern to provide access to the data. Start with a pretty simple class:

public class TelemetryRepository
{
    private readonly IConfiguration _config;
    private readonly
        ILogger<TelemetryRepository> _logger;
    private readonly
        IHostEnvironment _enviroment;

    public TelemetryRepository(
        IConfiguration config,
        ILogger<TelemetryRepository> logger,
        IHostEnvironment enviroment)
    {
        _config = config;
        _logger = logger;
        _enviroment = enviroment;
    }
}

I'm going to provide this by adding it to ASP.NET Core's service provider (using top-level statements):

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddScoped<TelemetryRepository>();

var app = builder.Build();

Finally, in my API call, I'll just inject it in the Minimal API (see my prior article for more information on that: https://www.codemag.com/Article/2201081/Minimal-APIs-in-.NET-6):

app.MapGet("/meter/telemetry",
    async (TelemetryRepository repo) =>
    {
        return Results.Ok(await repo.GetAll());
    });

Now you're ready to implement the calls to Cosmos. Of course, at this point you don't have any data, but let's start by connecting to and creating the database. First, you need to create an instance of the CosmosClient. This can be done by just passing in the connection string to a new instance of the client, like so:

var connString =
    _config.GetConnectionString("Cosmos");
_client = new CosmosClient(connString);

You first get the connection string from the IConfiguration object that you injected into the constructor, then just create the new client as necessary.

Creating Databases and Collections

In Cosmos DB, a database is a container for collections of data. These two terms are just part of the hierarchy, as seen in Figure 5.

Figure 5: The Cosmos DB hierarchy

Before you can store documents, you need a database and a container. You could create these in the user interface of the emulator, but I suggest you do it with code. First, you need to define a database name and a name for the container for the data:

private const string DBNAME = "TelemetryDb";
private const string CONTAINERNAME = "Telemetry";

There are two steps to setting up the data store: creating the database and the container. The database is just a named object that can contain one or more containers. So creating the database only requires a database name:

var result = await _client
    .CreateDatabaseIfNotExistsAsync(DBNAME);

Containers are used to store specific types of documents. Unlike a relational data store, the structure of the document isn't enforced except for two key pieces of information:

• A unique identifier: A primary key to uniquely ID the document, typically a string
• A partition key: A shared identifier to logically group documents

The primary key (or ID property) is merely a unique string that identifies a document. The partition key requires more consideration. A partition key allows Cosmos DB to logically group together documents that might be related. For ex-
ample, you might have a container for invoices and use the customer ID as a partition key. This partition key is used for maintaining logical groups of documents together. It's required. There's a magic to the partition key in that it speeds up queries if the different objects are in the same partition key. Although you could use any information in the document, try to use one that creates a finite number of partitions. It's easy to make the mistake of making the ID also the partition key, which then makes a partition for every document.

In this example, the document consists of a Telemetry object and related Reading objects:

public class Telemetry
{
    public string Id { get; set; } =
        Guid.NewGuid().ToString();
    public string MonitorId { get; set; } = "";
    public ICollection<Reading>? Readings
    {
        get;
        set;
    }
    public string Notes { get; set; } = "";
    public MonitorStatus MonitorStatus { get; set; }
}

public class Reading
{
    public DateTime ReadingTime { get; set; } =
        DateTime.UtcNow;
    public double WindSpeed { get; set; }
    public double Temperature { get; set; }
    public double Altitude { get; set; }
}

In this example, the MonitorId is the partition key so that when you need to get all the readings for a particular Monitor, the query should be more efficient. This doesn't prevent you from searching across partitions; it's just a hint of how you want to shard the data store. Once defined, it's mostly invisible in day-to-day development. With that knowledge, you can create the container:

await result.Database
    .CreateContainerIfNotExistsAsync(
        CONTAINERNAME,
        "/MonitorId");

With that information set, you can create a method that's called to ensure that both the database and collection exist (and only run it in development):

async Task InitializeDatabaseAsync()
{
    if (_enviroment.IsDevelopment())
    {
        var result = await _client
            .CreateDatabaseIfNotExistsAsync(DBNAME);
        await result.Database
            .CreateContainerIfNotExistsAsync(
                CONTAINERNAME,
                "/MonitorId");
    }
}

The portal to accessing data in Cosmos DB is the container object. You need to get the container. You can do this by wrapping access in a method so you can call the InitializeDatabase call to ensure that the database has been created and returning the container object:

async Task<CosmosContainer> GetContainer()
{
    await InitializeDatabaseAsync();
    return _client.GetContainer(DBNAME,
        CONTAINERNAME);
}

With all of that in place, you can move forward and start to work with data.

Creating Documents

The next step to using Cosmos DB is to start storing documents. Cosmos DB stores objects as JSON, which means that you need to think about the objects you're storing to make sure that they're a real hierarchy. The actual storage is pretty simple:

var container = await GetContainer();
var result = await container
    .CreateItemAsync(model,
        new PartitionKey(model.MonitorId));

The CreateItemAsync creates a new instance of an object in the data store. But if you add your object, you might get a failure (via an exception). You need to have a valid identifier and partition key for the new object. Calling CreateItemAsync lets you specify the partition key, but for the ID, you need to give it a little help.

In v4 of the Cosmos DB SDK, it uses the .NET Core System.Json for serialization. Cosmos DB expects that the identifier field is called id. Unfortunately, by default, System.Json serializes objects maintaining the property case (e.g., Pascal case in my example). So you need to use an annotation to change the case of the ID field:

public class Telemetry
{
    // Uses System.Json by default in v4
    [JsonPropertyName("id")]
    public string Id { get; set; } =
        Guid.NewGuid().ToString();
    public string MonitorId { get; set; } = "";
    public ICollection<Reading>? Readings
    {
        get;
        set;
    }
    public string Notes { get; set; } = "";
    public MonitorStatus MonitorStatus { get; set; }
}

With that in place, you can store the object with CreateItemAsync. To check this, you can go to the emulator's Explorer tab to look at the data in the database, as seen in Figure 6.

You now have data stored, but how do you work with it? Let's look at how to read documents next.



Figure 6: The Explorer tab in the Emulator

Reading Documents

You can read individual items out of Cosmos DB, but you'll need both the identifier and the partition key:

var response = await container
    .ReadItemAsync<Telemetry>(
        id,
        new PartitionKey(monitorId));

if (response.GetRawResponse().Status == 200)
{
    return response.Value;
}

Although this is possible, usually you'll be querying for documents instead. The Cosmos DB SDK supports its SQL API. So you can do the same with a simple query:

SELECT *
FROM c
WHERE c.id =
    "17704354-fdb1-4303-bcc4-6f041bae5710"

This syntax is pretty close to standard SQL that you might be familiar with in relational databases, although you'll notice that the "FROM" points at an unnamed object. The main stored documents don't have a name, so you just alias it with a "c" (or other name) in this example. You can test out the query in the emulator page, as seen in Figure 7.

Figure 7: Testing queries in the Emulator

To use queries in the Cosmos DB SDK, you can create a QueryDefinition by using the SQL text. Notice that in this example, I'm using a parameter (Cosmos DB, just like any other SQL, shouldn't use concatenated strings; please parameterize your queries):

var sql = $"SELECT * FROM c WHERE c.id = @id";

var query = new QueryDefinition(sql)
    .WithParameter("@id", id);

Once you have the query definition, you can create an iterator to allow you to execute the query:

var container = await GetContainer();

var iterator = container
    .GetItemQueryIterator<Telemetry>(query);

This assumes that you'll have more than one result, but because the query is for a single result, you can just get an enumerator and get the first object:

var enumerator = iterator.GetAsyncEnumerator();

if (await enumerator.MoveNextAsync())
{
    return enumerator.Current;
}

The enumerator.Current will be that first element (if the MoveNextAsync succeeds). If your query will return multiple results, you can use an async enumerator, like so:

var sql = @"SELECT VALUE c
    FROM c
    WHERE c.MonitorId = @monitorId";

var query = new QueryDefinition(sql)
    .WithParameter("@monitorId", monitorId);

var results = new List<Telemetry>();

var iterator = container
    .GetItemQueryIterator<Telemetry>(query);
await foreach (Telemetry result in iterator)
{
    results.Add(result);
}

return results;

In this case, you're using the await foreach to walk through the results and add them to a list to return. Lastly, you can query against the complete object using a dot-syntax and JOINs, like this query:

SELECT VALUE c
FROM c
JOIN r IN c.Readings
WHERE r.WindSpeed > 10

In this case, you're using a JOIN to execute the where clause against the collection of readings, but still just returning the entire object by using VALUE c. The VALUE keyword tells Cosmos DB's SDK that you want the entire value, not just part of it. You can still use projection in SELECT as well, but I think these examples will get you started with most use-cases.

Updating Documents

Now that you can read and create documents, you're likely not surprised that you need to be able to update documents as well. Although Cosmos DB supports partial updates (through a Patch mechanism that you can read about here: https://shawnl.ink/cosmos-db-partialupdate), often, what update really means is to replace the old document. The API for this is straightforward, as it takes the object to be saved and the ID of the old document:

public async Task<Telemetry>
    Update(Telemetry model)
{
    var container = await GetContainer();
    var result = await
        container.ReplaceItemAsync(model, model.Id);
    return result.Value;
}

There is also support for Upsert, which creates the object if it doesn't exist and replaces it if it does, like so:

var result = await
    container.UpsertItemAsync(model,
        new PartitionKey(model.MonitorId));

Unless you genuinely don't know whether an item already exists, use create and replace to be specific about the operation that you expect.

Deleting Documents

Finally, you can also delete documents with a similar call to updating and creating documents:

var result = await
    container.DeleteItemAsync<Telemetry>(item.Id,
        new PartitionKey(item.MonitorId));

This removes the object completely from the data store.

What About Entity Framework?

I've been talking about the Cosmos DB SQL SDK, but there's also support for an Entity Framework Core provider for Cosmos DB. Although this is a valid way of connecting with Cosmos DB, mapping your documents to entities requires some proficiency with the Entity Framework mappings. If you're already familiar and comfortable with Entity Framework Core, this might be the simplest path to using Cosmos DB. In my very informal survey of Cosmos DB users (a Twitter poll), the majority of users seemed to use the Cosmos DB SDK, and a smaller set of users used the Entity Framework provider. Either method is usable and competent.

Where Are We?

Cosmos DB represents a different way of thinking about data in modern .NET applications. Treating your data as documents is a use-case that you need to consider when you architect solutions. When you need high availability and redundancy, using Cosmos DB is a smart strategy for many Azure-based projects. Hopefully this article has shown you how to use it and that it's fairly straightforward and easy to use.

Shawn Wildermuth

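As a brief footnote to the Entity Framework option discussed above: configuring the EF Core Cosmos provider might look roughly like the following. This is my illustration under stated assumptions (the Microsoft.EntityFrameworkCore.Cosmos package and a recent EF Core version), not code from the article:

using Microsoft.EntityFrameworkCore;

public class TelemetryContext : DbContext
{
    public DbSet<Telemetry> Telemetry => Set<Telemetry>();

    protected override void OnConfiguring(
        DbContextOptionsBuilder options)
        // Same connection string and database name as the SDK samples
        => options.UseCosmos(
            "YOUR CONNECTION STRING",
            databaseName: "TelemetryDb");

    protected override void OnModelCreating(
        ModelBuilder modelBuilder)
        // Map the entity to its container and partition key
        => modelBuilder.Entity<Telemetry>()
            .ToContainer("Telemetry")
            .HasPartitionKey(x => x.MonitorId);
}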


ONLINE QUICK ID 2205071

Building MVC Applications in PHP Laravel: Part 1

This article is part one of a new series on building MVC (Model View Controller) applications in PHP Laravel. In this first part, I'll start by looking at what MVC means and its concepts. Then, I'll tackle the M in MVC by explaining how PHP Laravel implements the Model in MVC applications.

Bilal Haidar
bhaidar@gmail.com
https://www.bhaidar.dev
@bhaidar

Bilal Haidar is an accomplished author, Microsoft MVP of 10 years, ASP.NET Insider, and has been writing for CODE Magazine since 2007.

With 15 years of extensive experience in Web development, Bilal is an expert in providing enterprise Web solutions.

He works at Consolidated Contractors Company in Athens, Greece as a full-stack senior developer.

Bilal offers technical consultancy for a variety of technologies including Nest JS, Angular, Vue JS, JavaScript and TypeScript.

MVC in a Nutshell

Model-View-Controller (MVC) is both a design pattern and an architecture pattern. It's usually treated as more of an architectural pattern because it shapes the structure of the application as a whole, whereas design patterns are limited to solving a specific technical problem.

MVC divides an application into three major logical sections:

• Model
• View
• Controller

The Model component governs and controls the application database(s). It's the only component in MVC that can interact with the database, execute queries, retrieve, update, delete, and create data. Not only that, but it's also responsible for guaranteeing the evolution of the database structure from one stage to another by maintaining a set of database migrations. The Model responds to instructions coming from the Controller to perform certain actions in the database.

The View component generates and renders the user interface (UI) of the application. It's made up of HTML/CSS and possibly JavaScript. It receives the data from the Controller, which has received the data from the Model. It merges the data with the HTML structure to generate the UI.

The Controller component acts as a mediator between the View and Model components. It receives a request from the client for a specific View, it coordinates with the Model component to query for data (or update data), then it decides which View component to return, and finally, it packages the View together with the related data into a single response.

One component often overlooked or perceived as part of the Controller is the Routing Engine. It's the brains behind the MVC pattern and one of the most important components in MVC: it initially receives the request from the client and decides which Controller is going to handle the request.

Figure 1 shows all of the components, together with their relationships, that make up the MVC pattern.

I can explain Figure 1 as follows:

• The browser (client) requests a page (view).
• The router engine, living inside the application, receives the request.
• The router engine runs an algorithm to pick a single Controller to handle the request.
• The Controller decides on the View to return and communicates with the Model to retrieve/store any data and sends a response back to the browser.
• The Model communicates with the database, as needed.
• The View renders the page (view) requested by the browser.

Now that you know how MVC works, let's see how PHP Laravel implements MVC.

Figure 1: MVC Architecture Diagram
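To make that request flow concrete before diving in, here's a minimal, hypothetical sketch of the three pieces in Laravel terms; the route, controller, and view names are illustrative only:

// routes/web.php: the routing engine maps the URL to a controller
use App\Http\Controllers\PostController;

Route::get('/posts', [PostController::class, 'index']);

// app/Http/Controllers/PostController.php: the controller asks the
// Model for data and picks the View to render
class PostController extends Controller
{
    public function index()
    {
        $posts = \App\Models\Post::all(); // Model: query the database

        return view('posts.index', ['posts' => $posts]); // View + data
    }
}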



How PHP Laravel Implements MVC

Laravel is a PHP-based Web framework that's fully based on the MVC architecture and much more. The goal is to get started building PHP projects easily using modern tools and techniques.

To understand how PHP Laravel implements MVC, I'll go through a typical Laravel project structure and show you how the Laravel team bakes the MVC concepts into the framework.

Let's get started by creating a new PHP Laravel project locally. To avoid repetition, I'll point you to a recent article I published in CODE Magazine Nov/Dec 2021, where I show you a step-by-step guide on creating a Laravel application. You can follow this article here: https://www.codemag.com/Article/2111071/Beginner%E2%80%99s-Guide-to-Deploying-PHP-Laravel-on-the-Google-Cloud-Platform.

The latest official version of Laravel is v9.x.

Model

The first component is the Model or M of the MVC. The Model plays the main role of allowing the application to communicate with the back-end database. Laravel includes Eloquent. Eloquent is an object-relational mapper (ORM) that makes it easy to communicate with the back-end database.

In Laravel, you create a Model class for each and every database table. The Model class allows you to interact with the database table to create, update, query, and delete data in the database.

By default, when you create a new Laravel application, it includes a user model. This model maps to the Users database table. It stores all user records in the application.

By default, Laravel creates the user model class. The User model represents a single user record in the application.

Let's create your first Model class to represent a database table of Posts. You'll create a Laravel Migration as well (see https://laravel.com/docs/9.x/migrations for the documentation).

A migration file gives you the chance to declaratively decide what columns the database table should have. Laravel uses this migration file to create/update the database table. By default, Laravel comes with a few migration files to create and configure user database tables.

You'll also be creating a Model Factory class. You'll use this to easily create model records in the database. It's helpful when you want to seed some initial data into the database tables. Also, you'll rely heavily on factories when writing unit or feature tests.

Create a Model

Run the following command to create a Model, Migration, and Model Factory for the Posts database table:

sail artisan make:model Post -mf

Or

php artisan make:model Post -mf

Laravel creates three files at this stage:

• The \app\Models\Post.php Model class
• An anonymous migration file for the Posts table located under the \database\migrations\ directory
• The \database\factories\PostFactory.php Factory class

The command generates the Post.php model file:

namespace App\Models;

use Illuminate\Database\Eloquent\Factories\HasFactory;
use Illuminate\Database\Eloquent\Model;

class Post extends Model
{
    use HasFactory;
}

Notice how the Post class extends the base class Model. This is how the Post class inherits all methods and properties from the base class and allows your application to interact with the database table posts. Also, the Post class uses the HasFactory trait. This is needed to link the Post and PostFactory classes.

Traits in PHP are one way to reuse and share a single chunk of code. You can read more about PHP traits here: https://www.php.net/manual/en/language.oop5.traits.php

You can read more about the Laravel Eloquent Model here (https://laravel.com/docs/9.x/eloquent#generating-model-classes).

Configure Model Migration

Let's have a look at the migration file that the command created. Listing 1 shows the entire source code for this migration.

Laravel uses the down() method when rolling back a migration(s), and it uses the up() method when running the migration and setting up the database structure.

When you run this migration, Laravel creates a Posts database table with three columns:

• ID (unsigned big integer)
• created_at (timestamp)
• updated_at (timestamp)

Let's add a few columns to the Posts database table. Adjust the up() method to look similar to the one in Listing 2.


Listing 1: The Post Model migration file

use Illuminate\Database\Migrations\Migration;
use Illuminate\Database\Schema\Blueprint;
use Illuminate\Support\Facades\Schema;

return new class extends Migration
{
    /**
     * Run the migrations.
     *
     * @return void
     */
    public function up()
    {
        Schema::create('posts',
            function (Blueprint $table) {
                $table->id();
                $table->timestamps();
            }
        );
    }

    /**
     * Reverse the migrations.
     *
     * @return void
     */
    public function down()
    {
        Schema::dropIfExists('posts');
    }
};

Listing 2: Adjusted up() method

public function up()
{
    Schema::create('posts',
        static function (Blueprint $table) {
            $table->id();
            $table->string('slug');
            $table->string('title');
            $table->string('body');
            $table->unsignedBigInteger('user_id');
            $table->date('published_at')
                ->useCurrent();
            $table->timestamps();

            $table->foreign('user_id')
                ->references('id')
                ->on('users');
        }
    );
}

I've added the following columns:

• Slug: The slug of the blog post. A slug is a user-friendly and URL-valid name of a Post. You can read more about it here (https://is.gd/D9qEao).
• Title: The title of the blog post.
• Body: The blog post content.
• user_id: The user who created this blog post.
• published_at: A timestamp column referring to the date the blog post is published on. By default, it gets the date the post was created on.
• Timestamps: Adds two new columns on the database table named created_at and updated_at.

Finally, I make the user_id column a foreign key referring to the User ID column in the Users table.

You can read more about Eloquent Migrations here (https://laravel.com/docs/9.x/migrations).

Model Relationships

One of the remarkable features of a relational database is to connect database tables together through database relationships. There are a handful of relationships such as one-to-many, many-to-many, and others. Here's a detailed guide explaining all types of relationships in a relational database (https://condor.depaul.edu/gandrus/240IT/accesspages/relationships.htm).

The Eloquent ORM offers you the same experience that any relational database offers. With Eloquent, you can define relationships and link models to each other.

In the Create Model Migration section, you created a migration file to create the Posts database table. That table had the user_id as a foreign key. It's this field that creates a one-to-many relationship between the User and Post models. Every user can have one or more Posts.

To define a relationship in Laravel Eloquent, two steps are required:

• At the Migration level, add all necessary fields that are used to link the database tables together. This is done already in this example.
• Define a relationship function at the model level. This is the topic of this section.

Locate and open the Post.php file and add the following relationship:

public function user():
    \Illuminate\Database\Eloquent\Relations\BelongsTo
{
    return $this->belongsTo(User::class);
}

The function user() represents the relationship between the two models. A Post belongsTo a User.

The inverse of this relationship goes inside the User.php file. Let's add the following relationship:

public function posts():
    \Illuminate\Database\Eloquent\Relations\HasMany
{
    return $this->hasMany(Post::class);
}

A User hasMany Posts.

I'll keep it at this level with regard to defining and exploring relationships in Laravel Eloquent. I advise you to check the official documentation to learn all about Eloquent Relationships (https://laravel.com/docs/9.x/eloquent-relationships#one-to-many).
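Once both sides are defined, the relationships read naturally in application code. Here's a quick, hypothetical usage sketch (the IDs are illustrative only):

// Traverse from a Post to its owning User
$post = Post::with('user')->find(1);
$author = $post->user;

// Traverse from a User to all of their Posts
$user = User::find(1);
$titles = $user->posts->pluck('title');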


Mass Assignment

Mass assignment (https://laravel.com/docs/9.x/eloquent#mass-assignment) is a concept in the Laravel framework that allows you to create a new Model object in one statement instead of multiple statements.

Instead of creating a new Post object as follows:

$post = new Post();
$post->title = 'My First Post';
$post->body = 'Body of the blog post ...';
$post->slug = 'my-first-post';
$post->user_id = 1;

You can create the same object using mass assignment:

$post = App\Models\Post::create([
    'title' => 'My First Post',
    'body' => 'Body of the blog post ...',
    'slug' => 'my-first-post',
    'user_id' => 1
]);

To enable mass assignment, you need to use one of the two methods listed here:

• Specify the allowed columns that can be used inside the create() method. You can define the $fillable property on the Post Model class and include the allowed columns to fill with the mass assignment.

protected $fillable = [
    'title',
    'body',
    'slug',
    'user_id',
    'published_at'
];

• Specify the columns that are not allowed to be used with the mass assignment. You can define the $guarded property on the Post Model class and include the not-allowed columns to fill with the mass assignment.

protected $guarded = [
    'slug',
    'user_id'
];

I prefer setting the allowed columns to know exactly what columns I'm setting via the mass assignment. I prefer the $fillable property.

You can read more about mass assignment here (https://laravel.com/docs/9.x/eloquent#mass-assignment).

Model Factories

A model factory in Laravel allows you to have fake models. You can extensively use this feature:

• At the initial phases of the app while you're still building it, without having enough UIs to cover all features and aspects of the app. You use model factories to generate dummy data to use and display on the screens while building them. In Laravel, use Database Seeders to generate dummy data for all the models in the application. This way, you can populate all screens with dummy data while still developing them. You can read more about Database Seeders here (https://laravel.com/docs/9.x/seeding).
• When writing Unit and Feature tests for your application, you will depend solely on model factories to generate test data.

When you created the Post model, you appended the -f flag to the artisan command. This flag instructs the command to generate an empty Factory class corresponding to the model being created.

Listing 3 shows the entire source code for the Post Factory class.

Listing 3: PostFactory class

class PostFactory extends Factory
{
    /**
     * Define the model's default state.
     *
     * @return array<string, mixed>
     */
    public function definition()
    {
        return [
            //
        ];
    }
}

The factory class is empty, and you need to fill it with some fake data. To do so, locate the definition() method and construct a valid Post model. Listing 4 shows the source code for constructing a fake Post model.

Listing 4: Post model factory

public function definition()
{
    $title = $this->faker->realText(50);

    return array(
        'title' => $title,
        'body' => $this->faker->text(100),
        'slug' => Str::slug($title),
        'user_id' => User::factory()->create()->id,
        'published_at' => Carbon::now(),
    );
}

The Factory class defines the $faker property. Laravel makes use of the FakerPHP Faker package (https://github.com/FakerPHP/Faker).

The definition() method constructs and returns a fake instance of the Post model using the $faker property to generate things like the Post title, body, and others.

In addition, the user_id field is assigned to the ID of a newly created User object.

You can do crazy stuff with Model Factories. Check the full documentation for more information here (https://laravel.com/docs/9.x/database-testing#generating-factories).

Configure App with SQLite

Let's run the migration you have and set up the back-end database. Before running the migration, you need to configure the database connection string.

Laravel can work with multiple database engines including, and not limited to, MySQL, Microsoft SQL Server, PostgreSQL, and SQLite.

For now, let's use SQLite to get started (https://www.sqlite.org/index.html). You can switch anytime.



Figure 2: Post database table structure

ADVERTISERS INDEX

CODE Consulting, www.codemag.com/code, 7
CODE Consulting, www.codemag.com/onehourconsulting, 75
CODE Legacy Modernize, www.codemag.com/modernize, 49
CODE Legacy Beach, www.codemag.com/modernize, 76
Component Source, www.componentsource.com/compare, 65
DevIntersection, www.devintersection.com, 2
dtSearch, www.dtSearch.com, 11
Knowbility, www.knowbility.org/AIR, 43
LEAD Technologies, www.leadtools.com, 5
Live on Maui, www.live-on-maui.com, 55

Advertising Sales: Tammy Ferguson, 832-717-4445 ext 26, tammy@codemag.com

This listing is provided as a courtesy to our readers and advertisers. The publisher assumes no responsibility for errors or omissions.

To get started, open the .env environment file and remove the following section:

DB_CONNECTION=mysql
DB_HOST=mysql
DB_PORT=3306
DB_DATABASE=laravel_mvc_app
DB_USERNAME=sail
DB_PASSWORD=password

Add the following section instead:

DB_CONNECTION=sqlite
DB_DATABASE=/var/www/html/database/database.sqlite

The reason for the /var/www/html path is that you're using Laravel Sail (https://laravel.com/docs/9.x/sail).

Now, inside the /database folder at the root of the application, create a new empty file named database.sqlite.

And that's it!

Eloquent Migrations

Switch to the terminal and run the following command to migrate the database:

sail artisan migrate

Or

php artisan migrate

The command runs all of the migration files and creates the necessary database tables and objects.

In this case, two tables in the database are created: Users and Posts. Figure 2 shows the Posts table structure.

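If you ever need to undo that last batch of migrations, Laravel also provides a rollback command that invokes each migration's down() method; a usage sketch:

sail artisan migrate:rollback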


Now that you've created the tables in the database, you can test creating and retrieving Posts' data using Laravel Tinker (https://laravel.com/docs/9.x/artisan#tinker).

Laravel Tinker

Laravel Tinker is a powerful REPL for the Laravel framework, powered by the PsySH package. This package comes with every Laravel application, so there's no need to install anything additional. Tinker allows you to run Laravel code inside a CLI (command line interface).

To connect to Laravel Tinker, run the following command:

sail artisan tinker

Or

php artisan tinker

Figure 3 shows the tinker up and running.

Figure 3: Tinker running

Let's create a User and Post model.

Run the following command to create five users:

User::factory()->count(5)->create()

You're using the User Factory (that ships with any new Laravel application) to create five User records using fake data. Figure 4 shows the result of running the command inside Tinker.

Figure 4: Creating User records inside Tinker

The statement runs and displays the results of creating the five User records.

Let's create a new Post record, again using the Post Factory.



Figure 5: Creating a Post record inside Tinker

Run the following command:

Post::factory()->create()

Figure 5 shows the result of running the command inside Tinker.

Inside Tinker, you can run any Eloquent statement that you'd usually run inside a Controller, as you'll see soon.

For that, let's try to query for all Post records in the database using the following Eloquent query:

Post::query()
    ->with('user')
    ->where('id', '>', 1)
    ->get()

The query retrieves all Post records with an ID > 1. It also makes use of another Eloquent feature, eager loading, to load the related User record and not only the user_id column.

Figure 6 shows the query results inside Tinker.

Notice that not only the user_id is returned but also another property named user that contains the entire user record. You can learn more about the powerful Eloquent eager loading here (https://laravel.com/docs/9.x/eloquent-relationships#eager-loading).

That's all for the Artisan Tinker for now!

Casting Columns

Eloquent has powerful and hidden gems that are baked into the framework. For instance, by default, Eloquent converts the timestamps columns created_at and updated_at to instances of Carbon (https://carbon.nesbot.com/docs/).

Laravel defines a property named $dates on the base Model class that specifies which columns should be handled as dates.

protected $dates = [
    'created_at',
    'updated_at',
    'deleted_at'
];

You can extend this property by adding more columns to your Model.

However, Eloquent offers a more generic way of defining such conversions. It allows you to define the $casts property on your Models and decide on the conversion. For example, here, you're defining a new cast to date:

protected $casts = [
    'published_at' => 'date'
];

This works great! You can read more about casts in Laravel here (https://laravel.com/docs/9.x/eloquent-mutators#attribute-casting).

I tend to enjoy the flexibility and more control that accessors and mutators in Laravel give me. Let's have a look.

Let's define an accessor and mutator, in the Post.php, to store the published_at column in a specific date format and retrieve it in the same or some other date format:

public function publishedAt(): Attribute
{
    return Attribute::make(
        get: static fn ($value) =>
            Carbon::parse($value)?->format('Y-m-d'),
        set: static fn ($value) =>
            Carbon::parse($value)?->format('Y-m-d')
    );
}

This is the new format for writing accessors and mutators in Laravel 9. You define a new function using the camelCase version of the original column name. This function should return an Attribute instance.

An accessor and mutator can define only the accessor, only the mutator, or both. An accessor is defined by the get() function and the mutator is defined by the set() function.

The Attribute::make() function takes two parameters: the get() and set() functions. You can pass one of them, or both of them, depending on the use case.


Figure 6: Query results inside Tinker

The Get function is called the accessor (https://laravel.com/docs/9.x/eloquent-mutators#accessors-and-mutators). You use this function to decide what the value of this column will look like when retrieved and accessed.

The Set function is called the mutator (https://laravel.com/docs/9.x/eloquent-mutators#accessors-and-mutators). You can use this function to do your conversion logic before Laravel saves this model.

Laravel makes use of Carbon PHP for dates everywhere in the framework source code.

In this case, you're parsing the published_at field to a Carbon instance and then formatting it as YYYY-MM-DD. Eventually, that's how it will be stored in the database without the Time factor of the Date field. Similarly, when the value is retrieved, it will maintain its format. In this case, the get() accessor is redundant. I use it when I want to display the field in a different format than the one stored in the database. For the sake of this demonstration, I use both to let you know that both accessors and mutators exist in Laravel.

That was a brief overview of the models in Laravel MVC. You can see that the Eloquent ORM is big and powerful.

Conclusion
Laravel not only supports MVC architecture, but it also adds many productivity tools and concepts that make Web development in Laravel a breeze!

This is just the beginning of a detailed series of articles covering Web development with PHP Laravel. Now that you know the M of MVC, next time you will learn about the C and V! Stay tuned to discover more with PHP Laravel.

 Bilal Haidar






ONLINE QUICK ID 2205081

Implementing Face Recognition Using Deep Learning and Support Vector Machines
One of the most exciting features of artificial intelligence (AI) is undoubtedly face recognition.
Research in face recognition started as early as in the 1960s, when early pioneers in the field
measured the distances of the various “landmarks” of the face, such as eyes, mouth, and nose, and

then computed the various distances in order to determine a person's identity. The work of early researchers was hampered by the limitations of the technology of the day. It wasn't until the late 1980s that we saw the potential of face recognition as a business need. And today, due to the technological advances in computing power, face recognition is gaining popularity and can be performed easily, even from mobile devices.

How exactly does face recognition work, and how can you make use of it using a language that you already know? In this article, I'll walk you through some applications that you can build to perform face recognition. Most interesting of all, you can use the applications that I'll demonstrate to recognize your own friends and family members.

Wei-Meng Lee
weimenglee@learn2develop.net
http://www.learn2develop.net
@weimenglee

Wei-Meng Lee is a technologist and founder of Developer Learning Solutions (www.learn2develop.net), a technology company specializing in hands-on training on the latest technologies. Wei-Meng has many years of training experience and his training courses place special emphasis on the learning-by-doing approach. His hands-on approach to learning programming makes understanding the subject much easier than reading books, tutorials, and documentation. His name regularly appears in online and print publications such as DevX.com, MobiForge.com, and CODE Magazine.

Buckle-up, and get ready for some real action!

Like all my articles, this article is heavily hands-on, so be sure to buckle-up, and get ready for some real action! For this article, I'm going to assume that you are familiar with Python, and understand the basics of machine learning and deep learning. If you need a refresher on these two topics, be sure to refer to my earlier articles in CODE Magazine:

• Implementing Machine Learning Using Python and Scikit-learn, CODE Magazine, November/December 2017. https://www.codemag.com/Article/1711091/Implementing-Machine-Learning-Using-Python-and-Scikit-learn
• Introduction to Deep Learning, CODE Magazine, March/April 2020. https://www.codemag.com/Article/2003071/Introduction-to-Deep-Learning

Face Detection
Before I discuss face recognition, it's important to discuss another related technique: face detection. As the name implies, face detection is a technique that identifies human faces in a digital image. Face detection is a relatively mature technology—remember back in the good old days of your digital camera when you looked through the viewfinder? You saw rectangles surrounding the faces of the people in the viewfinder.

For face detection, one of the most famous algorithms is known as the Viola-Jones Face Detection technique, commonly known as Haar cascades. Haar cascades date from long before deep learning was popular and are one of the most commonly used techniques for detecting faces.

How Do Haar Cascades Work?
A Haar cascade classifier is used to detect the object for which it has been trained. If you're interested in a detailed explanation of the mathematics behind how Haar cascade classifiers work, check out the paper by Paul Viola and Michael Jones at https://www.cs.cmu.edu/~efros/courses/LBMV07/Papers/viola-cvpr-01.pdf.

Although I won't go into the details of how a Haar classifier works, here is a high-level overview:

• First, a set of positive images (images of faces) and a set of negative images (images without faces) are used to train the classifier.
• You then extract the features from the images. Figure 1 shows some of the features that are extracted from images containing faces.
• To detect faces from an image, you look for the presence of the various features that are usually found on human faces (see Figure 2), such as the eyebrow, where the region above the eyebrow is lighter than the region below it.
• When an image contains a combination of all these features, it is deemed to contain a face.

Figure 1: Edges in a Haar cascade that detects various features in an image

Figure 2: Detecting the various features of a face

If you're interested in a visualization of how a face is detected, check out the following videos on YouTube:

• https://www.youtube.com/watch?v=hPCTwxF0qf4&t (see Figure 3)
• https://www.youtube.com/watch?v=F5rysk51txQ

Fortunately, without needing to know how Haar cascades work, OpenCV can perform face detection out of the box using a pre-trained Haar cascade, along with other Haar cascades for recognizing other objects. The list of predefined Haar cascades is available on GitHub at https://github.com/opencv/opencv/tree/master/data/haarcascades.

For face detection, you'll need the haarcascade_frontalface_default.xml file that you can download from the GitHub link in the previous paragraph.

Detecting Faces Using Webcam
Now that you have a basic idea of how face detection works using Haar cascades, let's write a Python program to turn on a webcam and then try to detect the face in it. I'll be using Anaconda.
Figure 3: This video provides a visual approach to understanding how Haar cascades work

Figure 4: Detecting faces using the webcam

First, install OpenCV using the following command at the Anaconda Prompt (or Terminal if you are using a Mac):

$ pip install opencv-python

Next, create a file named face_detection.py and populate it with the code shown in Listing 1. Then, download the haarcascade_frontalface_default.xml file and save it into the same directory as the face_detection.py file.

Listing 1: The content of the face_detection.py file

import cv2

# for face detection
face_cascade = \
    cv2.CascadeClassifier(
        'haarcascade_frontalface_default.xml')

# resolution of the webcam
screen_width = 1280   # try 640 if code fails
screen_height = 720

# default webcam
stream = cv2.VideoCapture(0)

while(True):
    # capture frame-by-frame
    (grabbed, frame) = stream.read()
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)

    # try to detect faces in the webcam
    faces = face_cascade.detectMultiScale(
        rgb, scaleFactor=1.3, minNeighbors=5)

    # for each face found
    for (x, y, w, h) in faces:
        # Draw a rectangle around the face
        color = (0, 255, 255)   # in BGR
        stroke = 5
        cv2.rectangle(frame, (x, y), (x + w, y + h),
                      color, stroke)

    # show the frame
    cv2.imshow("Image", frame)
    key = cv2.waitKey(1) & 0xFF
    if key == ord("q"):   # Press q to break out
        break             # of the loop

# cleanup
stream.release()
cv2.waitKey(1)
cv2.destroyAllWindows()
cv2.waitKey(1)

To run the program, type the following command in Anaconda Prompt:

$ python face_detection.py

Figure 4 shows that when the program detects a face, it draws a rectangle around it. If it detects multiple faces, multiple rectangles are shown.

Techniques to Recognize Faces
Now that you've learned how to detect faces, you are ready to tackle the bigger challenge of recognizing faces! Face detection is a technique for detecting faces in an image and face recognition is a technique for recognizing faces in an image. Compared to face detection, face recognition is a much more complicated process and is an area of much interest to researchers, who are always looking to improve the accuracy of the recognition.

In this article, I'll discuss two techniques that you can generally use for face recognition:

• Deep learning using convolutional neural networks (CNN)
• Machine learning using support vector machines (SVM)

Deep Learning—Convolutional Neural Network (CNN)
In deep learning, a convolutional neural network (CNN) is a special type of neural network that is designed to process data through multiple layers of arrays. A CNN is well-suited for applications like image recognition and is often used in face recognition software.

In CNN, convolutional layers are the fundamental building blocks that make all the magic happen. In a typical image recognition application, a convolutional layer is made up of several filters to detect the various features of the image. Understanding how this works is best illustrated with an analogy.

Suppose you saw someone walking toward you from a distance. From afar, your eyes will try to detect the edges of the figure, and you try to differentiate that figure from other objects, such as buildings or cars. As the person walks closer toward you, you try to focus on the outline of the person, trying to deduce if the person is male or female, slim or fat, and so on. As the person gets nearer, your focus shifts toward other features of that person, such as his facial features and whether he is wearing glasses. In general, your focus shifts from broad features to specific features.

Likewise, in a CNN, you have several layers containing various filters (or kernels, as they are commonly called) in charge of detecting specific features of the target you're trying to detect. The early layers try to focus on broad features, while the later layers try to detect very specific features. In a CNN, the values for the various filters in each
convolutional layer are obtained by training on a particular training set. At the end of the training, you have a unique set of filter values that are used for detecting specific features in the dataset. Using this set of filter values, you apply them on new images so that you can make a prediction about what is contained within the image.

Figure 5: A typical Convolutional Neural Network (CNN) architecture

Figure 5 shows a typical CNN network. The first few convolutional layers (conv1 to conv4) detect the various features (from abstract to specific) in an image (such as edges, lines, etc.). The final few layers (the fully connected layers and the final softmax/logistic layer) are used to classify the result (such as that the image contains a face belonging to person A, B, or C).
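The paragraph above maps directly onto code. Here's a minimal sketch (not the model used in this article) of a small Keras CNN with that shape: convolutional layers for feature extraction, followed by a fully connected layer and a softmax classifier for three people:

from tensorflow.keras import layers, models

model = models.Sequential([
    # early layers detect broad features (edges, lines)
    layers.Conv2D(32, (3, 3), activation='relu',
                  input_shape=(224, 224, 3)),
    layers.MaxPooling2D((2, 2)),
    # later layers detect more specific features
    layers.Conv2D(64, (3, 3), activation='relu'),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    # fully connected layer, then softmax classification
    layers.Dense(128, activation='relu'),
    layers.Dense(3, activation='softmax')   # person A, B, or C
])
model.summary()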

Figure 6: The 16 layers in the VGG16 Convolutional Neural Network (figure from https://www.geeksforgeeks.org/vgg-16-cnn-model/)

A Pooling layer in CNN is used to reduce the size of the representations and to speed up calculations, as well as to make some of the features it detects a bit more robust.

Using VGGFace for Face Recognition
VGGFace refers to a series of models developed for face recognition. It was developed by the Visual Geometry Group (hence its VGG name) at the University of Oxford. The models were trained on a dataset comprised mainly of celebrities, public figures, actors, and politicians. Their names were extracted from the Internet Movie Database (IMDB) celebrity list based on their gender, popularity, pose, illumination, ethnicity, and profession (actors, athletes, politicians). The images of these names were fetched from Google Image Search, and multiple images for each name were downloaded, vetted by humans, and then labelled for training.

There are two versions of VGGFace:

• VGGFace: Developed in 2015, trained on 2.6 million images, a total of 2622 people
• VGGFace2: Developed in 2017, trained on 3.31 million images, a total of 9131 people

The original VGGFace uses the VGG16 model, which is a convolutional neural network with 16 layers (see Figure 6). VGGFace2 uses a much larger dataset, and two models have been trained using:

• ResNet-50
• SqueezeNet-ResNet-50 (also known as SENet)

ResNet-50 is a 50-layer Residual Network with 26M parameters. This residual network is a deep convolutional neural network that was introduced by Microsoft in 2015.

To know more about ResNet-50, go to https://viso.ai/deep-learning/resnet-residual-neural-network/.

SENet is a smaller network developed by researchers at DeepScale, University of California at Berkeley, and Stanford University. The goal of SENet was to create a smaller neural network that can easily fit into computer memory and be easily transmitted over a computer network.

Let's now try out how VGGFace works and see if it can accurately recognize some of the faces that we throw at it. For this, you'll make use of the Keras implementation of VGGFace located at https://github.com/rcmalli/keras-vggface.

For this example, I'll use Jupyter Notebook. First, you need to install the keras_vggface and keras_applications modules:

!pip install keras_vggface
!pip install keras_applications

The keras_applications module provides model definitions and pre-trained weights for a number of popular architectures, such as VGG16, ResNet50, Xception, MobileNet, and more.

To use VGGFace (based on the VGG16 CNN model), you can specify the vgg16 argument for the model parameter:

from keras_vggface.vggface import VGGFace

model = VGGFace(model='vgg16')
# same as the following
model = VGGFace() # vgg16 as default

To use VGGFace2 with the ResNet-50 model, you can specify the resnet50 argument:

model = VGGFace(model='resnet50')

To use VGGFace2 with the SENet model, specify the senet50 argument:

model = VGGFace(model='senet50')

For the example here, I'm going to use the SENet model. When you run the above code snippets, the weights for the trained model will be downloaded and stored in the ~/.keras/models/vggface folder. Here's the size (in bytes) of the weights downloaded for each model:

165439116 rcmalli_vggface_tf_resnet50.h5
175688524 rcmalli_vggface_tf_senet50.h5
580070376 rcmalli_vggface_tf_vgg16.h5

As you can observe, the VGG16 weights file is the largest at 580 MB and the ResNet50 file is the smallest at 165 MB.

You're now ready to test the model and see if it can recognize a face that it was trained to recognize. The first face that I want to try is Matthias Sammer (see Figure 7), a German former professional football player and coach who last worked as sporting director in Bayern Munich.

Figure 7: Image of Matthias Sammer (source: https://en.wikipedia.org/wiki/Matthias_Sammer#/media/File:Matthias_Sammer_2722.jpg)

Download a copy of his headshot (Matthias_Sammer.jpg) and put it in the same folder as your Jupyter Notebook. The following code snippet will load the image, resize it to 224x224 pixels, convert the image into a NumPy array, and then use the model to make the prediction:

import numpy as np
from keras.preprocessing import image
from keras_vggface.vggface import VGGFace
from keras_vggface import utils

# load the image
img = image.load_img(
    './Matthias_Sammer.jpg',
    target_size=(224, 224))

# prepare the image
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = utils.preprocess_input(x, version=1)

# perform prediction
preds = model.predict(x)
print('Predicted:',
    utils.decode_predictions(preds))

The result of the prediction is as follows:

Predicted:
[[["b' Matthias_Sammer'", 0.9927065],
["b' Bj\\xc3\\xb6rn_Ferry'", 0.0011530316],
["b' Stone_Cold_Steve_Austin'", 0.00084367086],
["b' Florent_Balmont'", 0.00058827153],
["b' Tim_Boetsch'", 0.0003584346]]]

From the result, you can see that the probability of the image containing Matthias Sammer's face is 0.9927065 (the highest probability).

Using Transfer Learning to Recognize Custom Faces
The previous section showed how you can use VGGFace2 to recognize some of the pretrained faces. Although this is cool, it isn't very exciting. A more interesting way to use VGGFace2 is to use it to recognize the faces that you want. For example, let's say that you want to use it to build an attendance system to recognize students in a class. To do that, you make use of a technique called transfer learning.

Transfer learning is a machine learning method where a model developed for a task is reused as the starting point for a model on a second task. Transfer learning reduces the amount of time that you need to spend on training.

Recall that, in general, CNN models for image classification can be divided into two parts:

• Feature extraction: The aim of this part is to find the features in the image.
• Classification: The aim of this part is to use the various features extracted in the previous part and classify the image to the desired classes. For example, if it sees eyes, nose, and eyebrows, it can tell it's a human face belonging to a specific person.

Figure 8: In transfer learning, you just need to retrain the model for the classification part



When you do transfer learning, you can retain the feature extraction part of a trained model and only retrain the classifier part (see Figure 8).

Preprocessing the Training Images
To use transfer learning to train a model to recognize faces, you need to first obtain images of the people. For this example, the model will be trained to recognize:

• Barack Obama
• Donald Trump
• Tom Cruise

In the folder where you saved your Jupyter Notebook files, create a folder named Headshots and create sub-folders within it using the names of the people you want to train (see Figure 9). Within each sub-folder you have images of the people. Figure 10 shows some of the images in the folders.

The images of each person can be of any size, and you should aim for at least 10 images in each folder.

Figure 9: Folders containing images of the people you want to recognize

Figure 10: Some of the images in the folders

Once the images are prepared and saved in the respective folders, you need to perform some preprocessing on the images before you can use them for training. You need to extract the faces from the images so that only the faces are used for training. The steps are as follows:

1. Iterate through all the images in the folders and extract the face in the image using OpenCV's Haar cascade.
2. Resize the extracted face to the size required by VGGFace16: 224x224 pixels.
3. Replace the original image with the extracted face.

Listing 2 shows the code for preprocessing the images.

Figure 11 shows the face detected in each image and the updated images. It's possible that in some images there will be multiple faces detected. In the event that there is no face or when there are multiple faces detected, the image is discarded.

Figure 11: Detecting faces in the image and updating the images with the detected faces

Listing 2: Preprocessing the images used for training

import cv2
import os
import pickle
import numpy as np
from PIL import Image
import matplotlib.pyplot as plt

headshots_folder_name = 'Headshots'

# dimension of images
image_width = 224
image_height = 224

# for detecting faces
facecascade = cv2.CascadeClassifier(
    'haarcascade_frontalface_default.xml')

# set the directory containing the images
images_dir = os.path.join(".", headshots_folder_name)

current_id = 0
label_ids = {}

# iterates through all the files in each subdirectory
for root, _, files in os.walk(images_dir):
    for file in files:
        if file.endswith("png") or \
           file.endswith("jpg") or \
           file.endswith("jpeg"):
            # path of the image
            path = os.path.join(root, file)

            # get the label name (name of the person)
            label = os.path.basename(root).replace(
                " ", ".").lower()

            # add the label (key) and its number (value)
            if not label in label_ids:
                label_ids[label] = current_id
                current_id += 1

            # load the image
            imgtest = cv2.imread(path, cv2.IMREAD_COLOR)
            image_array = np.array(imgtest, "uint8")

            # get the faces detected in the image
            faces = facecascade.detectMultiScale(imgtest,
                scaleFactor=1.1, minNeighbors=5)

            # if not exactly 1 face is detected,
            # skip this photo
            if len(faces) != 1:
                print(f'---Photo skipped---\n')
                # remove the original image
                os.remove(path)
                continue

            # save the detected face(s) and associate
            # them with the label
            for (x_, y_, w, h) in faces:
                # draw the face detected
                face_detect = cv2.rectangle(imgtest,
                    (x_, y_), (x_+w, y_+h), (255, 0, 255), 2)
                plt.imshow(face_detect)
                plt.show()

                # resize the detected face to 224x224
                size = (image_width, image_height)

                # detected face region
                roi = image_array[y_: y_ + h, x_: x_ + w]

                # resize the detected head to target size
                resized_image = cv2.resize(roi, size)
                image_array = np.array(resized_image, "uint8")

                # remove the original image
                os.remove(path)

                # replace the image with only the face
                im = Image.fromarray(image_array)
                im.save(path)

Importing the Libraries
You're now ready to train a model using the faces that you extracted in the previous section. First, let's import the various modules that you need:

import os
import pandas as pd
import numpy as np
import tensorflow.keras as keras
import matplotlib.pyplot as plt

from tensorflow.keras.layers import Dense, GlobalAveragePooling2D
from tensorflow.keras.preprocessing import image
from tensorflow.keras.applications.mobilenet import preprocess_input
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adam

Augmenting the Training Images
You use the ImageDataGenerator class to augment the images that you have for training. The ImageDataGenerator class applies different transformations to your images (so that one single image can be transformed into different images, which is useful when you have a very limited number of images for training) when you are training the model.

train_datagen = ImageDataGenerator(
    preprocessing_function=preprocess_input)

train_generator = \
    train_datagen.flow_from_directory(
        './Headshots',
        target_size=(224,224),
        color_mode='rgb',
        batch_size=32,
        class_mode='categorical',
        shuffle=True)

The flow_from_directory() function from the ImageDataGenerator instance generates a tf.data.Dataset from image files in the specified directory. Using the result, you can find out how many classes of images you have:

train_generator.class_indices.values()
# dict_values([0, 1, 2])
NO_CLASSES = \
    len(train_generator.class_indices.values())

Open Source Library for Machine Perception
Open Source Computer Vision (OpenCV) is an open source computer vision and machine learning software library originally developed by Intel. It was built to provide a common infrastructure for computer vision applications and to accelerate the use of machine perception in commercial products.

OpenCV ships with several pre-trained Haar cascades that can detect eyes, faces, Russian car plates, smiles, and more.

The ImageDataGenerator class performs image augmentation, a very useful technique when you don't have enough training data to train your model.

Building the Model
Next, you're ready to build the model. First, load the VGGFace16 model:

from keras_vggface.vggface import VGGFace

base_model = VGGFace(include_top=True,
    model='vgg16',
    input_shape=(224, 224, 3))
base_model.summary()

print(len(base_model.layers))
# 26 layers in the original VGG-Face

The model is shown in Listing 3. As you can see, there are 26 layers in the VGGFace16 model altogether. I have bolded the last seven layers. These seven layers represent the three fully connected output layers used to recognize faces.

Listing 3: The various layers in VGGFace16

Model: "vggface_vgg16"
_________________________________________________________________
Layer (type)                 Output Shape               Param #
=================================================================
input_1 (InputLayer)         [(None, 224, 224, 3)]      0
conv1_1 (Conv2D)             (None, 224, 224, 64)       1792
conv1_2 (Conv2D)             (None, 224, 224, 64)       36928
pool1 (MaxPooling2D)         (None, 112, 112, 64)       0
conv2_1 (Conv2D)             (None, 112, 112, 128)      73856
conv2_2 (Conv2D)             (None, 112, 112, 128)      147584
pool2 (MaxPooling2D)         (None, 56, 56, 128)        0
conv3_1 (Conv2D)             (None, 56, 56, 256)        295168
conv3_2 (Conv2D)             (None, 56, 56, 256)        590080
conv3_3 (Conv2D)             (None, 56, 56, 256)        590080
pool3 (MaxPooling2D)         (None, 28, 28, 256)        0
conv4_1 (Conv2D)             (None, 28, 28, 512)        1180160
conv4_2 (Conv2D)             (None, 28, 28, 512)        2359808
conv4_3 (Conv2D)             (None, 28, 28, 512)        2359808
pool4 (MaxPooling2D)         (None, 14, 14, 512)        0
conv5_1 (Conv2D)             (None, 14, 14, 512)        2359808
conv5_2 (Conv2D)             (None, 14, 14, 512)        2359808
conv5_3 (Conv2D)             (None, 14, 14, 512)        2359808
pool5 (MaxPooling2D)         (None, 7, 7, 512)          0
flatten (Flatten)            (None, 25088)              0
fc6 (Dense)                  (None, 4096)               102764544
fc6/relu (Activation)        (None, 4096)               0
fc7 (Dense)                  (None, 4096)               16781312
fc7/relu (Activation)        (None, 4096)               0
fc8 (Dense)                  (None, 2622)               10742334
fc8/softmax (Activation)     (None, 2622)               0
=================================================================
Total params: 145,002,878
Trainable params: 145,002,878
Non-trainable params: 0
_________________________________________________________________

base_model = VGGFace(include_top=False,
    model='vgg16',
    input_shape=(224, 224, 3))
base_model.summary()

print(len(base_model.layers))
# 19 layers after excluding the last few layers

If you run the above code snippet, you'll see that the output now only has 19 layers. The last seven layers (representing the three fully connected output layers) are no longer loaded.

Let's now add the custom layers so that the model can recognize the faces in your own training images:

x = base_model.output
x = GlobalAveragePooling2D()(x)

x = Dense(1024, activation='relu')(x)
x = Dense(1024, activation='relu')(x)
x = Dense(512, activation='relu')(x)

# final layer with softmax activation
preds = Dense(NO_CLASSES,
    activation='softmax')(x)

Listing 4: The VGGFace16 now includes the additional layers that you have added to it

Model: "model"
_________________________________________________________________
Layer (type)                 Output Shape               Param #
=================================================================
...
conv5_3 (Conv2D)             (None, 14, 14, 512)        2359808
pool5 (MaxPooling2D)         (None, 7, 7, 512)          0
global_average_pooling2d (Gl (None, 512)                0
dense (Dense)                (None, 1024)               525312
dense_1 (Dense)              (None, 1024)               1049600
dense_2 (Dense)              (None, 512)                524800
dense_3 (Dense)              (None, 3)                  1539
=================================================================
Total params: 16,815,939
Trainable params: 16,815,939
Non-trainable params: 0
_________________________________________________________________

Figure 12: Sample test faces
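One step is assumed between defining preds and working with model below: the base model and the new layers must be combined into a single Keras Model (using the Model class imported earlier). A minimal sketch of that assumed step:

# assumed step: stitch the VGGFace base and the new layers together
model = Model(inputs=base_model.input, outputs=preds)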

Listing 4 shows the additional layers that you've just added.

Finally, set the first 19 layers to be non-trainable and the rest of the layers to be trainable:

# don't train the first 19 layers - 0..18
for layer in model.layers[:19]:
    layer.trainable = False

# train the rest of the layers - 19 onwards
for layer in model.layers[19:]:
    layer.trainable = True

Because the first 19 layers were already trained by the VGGFace16 model, you only need to train the new layers that you've added to the model. Essentially, the new layers that you've added will be trained to recognize your own images.

Compiling and Training the Model
You can now compile the model using the Adam optimizer and the categorical cross-entropy loss function:

model.compile(optimizer='Adam',
    loss='categorical_crossentropy',
    metrics=['accuracy'])

Finally, train the model using the following arguments:

model.fit(train_generator,
    batch_size = 1,
    verbose = 1,
    epochs = 20)

Saving the Model
Once the model is trained, it's important to save it to disk first. If not, you must train the model again every time you want to recognize a face:

# creates a HDF5 file
model.save(
    'transfer_learning_trained' +
    '_face_cnn_model.h5')

To verify that the model is saved correctly, delete it from memory and load it from disk again:

from tensorflow.keras.models import load_model

# deletes the existing model
del model

# returns a compiled model identical to the previous one
model = load_model(
    'transfer_learning_trained' +
    '_face_cnn_model.h5')

Saving the Training Labels
Using the ImageDataGenerator instance, you can generate a mapping of the index corresponding to each person's name:

import pickle

class_dictionary = \
    train_generator.class_indices
class_dictionary = {
    value:key for key, value in
    class_dictionary.items()
}
print(class_dictionary)

The above code prints out the following:

{
    0: 'Barack Obama',
    1: 'Donald Trump',
    2: 'Tom Cruise'
}

This dictionary is needed so that later on when you perform a prediction, you can use the result returned by the model (which is an integer and not the person's name) to get the person's name.

Save the dictionary object using Pickle:

# save the class dictionary to pickle
face_label_filename = 'face-labels.pickle'
with open(face_label_filename, 'wb') as f:
    pickle.dump(class_dictionary, f)

Testing the Trained Model
In the folder where you saved your Jupyter Notebook files, create a folder named facetest and add samples of images containing faces of the people you want to recognize. Figure 12 shows some of the images in the folders.

Import the modules and load the labels for the various faces:

import cv2
import os
import pickle
import numpy as np

from PIL import Image
import matplotlib.pyplot as plt
from keras.preprocessing import image
from keras_vggface import utils

# dimension of images
image_width = 224
image_height = 224

# load the training labels
face_label_filename = 'face-labels.pickle'
with open(face_label_filename, "rb") as f:
    class_dictionary = pickle.load(f)

class_list = [value for _, value in
    class_dictionary.items()]
print(class_list)

Listing 5: Predicting the faces

# for detecting faces
facecascade = \
    cv2.CascadeClassifier(
        'haarcascade_frontalface_default.xml')

for i in range(1,30):
    test_image_filename = f'./facetest/face{i}.jpg'

    # load the image
    imgtest = cv2.imread(test_image_filename,
        cv2.IMREAD_COLOR)
    image_array = np.array(imgtest, "uint8")

    # get the faces detected in the image
    faces = facecascade.detectMultiScale(imgtest,
        scaleFactor=1.3, minNeighbors=5)

    # if not exactly 1 face is detected, skip this photo
    if len(faces) != 1:
        print(f'---We need exactly 1 face; photo skipped---')
        print()
        continue

    for (x_, y_, w, h) in faces:
        # draw the face detected
        face_detect = cv2.rectangle(
            imgtest, (x_, y_), (x_+w, y_+h), (255, 0, 255), 2)
        plt.imshow(face_detect)
        plt.show()

        # resize the detected face to 224x224
        size = (image_width, image_height)
        roi = image_array[y_: y_ + h, x_: x_ + w]
        resized_image = cv2.resize(roi, size)

        # prepare the image for prediction
        x = image.img_to_array(resized_image)
        x = np.expand_dims(x, axis=0)
        x = utils.preprocess_input(x, version=1)

        # making prediction
        predicted_prob = model.predict(x)
        print(predicted_prob)
        print(predicted_prob[0].argmax())
        print("Predicted face: " +
            class_list[predicted_prob[0].argmax()])
        print("============================\n")

Figure 13: Results of the prediction using VGGFace16
The loaded face label is a dictionary containing the mapping of integer values to the names of the people that you have trained. The above code snippet converted that dictionary into a list that looks like this:

['Barack Obama', 'Donald Trump', 'Tom Cruise']

Later on, after the prediction, you'll make use of this list to obtain the name of the predicted face.

Listing 5 shows how to iterate through all the images in the facetest folder and send the image to the model for prediction. Figure 13 shows some of the results.

Figure 14: Using the webcam to recognize faces

Figure 15: SVM aims for maximum separability between classes of objects

Listing 6: Recognizing the face in real-time

from PIL import Image
import numpy as np
import cv2
import pickle
from tensorflow.keras.models import load_model

# for face detection
face_cascade = \
    cv2.CascadeClassifier(
        'haarcascade_frontalface_default.xml')

# resolution of the webcam
screen_width = 1280   # try 640 if code fails
screen_height = 720

# size of the image to predict
image_width = 224
image_height = 224

# load the trained model
model = load_model(
    'transfer_learning_trained_face_cnn_model.h5')

# the labels for the trained model
with open("face-labels.pickle", 'rb') as f:
    og_labels = pickle.load(f)
    labels = {key:value for key,value in og_labels.items()}
    print(labels)

# default webcam
stream = cv2.VideoCapture(0)

while(True):
    # Capture frame-by-frame
    (grabbed, frame) = stream.read()
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)

    # try to detect faces in the webcam
    faces = face_cascade.detectMultiScale(
        rgb, scaleFactor=1.3, minNeighbors=5)

    # for each face found
    for (x, y, w, h) in faces:
        roi_rgb = rgb[y:y+h, x:x+w]

        # Draw a rectangle around the face
        color = (255, 0, 0)   # in BGR
        stroke = 2
        cv2.rectangle(frame, (x, y), (x + w, y + h),
            color, stroke)

        # resize the image
        size = (image_width, image_height)
        resized_image = cv2.resize(roi_rgb, size)
        image_array = np.array(resized_image, "uint8")
        img = image_array.reshape(
            1,image_width,image_height,3)
        img = img.astype('float32')
        img /= 255

        # predict the image
        predicted_prob = model.predict(img)

        # Display the label
        font = cv2.FONT_HERSHEY_SIMPLEX
        name = labels[predicted_prob[0].argmax()]
        color = (255, 0, 255)
        stroke = 2
        cv2.putText(frame, f'({name})', (x,y-8),
            font, 1, color, stroke, cv2.LINE_AA)

    # Show the frame
    cv2.imshow("Image", frame)
    key = cv2.waitKey(1) & 0xFF
    if key == ord("q"):   # Press q to break
        break             # out of the loop

# Cleanup
stream.release()
cv2.waitKey(1)
cv2.destroyAllWindows()
cv2.waitKey(1)
# for each faces found

Face Recognition Using Webcam
With the model trained to recognize faces belonging to Obama, Trump, and Cruise, it would be fun to be able to recognize their faces using the webcam. Listing 6 shows how you can use the webcam to perform the prediction in real-time.

Figure 14 shows the application recognizing the face of Tom Cruise.

Support Vector Machines (SVM)
Another way to perform face recognition is to use support vector machines (SVM). SVMs are a set of supervised learning methods that can be used for classification. SVM finds the boundary that separates classes by as wide a margin as possible. The main idea behind SVM is to draw a line between two or more classes in the best possible manner (see Figure 15).

SVM aims for the maximum separability between classes.

Once the line is drawn to separate the classes, you can then use it to predict future data. For example, given the snout length and ear geometry of a new unknown animal, you can now use the dividing line as a classifier to predict whether the animal is a dog or a cat.

For the following sections, let's use SVM to perform face recognition based on the set of test images that you used in the previous section.
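To make the dividing-line idea concrete before applying it to faces, here's a minimal toy sketch (not part of the article's face recognition code) using scikit-learn's SVC on made-up snout-length and ear-geometry measurements:

from sklearn import svm

# made-up measurements: [snout_length_cm, ear_pointiness]
X = [[9.5, 0.2], [8.7, 0.3], [10.2, 0.1],   # dogs
     [4.1, 0.8], [3.8, 0.9], [4.5, 0.7]]    # cats
y = ['dog', 'dog', 'dog', 'cat', 'cat', 'cat']

# find the boundary with the widest margin between the two classes
clf = svm.SVC(kernel='linear')
clf.fit(X, y)

# classify a new, unknown animal
print(clf.predict([[5.0, 0.6]]))   # e.g., ['cat']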

Figure 16: The dataframe containing the faces

Listing 7: Loading all the face images into a Pandas DataFrame

import os
import imageio
import matplotlib.pyplot as plt

# imread, resize, pickle, pd, and np come from the imports
# in the "Importing the Packages" section
new_image_size = (150,150,3)

# set the directory containing the images
images_dir = './Headshots'

current_id = 0

# for storing the foldername: label
label_ids = {}

# for storing the images data and labels
images = []
labels = []

for root, _, files in os.walk(images_dir):
    for file in files:
        if file.endswith(('png','jpg','jpeg')):
            # path of the image
            path = os.path.join(root, file)

            # get the label name
            label = os.path.basename(root).replace(
                " ", ".").lower()

            # add the label (key) and its number (value)
            if not label in label_ids:
                label_ids[label] = current_id
                current_id += 1

            # save the target value
            labels.append(current_id-1)

            # load the image, resize and flatten it
            image = imread(path)
            image = resize(image, new_image_size)
            images.append(image.flatten())

            # show the image
            plt.imshow(image, cmap=plt.cm.gray_r,
                interpolation='nearest')
            plt.show()

print(label_ids)

# save the labels for each face as a list
categories = list(label_ids.keys())

pickle.dump(categories,
    open("faces_labels.pk", "wb" ))

df = pd.DataFrame(np.array(images))
df['Target'] = np.array(labels)

df
Listing 8: Testing the model

import cv2
import pickle

# loading the model and label
model = pickle.load(open('faces_model.pickle','rb'))
categories = pickle.load(open('faces_labels.pk', 'rb'))

# for detecting faces
facecascade = cv2.CascadeClassifier(
    'haarcascade_frontalface_default.xml')

for i in range(1,40):
    test_image_filename = f'./facetest/face{i}.jpg'

    # load the image
    imgtest = cv2.imread(test_image_filename,
        cv2.IMREAD_COLOR)
    image_array = np.array(imgtest, "uint8")

    # get the faces detected in the image
    faces = facecascade.detectMultiScale(imgtest,
        scaleFactor = 1.1, minNeighbors = 5)

    # if not exactly 1 face is detected, skip this photo
    if len(faces) != 1:
        print(f'---We need exactly 1 face; photo skipped---\n')
        continue

    for (x_, y_, w, h) in faces:
        # draw the face detected
        face_detect = cv2.rectangle(
            imgtest, (x_, y_), (x_+w, y_+h),
            (255, 0, 255), 2)
        plt.imshow(face_detect)
        plt.show()

        # resize and flatten the face data
        roi = image_array[y_: y_ + h, x_: x_ + w]
        img_resize = resize(roi, new_image_size)
        flattened_image = [img_resize.flatten()]

        # predict the probability
        probability = \
            model.predict_proba(flattened_image)

        for i, val in enumerate(categories):
            print(f'{val}={probability[0][i] * 100}%')
        print(f"{categories[model.predict(flattened_image)[0]]}")

Importing the Packages
First, install the scikit-image module:

!pip install scikit-image

Then, import all the modules that you need:

import os
import pandas as pd
import numpy as np

from sklearn import svm
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import train_test_split

from sklearn.metrics import classification_report, accuracy_score, confusion_matrix

import matplotlib.pyplot as plt
from skimage.transform import resize
from skimage.io import imread

import pickle

Loading the Training Images
Because all the images in the Headshots folder have been preprocessed (with the faces extracted), you can go ahead and load the faces and save them into a Pandas DataFrame (see Listing 7).

The dataframe will look like Figure 16. Each image is loaded and then resized to a dimension of 150x150, with a color depth of three channels (RGB). Each image is flattened and then saved—each pixel is saved as a column in the dataframe (hence there are 150x150x3 columns in the dataframe representing each image). The last column in the dataframe specifies who the face belongs to.

Splitting the Dataset for Training and Testing
Once the images are loaded into the dataframe, you need to split the dataframe into a training and testing set:

x = df.iloc[:,:-1]
y = df.iloc[:,-1]

x_train, x_test, y_train, y_test = \
    train_test_split(x, y,
        test_size = 0.10, # 10% for test
        random_state=77,
        stratify = y)

Fine Tuning the Best Model Using GridSearchCV
Before you go ahead and start training your model using SVM, you need to fine-tune the hyperparameters of your model so that your model works well with the dataset you have. To do so, you can use the GridSearchCV function in the sklearn module.

GridSearchCV is a function that is in sklearn's model_selection package. It allows you to specify the different values for each hyperparameter and try out all the possible combinations when fitting your model. It does the training and testing using cross validation of your dataset, hence the acronym CV in GridSearchCV. The end result of GridSearchCV is a set of hyperparameters that best fit your data according to the scoring metric that you want your model to optimize on.

For this example, specify the various values for the C, gamma, and kernel hyperparameters:

# trying out the various parameters
param_grid = {
    'C' : [0.1, 1, 10, 100],
    'gamma' : [0.0001, 0.001, 0.1, 1],
    'kernel' : ['rbf', 'poly']
}

Figure 17: Some of the positive results using the trained SVM model

svc = svm.SVC(probability=True)
print("Starting training, please wait ...")

# exhaustive search over specified hyper parameter
# values for an estimator
model = GridSearchCV(svc, param_grid)
model.fit(x_train, y_train)

# print the parameters for the best performing model
print(model.best_params_)

After a while, you should see something like the following (based on my dataset of images):

{'C': 10, 'gamma': 0.0001, 'kernel': 'rbf'}

At this moment, you have also trained a model using the above parameters.

Finding the Accuracy
With the model trained, you want to use the test set to see how your model performs:

y_pred = model.predict(x_test)
print(f"The model is {accuracy_score(y_pred,y_test) * 100}% accurate")

For my dataset, I get the following result:

The model is 85.71428571428571% accurate

I will now save the trained model onto disk:

pickle.dump(model,
    open('faces_model.pickle','wb'))

Making Predictions
Finally, let's use the set of test faces to see how well the model performs (see Listing 8).

Figure 17 shows some of the positive results.

However, there are also some misses (see Figure 18).

Figure 18: A negative prediction using the trained SVM model

Summary
I've attempted to cover a few huge topics in the span of a single article. One topic that I briefly touched on was the convolutional neural network (CNN), which, by itself, is a topic that requires an entire book to cover in detail. Nevertheless, I hope you now have a good idea of how to perform face recognition using the various libraries covered in this article. There are some other techniques and tools that you can also use, but I will leave them for another day. Have fun!

 Wei-Meng Lee

ONLINE QUICK ID 2205091

Distributed Caching in ASP.NET 6 Using Redis in Azure
Caching is a technique that can be used to store relatively stale data for faster retrieval when needed by the application. There are two approaches to caching data in ASP.NET 6: the in-memory cache and the distributed cache. This article provides a deep dive on caching, why it's important, distributed and in-memory caches, and how to work with Azure Cache for Redis in ASP.NET Core 6.

Joydip Kanjilal
joydipkanjilal@yahoo.com

Joydip Kanjilal is an MVP (2007-2012), software architect, author, and speaker with more than 20 years of experience. He has more than 16 years of experience in Microsoft .NET and its related technologies. Joydip has authored eight books, more than 500 articles, and has reviewed more than a dozen books.

Prerequisites
If you're to work with the code examples discussed in this article, you need the following installed in your system:

• Visual Studio 2022
• .NET 6.0
• ASP.NET 6.0 Runtime
• An Azure subscription. If you don't have one, you can get it from here: https://azure.microsoft.com/en-us/free/

If you don't already have Visual Studio 2022 installed in your computer, you can download it from here: https://visualstudio.microsoft.com/downloads/.

Scalability and Elasticity
An application's scalability is its ability to handle increased transaction loads without slowing down. In other words, it's the capacity to continue operating at the same speed even when a new workload has been introduced. A scalable application is adept at adapting to increasing demands, such as an increased number of concurrent users and transactions per second, over time. One of the primary advantages of the microservices architecture is the ability to scale, i.e., the ability to withstand an increase in network traffic and other resource needs over time.

The terms "scalability" and "elasticity" might seem similar, but they are not the same. Although both refer to boosting the application's capacity to withstand workload, there are subtle distinctions. Scalability refers to the system's ability to handle increasing demands simply by adding resources, either by making hardware stronger (scale up) or adding extra nodes (scale out). Elasticity is the capacity to fit the resources required to deal with demands dynamically.

Although scalability can help accommodate a static increase in workload, elasticity can handle dynamic changes in resource requirements. Elasticity is the ability to dynamically grow or shrink the infrastructure resources, i.e., increase or decrease computer processing, memory, and storage resources on demand. This can help you acquire resources when you need them and relinquish them when they're no longer required.

What Is Caching?
A cache is a component (either software or hardware) that stores data, usually for a short duration, to meet future demands for that data. Depending on whether the data searched for in the cache is available, a cache hit or a cache miss might occur. A cache hit refers to a situation when the requested data is available in the cache and a cache miss occurs when the data is not available in the cache. An application can leverage the benefits of caching if there are many more cache hits than cache misses.

Caching can dramatically increase an application's performance and scalability by minimizing resource consumption and the effort needed to generate content. Caching is a good choice when your data is relatively stable, i.e., it works best with data that rarely changes. ASP.NET Core supports several caches, such as in-memory caches and distributed caches. The IMemoryCache is the most basic cache and resides in your Web server's memory.

Unlike other caching strategies where your cache data resides on an individual Web server, a distributed cache is shared by several application servers, often managed independently of the application servers that use it. A distributed cache may provide a greater scale-out than an in-memory cache. Moreover, it can significantly improve the performance, scalability, and responsiveness of an ASP.NET Core application.

Take advantage of distributed caching using Redis in Azure for building high performance, scalable, and reliable applications.

How Does It Work?
First, an application attempts to read data from the cache. If the requested data is unavailable in the cache, the application obtains it from the actual data source. The data is then returned and cached for future requests for the same piece of data. All subsequent requests for the same piece of data are served from the cache instead of the actual data source. Because data usually resides in memory, this enhances the application's performance and scalability. If the database is unavailable, requests for the data are served from the cache, thus enhancing the application's availability.

Distributed Caching
When your cached data is distributed, the data is consistent across server restarts and application deployments. The IDistributedCache interface pertaining to the Microsoft.Extensions.Caching.Distributed namespace represents a distributed cache. To manipulate the data stored in the distributed cache, you can use the following methods:

• Get or GetAsync: To retrieve items from the distributed cache based on its key



• Set or SetAsync: To store items into the distributed cache
• Refresh or RefreshAsync: To refresh an item in the distributed cache
• Remove or RemoveAsync: To delete an item from the distributed cache based on its key

The following types extend this interface:

• Microsoft.Extensions.Caching.Distributed.MemoryDistributedCache
• Microsoft.Extensions.Caching.Redis.RedisCache
• Microsoft.Extensions.Caching.SqlServer.SqlServerCache
• Microsoft.Extensions.Caching.StackExchangeRedis.RedisCache
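To see how these methods support the cache-aside flow described earlier, here's a minimal sketch (not this article's sample application; GetForecastFromDbAsync is a hypothetical data-access helper):

using Microsoft.Extensions.Caching.Distributed;

public class ForecastService
{
    private readonly IDistributedCache _cache;

    public ForecastService(IDistributedCache cache) => _cache = cache;

    public async Task<string> GetForecastAsync(string city)
    {
        // cache hit: serve the data from the distributed cache
        var cached = await _cache.GetStringAsync(city);
        if (cached is not null)
            return cached;

        // cache miss: read from the actual data source, then cache it
        var forecast = await GetForecastFromDbAsync(city);
        await _cache.SetStringAsync(city, forecast,
            new DistributedCacheEntryOptions
            {
                AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(5)
            });
        return forecast;
    }

    // hypothetical helper that queries the actual data source
    private Task<string> GetForecastFromDbAsync(string city) =>
        Task.FromResult($"Forecast for {city}");
}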

What Is Redis and Why Should I Use It?


Redis is an open-source, high performance, in-memory data
store available for commercial and non-commercial use to
store and retrieve data in your applications. Redis supports
several data structures such as hashes, lists, sets, sorted
sets, bitmaps, etc. Used primarily as a database, cache, or
message broker, you’ll notice only negligible performance
overhead when reading or writing data using Redis. Redis is an excellent choice if your application requires a large amount of data to be stored and retrieved, and memory availability is not an issue.

Figure 1: Redis Cache in Azure
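As a quick taste of the data structures mentioned above, here's a minimal sketch using the StackExchange.Redis client (installed later in this article); the endpoint "localhost:6379" is a placeholder for your own Redis server:

using StackExchange.Redis;

var redis = ConnectionMultiplexer.Connect("localhost:6379");
IDatabase db = redis.GetDatabase();

// strings
db.StringSet("greeting", "Hello, Redis!");
Console.WriteLine(db.StringGet("greeting"));

// hashes
db.HashSet("user:1", new HashEntry[] {
    new("name", "Alice"), new("role", "admin") });
Console.WriteLine(db.HashGet("user:1", "name"));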

Comparing Managed Redis Services on AWS, Azure, and GCP
All three cloud giants provide Redis Cache services: Amazon, Azure, and Google Cloud.

Amazon ElastiCache is a caching solution in the cloud that works with both Memcached and Redis. ElastiCache helps you optimize application performance by allowing you to access data from fast, controlled in-memory caches rather than slower disk-based databases.

Azure Redis Cache is a fully managed, open-source in-memory data storage solution that works with Azure database services like Cosmos DB. It's a cost-effective way of increasing your data layer's read and write throughput. With the help of the cache-aside approach, you can store and disseminate database queries, session states, static content, and so on.

The Redis service provided by Google Cloud Platform (GCP) is called Cloud Memorystore. Although you can export and import Redis RDB data between your servers and GCP, native backup options are not supported by Cloud Memorystore.

Figure 2: The Web Server pushes relatively stale data to Redis Cache

Set Up Redis Cache in Azure
Azure Cache for Redis is a secure in-memory cache for data storage and retrieval. It's fully managed, and you can use it to build high-performance applications that have scalable architectures. You can take advantage of Redis Cache in Azure to handle massive volumes of requests per second, as illustrated in Figure 1.

You can use it to build cloud or hybrid deployments to manage enormous volumes of requests per second. In this section, I'll examine how to set up Redis Cache in Azure. Figure 2 shows a Web server retrieving data from the database and then pushing the data (usually relatively stale data is stored in the cache) to Redis Cache.

Follow the steps outlined below to create a new Redis Cache resource in Azure:

1. Sign in to the Azure portal. If you don't have an account, you can create one for free (the link is in the Prerequisites section).
2. Click Create a resource to create your Azure Redis resource.
3. Click on Databases and then select Azure Cache for Redis.

Figure 3 illustrates creating a new resource.

4. In the New Redis Cache page, specify the subscription plan, the resource group (you can select an existing one or select one from the dropdown list), the DNS name, your server location for using Redis, and the cache type.



Figure 3: Create a new resource in Azure

Figure 4: Create a new Redis Cache resource in Azure

Refer to Figure 4 to see the items from Step 4.

5. Next, choose the Network Connectivity option to be used.
6. In the Advanced tab, select the Redis version to be used.
7. Finally, click Review + Create to create the Redis resource in Azure.

Figure 5 illustrates specifying the configuration details.

Configure Redis Cache Connection String
Now that you've created your Azure Redis Cache resource, the next step is to configure it. You should configure the newly created resource to specify the connection string, which is needed by any application that connects to your Azure Redis Cache resource. Now follow the steps outlined below to connect to your Azure Redis Cache resource:

1. On the home page of the Azure portal, click on Resource groups.
2. Once the Resource groups page is displayed, select the resource group that is associated with the Azure Redis Cache resource you've just created.

Figure 6 illustrates the resource group for your Redis Cache resource.



Figure 5: Specify the configuration details for your Redis Cache and create it.

1. Click on the Azure Cache for Redis instance.
2. Select Access keys under Settings and copy the primary or secondary connection string from there.

Figure 7 shows you how to specify access keys.

In the next section, I'll examine how to use this connection string to connect to your Azure Redis Cache instance from ASP.NET 6 applications.

Programming Redis Cache in ASP.NET Core 6
In this section, you'll implement a simple application that takes advantage of the Redis cache in Azure to cache relatively stale data. You'll be using ASP.NET 6 in the Visual Studio 2022 IDE.

Create a New ASP.NET 6 Project in Visual Studio 2022
Let's start building the producer application first. You can create a project in Visual Studio 2022 in several ways. When you launch Visual Studio 2022, you'll see the Start window. You can choose Continue without code to launch the main screen of the Visual Studio 2022 IDE.

To create a new ASP.NET 6 project in Visual Studio 2022:

1. Start the Visual Studio 2022 Preview IDE.
2. In the Create a new project window, select ASP.NET Core Web API, and click Next to move on.
3. Specify the project name as AzureRedisCacheDemo and the path where it should be created in the Configure your new project window.
4. If you want the solution file and project to be created in the same directory, you can optionally check the Place solution and project in the same directory checkbox. Click Next to move on.
5. In the next screen, specify the target framework and authentication type as well. Ensure that the Configure for HTTPS, Enable Docker Support, and Enable OpenAPI support checkboxes are unchecked because you won't use any of these in this example.
6. Click Create to complete the process.

You'll use this application in the subsequent sections of this article.

Install NuGet Package(s)
So far so good. The next step is to install the necessary NuGet package(s). To install the required packages into your project, right-click on the solution and then select Manage NuGet Packages for Solution.... Now search for the two packages named Microsoft.Extensions.Caching.StackExchangeRedis and StackExchange.Redis in the search box and install them one at a time. Alternatively, you can type the commands shown below at the NuGet Package Manager Command Prompt:

PM> Install-Package Microsoft.Extensions.
Caching.StackExchangeRedis
PM> Install-Package StackExchange.Redis

Configure the Redis Cache Instance
You can use the following code snippet to specify the Redis connection string in the Program class.

services.AddStackExchangeRedisCache(option =>
{
    option.Configuration =
        Configuration.GetConnectionString
        ("Your_RedisCache_Connection_String");
    option.InstanceName = "master";
});

Note how the AddStackExchangeRedisCache service is registered and the Configuration property is assigned the Azure Redis connection string.
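The snippet above assumes a Startup-style class with services and Configuration members in scope. If you keep the .NET 6 minimal-hosting template instead, the equivalent registration (a sketch; the connection-string name is just a placeholder, as above) hangs off the WebApplicationBuilder:

// Sketch of the same registration in a .NET 6 minimal-hosting Program.cs.
var builder = WebApplication.CreateBuilder(args);

builder.Services.AddStackExchangeRedisCache(option =>
{
    option.Configuration =
        builder.Configuration.GetConnectionString
        ("Your_RedisCache_Connection_String");
    option.InstanceName = "master";
});

var app = builder.Build();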



Figure 6: Resource group associated with the Redis Cache resource

Figure 7: Specify access keys to restrict access to your Redis Cache.
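For reference, the primary connection string you copy from the Access keys page typically has the following shape; every value below is a placeholder rather than a real host name or credential:

your-cache-name.redis.cache.windows.net:6380,
password=<primary-access-key>,ssl=True,abortConnect=False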

Connect to the Redis Cache Instance
To connect to the Redis instance, you can use the following code:

private static Lazy<ConnectionMultiplexer>
    lazyConnection = new Lazy<ConnectionMultiplexer>(() =>
{
    string redisCacheConnection =
        _config["RedisCacheSecretKey"];
    return ConnectionMultiplexer
        .Connect(redisCacheConnection);
});

public static ConnectionMultiplexer Connection
{
    get
    {
        return lazyConnection.Value;
    }
}
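Once the multiplexer is in place, callers obtain a database handle through the Connection property. A brief usage sketch (the key and value here are arbitrary):

// Illustrative use of the lazily created multiplexer.
IDatabase db = Connection.GetDatabase();
await db.StringSetAsync("status", "warm");
RedisValue status = await db.StringGetAsync("status");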



Listing 1: The Product Controller Class
[ApiController]
[Route("[controller]")]
public class ProductController : ControllerBase
{
    private readonly List<Product> products = new List<Product>
    {
        new Product
        {
            Id = 1,
            Name = "Lenovo Laptop",
            Price = 175000.00,
            Quantity = 150
        },
        new Product
        {
            Id = 2,
            Name = "DELL Laptop",
            Price = 185000.00,
            Quantity = 250
        },
        new Product
        {
            Id = 3,
            Name = "HP Laptop",
            Price = 195500.00,
            Quantity = 200
        }
    };

    private readonly IDistributedCache _cache;

    public ProductController(IDistributedCache cache)
    {
        _cache = cache;
    }

    [HttpGet]
    public async Task<IActionResult> Get()
    {
        string? serializedData = null;

        var dataAsByteArray =
            await _cache.GetAsync("products");

        if ((dataAsByteArray?.Count() ?? 0) > 0)
        {
            serializedData =
                Encoding.UTF8.GetString(dataAsByteArray);
            var products = JsonSerializer
                .Deserialize<List<Product>>(serializedData);

            return Ok(new ProductResponse()
            {
                StatusCode = HttpStatusCode.OK,
                IsDataFromCache = true,
                Data = products,
                Message = "Data retrieved from Redis Cache",
                Timestamp = DateTime.UtcNow
            });
        }

        serializedData = JsonSerializer.Serialize(products);
        dataAsByteArray = Encoding.UTF8.GetBytes(serializedData);
        await _cache.SetAsync("products", dataAsByteArray);

        ProductResponse productResponse = new ProductResponse()
        {
            StatusCode = HttpStatusCode.OK,
            IsDataFromCache = false,
            Data = products,
            Message = "Data not available in Redis Cache",
            Timestamp = DateTime.UtcNow
        };

        return Ok(productResponse);
    }
}
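As an aside, the byte-array round trip through GetAsync, Encoding.UTF8, and SetAsync in Listing 1 could also be written with the string-based convenience extensions that ship in the same Microsoft.Extensions.Caching.Distributed namespace. A sketch of the equivalent cache probe:

// Equivalent of the Listing 1 cache check, using string extensions.
var cached = await _cache.GetStringAsync("products");
if (cached is null)
{
    await _cache.SetStringAsync("products",
        JsonSerializer.Serialize(products));
}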

Note how a single connected ConnectionMultiplexer instance is created in a thread-safe manner. A critical aspect of ConnectionMultiplexer is that it restores the connection to the cache as soon as a network outage or other issue is resolved.

The Product Class
Create a new file called Product.cs and write the following code in there:

public class Product
{
    public int Id { get; set; }
    public string? Name { get; set; }
    public double Price { get; set; }
    public int Quantity { get; set; }
}

You'll use the Product class as the model in the application you'll be building here.

The ProductResponse Class
To keep things simple, you'll have a controller class named ProductController with only one action method. This action method returns an instance of the ProductResponse class given below:

public class ProductResponse
{
    public HttpStatusCode StatusCode { get; set; }
    public string? Message { get; set; }
    public object? Data { get; set; }
    public bool IsDataFromCache { get; set; }
    public DateTime Timestamp { get; set; }
}

The ProductController Class
Create a new API controller class named ProductController and place the code from Listing 1 in it.

The ProductController class contains one HttpGet action method that returns an instance of ProductResponse. The action method first checks whether the data is available in the cache. If it's available, the action method returns that data. If it isn't available in the cache, the data is fetched from the in-memory list called products, and the same data is persisted in the cache as well. Note how dependency injection is used to inject an instance of type IDistributedCache in the constructor of the ProductController class.

Cache Expiration
You can also implement cache expiration strategies in your application. A cache expiration strategy is used to specify how
and when the data residing in the cache will expire. There are two ways in which you can implement cache expiration:

• Absolute Expiration: This denotes the maximum time period to store data in the cache. Once this time elapses, Redis deletes all keys and their corresponding data.
• Sliding Expiration: This denotes the maximum time period to store a piece of data when the application isn't consuming it.

You can write the following piece of code to implement cache expiration:

var expiration = new DistributedCacheEntryOptions
{
    AbsoluteExpirationRelativeToNow =
        TimeSpan.FromSeconds(30),
    SlidingExpiration =
        TimeSpan.FromSeconds(25)
};
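These options take effect only when they're passed along with the item being cached. A brief sketch, reusing the "products" key and byte array from Listing 1:

// Store the item together with the expiration policy defined above.
await _cache.SetAsync("products", dataAsByteArray, expiration);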
Conclusion
Redis is a powerful distributed caching engine that provides
Clean Up key-value pair caching with very low latency. Redis may
Now that you’re done using the resources in Azure, it’s high significantly improve application performance when used
time that you delete the resources you’ve used to avoid be- in the correct business context. Caching works better when
ing billed. Follow the steps outlined below to delete the re- the data changes infrequently, i.e., when the cached data
sources used in this example: doesn’t change often. Remember, caching is a feature that
helps speed up the performance, scalability, and respon-
1. Sign into the Azure portal. siveness of your application but your application should be
2. Select Resource groups. properly tested to never depend on cached data.
3. Enter the name of the resource group in the filter textbox.
4. When the resource group is listed in the results list,  Joydip Kanjilal
select it, and click “Delete resource group”. 



(Continued from 74)

who we lost a few years ago. We got into some mammoth debates over things that today seem trivial. But that spirit of community, like a family, with all its dysfunction, was a truly wonderful thing.

It was in that community where members like Steve Black turned us all on to how Design Patterns and the Gang of Four (GoF) book applied to our work. It was all new and exciting, especially when 1995 arrived, along with Visual FoxPro and object orientation! How we fell in love with inheritance! How we hated all the problems that created! The bigger point was that we were all learning and sharing together. The FoxPro community was truly exceptional and no other community since then has ever come close to that experience, at least not for me. Speaking of Steve Black, he also introduced us to the Wiki concept that was first introduced by Ward Cunningham (Agile Manifesto, SOLID Programming). Check out fox.wikis.com. If there's one phrase that described the whole FoxPro experience, it's the notion of "applied theory." There's theory, and then there's the notion of applying theory in a way that makes it useful. Ever since my FoxPro days, that philosophy has permeated my thinking and writing.

Frameworks
We rely on all sorts of frameworks today: Angular, React, etc. And as previously mentioned, what we used to call public domain software is open source today. It was through FoxPro that I was introduced to the first serious way to structure applications. This tied together design patterns, libraries, and other approaches to building tools.

One of the big names in FoxPro history is a guy named Yair Alan Griver (YAG). Once upon a time, he had a little shop in River Edge, NJ called Flash Creative Management and he created a thing called the Codebook. It was a somewhat opinionated way of documenting an application and applying conventions. My framework of choice was something called FoxExpress by Mike and Toni Feltman (Fox Software alums!!). The point is that through the community, we were all in it together, learning and teaching each other.

Another great framework was WestWind Web Connection by Rick Strahl (co-founder of CODE Magazine). I remember, way back in the mid-1990s, Rick showing us how we could build Web applications with FoxPro. If you go to fox.wikis.com, take note of the wc.dll in the URL when you navigate to a page. WC stands for Web Connect. Yes, Rick still maintains that framework, along with producing what I still regard as the best scholarship and work in Web development today. Before there was ASP, Ruby on Rails, or Node.js, there was West Wind Web Connection. Today the basic patterns we employ, such as the Model-View-Controller (MVC) Pattern and templating, are familiar because Rick was showing us how to do that 25 years ago in FoxPro.

Relationships
For me, the most significant personal relationship and most meaningful professional relationship was due to my relationship with FoxPro. My best friend is Rod Paddock, with whom I've worked on many projects, application-wise and book-wise, and also on this magazine for over 20 years. We met in 1994 in Toronto at FoxTeach. He wrote some cool content in the FoxTalk newsletter, the same publication where I got my start in professional writing. For the record, we didn't get paid hard money for those articles. Instead, the currency was coffee from this little Seattle-based coffee roaster called Starbucks. That's right… even coffee has a FoxPro connection for me! It was at that conference where Rod and I struck up our friendship and he told me about a book project he was on. That project became the book Visual FoxPro Enterprise Development. That project included me, Rod, Ron Talmage, and another guy named Eric Ranft. Eric went on to co-found a little e-signature company called DocuSign.

Although FoxPro as an active product is now history, its legacy is as relevant today as ever. I'm convinced that .NET, in no small part, owes its utility to that FoxPro acquisition. The people and ethos that were brought to bear on the Microsoft ecosystem have paid big dividends for the development world at large. And if you need any more reminding of that fact, you're reading this magazine, right?

 John V. Petersen


CODA: It Was 30 Years Ago This May…

…when Microsoft bought a little software company from Perrysburg, OH named Fox Software. It's hard to believe that three decades have passed since that transaction. Not a single day since then has passed when that acquisition hasn't positively impacted my life. As languages go, FoxPro wasn't any more or less remarkable than anything else. As an all-up environment that included an integrated data engine and SQL to complement its language, FoxPro was second to none. But there was something more to it.

In these pages, I often write about people, processes, and tools, in that order. For me, what was perhaps more important than the FoxPro tool was the FoxPro people. We often hear the word community in the context of social media. But once upon a time, before Facebook, blogs, etc., there was CompuServe. And in this context, there were the CompuServe FoxPro forums. It was in those forum spaces where I was exposed to and learned the importance of community. Those forums, among other things, were the Stack Overflow of their time. Whether it was a question about the product, how to optimize a query, or something more complex, such as application design, you'd surely get answers to your question. And, quite likely, you'd receive several answers, often in the context of spirited but friendly debate.

It was often good to reflect on the road traveled and that's what I'd like to do in this issue. Although FoxPro is no longer an active product, its spirit is alive and well, due in no small part to its legacy and its people. It's an anniversary deserving of celebration and reflection.

It's also worth noting that this year, .NET turns 20! Yes, there's a FoxPro connection there too. It was at the 1993 FoxPro Devcon in Orlando, Florida. The keynote that year was given by Roger Heinen, the Microsoft Developer Tools VP. In that talk, Roger spoke of the "Unified Language Strategy." That strategy eventually led to what, nine years later, became .NET.

It was at that Devcon that I remember seeing two whiz kids. One was Ken Levy and his cool tool GenscreenX, a screen generator pre- and post-processing tool. FoxPro was always "open" in its architecture. The screen and report generators were written in FoxPro! The extensions to those facilities were referred to as public domain at the time. Today, we refer to such things as open source.

The other whiz kid is the publisher of this magazine, Markus Egger. Even if you weren't a FoxPro developer, even just reading this magazine, you've been touched by the Fox! If you're super interested in a more detailed history, go to foxprohistory.org. Yes, there is such a site. I cite the incontrovertible maxim of Rule 34: The clean version is that there's a SIG (Special Interest Group) for everything <gd&r>.

Let's visit some things that owe their existence to or have been greatly impacted by FoxPro.

The MVP Program
Once upon a time, Microsoft's MVP program was known as "Calvin's list." Calvin Hsia was one of the lead Fox developers. He kept a list of CompuServe members who were most helpful on the FoxPro forums. A few folks who were helpful to me and many others were people like Pat Adams, Lisa Slater, and Tom Rettig, to name just a few. Although I'll touch upon community later, the magic that was lightning in a bottle was these "elder states people" who were always there to lend a helping hand. There was one basic rule: Pay it forward. That's certainly what I've endeavored to do, following in their footsteps. Eventually, the MVP program started around 1994-5 and it was around that time I joined those ranks. By then, other MS-related technologies were part of that MVP program. At that time, there may have been around 600 MVPs world-wide. And it had all started with Calvin's list.

Data
Remember ActiveX Data Objects? Predating ADO was ODBC (Open Database Connectivity). In the ODBC days, we had drivers. Eventually, it all led to the Entity Framework and other Object Relational Mapping (ORM) libraries. Going back to ADO, we had a similar concept called providers. The most compliant provider was the Jet Engine, which was part of Access. ADO dealt with a client-side notion of data known as a CURSOR (CURrent Set Of Records).

One of Fox's strengths was the notion of an integrated database engine. The real magic was when ANSI SQL was added to the Fox language, which itself was a variant of XBase, like Clipper and Dbase. FoxPro's magic sauce was branded as Rushmore Technology. Dr. Dave Fulton, who owned Fox Software, had the foresight to patent what is, simply stated, a very optimized approach to indexing data, and this yielded fast query results. That was FoxPro's chief stock-in-trade. And that's why FoxPro veterans of that era are keenly adept at dealing with large amounts of data effectively and efficiently. It's that IP that MS was interested in and it's that IP that found its way into many of MS's future initiatives.

When it came to leveraging all Fox had with regard to data, I have to tip my hat to three former co-workers: George Goley, Melissa Dunn, and Dr. Michael Brachman. We all worked at Microendeavors (MEI), just outside Philadelphia. Yes, we were the best FoxPro shop in the world, hands down!! Another alum of that shop and somebody very famous in FoxPro history was the late Drew Speedie, with whom I had the pleasure of working and learning much from. And when it came time to work with SQL Server-based data in an approachable way, there was no better person to explain that than Robert Green. Robert was the FoxPro product manager for many years, and then eventually moved on to helping .NET become the fantastic framework it has been for 20 years.

Community
There are many communities today, thanks to social media platforms like Facebook and the ready availability of broadband. But once upon a time, when we were limited to 2400 or 4800 baud modem connectivity via a US Robotics modem, we had something called CompuServe. My CompuServe ID was 72722,1243. Why I remember that, I don't know. Nevertheless, we had a great online community where we virtually hung out, debated, and most importantly, helped each other. Debates could be very spirited!

Eventually, CompuServe gave way to something known as the UniversalThread. The UT, as it was known, was another great place to discuss, debate, and help each other. It was all organic, something that just existed. It wasn't created or conjured. It just happened. And ever since, there have been many attempts to recreate that magic. When I think of those days, a few names come to mind. I fondly remember the late John Koziol,

(Continued on page 73)


