
Legal Notes: Trade Secrets, SQL Server, Kendo UI, jQuery, Visual Studio

NOV/DEC 2015
codemag.com - THE LEADING INDEPENDENT DEVELOPER MAGAZINE - US $5.95 Can $8.95

Visual Studio 2015


The Dawn of a New Era!

TypeScript: The Best Way to Write JavaScript


Learn to Use HTML and Client-Side JavaScript for Data Edit UIs (CRUD)
Use OpenWeatherMap and AFNetworking in Your App
Sponsored by:

TABLE OF CONTENTS

Features
10 CRUD in HTML, JavaScript, and jQuery
Paul begins a new series on working within HTML and the Web API with this close-up look at JavaScript and jQuery. You'll learn some of the juicy details, but you'll also get a good overview of what these technologies can do for you.

Paul D. Sheriff

18 Legal Notes: Trade Secrets
What's the difference between a trade secret and a patentable idea? What about copyright? How does that work into the equation? John makes it all clear with a focus on trade secrets.

John V. Petersen

22 TypeScript: The Best Way to Write JavaScript
In this continuation of his series, Sahil focuses on TypeScript and why it's mandatory if you want to write good, reliable code in JavaScript.

Sahil Malik

30 The Baker's Dozen: A 13-Question Pop Quiz of SQL Server Items
If you've ever wondered how your SQL Server knowledge stacked up, you'll want to take Kevin's unofficial test. He explains both the right and wrong answers, so no matter what your skills are, you're bound to learn something new.

Kevin Goff

40 Building a Weather App using OpenWeatherMap and AFNetworking
Take a look at third-party applications and code before sitting down to develop, because the tools you need to build your masterpiece might already be available. Jason shows us some clever shortcuts as he builds a weather app.

Jason Bender

64 Telerik Kendo UI Outside the Box
Bilal takes us on a tour of Telerik's Kendo UI and its great number of widgets that facilitate your Web or mobile app development process. There's no need for multiple libraries anymore!

Bilal Haidar

Columns
8 Manager's Corner: The Long View!
Take your company to the next level when you consider all of a client's requests. No one can see the future, but it's possible that the unreasonable thing that your client asks for is your best guess at how to grow your own business.

Mike Yeager

74 Managed Coder: On Motivation


Ted Neward

Departments
6 Editorial
17 Advertisers' Index
73 Code Compilers

48 Visual Studio 2015: Ushering in a New Paradigm
You're going to have to hold onto your hat! Jeffrey looks at what's new in VS 2015, and it's all good.

Jeffrey Palermo


US subscriptions are US $29.99 for one year. Subscriptions outside the US pay US $44.99. Payments should be made in US dollars drawn on a US bank. American Express,
MasterCard, Visa, and Discover credit cards are accepted. Bill me option is available only for US subscriptions. Back issues are available. For subscription information, send
e-mail to subscriptions@codemag.com or contact customer service at 832-717-4445 ext 028.
Subscribe online at codemag.com
CODE Component Developer Magazine (ISSN # 1547-5166) is published bimonthly by EPS Software Corporation, 6605 Cypresswood Drive, Suite 300, Spring, TX 77379 U.S.A.
POSTMASTER: Send address changes to CODE Component Developer Magazine, 6605 Cypresswood Drive, Suite 300, Spring, TX 77379 U.S.A.
Canadian Subscriptions: Canada Post Agreement Number 7178957. Send change address information and blocks of undeliverable copies to IBC, 7485 Bath Road, Mississauga,
ON L4T 4C1, Canada.


ONLINE QUICK ID 00
EDITORIAL

A Psychopath Maintains
Your Code
For the last two issues, I spent time reflecting on various lessons and concepts that I've learned in my two and a half decades as a software developer. This editorial officially turns this duo of articles into a trilogy. For this issue's treatise on software development, I'll explore an infrequently discussed yet ever-important topic: technology choice as it applies to application maintainability.

There is an old saying among software developers: "Always code as if the person who ends up maintaining your code is a violent psychopath who knows where you live."
Over the last two months, I've encountered two codebases in which I wish I could take on the role of psychopath and stalk the homes of the developers who created these monstrosities. What's my issue with these particular codebases? Two words: over-engineering. Each of these projects had a rather simple set of requirements but was so over-engineered that it was practically unmaintainable. Let's talk about each project individually.
The first project had a rather simple set of requirements:

- Read two types of comma-separated value (CSV) files found in a specified directory.
- Read each row from the file.
- Transform and attach lookup data to each record processed.
- Create an Excel output file from the transformed data.

That seems like a pretty simple set of requirements, right?
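For contrast, the boring, maintainable version of that pipeline is little more than a loop. Here's a minimal sketch in Node.js style; the field layout and lookup table are hypothetical examples, the transform is shown as a pure function, and the directory scan and Excel output are reduced to a comment for brevity.

```javascript
// A deliberately plain, sequential take on the requirements above.
// parseCsv and transformRow are pure functions, so they're easy to
// test and maintain; no reactive extensions or threading required.
function parseCsv(text) {
  return text.trim().split("\n").map(function (line) {
    return line.split(",");
  });
}

function transformRow(row, lookup) {
  // Attach lookup data (e.g., a region name keyed by the row's code).
  return {
    id: row[0],
    amount: Number(row[1]),
    region: lookup[row[2]] || "Unknown"
  };
}

function processFile(text, lookup) {
  return parseCsv(text).map(function (row) {
    return transformRow(row, lookup);
  });
}

// Scanning the directory and writing the output file would simply
// wrap processFile() in a loop over fs.readdirSync(dir).
```

The point isn't that this is the only design; it's that a straight-line version of the requirements is small enough to read in one sitting, which is exactly what the over-engineered version lost.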
The codebase for this project was a conglomeration of new trinkets and shiny bobbles, aggregated into a Rube Goldberg set of steps in order to complete what seemed a simple set of tasks. The project used reactive extensions to throw events when files were added to a folder, and for each type of file it processed, it started a new multithreaded task. It also used an aggregation pattern to collect exceptions, and so on and so forth. My major problem was that it took a simple process and convoluted/abstracted it into an event-driven spaghetti monster. It made more sense to rewrite it than to try to make modifications to the code.
The second application was a Web portal for a mobile application. It had another simple set of requirements:

- Provide an internal API for signing up users.
- Provide an internal API for two types of lookup lists.
- Provide a Web interface for maintaining the list of users.
- Provide a Web interface for maintaining the two sets of lookups.

Note that the internal designation on the API calls means that they won't be called by external users; in other words, they'll never be public.
As you can see, it was another set of simple requirements. Looking at this list of requirements, ASP.NET/MVC seems like a great platform for such a simple application. ASP.NET/MVC has a great view engine, capabilities for returning data in XML, JSON, and other formats, and is, in general, a complete Web development stack. Of course, this application was backed by a database such as SQL Server or MySQL (the client's choice).
I want to point out one more little item. This application is for a small startup with a very niche process. It is not now, nor will it ever be, an application requiring Web scale. For those following along at home, Web scale means that it will grow to be on the scale of Facebook, Twitter, Instagram, or many other large-scale Web-based applications. Most applications are many, many orders of magnitude smaller and should be designed accordingly. This is very important to understand; I'll explain why later on.
There are two major issues I found with this codebase. The first is that they used the ASP.NET Web API as the back-end platform. The second is that they used AngularJS as a platform for providing the portal to this application. You might be saying to yourself: What's wrong with these choices? On a technical level, there's nothing wrong. ASP.NET Web API and AngularJS are fine choices for building applications. The real question is: Are they necessary and what benefits do they provide?
Here's where I find fault. ASP.NET MVC provides a single platform that's plenty scalable for this application and has a well-vetted and powerful view engine. There's no need for AngularJS (another language and stack to learn) when the tools are readily available in a single tool.

So why did the developers choose this particular Web stack? My guess is that there were two reasons. First, they heard about mobile applications and app stores and had dreams (or delusions) of Web scale. Let's stick a fork in that one. Not too many applications require Web scale, and for developers to spend the time and resources to attempt it is borderline irresponsible. You're probably NOT going to build an application requiring Web scale in your lifetime, so don't prematurely optimize for it. Second, they probably wanted to learn Web API and Angular and had a client foot the bill for their learning curve. These are only guesses, but I've a feeling that they might have elements of truth to them. For the record, this project went over budget and over schedule, and I believe the choices made by the developers are partially responsible for that.

Consider the long-term maintenance costs for any tool you use to develop or you'll always be looking over your shoulder for that psychopath.

In my initial statement, I said that I would cover technology choices as they apply to software maintainability. Hopefully, with the two examples provided, you can see how two rather simple projects can become costly to understand (which is a prerequisite for maintainability) because the developers made poor technological choices. Every time you choose a particular set of tools, you need to consider the long-term maintenance costs for that tool or set of tools. Every new tool/technique increases the learning curve for the next developer (probably not you). So my advice to you is: Keep your technology footprint as small as possible (reduce the learning curve) and choose the right tool for the job (not just the one you want to learn).



Rod Paddock


MANAGER'S CORNER

Manager's Corner:
The Long View
"Nobody wants quarter-inch drill bits," a marketing professor once told me. "People want quarter-inch holes." I'll never forget those words. I use them as a reminder that I shouldn't be selling my customers what I have, but providing them with what they need and want. A few years back, I lost a bid to a competitor because the client wanted one-stop shopping.
It wasn't enough to develop a great website for them. They wanted it built, hosted, monitored, maintained, updated, and scaled. We didn't have a data center and didn't want to get into the data center business, so instead, I pitched a proposal where we developed the software and they'd contract with a cloud provider for the hosting. It was a less expensive option for the customer and gave them full control, but it wasn't what they wanted. They didn't just need a website, they needed a broad solution, and our competition did a better job of delivering it. Looking back, it was short-sighted on my part, and the loss of that contract began shaping the approach to software development that I have today.
Today, we don't sell software; we sell long-term relationships to handle our clients' system needs. Unlike software applications, these system needs don't have a completion date. Once initial delivery of the software is made, there's a new feature, a next project, a new requirement that needs to be addressed. Businesses have to constantly adjust and evolve, and so do their systems. The systems need to be up and running and handle whatever's thrown at them, and that requires proactive monitoring and scaling. Delivering software to the cloud, usually Microsoft Azure in our case, gave us the capabilities we previously lacked. Hosting, monitoring, scaling, updates, and new features are all now part of the products we offer. It's good for our customers because they're purchasing a solution where they don't have to worry about all the care and feeding of the solution, which frees them up to concentrate on whatever they do best. It's good for us because we have a steadier revenue stream, recurring revenue, and new sources of revenue.
These new capabilities have fundamentally changed our understanding of what we do. Certainly, many new customers come to us looking for us to write software, but that's becoming an outdated notion. Upon further discussion with our customers, the appeal of purchasing a solution that includes much more than software becomes apparent. The idea of getting out of the IT business is appealing to more companies and, in addition to the shedding of some headaches, the savings it can offer are a powerful motivator.

I wish that I'd been proactive in seeing and capitalizing on this trend, but as is often the case, I had to learn from a mistake. In my experience,


anyone who tells you that they know the future is selling you a guess. What looks like predicting the future of business is often just an early exposure to new needs and sometimes just plain luck. The key to staying successful and adapting to the future is to keep re-evaluating the needs of your clients and your own notions of how to deliver on those needs. We don't have to be geniuses to thrive in the software business, but we do have to be smart enough to see what's going on around us and eventually adapt. If the past several years are any indicator, there's a steady trend away from delivering software to a customer for them to run, and toward delivering on-going services to help them meet their needs. Today's product is a more comprehensive set of services that continually evolve and grow at a fast pace.
The way to thrive today is to take the long view of system development. No matter how good the software is that you deliver, it's not enough. You have to develop it faster and less expensively, and you have to involve the customer throughout the process and allow them to steer it based on their ever-changing priorities. You have to deliver wins early and often. Your customers can't wait a year for a rollout with nothing to show in the meantime. And you have to consider your product to be whatever aspects of the solution your clients need it to be. Finally, you have to treat the solution as an on-going process, not a project with start and end dates. The software will need a face-lift eventually and that needs to be planned for so that it's easy to do. New features will be added. New platforms will be supported. Behavior will change. New technologies will be incorporated. The companies that understand this and can convey it to their clients are thriving, and their clients are thriving. The companies delivering the same product they sold 15 years ago are fading away.

SPONSORED SIDEBAR:
CODE Framework:
Free, Open Source,
and Fully Supported

CODE Framework is completely free and open source with no strings attached. But that doesn't mean that the framework is unsupported! Contact us with any questions; we'll never charge you for answering a few questions in email. For those looking for more sophisticated and hands-on support, we also offer premium support options.
http://codeframework.codeplex.com/

Take the long view of the software development business. Reconsider what it is you have to offer. Learn new things and offer more, and your business will thrive. Deliver quarter-inch holes, not drill bits.



Mike Yeager



ONLINE QUICK ID 1511031

CRUD in HTML, JavaScript and jQuery


As developers, we're always asked to do more for our users. They want their Web pages faster, smaller, and with more features. This means that you have to start working more in JavaScript and jQuery on the client-side. By doing more client-side coding, you reduce post-backs to the server, thereby increasing performance. In this first article of a series on working within HTML and the Web API, I'll show you how to add, edit, and delete data in an HTML table using JavaScript and jQuery, but no post-backs. In subsequent articles, you'll learn how to take that data and use the Web API to retrieve and modify this data.

Paul D. Sheriff
PSheriff@pdsa.com
www.PDSAServices.com

Paul D. Sheriff is the President of PDSA, Inc. PDSA develops custom business applications specializing in Web and mobile technologies. PDSA, founded in 1991, has successfully delivered advanced custom application software to a wide range of customers and diverse industries. With a team of dedicated experts, PDSA delivers cost-effective solutions, on-time and on-budget, using innovative tools and processes to better manage today's complex and competitive environment. Paul is also a Pluralsight author. Check out his videos at http://www.pluralsight.com/author/paul-sheriff.

To demonstrate the concepts for this article, I created a page called Paul's Training Videos (Figure 1). This page allows me to list all of my training videos on Pluralsight, and eventually, to add, edit, and delete them. I'm using Bootstrap in order to have a nice appearance for my Web page, but it's not required.

Add a Product
The HTML page I use to illustrate these concepts is shown in Listing 1. This HTML creates an empty table with a <thead> for the headers of each column in the table. There are three columns: product name, introduction date, and URL. Note that there's no <tbody> for this table. I've left the <tbody> element off on purpose to illustrate how you check for that in jQuery, and add the <tbody> element, if necessary.

Add Product Rows to the Table

At the bottom of this HTML page, I created a <script> tag with a function called productsAdd(). This function uses the append() method to add a <tr> with the appropriate number of <td> tags to form a row for your table. Use a jQuery selector to locate the ID attribute of your table and the <tbody> tag, and append a <tr> and the <td> tags as shown in the following code snippet.
function productsAdd() {
  $("#productTable tbody").append(
    "<tr>" +
      "<td>My First Video</td>" +
      "<td>6/11/2015</td>" +
      "<td>www.pluralsight.com</td>" +
    "</tr>"
  );
}

The HTML shown in Listing 1 didn't include a <tbody> tag within the table. If you run the code shown in the snippet above, no rows are appended to your table because the selector $("#productTable tbody") matches no elements, so the append() call has nothing to act on. You should always have a <tbody> in all of your tables. If you don't, it's no problem because you can add one programmatically. Modify the code shown above to check whether the selector of your table ID and the <tbody> returns something. If it doesn't, append a <tbody> tag to the table prior to calling append() with the rows you wish to add. You do this with the following code.
if ($("#productTable tbody").length == 0) {
  $("#productTable").append("<tbody></tbody>");
}

The complete method to add a couple of rows to the HTML table is shown in Listing 2.

You can call the above function when the document is loaded by adding the jQuery document.ready function just before the ending </script> tag.

$(document).ready(function () {
  productsAdd();
});

Add Rows Dynamically

Let's make the page a little more dynamic by gathering the product data from the user and adding that data to the table. Add three input fields for the user to input data to add to the product table. The user enters a product name, introduction date, and the URL of the video, as shown in Figure 2. After entering the data into these fields, that data is retrieved from the input fields and added to a new row in the HTML table.

In addition to the new input fields, a <button> is added that, when clicked, adds the data from the fields into the table. This button, shown at the bottom of Figure 2, is a normal HTML button with a function named productUpdate() called from its onclick event.
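The article doesn't reproduce the markup for the three input fields themselves, but their IDs are pinned down by the jQuery selectors used throughout (#productname, #introdate, #url). Here's a sketch of what that entry form might look like, built as a string so the structure is easy to verify; the labels and Bootstrap form-group classes are assumptions, not taken from the article.

```javascript
// A sketch of the entry-form markup the article assumes. The IDs must
// match the selectors used in productUpdate() and formClear():
// #productname, #introdate, and #url.
function buildEntryFormHtml() {
  return (
    "<div class='form-group'>" +
      "<label for='productname'>Product Name</label>" +
      "<input type='text' id='productname' class='form-control' />" +
    "</div>" +
    "<div class='form-group'>" +
      "<label for='introdate'>Introduction Date</label>" +
      "<input type='text' id='introdate' class='form-control' />" +
    "</div>" +
    "<div class='form-group'>" +
      "<label for='url'>URL</label>" +
      "<input type='text' id='url' class='form-control' />" +
    "</div>"
  );
}
```

In the actual page you would simply write this as static HTML above the table; the only detail that matters to the rest of the code is that the three IDs match.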

Listing 1: HTML Table Markup


<div class="container">
  <div class="row">
    <div class="col-sm-6">
      <h2>List Products</h2>
    </div>
  </div>
  <div class="row">
    <div class="col-sm-6">
      <table id="productTable"
             class="table table-bordered
                    table-condensed table-striped">
        <thead>
          <tr>
            <th>Product Name</th>
            <th>Introduction Date</th>
            <th>URL</th>
          </tr>
        </thead>
      </table>
    </div>
  </div>
</div>

<button type="button" id="updateButton"
        class="btn btn-primary"
        onclick="productUpdate();">
  Add
</button>

Add a Row from the Input Fields

Once the user adds the data in the input fields, they click on the Add button. In response to this click event, the productUpdate() function is called, as shown in the following code snippet.

function productUpdate() {
  if ($("#productname").val() != null &&
      $("#productname").val() != '') {
    // Add product to Table
    productAddToTable();
    // Clear form fields
    formClear();
    // Focus to product name field
    $("#productname").focus();
  }
}

Figure 1: List products by adding rows to a table when the page loads.

If the Product Name field is filled in with some data, then the productAddToTable() function is called to build the new row for the table. Once this function is run, formClear() is called to clear the input fields to prepare for the next row to be added. Finally, input focus is given to the Product Name input field.
The productAddToTable() function, shown in Listing 3,
is similar to the code I wrote earlier when I hard-coded
the values. The difference in this method is that it uses
jQuery to retrieve the values from the text boxes and
build the <td> elements from those values.
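Because productAddToTable() reads straight from the DOM, it's hard to exercise outside a browser. One way to see the row-building logic in isolation is to factor the string concatenation into a pure helper; this sketch is mine, not part of the article's download.

```javascript
// Pure version of the row-building concatenation in Listing 3:
// takes the three field values and returns the <tr> markup string.
function buildRowHtml(name, introDate, url) {
  return "<tr>" +
           "<td>" + name + "</td>" +
           "<td>" + introDate + "</td>" +
           "<td>" + url + "</td>" +
         "</tr>";
}

// productAddToTable() could then shrink to:
//   $("#productTable tbody").append(
//     buildRowHtml($("#productname").val(),
//                  $("#introdate").val(),
//                  $("#url").val()));
```

Splitting "read the inputs" from "build the markup" keeps each piece small, which pays off later when the row gains Edit and Delete buttons.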
The formClear() function uses a jQuery selector to find
each input field and set the value of each to a blank
string. Setting the value to a blank clears the input field
so that the user can enter new data.
function formClear() {
  $("#productname").val("");
  $("#introdate").val("");
  $("#url").val("");
}

Delete a Product
Once you've added a few products, you'll most likely need to delete one or more of those products. Add a Delete button to each row of the table, as shown in Figure 3. This requires you to modify the <thead> element by adding a new <th> element with the word Delete, as shown in the following snippet.

Figure 2: Add products to table with user input.


<table id="productTable"
       class="table table-bordered
              table-condensed table-striped">
  <thead>
    <tr>
      <th>Product Name</th>
      <th>Introduction Date</th>
      <th>URL</th>
      <th>Delete</th>
    </tr>
  </thead>
</table>

Listing 2: JavaScript function to add rows to an HTML table

<script>
// Add products to <table>
function productsAdd() {
  // First check if a <tbody> tag exists, add one if not
  if ($("#productTable tbody").length == 0) {
    $("#productTable").append("<tbody></tbody>");
  }

  // Append product to the table
  $("#productTable tbody").append(
    "<tr>" +
      "<td>Extending Bootstrap with CSS, JavaScript and jQuery</td>" +
      "<td>6/11/2015</td>" +
      "<td>http://bit.ly/1SNzc0i</td>" +
    "</tr>"
  );
  $("#productTable tbody").append(
    "<tr>" +
      "<td>Build your own Bootstrap Business Application Template in MVC</td>" +
      "<td>1/29/2015</td>" +
      "<td>http://bit.ly/1I8ZqZg</td>" +
    "</tr>"
  );
}
</script>

Listing 3: Retrieve values from input fields and build a row for a table

function productAddToTable() {
  // First check if a <tbody> tag exists, add one if not
  if ($("#productTable tbody").length == 0) {
    $("#productTable").append("<tbody></tbody>");
  }

  // Append product to the table
  $("#productTable tbody").append(
    "<tr>" +
      "<td>" + $("#productname").val() + "</td>" +
      "<td>" + $("#introdate").val() + "</td>" +
      "<td>" + $("#url").val() + "</td>" +
    "</tr>"
  );
}

Listing 4: Build a delete button dynamically in your JavaScript code


function productAddToTable() {
  // First check if a <tbody> tag exists, add one if not
  if ($("#productTable tbody").length == 0) {
    $("#productTable").append("<tbody></tbody>");
  }
  // Append product to the table
  $("#productTable tbody").append(
    "<tr>" +
      "<td>" + $("#productname").val() + "</td>" +
      "<td>" + $("#introdate").val() + "</td>" +
      "<td>" + $("#url").val() + "</td>" +
      "<td>" +
        "<button type='button' " +
          "onclick='productDelete(this);' " +
          "class='btn btn-default'>" +
          "<span class='glyphicon glyphicon-remove' />" +
        "</button>" +
      "</td>" +
    "</tr>"
  );
}


Add a Delete Button to Each Row

Modify the productAddToTable() function (Listing 4) to include a button control as you add each row of data. In the JavaScript you write to build the <tr> and the <td> elements, add one more <td> that includes the definition for a <button> control. This button control uses some Bootstrap classes for styling and a Bootstrap glyphicon to display an X to symbolize the delete function (see Figure 3). The button also needs its onclick event to call the function productDelete(). To this function, pass the keyword this, which is a reference to the button itself.

Delete a Row

The productDelete() function accepts a parameter that's a reference to the Delete button. From this control, you can use the jQuery function parents() to locate the button's containing <tr> tag. Once you locate the <tr> tag, use the remove() function to eliminate that row from the table, as shown in the following code:

function productDelete(ctl) {
  $(ctl).parents("tr").remove();
}

Edit a Product
You've learned how to add and delete rows from an HTML table. Now, turn your attention to editing rows in an HTML table. Just like you added a Delete button to each row in your table, add an Edit button as well (Figure 4). Once more, you need to modify the <thead> element by adding a new <th> element with the word Edit, as shown in the following code snippet.
<table id="productTable"
       class="table table-bordered
              table-condensed table-striped">
  <thead>
    <tr>
      <th>Edit</th>
      <th>Product Name</th>
      <th>Introduction Date</th>
      <th>URL</th>
      <th>Delete</th>
    </tr>
  </thead>
</table>

Adding a Row with an Edit Button

Just like you built a button in JavaScript for deleting a row, you build a button for editing too (Listing 5). The onclick event calls a function named productDisplay(). You'll pass in the keyword this to the productDisplay() function so you can reference the Edit button and thus retrieve the row of data in which this button is located.

Display Data in Input Fields

When the user clicks on the Edit button in the table, store the current row of the table in a global variable. Define a variable called _row within the <script> tag but outside of any function, so it's available to any function within this page.

<script>
// Current product being edited
var _row = null;
</script>

In Listing 5, this was passed into the productDisplay() function from the onclick event of the button. This is a reference to the Edit button control. Write the productDisplay() function to calculate the current row by getting the <tr> tag that's the parent of the Edit button. This is accomplished using the following jQuery selector.

_row = $(ctl).parents("tr");

Retrieve all the <td> columns in an array from the current row using the children() function of the _row variable.

var cols = _row.children("td");

Use the appropriate columns in the table to retrieve each input field value, such as product name, introduction date, and URL. The val() function is used to place the data into each text box from each column of data. Finally, so you know that you're in Edit mode as opposed to Add mode, change the text of the updateButton to Update. The complete productDisplay() function is shown in the following code.

function productDisplay(ctl) {
  _row = $(ctl).parents("tr");
  var cols = _row.children("td");
  $("#productname").val($(cols[1]).text());
  $("#introdate").val($(cols[2]).text());
  $("#url").val($(cols[3]).text());
  // Change Update Button Text
  $("#updateButton").text("Update");
}
Figure 3: Add a Delete button to allow a user to delete a row from the table.
Listing 5: Build an Edit button in JavaScript

function productBuildTableRow() {
  var ret =
    "<tr>" +
      "<td>" +
        "<button type='button' " +
          "onclick='productDisplay(this);' " +
          "class='btn btn-default'>" +
          "<span class='glyphicon glyphicon-edit' />" +
        "</button>" +
      "</td>" +
      "<td>" + $("#productname").val() + "</td>" +
      "<td>" + $("#introdate").val() + "</td>" +
      "<td>" + $("#url").val() + "</td>" +
      "<td>" +
        "<button type='button' " +
          "onclick='productDelete(this);' " +
          "class='btn btn-default'>" +
          "<span class='glyphicon glyphicon-remove' />" +
        "</button>" +
      "</td>" +
    "</tr>";
  return ret;
}



Updating the Data

When the user clicks on the updateButton, the productUpdate() function is called. The text in the updateButton determines whether you're adding a row of data to the table or editing an existing one. Remember, when you click on the Edit button, it changes the text of the Update button. Modify the productUpdate() function to check the text in the updateButton and perform the appropriate function based on that text, as shown in the following code.

function productUpdate() {
  if ($("#updateButton").text() == "Update") {
    productUpdateInTable();
  }
  else {
    productAddToTable();
  }
  // Clear form fields
  formClear();
  // Focus to product name field
  $("#productname").focus();
}

There are a couple of ways you can update a product. You already saved the row in the _row variable, so you can reference each cell individually in that row and update each cell in the table using the val() function of each input field. Another method is to add the changed data to the row immediately after the current row, then delete the current row from the table. I've chosen to use the latter, as this allows me to reuse the productBuildTableRow() function written earlier. The last thing to do is to clear the input fields, and change the text back to Add on the Update button. The updating of the product table is shown in the code below.

function productUpdateInTable() {
  // Add changed product to table
  $(_row).after(productBuildTableRow());
  // Remove old product row
  $(_row).remove();
  // Clear form fields
  formClear();
  // Change Update Button Text
  $("#updateButton").text("Add");
}

Use Data Dash Attributes


Figure 4: Add an Edit button to allow a user to edit a single row in the table.

In this article, I've been concentrating on working with client-side code. At some point, you are going to have to send the data back to the server and retrieve data from it as well. Most of us assign a primary key (unique number) to each row of data. Let's now modify the page to use data- attributes to keep track of primary keys on the rows of data in the HTML page.

Add Two Variables

You need to create two new global variables in your HTML page: _nextId and _activeId. These are used to keep track of the next ID to assign to a newly added row, and to keep track of the ID of the row you're currently editing. The code to do this is shown below.

<script>
// Next ID for adding a new Product
var _nextId = 1;
// ID of Product currently editing
var _activeId = 0;
</script>


Listing 6: Use data- attributes to hold the primary key for each row

function productBuildTableRow(id) {
  var ret =
    "<tr>" +
      "<td>" +
        "<button type='button' " +
          "onclick='productDisplay(this);' " +
          "class='btn btn-default' " +
          "data-id='" + id + "'>" +
          "<span class='glyphicon glyphicon-edit' />" +
        "</button>" +
      "</td>" +
      "<td>" + $("#productname").val() + "</td>" +
      "<td>" + $("#introdate").val() + "</td>" +
      "<td>" + $("#url").val() + "</td>" +
      "<td>" +
        "<button type='button' " +
          "onclick='productDelete(this);' " +
          "class='btn btn-default' " +
          "data-id='" + id + "'>" +
          "<span class='glyphicon glyphicon-remove' />" +
        "</button>" +
      "</td>" +
    "</tr>";
  return ret;
}

Sample Code
You can download the sample code for this article by visiting my website at http://www.pdsa.com/downloads. Select PDSA Articles, then select Code Magazine CRUD in HTML from the dropdown list.


You can delete the _row variable that you created earlier, as you're now going to use these ID variables to keep track of which row is being added or edited.
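The division of labor between the two variables can be sketched without any DOM at all: _nextId advances each time a row is added, and _activeId is set from the clicked row's data-id. The helper names below are illustrative, not from the article's download.

```javascript
// Minimal model of the _nextId/_activeId bookkeeping described above.
var _nextId = 1;   // ID to stamp on the next added row
var _activeId = 0; // ID of the row currently being edited

function idForNewRow() {
  // Hand out the current value, then advance the counter for the next add.
  return _nextId++;
}

function beginEdit(id) {
  // Called when an Edit button is clicked; remembers which row to update.
  _activeId = id;
}
```

In the page itself, idForNewRow() would be called from the Add path and beginEdit() from productDisplay(), so the rest of the code never inspects the table to figure out which row is active.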

Building a Row for the Table

These two new variables are used to build the row to add or update in your HTML table. Let's modify the productBuildTableRow() function (Listing 6) to accept a parameter called id, to which you pass either of those two variables. This unique number is added into a data- attribute on both the Edit and the Delete buttons.

Update a Row

When the user clicks on the updateButton, the productUpdate() function is still called. However, you need to modify this function to use the _activeId variable and pass that value to the productUpdateInTable() function you wrote earlier.

function productUpdate() {
  if ($("#updateButton").text() == "Update") {
    productUpdateInTable(_activeId);
  }
  else {
    productAddToTable();
  }

  // Clear form fields
  formClear();
}

Getting the Current ID

When the user clicks on the Edit button in the table, call the productDisplay() function, passing in the Edit button. Extract the data- attribute containing the unique ID and assign it to the _activeId variable, as shown in the following code.

function productDisplay(ctl) {
  var row = $(ctl).parents("tr");
  var cols = row.children("td");

  _activeId = $($(cols[0])
    .children("button")[0]).data("id");

  $("#productname").val($(cols[1]).text());
  $("#introdate").val($(cols[2]).text());
  $("#url").val($(cols[3]).text());

  // Change Update Button Text
  $("#updateButton").text("Update");

  // Focus to product name field
  $("#productname").focus();
}

Change the productUpdateInTable() function to find the row in the table that contains this unique ID. This function uses that ID to locate the button that contains the data- attribute within your table. The code snippet below shows just the changed lines in this updated function.

function productUpdateInTable(id) {
  // Find Product in <table>
  var row =
    $("#productTable button[data-id='" + id + "']")
      .parents("tr")[0];

  // Add changed product to table
  $(row).after(productBuildTableRow(id));

  // Remove original product
  $(row).remove();
}

Since the Edit button is in the first column of the row clicked on, retrieve the ID from that button using this line of code from the productDisplay() function:

_activeId = $($(cols[0])
  .children("button")[0]).data("id");

The jQuery .data() function is passed the suffix of the data- attribute to retrieve. Because you used data-id as the key for your data- attribute, you simply pass id to the .data() function and it returns the value assigned to that attribute.
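That suffix convention can be sketched with a plain object standing in for an element's attribute collection; this is only an illustration of the naming rule, not jQuery's actual internals:

```typescript
// A stand-in for one element's attributes (illustrative only).
const attributes: { [key: string]: string } = {
  "class": "btn btn-default",
  "data-id": "2"
};

// Mimics the lookup rule: .data("id") reads the "data-id" attribute.
function data(suffix: string): string {
  return attributes["data-" + suffix];
}

console.log(data("id")); // prints 2
```

Note that real jQuery also converts values (for example, "2" becomes the number 2) and caches them, which this sketch does not attempt.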

The jQuery selector uses the ID parameter passed into this function. So the selector might look like $("#productTable button[data-id='2']"). This selector returns the unique row in the table for the product that has a button with a data-id equal to 2. Once you locate the row, replace that row in the table with the contents that the user entered in the input fields on this form.

Adding a New Row

Each time you add a new row to the table, place the value
in the _nextId variable into the data- attributes for the
new row. Change the productAddToTable() function as
shown below.
function productAddToTable() {
  // Does tbody tag exist? add one if not
  if ($("#productTable tbody").length == 0) {
    $("#productTable")
      .append("<tbody></tbody>");
  }

  // Append product to table
  $("#productTable tbody").append(
    productBuildTableRow(_nextId));

  // Increment next ID to use
  _nextId += 1;
}

Notice how you pass the _nextId variable to the productBuildTableRow() function. After this new product row is created, increment the _nextId variable so that the next product you add gets the next unique ID.

Summary

In this article, you learned to add, edit, and delete rows in an HTML table using JavaScript and jQuery. Having these skills makes your Web pages more responsive and leads you to the next step, which is to use the Web API to send the modified data back to the server. In fact, in my next article, this is exactly what you'll learn to do. The nice thing about using these techniques is that you don't have to post back the complete page and have the complete page redrawn just to get new records or modify records. You simply send the data back and forth and redraw those portions of the page that have changed. This saves time and transfers less data across the Internet. This is very important on mobile devices that may not be connected to WiFi, as you don't want to use your users' data minutes transferring a lot of the same HTML when all you need is just a little bit of data.

Paul D. Sheriff

SPONSORED SIDEBAR:
Need Help?
Looking to convert your application to a new language, or need help optimizing your application? The experts at CODE Consulting are here to help with your project needs! From desktop to mobile applications, CODE developers have a plethora of experience in developing software and programs for different platforms. For more information visit www.codemag.com/consulting or email us at info@codemag.com.


ONLINE QUICK ID 1511041

Legal Notes: Trade Secrets


You've heard of the intellectual property law concepts called trademarks, copyrights, and patents. There's a fourth concept and it's one that is unknown, forgotten about, or completely misunderstood. That fourth concept is the trade secret. Some regard it as a catch-all category that could apply when the other concepts either don't fit or aren't feasible. To some degree, that's true. However, it's important to note that trade secrets are not the leftover stuff that remains from what doesn't apply to the other concepts (specifically patents), because trade secrets tend to be processes, methods, recipes, etc. that aren't known to the outside world. Trademarks and copyrights, by their very nature, aren't secret. Rather, they're exposed to the world as branding and some tangible expression, respectively.

John V. Petersen
johnvpetersen@gmail.com
codebetter.com/johnpetersen
@johnvpetersen
John is a graduate of the Rutgers University School of Law in Camden, NJ and has been admitted to the courts of the Commonwealth of Pennsylvania and the state of New Jersey. For over 20 years, John has developed, architected, and designed software for a variety of companies and industries. John is also a video course author for WintellectNOW. John's latest video series focuses on legal topics for the technologist. You can learn more about this series here: http://bit.ly/WintellectNOW-Petersen. If you are new to WintellectNOW, use promo-code PETERSEN-14 to get two weeks of free access.

Two of the most famous trade secrets are the Colonel's 11 herbs and spices from Kentucky Fried Chicken and the Coca Cola formula, two of the most famous and consumed brands in the world. We know how they taste. What we don't know is the formulas. KFC (specifically YUM Brands) and the Coca Cola Company go to great lengths to keep these formulas secret. I'll elaborate more on that in the section below on what qualifies as a trade secret.

For a secret process or method to be regarded as a trade secret and to get such protections, it has to be more than a secret. You must take sufficient safeguards to maintain the secret, the components of the secret must not be generally known, and the secret must provide a unique competitive advantage.

What about the technical world? Do trade secrets exist? Indeed they do, and one of the most prevalent secrets is Google's search engine algorithm. Many of Amazon's processes and methods are trade secrets as well. You may have some technology-based process or method that isn't known and gives you a competitive advantage. If that's the case, you may have a trade secret that's in your best interest to protect.

Now that you have an idea of some common examples of trade secrets, let's dive into more detail to better define what trade secrets are, how they differ from patents, and the governing law around them.

DISCLAIMER: This and future columns should not be construed as specific legal advice. Although I'm a lawyer, I'm not your lawyer. The column presented here is for informational purposes only. Whenever you're seeking legal advice, your best course of action is to always seek advice from an experienced attorney licensed in your jurisdiction.


What Qualifies as a Trade Secret and the Uniform Trade Secrets Act

The following list outlines the general requirements for something to be treated as a trade secret:

• Information, including a formula, pattern, compilation, program, device, method, technique, or process
• Something that derives independent economic value, actual or potential, from not being generally known to or readily ascertainable through appropriate means by other persons who might obtain economic value from its disclosure or use
• Something that's the subject of efforts that are reasonable under the circumstances to maintain its secrecy
The first bullet point illustrates how close trade secrets and patents are, as the list, with the exception of information, describes patentable subject matter. As the list states, bare information (data, statistics, etc.) can be a trade secret. Clearly, any software program or business process can be a trade secret.

The second bullet point actually refers to multiple related items. There must be actual or potential economic value because of not being generally known or readily ascertainable through appropriate means. There are several things you can take from this. First, just because something is a secret doesn't make it a trade secret. Rather, the fact that it's a secret itself gives rise to some actual or potential economic value. Those two things go hand in hand. The term appropriate is meant to convey lawfulness. Another example of a trade secret is a customer list. Chances are, an employee or such other person signed an agreement that states, among other things, that such information is deemed to be proprietary and a trade secret such that its unauthorized dissemination outside the organization would result in irreparable economic harm. The emphasized text is an example of an expression that you might encounter in an employment or a non-disclosure agreement. Such agreements are examples of reasonable efforts to maintain secrecy, and that gets us to the third bullet point.
For a secret process or method to be regarded as a trade secret and to get such protections, it has to be more than a secret. One must take sufficient safeguards to maintain the secret, the components of the secret must not be generally known, and the secret must provide a unique competitive advantage. This is where non-disclosure agreements (NDAs) and other like agreements come into play. In the case of Kentucky Fried Chicken, there are only two vendors that make up the seasoning mix. The vendors don't know the identity of the other and each only makes 50% of the mix. Further, only a very small number of people know the recipe, which is kept locked in a safe.


The same goes for the Coca Cola formula. Often, companies require any visitor to their premises to sign an NDA that forbids any disclosure of anything the visitor sees on the premises. All of this may seem over-reaching and draconian. Nevertheless, these efforts have been regarded as reasonable and necessary for the protected subject matter to be regarded as a trade secret.

Why Does It Matter to be Regarded as a Trade Secret?

It's all about legal standing and the Uniform Trade Secrets Act (UTSA). The standard UTSA text can be found here: http://www.uniformlaws.org/shared/docs/trade%20secrets/utsa_final_85.pdf. Statutes, whether they are state or federal, enumerate both the law and the remedies a plaintiff may be entitled to should they seek relief under the statute. Before you can seek statutory relief, the statute must first apply. Whether a statute applies is a burden for the plaintiff to carry. For example, let's assume you have some process, method, or program that you regard as a trade secret and you believe that somebody has misappropriated a trade secret. What damages are you entitled to? That depends in part on whether the UTSA, as adopted by your state, applies. This is where we go back to the three-factor test outlined above.

Note: Most, but not all, states have adopted the UTSA. New York, North Carolina, and Massachusetts haven't adopted the UTSA. Nevertheless, these states have laws on the books that adopt much of the operative language. Texas is the most recent state to adopt the UTSA, which it did in 2013.
What if you are, or have the potential to be, a defendant in a lawsuit? Inevitably, the other side, through a court order, will demand that you produce certain documents, including complete interrogatories, and perhaps submit to a deposition. What if a portion of the information sought is something you consider a trade secret? To prevent disclosure, the defendant bears the burden to establish that the information sought is a trade secret. What if the action is around the information itself? For example, what if the plaintiff alleges that the trade secret was misappropriated by the defendant? This presents an interesting situation because it would seem the defendant would have no choice but to disclose the trade secret details as well as information concerning how they developed the secret. As it turns out, the answer here is not always clear.

First, trade secrets are a matter of state law. That means that although each state may have adopted the UTSA, the possibility always exists that from state to state, there can be some variation, no matter how slight, over what defines a trade secret. Second, each state has its own set of court decisions (case law) that interprets its state's trade secret act. In the case of Pennsylvania, where I live, in 2003, the Third Circuit Court of Appeals ruled that a plaintiff bears the burden of proving that the defendant did not independently develop the alleged misappropriated trade secret. Does that mean the defendant doesn't have any burden? The answer is no, but the burden that the defendant carries is light. All the defendant has to produce is some evidence that the trade secret was independently developed. Once that burden is carried, the burden then shifts to the plaintiff. For more information on the Moore v. Kulicke & Soffa Industries, Inc. case (a good example of this burden-carrying), see http://caselaw.findlaw.com/us-3rd-circuit/1361597.html.


Another reason why the UTSA is important to apply is that if you're a plaintiff and you prevail, you're normally entitled to legal costs and fees over and above your other damages that may include, but are not limited to, lost revenue and profits.
Note: Don't get confused by the fact that this PA case was argued in federal court. Often, state law cases are argued in federal court. Whether a case arising under state law can be heard in federal court depends on whether there is a basis for federal subject matter jurisdiction. There are two ways a state law case can be heard in federal court:

• The amount in controversy exceeds $75,000
• There is diversity in citizenship (citizens of different states)

There's a bit more to it but that's beyond the scope of this article. If you're interested in more detail, that can be found here: https://www.law.cornell.edu/uscode/text/28/1332.
In fact, there's no federal trade secrets act. That's not to say that trade secrets have nothing to do with federal law. There's the federal espionage act that addresses the theft of trade secrets (https://www.law.cornell.edu/uscode/text/18/part-I/chapter-90). There are also federal rules that regulate the conduct of public officials and employees (https://www.law.cornell.edu/uscode/text/18/part-I/chapter-93). Both of these, when it comes to defining what a trade secret is, refer back to the UTSA. When it comes to general enforcement of trade secrets outside of specific federal law that may have as a component some trade secret aspect, that's still a matter of state law and specifically, a state's adoption of the UTSA or its own trade secret law (whether by statute and/or case law). Lately, it's been a hot topic in Congress to pass a federal trade secrets act that would supplant individual state law. That issue is discussed in the sidebar.
The key takeaway is that when a state adopts the UTSA, there's a bias toward protecting secrets. The UTSA's benefit is that it provides standard definitions and a statutory framework to resolve disputes arising under a trade secret theory. As an individual or an organization that may possess a trade secret, it's important that you understand and are familiar with the legal framework around trade secrets.

Trade Secrets or Patents: Which One Should You Choose?

As you might guess, the simplest answer is that it depends, which isn't much of an answer. Patents require disclosure, which is a completely antithetical concept to trade secrets. If a trade secret is involuntarily disclosed, assuming there was no misappropriation, the trade secret is lost and the subject matter is in the public domain, free for anyone to use. In the United States, a patent is good for 20 years from the date the patent application is filed. Trade secrets have the potential to last forever. The Coca Cola formula is over 100 years old. There's a benefit with patents, which is the limited monopoly you get if a patent is approved. If you're a patent holder, you can prevent others from using the methods covered under the

Proposed Federal Trade Secrets Act

It often surprises many when it's learned that there's no standard federal trade secrets act. Part of that confusion comes from the fact that there is a Uniform Trade Secrets Act (UTSA). Many believe that this is a federal trade secret law. It isn't. Rather, it's a model template that has been adopted by 47 states. Although the UTSA text has been incorporated into certain federal statutes, there's no single federal law that addresses, on its own, trade secrets.

Hopefully, this will change as Congress is now considering such a federal law. In 2014, the Trade Secrets Act of 2014 (H.R. 5233) was introduced. The act's text can be found here: https://www.congress.gov/bill/113th-congress/house-bill/5233.

In 2013, the Obama Administration commissioned a report on strategies that would combat trade secret theft, the text of which can be found here: https://www.whitehouse.gov/sites/default/files/omb/IPEC/admin_strategy_on_mitigating_the_theft_of_u.s._trade_secrets.pdf.

As you might assume, there's opposition to such a federal law. The following is an opposition letter from a group of law professors: http://infojustice.org/wp-content/uploads/2014/08/Professor-Letter-Opposing-Trade-Secret-Legislation.pdf.

The following link references an interview with Jim Pooley, who is one of the foremost experts on trade secret law: http://www.ipwatchdog.com/2015/02/16/congress-expected-to-take-up-federal-trade-secret-legislation-in-2015/id=54605/.


patent, even if somehow the other party independently came up with the idea with no knowledge or input from your method. The same is not true for a trade secret. For example, you may have a trade secret and you want to sue another entity for using methods and processes that make up your trade secret. Unless that other party misappropriated your trade secret, you're out of luck. Obtaining a patent can be a lengthy and costly process. Trade secrets for the most part have no costs other than the costs to protect the secret.

A patent is good for 20 years from the date the application is filed. Trade secrets can last forever.

So, which one should you choose? Look at the Google search algorithm as an example. Google, most likely, wants to keep its competitive advantage for more than 20 years. It already has a near monopoly on Internet traffic. There's nothing to be gained from disclosure. But patents are easier to enforce than trade secrets. In the USA, patents are a matter of federal law whereas trade secrets vary from state to state. Internationally, there's also more uniformity with patents. As with many things under the law, the best decision tends to be more about what constitutes the better business decision.

Summary

It should be fairly apparent now that trade secrets are an important aspect of intellectual property law and business. If you think you have one, the burden is on you to protect it. Things like NDAs and employment agreements are reasonable measures to help protect such information, processes, and methods. At the same time, you also should take practical measures to control access as to who has direct contact with, and knowledge of, these secrets. Be sure to consult your locality, whether that's your state's or country's laws around trade secret protections. In the USA, although there is the Uniform Trade Secrets Act (UTSA), there's currently no standard trade secret enforcement. The UTSA is simply a model template.

In the News: The Latest on the Google/Oracle Java API Case

There's an update to the much publicized Google/Oracle Java API case. In an earlier article, I wrote about the Federal Circuit Court's ruling that the structure and method signatures in an API were copyrightable subject matter. It's well settled that the code inside a method is copyrightable subject matter. The idea that an API's name itself is copyrightable should give cause for concern. This means, at least in theory, that only one entity could copyright a Customer API. If you wanted to also have a Customer API, you'd have to seek permission from the copyright owner and license the right to use the name. The Federal Circuit Court, which retains sole intermediate appellate jurisdiction over such matters, overruled the Ninth Circuit! The next step was an appeal to the Supreme Court of the United States (SCOTUS). That gets us to the latest news.


On July 29th, 2015, the SCOTUS declined to hear the case. This decision was based in part on the Department of Justice (DOJ) rendering its opinion agreeing with the Federal Circuit.

What's next?

The case isn't over yet because Google will attempt to assert the affirmative defense of fair use. This process had already started when the SCOTUS was asked to hear the case. You can find my earlier article on fair use here: http://www.codemag.com/article/1503031. In that article, I wrote my initial discussion on the Google/Oracle Java API case. Now that the SCOTUS has declined to hear the case, the district court in the Ninth Circuit can hear the fair use case.
To review, the Doctrine of Fair Use consists of a four-factor analysis:

• The purpose and character of the use, including whether such use is of commercial nature or is for nonprofit educational purposes
• The nature of the copyrighted work
• The amount and substantiality of the portion used in relation to the copyrighted work as a whole
• The effect of the use upon the potential market for, or value of, the copyrighted work

Why This Case Matters

Oracle, as reported in this article, http://arstechnica.com/tech-policy/2015/08/all-android-operating-systems-infringe-java-api-packages-oracle-says/, declares that all Android OSs infringe on Oracle's Java API Packages, and Oracle is asking a judge to block continued acts of infringement. The Android ecosystem is massive and this request, if acted on to Oracle's satisfaction, will adversely affect many people and organizations. This includes thousands of devices. If you use a ChromeBook or Android-based smart phone, you will be affected. What if you're a developer? As far as Oracle is concerned, YOU are an infringer.

The question is not a matter of if, but when this case will settle. Based on the court decisions rendered thus far and applying the facts to the four-factor test, I don't see Google winning on its fair use claim. At the same time, there are the costs of disrupting the Android ecosystem and the many ripple-effects therefrom. At some point, the parties (Oracle and Google) are going to need to hammer out a mutually beneficial settlement that would include a licensing agreement. Let's hope that such a settlement will provide protections for the many individuals and organizations in the Android Community.



John V. Petersen


ONLINE QUICK ID 1511051

TypeScript: The Best Way to Write JavaScript
In my previous article for CODE Magazine (http://www.codemag.com/Article/1509071), I outlined numerous interesting challenges that you deal with when writing JavaScript. JavaScript is a very popular language, and you need to learn it; you need to be very good at it. If anything, the last article made it amply clear that writing reliable and bug-free JavaScript is very difficult. The language has too many idiosyncrasies and loose ends to allow you to write reliable code for every circumstance. And although I think I did a pretty good job of outlining all the mistakes you can make in JavaScript, the reality is, when writing code, especially in the heat of a deadline, it's all too easy to make mistakes.

Sahil Malik
www.winsmarts.com
@sahilmalik

Sahil Malik is a Microsoft MVP, INETA speaker, a .NET author, consultant, and trainer. Sahil loves interacting with fellow geeks in real time. His talks and trainings are full of humor and practical nuggets. You can find more about his trainings at http://www.winsmarts.com/training.aspx. His areas of expertise are cross-platform mobile app development, Microsoft anything, and security and identity.

The fact remains: You need to be excellent at JavaScript. You need to find ways to write reliable code using JavaScript. Here are three steps to JavaScript heaven:

1. Use transpilers. My choice is TypeScript.
2. Use TDD.
3. Use good design patterns, such as MVC, and frameworks that encourage that, such as AngularJS.

In this article, I'll focus on TypeScript. In subsequent articles, I'll focus on TDD and AngularJS. There are plenty of tutorials and articles available teaching you the basics of these three steps, so I'll assume that you're familiar with these technologies. Instead of focusing on what TypeScript is, I'll instead focus on why TypeScript is necessary. If you've never tried TypeScript, I encourage you to watch this video on Channel 9: https://channel9.msdn.com/events/Build/2014/3-576. In that video, Anders Hejlsberg talks about the basic architecture, syntax, and some applications of TypeScript.

Beyond what Anders says in that video, here are my reasons why TypeScript is amazing!

A Superset of JavaScript

TypeScript is a superset of JavaScript. This means that introducing TypeScript to your current JavaScript code is easy: just rename the .js file to .ts, and you're good to go. Of course, by simply renaming a file, you don't get the full power of TypeScript. But by simply renaming the file to .ts and continuing to write JavaScript like you've always done, you begin to get some of the advantages of TypeScript. These advantages are, of course, dependent upon the IDE you use, but for the purposes of this article, I'll talk about Visual Studio 2015 or Visual Studio Code.

For instance, consider the very simple add function, as shown below:

function add(a, b) {
  return a + b;
}

As you can tell, this is plain JavaScript. Because you renamed the file to .ts, the IDE begins to understand the code better, and allows you to refactor the code easily. Just right-click on the add text, and you'll see a menu, as shown in Figure 1.

You can easily rename a variable or function name. You can also peek or go to the definition. You can find all references. In other words, you have the power of a higher-level language, such as C#. If this were plain JavaScript, you'd be stuck doing error-prone string replacement. Try out the same code in a JavaScript file for a trip back to the 1990s.

Try out the same code in a JavaScript file for a trip back to the 1990s.

Additionally, the IDE also warns you if you're not passing in the parameters properly. For instance, if your add function expects two parameters but you're accidentally passing in only one, you get a nice red squiggly line, as shown in Figure 2.

Not only that, the IDE is able to judge the variable types, even if you don't specify their data types. In doing so, it'll then help you avoid common mistakes that are otherwise syntactically perfect JavaScript code. This can be seen in Figure 3.

Figure 1: The context menu for TypeScript

Static Typing

The lack of static typing is JavaScript's strength and its weakness. It makes the language very flexible, but also

very error-prone. TypeScript introduces the concept of static typing to JavaScript. It's as simple as this: when you don't assign a variable's data type by typing var variablename, TypeScript tries its best to infer its data type using whatever the variable is assigned to. If it's unable to determine the data type, it assigns it the any data type.

You can go one step further, and assign specific data types to your variables. The data types can be Boolean, Number, String, Array, Enum, Void, or Any. You can also assign an object type as a data type.
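Here's a quick sketch of those annotations in action; the variable names are mine, chosen only for illustration:

```typescript
// Explicit data type annotations; the compiler rejects mismatched assignments.
var isDone: boolean = false;
var lineCount: number = 42;
var title: string = "TypeScript";
var scores: number[] = [90, 75, 88];
var notSure: any = 4;        // 'any' opts out of static type checking
notSure = "now a string";    // legal, because notSure is 'any'

// title = 10;               // would be a compile-time error

console.log(scores.length);  // prints 3
```

Everything compiles down to ordinary JavaScript; the annotations exist only at design and compile time.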
For instance, Ive slightly modified the add function as
below,

This way, you can use the features of ES6 today and write
reliable code right now.
There are many other examples. I suggest visiting http://
es6-features.org/ and checking out everything ES6 offers.
Many of those offerings are usable today in JavaScript. As
a simple example, let me pick just one of those features,
called Arrow Functions. Arrow Functions is a shorthand
for writing full functions, much like Lambda expressions
in C#.
For instance, examine the code below that is ES6 code,
but that can be written in a TypeScript file:
var odds = evens.map(v => v + 1);

function add(a:number, b:number) {


return a + b;
}

Now, if you accidentally try to add two stringswhich is


a very common mistake given that the concatenation operator works on both strings and numbersyoull get a
helpful error, as shown in Figure 4.
In fact, Visual Studio gives IntelliSense hints, as can be
seen in Figure 5.
For the most commonly used datatypes in HTML, TypeScript implements interfaces, allowing you to strongly
type the selected HTML element types, as can be seen
in Figure 6.
If youre using a non-HTML JavaScript engine, such as V8
running in NodeJS, you can add definition files to add
support for known types specific to a platform. More on
that later.
The advantage of being able to strongly type HTML elements (as shown in Figure 6), is that now you get rich
coding help when authoring TypeScript. For instance, the
HTMLCanvasElement is somewhat new, and I didnt realize that you could easily convert the canvas data into a
DataURL and stick on an image very easily. Fortunately,
TypeScript IntelliSense teaches you as you go along, as
shown in Figure 7.

Those of us familiar with Lambda Expressions immediately


understand what that code attempts to do. Heres the automatically generated JavaScript equivalent of the above
code:
var odds = evens.map(
function (v) {
return v + 1; });

If you remember from my WTF.js article, I mentioned that
the this keyword is extremely confusing. That's because
the value of this changes based on context. The good
news is that lambda expressions, or Arrow Functions, automatically assign the right value to this, so you don't
have to save this into that.
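To make that concrete, here's a minimal sketch (the Greeter class is my own made-up example, not from the article's figures) showing an arrow function capturing this from the surrounding instance:

```typescript
class Greeter {
    name = "Sahil";
    // The arrow function captures "this" from the instance at
    // creation time, so the old "var that = this" trick isn't needed.
    greet = () => "Hello, " + this.name;
}

var greeter = new Greeter();
var detached = greeter.greet;  // even detached, "this" stays bound
```

Even when greet is detached from the instance, calling detached() still returns "Hello, Sahil", because the arrow function closed over this when the instance was created.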
I could spend all day talking about ES6 and how it's going to solve all of the challenging issues that humanity
Figure 2: Intelligent code hints

Features from the Future


JavaScript sucks, but it's getting better. There's ES6,
which attempts to solve many of the issues I outlined in
my WTF.js article last time in CODE Magazine. The problem
is that the browser where your code runs may not support
ES6 or, even worse, may support only part of it.
TypeScript to the rescue! You can write your code using
everything ES6 offers, and even some of what ES7 will offer at some point in the future. The TypeScript transpiler
converts your code to ES5, or even ES3 if you ask it to.

Figure 3: Data type error checking

Figure 4: Static typing helps at coding time

codemag.com

TypeScript: The Best Way to Write JavaScript

23

faces today. Each example is better than the next. I'll let
you try them in ES6 in your own time. Before I leave this
topic, though, let me show one other example.

I encourage you to type out ES6 in TypeScript. You'll see
that the majority of ES6 features are usable today. Even
some ES7 features are supported today.

In my WTF.js article, I explained why globals are evil, and
how JavaScript lets you create accidental globals super
easily. Even when you do remember to put in the var keyword, you still have limited control over where the variable
is scoped. Loops are especially problematic. I wish there
were a block scope restricted by curly brace symbols {} in
JavaScript, like in almost any other similar language.

Compatibility with Browsers

ES6 introduces a keyword called let that does exactly
that. For instance, consider the code below:
var i = 10;
for (let i = 1; i < 10; i++) {
// do something useful
}

The above is ES6 code, but the let keyword ensures that
the inner i variable in the for loop doesn't interfere with
the same-named variable outside the loop. TypeScript allows you to write exactly the same code, run it, and
take advantage of it in browsers that don't support ES6
yet. TypeScript converts the above code into the following:
var i = 10;
for (var i_1 = 1; i_1 < 10; i_1++) {
}

I hate browser proliferation. Yes, I know: If there were no
Firefox, we'd still be using IE6. But proliferation makes
my job as a Web developer so much harder. I hate browser
differences so much! So many times, I've written code and
tested it in IE, only to find that some smart guy tried it in
Firefox. When I fix it in Firefox, it breaks in Chrome. I fix it
in Chrome and it breaks in IE. Give me a break!
TypeScript helps fix that problem too. You just write in
TypeScript, the one language common across all browsers. You ask the TypeScript compiler to convert it to ES3,
ES5, or the most commonly used option, CommonJS. This
always ensures that the correct cross-browser compatible
JavaScript is emitted, no matter what you write on the
other side.
It's so good to finally focus on a single language set! It
makes me productive, and I think I'll stick with it.

IntelliSense
IntelliSense is a big deal. I've touched upon this in the Static Typing section above. It gets even better! You could write
interfaces or classes, for instance, as shown here:
interface animal {
animalName: string;
}
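As a quick sketch of where that leads (the Dog class here is my own hypothetical example), a class can implement that interface, and the compiler then enforces the animalName contract:

```typescript
interface animal {
    animalName: string;
}

// A class implementing the interface above; omitting
// animalName would be a compile-time error.
class Dog implements animal {
    animalName = "dog";
    bark(): string {
        return this.animalName + " says woof";
    }
}

var dog = new Dog();
```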

Figure 5: IntelliSense hints tell us about data types.

Once you've declared interfaces and classes, and have
strongly typed your variables properly, you start getting
Figure 6: Support is built in for important HTML data types.

Figure 7: TypeScript IntelliSense is full of help.


rich IntelliSense, making your code a lot more meaningful, as can be seen in Figure 8.
You can see how powerful this can be, especially when
authoring large libraries and complex interdependent JavaScript/TypeScript code. It gets even better than that.
Much better!
These days, you almost never write JavaScript from
scratch. You always stand on someone else's shoulders
by using libraries and frameworks, such as jQuery and
AngularJS. For nearly every library you care about, the
community has written .d.ts, or definitely typed, files that
give you IntelliSense for those third-party libraries.
For instance, for AngularJS, I'd simply add a line at the
top of my code as shown here:

Figure 8: TypeScript IntelliSense is full of help.

Figure 9: TypeScript also works in Visual Studio Code.


higher-level languages, these will be second nature to you.
You'll find writing JavaScript just as much fun as writing C#.

/// <reference path="/dts/angularjs/angular.d.ts" />

By doing so, I get rich IntelliSense for all of AngularJS.
For instance, as can be seen in Figure 9, I get to know
what interface type the angular variable is. If you
were wondering why Figure 9 looks different, it's because
it's from Visual Studio Code, the freebie cross-platform
code editor that Microsoft introduced recently.
Another such example is the forEach method. It'd be nice
to be able to iterate over a JavaScript array. AngularJS
thinks so, jQuery agrees, and so does ES6. Except that the
original spec of JavaScript didn't think iterating over an
array would be that important. Heh.
Both jQuery and Angular give us a forEach method. Unfortunately, their parameter orders are reversed. This
alone causes a lot of confusion and errors because frequently, you end up using jQuery and Angular together.
Fortunately, TypeScript tells you exactly what to expect,
as can be seen in Figure 10.
There are many other such helpers across the IDE: hints,
tooltips, IntelliSense, and so on. As someone used to

Rich Community Support


The question, of course, is: Where do you get the .d.ts
files for your project? The answer is that for code you
write, the TypeScript compiler generates the d.ts files for
you. For the libraries you use, you can simply download
the d.ts files from www.definitelytyped.com.
All of those files that you can download for absolutely
free are quite good. I'll never understand how open
source is so good, but I'll take it. For instance, I was writing AngularJS code and wanted to write a constant on my
module when I got the very helpful IntelliSense help text
seen in Figure 11.
Yes, before you ask, it works on a Mac too. In fact, the
Visual Studio Code IntelliSense tooltip is slightly better
formatted. It's not bad for a beta product.
One thing I must caution you about, however, is that
TypeScript and the IDEs that make use of it don't understand the difference between standard third-party libraries and your code. For instance, you're not supposed to
rename the module.constant in Figure 11 to module.

Figure 10: TypeScript IntelliSense tells us which parameter does what!

Figure 11: Help for most common libraries


Figure 12: The IDE catches errors for you.


donkeykong. That constant function is declared in a
standard third-party library called AngularJS.
Both Visual Studio and Visual Studio Code let you rename
it without any warning bells or sirens going off. Ouch!

SPONSORED SIDEBAR:
HTML Mentoring

Need help with an HTML project? CODE Consulting's special introductory offer gives you full access to a CODE team member for 20 hours to receive assistance for your project needs. Mentoring and project roadblock resolutions with CODE's guidance are a perfect partnership. And now, CODE Consulting's Introductory Packages are only $1,500, half price! Email info@codemag.com to set up your time with a CODE specialist, today!

I wish that the IDEs supported the concept of blackboxing
framework files. Until that happens, here's a best practice
I'd like to share with you. The d.ts files should reside in
a single folder; simply mark that folder as read-only. Then
you'll never accidentally change them. In your master git
repo or SVN folder, ensure that most developers don't
have write-access to that particular folder. This way, with
the combination of TypeScript and source control, you'll
never accidentally step on this landmine.

Object-Oriented Features
Okay, raiseYourHand(); if you: ILikeObjectOrientedCode. I thought so!
JavaScript isn't object-oriented. It's weird-oriented. It
likes prototypes, which are sort of like objects and inheritance, but not quite. In fact, prototypal inheritance is confusing. If you're a top-notch OO developer,
prototypal inheritance actually makes it difficult for you
to understand what's going on.
TypeScript solves that problem too. It brings concepts
such as inheritance, interfaces, and classes into JavaScript in the manner in which you'd expect them to behave. For instance, consider this code:
interface foodItem {
foodName: string;
calories?: number;
}

It simply declares an interface, which says that the object implementing this interface must have a required
property of type string called foodName, and an optional
number property called calories.

This means that every bit of code relying on the
foodItem data type is guaranteed not to need null or
undefined checks on foodName. This alone is a huge
win!
You know this from C#. Interfaces let you establish consistency. But JavaScript is a lot more powerful than C#.
In JavaScript, every function is an object, and both can
be thought of and used interchangeably.

JavaScript isn't object-oriented.
It's weird-oriented.

If you remember from my WTF.js article, I mentioned how
much JavaScript arrays suck! TypeScript solves almost
every issue with JavaScript arrays. One of those issues,
for instance, is that you can stuff anything into an array.
There's no way to establish consistency amongst the various objects inside an array.
TypeScript allows you to establish that consistency with
just one line of code, as shown here:
interface foodItemsArray extends Array<foodItem>
{
[index: number]: foodItem;
}

Now, if some developer tries to push a non-food item into
this array, the IDE catches that error for you. You can see
this in Figure 12.
I'm using an advanced OO concept called Generics here.
That's another side-benefit of TypeScript.
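As a minimal sketch of the Generics syntax (firstOrDefault is my own helper, not part of TypeScript or the article's listings):

```typescript
// A generic helper: T is inferred from the array you pass in,
// and the fallback must be of the same type T.
function firstOrDefault<T>(items: T[], fallback: T): T {
    return items.length > 0 ? items[0] : fallback;
}

var firstNumber = firstOrDefault([10, 20, 30], 0);   // 10
var firstName = firstOrDefault<string>([], "none");  // "none"
```

Passing a fallback of the wrong type, such as firstOrDefault([10, 20, 30], "oops"), is caught at compile time.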
As I mentioned, JavaScript treats functions as objects,
which means that you can do things that you can't do
in C#, such as declaring interfaces that define function
signatures. For instance, consider the code here:

This means that I can start writing code like this:


var newFoodItem: foodItem = {
foodName: "celery"
}

TypeScript ensures that every foodItem at least has a
name, and it allows you not to specify calories information.


interface newFoodItem {
(foodName: string, calories: number): foodItem;
}

This is an interface that defines a function signature and a
return value type. A possible implementation of this function is shown in Listing 1.


There are a number of interesting concepts in play in Listing 1. I've defined a function that implements an interface.
Additionally, the array is guaranteed to have only variables
matching a certain predefined data type. This means that you
don't have to do onerous type checking or null and undefined
checks. You could, of course, take this further and create a
single interface that's both a function and has properties
of the function. Just imagine how difficult or error-prone it
would be to achieve the same in plain old JavaScript!
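Here's a minimal sketch of such a hybrid interface (LabeledIncrement and makeIncrementer are made-up names for illustration): something callable like a function, yet carrying its own properties.

```typescript
// An interface that is both callable and has a property,
// something a plain C# interface cannot express.
interface LabeledIncrement {
    (start: number): number;
    label: string;
}

function makeIncrementer(): LabeledIncrement {
    // Assert the function into the hybrid type, then attach the property.
    var fn = <LabeledIncrement>function (start: number) {
        return start + 1;
    };
    fn.label = "adds one";
    return fn;
}

var increment = makeIncrementer();
```

Callers can now invoke increment(41) and also read increment.label, and the compiler checks both uses.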

deep by creating a class called WildAnimal that inherited
from Animal, in TypeScript, it would look like this:

Object-oriented concepts don't stop at interfaces. There are
also classes that have private variables and classes that inherit from each other. There are also static properties and
constructors. Sometimes these constructors can also have
optional parameters, effectively giving you overloaded constructors.

Inheritance and OO concepts are very important. When
writing relatively complex code, they're tools that I find essential. TypeScript puts all this power within my reach.

Now pause and think for a moment: How would you implement all that stuff in the preceding paragraph in plain JavaScript? I'm serious: Stop reading, and think through how you
would implement this in JavaScript. It's complicated, isn't
it? I'm sure you're thinking in terms of prototypes, returning
various properties of prototypes, all cleverly placed to mimic
class properties. You're probably also thinking of how you
would implement static variables. It's doable, but not easy!

class WildAnimal extends Animal {
    habitat: string;
}

The JavaScript implementation, on the other hand, is
shown in Listing 3.

Modules
Modularizing code is yet another important code organization tool I like to use. Modularization is important
because it lets you split your code into multiple files, and

In TypeScript, you simply write it as shown in Listing 2.
Inheritance as done in TypeScript is a lot easier to understand than as done in JavaScript. It only gets worse as
you inherit deeper. For instance, if I were to inherit just one level

Listing 1: A function that implements an interface


var addFoodItem: newFoodItem =
    function (foodName: string, calories: number) {
        var newFoodItem: foodItem = {
            foodName: foodName,
            calories: calories
        };
        foodItems.push(newFoodItem);
        return newFoodItem;
    }

Listing 2: Implementing a class with constructors and statics


class Animal {
    static standardSound = "Woof";
    animalType: string;
    newSound = "";
    constructor(animalType?: string, newSound?: string) {
        this.animalType = animalType;
        this.newSound = newSound;
    }
    makeSound() {
        if (this.animalType == "Dog") {
            return Animal.standardSound;
        }
        else {
            return this.newSound;
        }
    }
}
var dog: Animal = new Animal("Dog");
dog.makeSound();
// Woof
var cat: Animal = new Animal("cat", "meeow");
cat.makeSound();
// meeow

Listing 3: A complicated and error-prone JavaScript implementation


var __extends = (this && this.__extends) || function (d, b) {
    for (var p in b) if (b.hasOwnProperty(p)) d[p] = b[p];
    function __() { this.constructor = d; }
    __.prototype = b.prototype;
    d.prototype = new __();
};
var Animal = (function () {
    function Animal(animalType, newSound) {
        this.newSound = "";
        this.animalType = animalType;
        this.newSound = newSound;
    }
    Animal.prototype.makeSound = function () {
        if (this.animalType == "Dog") {
            return Animal.standardSound;
        }
        else {
            return this.newSound;
        }
    };
    Animal.standardSound = "Woof";
    return Animal;
})();
var WildAnimal = (function (_super) {
    __extends(WildAnimal, _super);
    function WildAnimal() {
        _super.apply(this, arguments);
    }
    return WildAnimal;
})(Animal);
var dog = new Animal("Dog");
dog.makeSound(); // Woof
var cat = new Animal("cat", "meeow");
cat.makeSound(); // meeow


Listing 4: Splitting your code into modules


module Greetings {
    export interface Greet {
        sayHello(person: string): string;
    }
}
module FormalGreetings {
    export class SayFormalGreeting implements Greetings.Greet {
        sayHello(person: string) {
            return "Good Morning : " + person;
        }
    }
}

What if I wanted to pass in three numbers instead of two?
What if the third parameter had a default value, for when
I forgot to supply a value on the call?
What if there were an unknown number of parameters?
Although all of the above are achievable in JavaScript,
you have to write a lot of code to achieve all of them without any errors. TypeScript makes all of this a lot easier.
For instance, consider the code shown in Listing 5.

var formalGreeting = new FormalGreetings.SayFormalGreeting();
formalGreeting.sayHello("Sahil");

This same function can now be called in multiple different
ways, as shown here:

Listing 5: A better and more flexible add method

add(1, 2);          // returns 3
add(1, 2, 3);       // returns 6
add(1, 2, 3, 4, 5); // returns 15

function add(a: number, b: number, c: number = 0,
             ...restOfTheNumbers: number[]) {
    // add the named parameters plus any rest parameters
    return a + b + c +
        restOfTheNumbers.reduce((sum, n) => sum + n, 0);
}

each file can expose its interaction points while keeping


the internal sausage factory secret.
For instance, consider the code shown in Listing 4.
As you can see, by splitting the code into modules, I've
been able to hide certain parts of my implementation
that consumers of my code don't need to bother with.
What's even cooler is that I could split these modules into
multiple files and simply reference them using the ///
<reference> tag. This way, I avoid the danger of other developers peering into my sausage factory or, even worse,
changing my internal implementation, thereby breaking
everything that depends on my class. Also, consider that
changing the internal implementation can be done completely at runtime. You don't need to actually edit the
file; you just need to change the reference of a property to
completely redefine its meaning. This technique is called
monkey patching. Monkey patching, just like many other things in JavaScript, is both the strength and the weakness
of JavaScript. It's too much power, much like handing an
AK-47 to a monkey.
TypeScript, in addition, also allows you to load these dependency modules using the syntax shown here:
import greetingsModule = require('Greetings');

You can therefore conditionally load modules by surrounding this import statement inside if blocks. You can't
do that in compiled languages.

Function Parameters
I've touched upon the power that TypeScript brings to
function parameters earlier in this article, but it deserves
its own section. Earlier, I mentioned a simple add function that adds two numbers. I'd mentioned it
because its parameters are statically typed: you can't pass in strings.


What if I did want to pass in strings, and get a concatenated string back?
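One possible answer, sketched here with TypeScript function overloads (this is my own example, not from the article's listings): the same add can accept either numbers or strings, and the compiler picks the right signature for each call.

```typescript
// Two overload signatures, one implementation. Callers see only
// the number/number and string/string variants; mixing them is
// a compile-time error.
function add(a: number, b: number): number;
function add(a: string, b: string): string;
function add(a: any, b: any): any {
    return a + b;
}

var sum = add(1, 2);            // typed as number
var phrase = add("con", "cat"); // typed as string
```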


Summary
JavaScript sucks; I think I've made that abundantly clear.
Also, you can't avoid JavaScript; I think I've made that
abundantly clear as well. Writing complex and reliable JavaScript is possible, though. In this article, I showed that the first
step to this promised land is using TypeScript.
As amazing as TypeScript is, there's one big disadvantage:
It needs to be transpiled. In other words, the code you
write isn't the code that runs. You write some code, it
gets converted into JavaScript, and that generated JavaScript runs in the browser. This introduces some friction, especially while debugging. It requires you to set
up your development environment considering how and
when the TypeScript code gets transpiled into JavaScript.
Certainly, you don't want this transpilation to happen on
the fly in production. These transpiled files are built and
shipped ahead of time. This makes TypeScript perhaps not
as flexible as JavaScript. Not to mention that hitting a
breakpoint in TypeScript requires complex workarounds
such as source maps. And let's be honest, not all browsers
understand all of that.
Still, given the huge advantages that TypeScript brings,
this is a minor downside that I would gladly overlook. In
fact, I'd go as far as saying: TypeScript is the best way of
writing JavaScript today.
Also, because TypeScript is a superset of JavaScript,
there's really no reason not to write TypeScript. In my
next article, I'll talk about other fundamental building
blocks of good JavaScript: MVC, Angular, TDD, and more.
TypeScript is the best way to write Angular 2 code. In fact,
the alternative of writing it in plain JavaScript is so bad,
I'd argue that it's the only way to write Angular 2 code.
That and more in my next article. Until then, happy coding!



Sahil Malik


ONLINE QUICK ID 1511061

The Baker's Dozen: A 13-Question Pop Quiz of SQL Server Items
I've been on both sides of the interview table. I've interviewed other developers, either as a hiring manager or as part of
an interview team. I've also sweated through interviews as the applicant. I've written Microsoft certification questions and
interview test questions, and have taken plenty of tests. Once I even pointed out where the answer sheet for a company's test
questions was incorrect. So I'm going to borrow from
those experiences in this installment of Baker's Dozen, by
giving readers a chance to take a test on SQL Server and
Transact-SQL topics.
A New Era of Baker's Dozen Articles


Kevin S. Goff
kgoff@kevinsgoff.net
http://www.KevinSGoff.net
@KevinSGoff

Kevin S. Goff is a Microsoft SQL Server MVP. He is a database architect/developer/speaker/author, and has been writing for CODE Magazine since 2004. He's a frequent speaker at community events in the mid-Atlantic region and also speaks regularly for the VS Live/Live 360 Conference brand. He creates custom webcasts on SQL/BI topics on his website.

After roughly ten years and roughly 50 Baker's Dozen articles, I've decided that it's time to shake things up a little.
For now and (at least) the immediate future, I'm going
to present 13 test-style questions (or scenarios) on database topics, and see how readers do with the questions.
Even if you're not specifically looking for ideas for test
questions, the example questions might help you with
various database programming tasks.

What's on the Menu?

This section lists the 13 test question topics in this article.
With the exception of the last item, I'd categorize these
questions as for beginning to intermediate developers.
These are the types of interview questions I'd expect a mid-level database developer to be able to answer. In the first
half of this article, I'll cover the test questions, and in the
second half, I'll cover the answers. Although this magazine
isn't an interactive software product that requires you to answer before seeing the results, you'll at least have a chance
to read the questions and try to answer unaided. That is, of
course, if you can avoid peeking into the second half of the
article to see the answers!
Several of the questions are multiple choice, where
responses are either 100% correct or 100% incorrect. In a few
cases (such as the topics on NOT IN vs. NOT EXISTS and the
MERGE statement), they're meant to trigger discussions exploring the knowledge and thought process in the response.
Here's the list:
1. Scalar variables
2. Comma-separated values from multiple rows subquery
3. Date Arithmetic with Datetime and Date Data Types
4. Subquery Syntax and results
5. SCOPE_IDENTITY and @@IDENTITY
6. Features of SQL Server Views
7. The BETWEEN statement
8. IDENTITY values and uniqueness
9. Basic Syntax with VARCHAR
10. NOT IN vs NOT EXISTS and Performance
11. The REPLACE function
12. Rules on indexes
13. Bakers Dozen Spotlight: Advanced use of MERGE

30

The Bakers Dozen: A 13-Question Pop Quiz of SQL Server Items

1: Scalar Variables

Suppose you create a SQL Server table with the following


DDL (data definition language):
CREATE TABLE dbo.ControlTable
(ControlID int identity,
CompanyName varchar(100))
GO

Now suppose someone inserts a single row into the table.
Someone else intends to update the row to adjust the company name, but accidentally winds up inserting a new second
row instead of updating the first row. Now you'll have two
rows in the table, when you really wanted only one.
INSERT INTO dbo.ControlTable
(CompanyName)
VALUES ('Company ABC')
-- Someone needed to issue a correction
-- but accidentally wound up inserting a new row
INSERT INTO dbo.ControlTable
(CompanyName)
VALUES ('Company ABC, INC')

Now suppose that you want to populate a scalar variable


for the company name. You write two simple queries that
read from the table and populate a scalar variable.
Example A
DECLARE @CompanyNameString varchar(100)
SELECT @CompanyNameString =
CompanyName FROM ControlTable
SELECT @CompanyNameString

Example B
DECLARE @CompanyNameString varchar(100)
SET @CompanyNameString =
(SELECT CompanyName FROM ControlTable )
SELECT @CompanyNameString

What are the results?


a. Both Examples A and B generate an error because
both examples are trying to read multiple rows into
a single scalar variable.


b. Example A generates an error message, and Example


B returns one of the two CompanyName values.
c. Example A returns one of the two CompanyName values, and Example B generates an error message.
d. SQL Server converts the variables in both examples to
table variables and returns both CompanyName values.

2: Comma-Separated Values from Multiple Rows

Figure 1: The result set you want to generate

Suppose you create a SQL Server table with the following


DDL, and then insert the following rows:

some test data. The data stores two vendors in the master table and two orders in the order table (one for each vendor).

The Proverbial Broken Record

CREATE TABLE dbo.TestPurchases


( PurchaseID int identity,
Purchaser varchar(100),
PurchaseAmount money,
PurchaseReason varchar(100))
GO

CREATE TABLE dbo.VendorMaster


(VendorPK int, VendorName varchar(100))

I'll say it every day until I've spoken my last breath:
T-SQL skills and database skills are absolutely
essential for application developers.

INSERT INTO dbo.TestPurchases VALUES


('Kevin',50, 'SQL Book'),
('Kevin', 60, 'Steak dinner'),
('Steve', 55, '.NET Book'),
('Steve', 45, 'Music CDs'),
('Steve', 35, 'Lunch')

INSERT INTO dbo.VendorMaster


VALUES (1, 'Vendor A'),
(2, 'Vendor B')

You want to build the result set shown in Figure 1: one


row per Purchaser with the PurchaseReason column concatenated.

CREATE TABLE dbo.OrderTable


(OrderID int, VendorFK int,
OrderDate Date, OrderAmount money)

INSERT INTO dbo.OrderTable VALUES


(1, 1, cast('1-1-2008' as Date), 100),
(2, 2, cast('1-1-2009' as Date), 50)

You want to write a query that returns the vendors that


have at least one order in the year 2008. How many rows
will SQL Server return for the following query?

How can you accomplish this? (Select only one answer.)


a. Use the T-SQL FOR XML PATH ().
b. Use the T-SQL STRINGCONCAT function.
c. Use the SQL Server CLR and a .NET function.
d. T-SQL cannot accomplish this. You'd need to use a
reporting tool.

3: Date Arithmetic with DateTime and Date Data Types

Suppose you have two variables: a DateTime variable and


a Date variable. You try to add a value of (1) day to both
variables.
DECLARE @TestValueDateTime DATETIME = GETDATE()
DECLARE @TestValueDate DATE = GETDATE()
SELECT @TestValueDateTime + 1 -- Using DateTime
SELECT @TestValueDate + 1 -- Using Date

By default in SQL Server, what does SQL Server generate


for the two SELECT statements after the code executes?
(Select only one answer.)

SELECT * FROM dbo.VendorMaster


WHERE VendorPK IN
(SELECT VendorPK FROM dbo.OrderTable
WHERE YEAR(OrderDate) = 2008)

a. One row
b. Two rows
c. Zero rows
d. The query generates an error message.

5: SCOPE_IDENTITY() and @@IDENTITY

Suppose you create the following test table, which contains an identity column that SQL Server generates. You
insert a row into the table.
CREATE TABLE dbo.TestTable1
(IdentityKey int identity, Name varchar(100))
INSERT INTO dbo.TestTable1 values ('Kevin')

a. Both SELECT statements generate an error message.


b. The first SELECT statement adds a single minute to
the value of the DateTime variable, and the second
SELECT statement adds a day to the value of the Date
variable.
c. Both SELECT statements add a single day to whatever
was in each of the two variables.
d. The first SELECT statement adds a day to whatever
was in the DateTime variable, and the second SELECT
statement generates an error.

4: Subquery Syntax and Results

Suppose you create a basic Vendor Master table and a related Order table, with orders for vendors. Here's the DDL and


You'd like to know and capture what SQL Server generated as an identity value for the INSERT, regardless of
any other processes. You use both the SCOPE_IDENTITY()
function and the @@IDENTITY system global variable.

Both SELECT statements return a value of 1.
Can you name one scenario where SCOPE_IDENTITY()
and @@IDENTITY don't return the same values? In other
words, describe a situation where one statement returns

select * from NameTest


WHERE LastName BETWEEN 'B' AND 'C'

How many rows does this query return? (Select only one
answer.)
a. Four rows
b. Six rows
c. Seven rows
d. Zero rows
e. The query generates an error message.
Figure 2: The first eight rows of the Purchasing.Vendor table
the desired value of 1 and the other returns something
other than a value of 1.

6: Features of SQL Server Views

Suppose you create a table with the following DDL structure, including an identity column:
CREATE TABLE dbo.TestIdentity
(IdentityValue int identity,
IdentityName varchar(100))

Suppose someone creates a view against the Microsoft


AdventureWorksDW database. You can assume that the tables FactInternetSales and DimDate exist in the database.
You can also assume referential integrity between the two
tables on the DateKey/ShipDateKey JOIN condition and
that the column references are correct.

Now suppose you insert two rows into the table. If you
query the table after executing the inserts, you clearly
see the values of 1 and 2 for the identity values.

CREATE VIEW dbo.vwSalesByYear


@YearToRun int
as

SELECT * FROM dbo.TestIdentity

SELECT FactInternetSales.ProductKey,
SUM(FactInternetSales.SalesAmount) AS AnnualSales
FROM FactInternetSales
JOIN DimDate on ShipDateKey = DateKey
WHERE DimDate.CalendarYear = @YearToRun
GROUP BY FactInternetSales.ProductKey
ORDER BY FactInternetSales.ProductKey

Assuming that all column and table name references are


correct, how many problems do you see with this SQL
Server view? (Select only one answer.)
a. The view has one issue that generates an error.
b. The view has two issues that generate errors.
c. The question does not provide enough information.
d. The view has no issues and will execute properly.

7: The BETWEEN Statement

Suppose you create a test table of LastName values. You


insert nine rows into the test table, and then create an
index on the LastName column.

INSERT INTO dbo.TestIdentity (IdentityName)


VALUES ('Kevin'), ('Steven')

Yes or No, will the identity designation in this situation


guarantee uniqueness for the IdentityValue column in
this table?

9: Basic Syntax with VARCHAR

Suppose you query the Purchasing.Vendor table in AdventureWorks. The first eight rows (in Name order) look like
Figure 2:
Suppose you want to retrieve all rows where the vendor
name begins with the word "American". You execute the
following lines of code, which use the LIKE statement and
the wildcard character (%) at the end of the lookup value.
DECLARE @TestValue varchar
SET @TestValue = 'American%'
select * from Purchasing.Vendor
where name like @TestValue

How many rows will the query return?

10: NOT IN vs NOT EXISTS and Performance

CREATE TABLE dbo.NameTest


( LastName varchar(50))

For years, developers and DBAs have debated the use of


IN vs EXISTS, as well as NOT IN vs NOT EXISTS. This question looks at the performance differences between the
two competing language statements.

INSERT INTO dbo.NameTest


VALUES ('Anderson'), ('Bailey'),
('Barry'), ('Benton'),
('Boyle'), ('Canton'),
('Celkins'), ('Cotton'),
('Davidson')

Suppose you're working with a product dimension table
of a few thousand products, and a large data warehouse
fact table of sales transactions. Assume that the fact table
could be hundreds of millions of rows.

CREATE INDEX LastName ON dbo.NameTest (LastName)

You execute the following query:


8: IDENTITY Values and Uniqueness


You want to retrieve the products that don't exist in the fact
table. Assume that the ProductKey in the fact table permits
null values. You write two different queries: one uses NOT
EXISTS and the other uses NOT IN. Which of the two queries


would you expect to perform better? Also, would you use


one approach as a general practice over the other?
select * from DimProduct
where NOT EXISTS(
select * from FactOnlineSales
WHERE FactOnlineSales.ProductKey =
DimProduct.ProductKey )
select * from DimProduct
where ProductKey not
in (select ProductKey
from FactOnlineSales )

11: Use of REPLACE

Suppose you're trying to clean up a client database where users have spelled out the address notation PO Box in many different ways. You'd like to standardize all of the following possible spellings into a common string, as shown in Figure 3. You know that you can use REPLACE to substitute a partial substring, as follows:

REPLACE (AddressLine, 'P.O. Box', 'PO Box')

Does that mean you'd need to write a separate REPLACE function for every possible combination? That could mean a very long SQL statement. Is there a better way? Could you tell the REPLACE function to read from a table, or must you write a separate REPLACE statement for each possible combination of values?

12: Rules on Indexes

Select all statements that are true about database indexes in SQL Server. (There can be more than one correct statement.)

a. A SQL Server table can have many non-clustered indexes, but cannot have more than one clustered index.
b. A SQL Server table can have many clustered indexes, but cannot have more than one non-clustered index.
c. The only way to establish uniqueness in a SQL Server table for a specific column is to define the column as a Primary Key.
d. A column for a clustered index can be a different column than the column you define as the Primary Key.
e. Non-clustered indexes can also include non-key columns, to optimize queries that frequently retrieve those columns in addition to the key columns.

13: Baker's Dozen Spotlight: Advanced Use of MERGE

I'll close the round of questions with a more advanced question. This scenario might initially seem like an esoteric example, although the situation is a fairly common challenge in data warehousing environments. I ran into this situation in the field about two years ago. There's some important background to cover.

Suppose you have a product master table in an OLTP (transaction) system, as well as a Product dimension in a data warehousing environment.

CREATE TABLE dbo.OLTPProduct
( ProductSKU varchar(25),
  ProductName varchar(100),
  ProductPrice decimal(14,4),
  LastUpdate DateTime )

CREATE TABLE dbo.DimProduct
( ProductPK int identity,
  ProductSKU varchar(25),
  ProductName varchar(50),
  ProductPrice decimal(14,4),
  CurrentFlag bit,
  EffectiveDate Datetime,
  ExpirationDate datetime )
Take special note that the Product dimension table contains columns for an effective date and an expiration date. The requirement here is to preserve historical changes in the table, for historical reporting purposes. For instance, a product might sell at $49.95 for a year, and then might see a price increase the following year. Analysts might want to report on sales under the old price as well as the new one. In the data warehouse, you store each version as a separate row with a separate surrogate key. You use the surrogate key in effect at the time of the sales transaction when you post sales to the fact table. This is known in the data warehousing world as a Type 2 Slowly Changing Dimension. (In a Type 1 scenario, you don't care about preserving history.)

OK, now populate some data. Start with an INSERT of one product, when the company first introduces the product.

INSERT INTO dbo.OLTPproduct
VALUES ('ABCDE', 'USB 64 GB Thumb Drive',
    49.95, CAST( '7-1-2015' AS DATE))

After an overnight ETL (Extract, Transform, and Load) process to populate the DimProduct dimension, you'd expect both tables to look like Figure 4.

When you update the product's price, you want to retire the old version of the row in the product dimension, and then insert a new row into the product dimension with the new price.

Figure 3: Examples of some of the possible values you'd like to standardize

Figure 4: The contents of dbo.OLTPProduct and dbo.DimProduct after one insert


Figure 5: The content for dbo.OLTPProduct and dbo.DimProduct after an update


UPDATE oltpproduct
SET ProductPrice = 54.95,
LASTUPDATE = GETDATE()
WHERE ProductSKU = 'ABCDE'

After another ETL process to update the data warehouse, you'd expect both tables to look like the results in Figure 5. Again, take special note that you don't want to overwrite the version of the product in the data warehouse product dimension. Instead, you want to retire the old row, and insert a new row (a new version of the ProductSKU) with a new price and a new effective date. Also note that the product dimension contains a flag for which version is current.
With the two versions of the row, any historical transaction
fact tables with sales can point to the ProductPK of 1 for
sales that occurred when the product sold at 49.95, and can
point to the ProductPK of 2 for sales that occurred under the
newer price. Again, this is classic fundamental Type 2 Slowly
Changing Dimensions in a data warehouse environment.
OK, that was a significant amount of background. Now you get to the question. In SQL Server 2008, Microsoft introduced the MERGE statement, which allows developers to INSERT and UPDATE in one statement. MERGE initially seems like a good candidate to handle this situation: inserting a new row into the product dimension in response to an OLTP product insert, and performing an update followed by an insert on an OLTP product update. However, there's a problem.
MERGE (targettable) Targ
USING (SourceTable) Src ON Targ.Key = Src.Key
WHEN NOT MATCHED then Insert
WHEN MATCHED - can we update and then insert?

You'd like to use the WHEN MATCHED clause when you encounter a row in the source that matches a row in the target (based on the ProductKey), and then perform both an update and an insert to the product dimension. However, the MERGE statement doesn't allow you to perform both an UPDATE of the old row (to retire the old version) and an INSERT of the new version of the product row (with the new price). SQL Server only allows one DML statement for the WHEN MATCHED condition.
So here's the question: Is there some way you can still use
the MERGE statement to retire the old version of the row
AND insert a new version of the row when you detect a price
change? Or do you need to look at another approach?

The Answer Sheet


For the Scalar Variables question (question 1), the answer is C: Example A returns one of the two CompanyName values, and Example B generates an error message. Although it's true that SQL Server scalar variables can only hold one value at any one time, SQL Server's specific behavior depends on how you're populating the variable. In the case of example A, a SELECT statement that populates a variable returns the last value from the rows that the query scans. In the case of example B, a SET statement generates an error if SQL Server attempts to read more than one row. This makes example A riskier, as the lack of an error message masks the problem: You might never realize that the code should have generated an error because of the multiple values. So the SET syntax is recommended over the SELECT syntax.
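A quick way to see the difference is to run both assignments against a table with more than one row (the dbo.Customers table below is made up for illustration):

```sql
DECLARE @Name varchar(50)

-- SELECT assignment: no error is raised; the variable simply
-- ends up holding the value from the last row the query scans
SELECT @Name = CompanyName FROM dbo.Customers

-- SET assignment: SQL Server raises error 512 ("Subquery
-- returned more than 1 value...") because the subquery
-- returns two rows instead of one
SET @Name = (SELECT CompanyName FROM dbo.Customers)
```

The SET version fails loudly, which is exactly the behavior you want when multiple rows indicate a logic problem.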
For the Comma-Separated Values for Multiple Rows question (question 2), the answer is A (use FOR XML). Developers who weren't already aware of this feature (or didn't realize that FOR XML can generate concatenated lists) should remember that FOR XML is extremely versatile! You can combine FOR XML with the STUFF function.
SELECT Purchaser,
SUM(PurchaseAmount) AS TotPurchase,
( STUFF (( SELECT ', ' + PurchaseReason
FROM dbo.TestPurchases inside
where inside.Purchaser =
Outside.Purchaser
FOR XML PATH ('')), 1,1,''))
AS Purchases
FROM dbo.TestPurchases Outside
GROUP BY Purchaser

If you're curious about the sequence of events, the FOR XML PATH ('') generates the most basic of XML strings. The comma at the beginning of the SELECT places a comma in front of every new value in the rows that the query reads. The STUFF statement at the end replaces the first character in the result set (the opening comma) with an empty string.
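You can watch those two steps in isolation by running the pieces separately (the PurchaseReason values shown here are invented; substitute your own data):

```sql
-- Step 1: FOR XML PATH('') flattens the rows into one string,
-- with the leading ', ' glued onto every value:
SELECT ', ' + PurchaseReason
FROM dbo.TestPurchases
FOR XML PATH('')
-- produces something like: ', Supplies, Travel, Equipment'

-- Step 2: STUFF(string, 1, 1, '') replaces the first character
-- (the opening comma) with an empty string:
SELECT STUFF(', Supplies, Travel, Equipment', 1, 1, '')
-- returns ' Supplies, Travel, Equipment'
```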
For the Date Arithmetic with DateTime and Date Data Types question (question 3), the answer is D: the first SELECT statement adds a day to whatever was in the DateTime variable, and the second SELECT statement generates the following error: "Operand type clash: date is incompatible with int."

SQL Server has always permitted developers to add days to a DateTime data type simply by using the arithmetic operator (+), even though most SQL professionals have always recommended the DateAdd function over the arithmetic operator. When Microsoft introduced the Date data type in SQL Server 2008, they enforced the use of DateAdd by generating errors if developers tried to add days to a Date type by using the plus sign. For companies that upgraded to SQL Server 2008, this created a problem for those who changed DateTime data types to Date data types if they also had production code that used the plus sign for date arithmetic!

DateAdd was always a much better option, as developers had the ability to specify the date part (e.g., add one month to a date, add three days to a date, etc.).
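For instance, DATEADD works the same way for both Date and DateTime values, because the date part is named explicitly:

```sql
DECLARE @d date = '2015-07-01'

SELECT DATEADD(DAY, 3, @d)     -- adds three days: 2015-07-04
SELECT DATEADD(MONTH, 1, @d)   -- adds one month: 2015-08-01

-- The plus sign still works for DateTime, but fails for Date:
DECLARE @dt datetime = '2015-07-01'
SELECT @dt + 1                 -- legal: adds one day
-- SELECT @d + 1               -- error: operand type clash
```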


For the Subquery Syntax and Results question (question 4), the answer is B (two rows). Yes, you read that correctly: SQL Server returns two rows. If you thought the query would generate an error, I'll give half credit! Yes, the inside subquery refers to a foreign key by the wrong column name. If you simply execute the inside subquery, then yes, you'll generate an error. However, when SQL Server executes the entire query, SQL Server (for all intents and purposes) relaxes the rules for resolving references, to the point where the database engine returns ALL rows to the left of the IN statement in the outer query. This can be very dangerous! The moral of the story here is to always make sure that you test the subquery portion of a derived table subquery. A second moral of the story is to use EXISTS instead of IN, which I'll discuss later.
The SCOPE_IDENTITY() and @@IDENTITY question (question 5) focuses on a topic that some developers (even experienced ones) aren't clear about. As a rule, if you want to know the single identity value that SQL Server assigns on a single insert, you should always use SCOPE_IDENTITY(). That raises the question: What's the difference between SCOPE_IDENTITY() and @@IDENTITY, especially since both features return a value of 1 in this situation?

Here's the official answer: SCOPE_IDENTITY() returns the last identity value that your INSERT created inside the current scope, whereas @@IDENTITY returns the last identity value that SQL Server generated as a result of your INSERT. What's the difference? There's a great one-word answer: triggers! Imagine a trigger that SQL Server fires as a result of your INSERT, and the trigger itself inserts a row into a different table that also has an identity column. @@IDENTITY returns the identity that SQL Server generated in the trigger, whereas SCOPE_IDENTITY() continues to return the identity value from the table associated with the original INSERT statement. If you always want to retrieve the single identity value that SQL Server assigns for the INSERT statement you just executed, always use SCOPE_IDENTITY().
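A minimal sketch of that trigger scenario (all of the table and trigger names here are invented for illustration):

```sql
CREATE TABLE dbo.Orders   (OrderID int identity, Descr varchar(50))
CREATE TABLE dbo.AuditLog (AuditID int identity, Note varchar(50))
GO
-- The trigger inserts into a second table that also has
-- an identity column
CREATE TRIGGER trOrders_Insert ON dbo.Orders AFTER INSERT
AS
    INSERT INTO dbo.AuditLog (Note) VALUES ('Order inserted')
GO
INSERT INTO dbo.Orders (Descr) VALUES ('First order')

SELECT SCOPE_IDENTITY()  -- identity from dbo.Orders (your insert)
SELECT @@IDENTITY        -- identity from dbo.AuditLog (the trigger's insert)
```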
Notice that I've stressed the term single identity value. Suppose you fire an INSERT statement into a SQL Server table with an identity column by piping in a SELECT statement that returns multiple rows. As you saw in the Scalar Variables question, you can't redirect multiple rows into a single scalar value, and both SCOPE_IDENTITY() and @@IDENTITY return single scalar values. So how would you determine all the identity values in that situation? Fortunately, Microsoft addressed this situation back in SQL Server 2005 with the OUTPUT clause. You can capture/redirect all of the identity values that SQL Server generates on a multi-row INSERT (or, for that matter, any and all columns from the inserted table) by using OUTPUT as follows:
CREATE TABLE dbo.IdentityTest
(IdentityValue int identity,
 Name varchar(100))
GO
INSERT INTO dbo.IdentityTest (Name)
OUTPUT Inserted.IdentityValue
VALUES ('Kevin'), ('Steven')
-- This will return 1 and 2

Finally, you can even redirect the results of the OUTPUT into an existing table. Keep all of these in mind the next time you need to capture multiple identity values from a single INSERT.

INSERT INTO dbo.IdentityTest (Name)
OUTPUT Inserted.IdentityValue
INTO SomeTableOfIdentityKeys
-- This table must already exist!
VALUES ('Kevin'), ('Steven')

Writing Tests

Not all developers will have the opportunity to write test questions. If you're fortunate enough, seize the opportunity. It can help improve your understanding of a feature (even if you think you know the feature inside and out), and will certainly benefit your written communication skills.

For the Features of SQL Server Views question (question 6), the answer is B (the view has two issues that generate errors). First, SQL Server views can't receive parameters. Second, SQL Server views can't have an ORDER BY statement. You can place a WHERE clause on the query that uses the view to restrict the rows, and you can also use an ORDER BY statement on the query that uses the view. However, you cannot use either feature inside a SQL Server view.
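If you need view-like behavior that accepts parameters, one common workaround is an inline table-valued function (the names below are made up for illustration); the ORDER BY still belongs on the query that calls it:

```sql
CREATE FUNCTION dbo.CustomersByCountry (@Country varchar(30))
RETURNS TABLE
AS
RETURN
    SELECT CustomerID, CompanyName, Country
    FROM dbo.Customers
    WHERE Country = @Country
GO
-- Query it like a view, passing the parameter,
-- and sort on the outer query:
SELECT * FROM dbo.CustomersByCountry('Germany')
ORDER BY CompanyName
```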


For the BETWEEN Statement question (question 7), the answer is A (four rows). Some developers answer this with B (six rows) and believe that the query will return the names Canton and Celkins. Well, imagine if you used the following query:

SELECT * FROM Purchasing.PurchaseOrderHeader
WHERE TotalDue BETWEEN 100 and 200
-- this won't return a TotalDue of 200.01

Would that query return a row where the Total Due is $200.01? The answer is no, and hopefully you can see the point. In both cases (the letter C and the value 200), you're using a value that's less precise (or too general) than the values SQL Server stores for that data type. Now it should be obvious: 200.01 is greater than 200 and therefore outside the range, and Canton is alphabetically after C and therefore also outside the range.
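One common way to sidestep these boundary surprises, particularly once dates and times enter the picture, is to skip BETWEEN and use a half-open range, where the upper bound is exclusive:

```sql
-- Catches every row in January 2010, including times late
-- on 1-31-2010, because the upper bound is exclusive
SELECT * FROM Purchasing.PurchaseOrderHeader
WHERE OrderDate >= '2010-01-01'
  AND OrderDate <  '2010-02-01'
```

Because no row can fall between the two bounds, this pattern works the same whether the column is a Date or a DateTime.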

Some developers make this mistake with Date and DateTime values as well. Suppose, in this example, that the OrderDate column stored DateTime values, and the table contained rows with an order date of 1-31-2010 at 2 PM. In this example, SQL Server implicitly casts the value of 1-31-2010 as 2010-01-31 00:00:00.000, which means that the query won't pick up orders throughout the day on 1-31-2010. In other words, the query won't return any rows beyond midnight for the ending date range.

SELECT * FROM Purchasing.PurchaseOrderHeader
WHERE OrderDate BETWEEN '1-1-2010'
AND '1-31-2010'
-- If OrderDate is a datetime, this won't return
-- a value of 1/31/2010 at 2 PM

The moral of the story here is to be careful using BETWEEN, especially when you specify values that represent some subset of the values that SQL Server is storing.

For the IDENTITY Values and Uniqueness question (question 8), the answer is no. Actually, it's emphatically NO! An IDENTITY column by itself will not guarantee uniqueness. A DBA or developer can turn off identity value generation and manually insert a row, even one with a duplicate value. Granted, it would be wrong to do so, but the point is that SQL Server won't constrain a column as unique simply because you designated the column as an identity column.

SET IDENTITY_INSERT dbo.TestIdentity ON
INSERT INTO dbo.TestIdentity
(IdentityValue, IdentityName)
VALUES (1, 'Dupe Entry')
SET IDENTITY_INSERT dbo.TestIdentity OFF
SELECT * FROM dbo.TestIdentity

The only way to guarantee uniqueness is to define a unique constraint or unique index, or to define the column as a PRIMARY KEY.

For the Basic Syntax with VARCHAR question (question 9), the answer is zero rows. Some people might have guessed two rows. It's time for a little story. A long time ago, I was trying to debug a query that wasn't returning any rows based on the parameter. Finally, I added a PRINT statement to view the value of the parameter I was passing into the function that was returning zero rows. The value of the parameter was an empty string, even though I was passing in a string. The reason was that I hadn't specified a length for the varchar parameter. If you'll notice, in the code snippet for this question, I intentionally specified VARCHAR, and not VARCHAR(20) (or some other acceptable length). SQL Server permits you to define a varchar without a length, though you'd almost never want to do that. Remember the old joke about the value of the comma? Borrowing from that old joke, it's the difference between saying, "We're going to eat, Kevin" and "We're going to eat Kevin." SQL Server (just like any development tool) does exactly what you tell it to do, not what you expect it to do. Create a string variable without a length and SQL Server allocates exactly how much space you specified!

The T-SQL MERGE Statement: If you work in data warehousing, learn the MERGE statement. It's part of the vocabulary of data warehouse developers.

For the NOT IN vs. NOT EXISTS and Performance question (question 10): over the years, there's been substantial debate about using EXISTS vs. IN, and NOT EXISTS vs.
NOT IN. Some argue that EXISTS provides more flexibility
when dealing with a check on multiple columns, whereas
others argue that IN is simpler if the query is only checking for an intersection with one value. Some argue that

Listing 1: Using the REPLACE function to refer to values in a table

ALTER function [dbo].[ReplacePOBox]
( @ColumnValue varchar(100) )
returns varchar(100)
as
begin
declare @POValues table
(NewPOValue varchar(100), OldPOValue varchar(100))

insert into @POValues values
('PO Box' , 'PO BOX'),
('PO Box' , 'P.O.BOX'),
('PO Box' , 'P O BOX'),
('PO Box' , 'P. O. BOX'),
('PO Box' , 'P.O. BOX'),
('PO Box' , 'P O BOX'),
('PO Box' , 'P.O BOX')

SELECT @ColumnValue =
    replace( @ColumnValue,
        POValues.OldPOValue,
        POValues.NewPOValue)
from @POValues POValues

RETURN @ColumnValue
end
GO

UPDATE SomeTable SET AddrLine =
[dbo].[ReplacePOBox] (AddrLine)

codemag.com

Listing 2: Example of Composable DML with a MERGE statement

INSERT INTO dbo.DimProduct
( ProductSKU, ProductName, ProductPrice, CurrentFlag,
  EffectiveDate, ExpirationDate)
SELECT MergeOutput.ProductSKU, MergeOutput.ProductName,
       MergeOutput.ProductPrice, MergeOutput.CurrentFlag,
       MergeOutput.LastUpdate, null
FROM
(
MERGE dbo.DimProduct Targ
USING dbo.OLTPProduct Src
ON Src.ProductSKU = Targ.ProductSKU
WHEN NOT MATCHED
    THEN INSERT (ProductSKU, ProductName,
                 ProductPrice, CurrentFlag,
                 EffectiveDate, ExpirationDate)
    VALUES (Src.ProductSKU, Src.ProductName,
            Src.ProductPrice, 1,
            Src.LastUpdate, null)
WHEN MATCHED AND
    Targ.CurrentFlag = 1 and
    Src.ProductPrice <> Targ.ProductPrice
    THEN UPDATE SET Targ.CurrentFlag = 0,
                    Targ.ExpirationDate = getdate()
OUTPUT $Action as DMLAction, Src.ProductSKU, Src.ProductName,
       Src.ProductPrice, Src.LastUpdate, 1 AS CurrentFlag
) AS MergeOutput
WHERE DMLAction = 'UPDATE';

EXISTS generates better execution plans and performs better, while proponents of IN point out that oftentimes the execution plans are the same.
So, what's the answer here? Remember that the query in the question was looking for rows that did NOT exist in another result/query. As it turns out, SQL Server generates a better execution plan for NOT EXISTS if the subquery is dealing with columns that can contain NULL values. Had you not permitted NULL values on the ProductKey column, then there's a strong argument that the execution plans would have been the same.

So the moral of the story on this topic is that NOT EXISTS usually performs significantly better if you're examining nullable columns. Aside from that, results are about the same, although you should always examine each possibility to confirm that. I personally have switched from using IN to always using EXISTS/NOT EXISTS, not just for this scenario but also because of the issue in the Subquery Syntax and Results question.
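There's also a correctness angle beyond performance: if the subquery result contains even a single NULL, NOT IN matches nothing at all, which is easy to demonstrate in isolation:

```sql
-- 1 NOT IN (2, 3) evaluates to TRUE, so this returns one row
SELECT 'found' WHERE 1 NOT IN (2, 3)

-- 1 NOT IN (2, 3, NULL) evaluates to UNKNOWN, because
-- 1 <> NULL is neither true nor false, so this returns zero rows
SELECT 'found' WHERE 1 NOT IN (2, 3, NULL)
```

NOT EXISTS avoids the problem entirely, because it tests for the presence of matching rows rather than comparing values.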
For the Use of REPLACE question (question 11), the answer is yes: you can tell the REPLACE function to read from a table. I might have answered incorrectly until a few months ago. I had a project that required me to replace a large number of misspellings and mis-standardizations for words like PO Box. I thought I would need one REPLACE statement for each possible value, until I discovered that I could populate a table with the before and after values and refer to them using a SELECT and a REPLACE. Listing 1 shows an example of this.
For the Rules on Indexes question (question 12), items A, D, and E are true statements. Answer B is incorrect because it's the reverse of correct answer A. Answer C is incorrect because you can also establish uniqueness on a column by creating a unique index or unique constraint.
For the Advanced Use of MERGE question (question 13), you can still use the MERGE statement. Although the WHEN MATCHED clause limits you to one DML statement, you can also use the composable DML feature of MERGE to output the results of an UPDATE (using the OUTPUT clause), as shown in Listing 2. Essentially, you're redirecting the OUTPUT of the MERGE (for DML actions of UPDATE only) back into the result set pipeline, capturing those rows at the top of the query, and redirecting them as an INSERT back into the product dimension. It's very powerful!
INSERT INTO dbo.DimProduct (Fields)
SELECT MergeOutput.Columns FROM (
MERGE dbo.DimProduct Targ
USING dbo.OLTPProduct Src
ON Src.ProductSKU =
Targ.ProductSKU
WHEN NOT Matched THEN INSERT (values)
WHEN MATCHED THEN UPDATE
(retire old rows)
OUTPUT $Action as DMLAction, (Source columns),
1 AS CurrentFlag )
AS MergeOutput
WHERE DMLAction = 'UPDATE'

Final Thoughts

When I speak at community events or conferences, I usually end a session by asking attendees to raise their hands if they learned more than one fact or concept from the presentation. I certainly hope this article provided a few benefits to every reader, either to help with a programming challenge or to increase understanding of a particular SQL Server feature. Readers can certainly feel free to incorporate these topics into any interview test questions they're writing.

Fellow CODE Magazine author Ted Neward and I have both written and blogged about the benefits of speaking and writing about technical features, and how that process enhances a person's understanding. Taking a technical test, even as an informal academic exercise, can sometimes help a developer's thought process. So think of this as the CODE Magazine version of a New York Times crossword puzzle!

As I stated at the beginning of the article, I'm going to use this format for some of my Baker's Dozen articles in the future. In my next column, I'll cover some more advanced T-SQL and database topics. Down the road, I might expand it to cover test-style questions for other tools in the SQL Server Business Intelligence stack, since tools like Reporting Services and Integration Services continue to grow in use and popularity. Stay tuned!



Kevin S. Goff


ONLINE QUICK ID 1511071

Building a Weather App Using OpenWeatherMap and AFNetworking
When building mobile applications, developers often get the opportunity to take advantage of third-party libraries, tools, and components to expedite the development process. These tools serve as invaluable assets when a budget doesn't allow for the time to write robust components from scratch. With the rise in popularity of smart phones and apps over the past five years, more and more developers encounter the same problems and features. As a result, the solutions to these common problems, like retrieving a weather forecast, get molded into open source libraries and APIs for developers to take advantage of.

Jason Bender

Throughout the course of this article, you'll develop an iOS class to get current and forecasted weather information using the OpenWeatherMap API and a popular open-source client-side iOS library called AFNetworking.

Jason.bender@rocksaucestudios.com
www.jasonbender.me
www.twitter.com/TheCodeBender

Jason Bender is the Director of Development for Rocksauce Studios, an Austin, Texas-based mobile design and development company. He's been coding since 2004 and has experience in multiple languages including Objective-C, Java, PHP, HTML, CSS, and JavaScript. With a primary focus on iOS, Jason has developed dozens of applications, including a #1 ranking reference app in the US. Jason was also called upon by Rainn Wilson to build an iOS application for his wildly popular book and YouTube channel: SoulPancake.

Open Weather Map

OpenWeatherMap (openweathermap.org) is a public-access weather API that serves over a billion forecast requests per day. Powered by the OWM platform, OpenWeatherMap provides a variety of weather details for over 200,000 cities worldwide using data from over 40,000 weather stations. Using this API, developers can easily access current weather information, hour-by-hour forecasts, or even daily forecasts up to 16 days in advance. The service offers flexibility as well, allowing responses to be retrieved in XML, JSON, or HTML.

For the purposes of this demonstration, you'll work with the current weather and weekly seven-day forecast using JSON as the response type. You can access the documentation for the various API endpoints offered, including ones not covered in this article, at http://openweathermap.org/api.

API Usage

The OpenWeatherMap API is free to use for smaller-scale applications. Large applications with bigger user bases may need to opt into one of the paid plans in order to satisfy the higher call-per-minute needs. The free tier comes with the following restrictions:

- 1,200 API calls per minute
- Weather history no greater than one day old
- 95% up time
- Weather data updates every 2 hours
- No SSL

If you need more calls per minute, increased up time, more frequent weather updates, or secured access, you'll have to jump up to the first paid tier, which costs $180/month. Regardless of which tier you decide to use, you need to register an API key for your application. To do this, register your application at http://openweathermap.org/appid by signing up for an account. Once logged in, you can find the key under the setup tab. Since the API is public access, the key needs to be added as a parameter to whatever API endpoint you send a request to (demonstrated in the next section).

The Endpoints

As mentioned earlier, you'll work with two of the endpoint offerings from OpenWeatherMap: one that gets the current weather and another that gets a seven-day forecast. When dealing with either of these endpoints, pass the location you wish to retrieve weather for. You can pass that location data in a variety of ways:

- Pass the city ID for the area (a list of city IDs can be downloaded at http://bulk.openweathermap.org/sample/)
- Send geographical coordinates for the location (latitude and longitude)
- Use a zip code and country code (use ISO 3166 country codes)

The base API request URL to get current weather in a given location looks like this:

http://api.openweathermap.org/data/2.5/weather

The base URL to get the future weather forecast for a set amount of days, and a set location, looks like this:

http://api.openweathermap.org/data/2.5/forecast/daily

In mobile applications that track a user's location, the most readily available location metric is usually latitude and longitude. Therefore, for the purpose of this demonstration, you'll send the location data to the API as geographical coordinates. To query either of the above endpoints using latitude and longitude, add lat and lon variables to the request by appending the following onto one of the above URLs:

?lat={latitude value}&lon={longitude value}

Furthermore, when calling forecast/daily, you can request a specific number of days to get a forecast for. In this demo, you'll ask for seven days' worth of data by appending cnt=7 to the URL. Don't forget that you need to include your API key. All together, the URL request to get a seven-day forecast in a specific location with a specific lat/lon looks like this:

http://api.openweathermap.org/data/2.5/forecast/daily?lat=-16.92&lon=145.77&cnt=7&APPID={APIKEY}

The Data

Once one of the forecast requests is sent, the service responds with the corresponding information. The default response type is JSON, but you can opt to receive information in XML or HTML, depending on your needs. Let's take a look at the response format for the two calls previously explained. Listing 1 shows a sample JSON response object from the call to get current weather for Cairns, AU, and Listing 2 shows the response format of a daily forecast call for Shuzenji, JP.

The default metric for numerical values from the API, and also the metric used in the sample responses shown in Listings 1 and 2, is Kelvin. You can format the call to the API to return data in Imperial (Fahrenheit) or Metric

Figure 1: Initial project settings when creating the sample project in Xcode
Listing 1: JSON response for current weather info from OpenWeatherMap

{
"coord":{ "lon":145.77, "lat":-16.92},
"weather":[{
    "id":803,
    "main":"Clouds",
    "description":"broken clouds",
    "icon":"04n"
}],
"base":"cmc stations",
"main":{
    "temp":293.25,
    "pressure":1019,
    "humidity":83,
    "temp_min":289.82,
    "temp_max":295.37
},
"wind":{
    "speed":5.1,
    "deg":150
},
"clouds":{"all":75},
"rain":{"3h":3},
"dt":1435658272,
"sys":{
    "type":1,
    "id":8166,
    "message":0.0166,
    "country":"AU",
    "sunrise":1435610796,
    "sunset":1435650870},
"id":2172797,
"name":"Cairns",
"code":200
}


Listing 2: JSON forecasted weather response info from OpenWeatherMap

{
"code":"200",
"message":0.0032,
"city":{
    "id":1851632,
    "name":"Shuzenji",
    "coord":{
        "lon":138.933334,
        "lat":34.966671
    },
    "country":"JP"
},
"cnt":10,
"list":[{
    "dt":1406080800,
    "temp":{
        "day":297.77,
        "min":293.52,
        "max":297.77,
        "night":293.52,
        "eve":297.77,
        "morn":297.77
    },
    "pressure":925.04,
    "humidity":76,
    "weather":[{
        "id":803,
        "main":"Clouds",
        "description":"broken clouds",
        "icon":"04d"
    }]
}]
}

Listing 3: Weather.m variable parameters

@interface Weather : NSObject

/// the date this weather is relevant to
@property (nonatomic, strong) NSDate *dateOfForecast;

/// min/max temp in Fahrenheit
@property (nonatomic) int temperatureMin;
@property (nonatomic) int temperatureMax;

/// the general weather status:
/// clouds, rain, thunderstorm, snow, etc...
@property (nonatomic, strong) NSString* status;

/// the ID corresponding to general weather status
@property (nonatomic) int statusID;

/// a more descriptive weather condition:
/// light rain, heavy snow, etc...
@property (nonatomic, strong) NSString* condition;

/// current humidity level (percent)
@property (nonatomic) int humidity;

/// current wind speed in mph
@property (nonatomic) float windSpeed;

@end

(Celsius) as well. Date and time stamps use the UNIX epoch format (the number of seconds elapsed since January 1, 1970). In this example, you'll convert those times to the device's local time zone and store them as NSDate objects. Notice that the weather component of each response has an id and an icon field. The ID is unique to each specific weather condition in the system and the icon corresponds to a graphical representation of the condition. In Listing 1, for example, the icon field maps to a value of 04n. Using the URL format provided in the documentation, you can access an image icon representing the current weather condition at http://openweathermap.org/img/w/04n.png. Further documentation on all weather conditions and icon mappings can be found at http://openweathermap.org/weather-conditions.
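To make the icon and timestamp handling concrete, here's a small sketch (the hard-coded "04n" value stands in for the icon field you'd parse from the JSON response):

```objectivec
// Build the icon image URL from the "icon" field of the response
NSString *icon = @"04n"; // in practice, parsed from the JSON
NSString *iconURL = [NSString stringWithFormat:
    @"http://openweathermap.org/img/w/%@.png", icon];

// Convert the UNIX "dt" timestamp into an NSDate
NSDate *observed = [NSDate
    dateWithTimeIntervalSince1970:1435658272];
```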
The list array in Listing 2 is the most noticeable difference between the two response formats. That list contains an entry for each day of the forecast. In this example, you'll ask for a seven-day forecast, so that list array has seven entries, whereas Listing 1 only shows detailed information for the current date and time.

AFNetworking

AFNetworking is an open-source networking library for iOS and Mac OS X. It's built atop the URL Loading System found in the Foundation framework and offers a feature-rich API that includes many useful features such as serialization, support for reachability, and various UIKit integrations. It's currently one of the most widely used open-source iOS projects with nearly 20k stars, 6k forks, and 240+ contributors on GitHub (https://github.com/AFNetworking/AFNetworking).

AFNetworking is currently one of the most widely used open-source iOS projects with nearly 20k stars, 6k forks, and 240+ contributors on GitHub.

The latest version of AFNetworking (2.0) requires iOS 7 or greater, relying heavily on the use of NSURLSession, which is now preferred over the older NSURLConnection. In this example, you'll use AFHTTPRequestOperationManager, which encompasses all the common use cases for communicating with a Web application over HTTP, including built-in calls for your standard GET, POST, PUT, and DELETE requests. You'll get further into the syntax and how to implement the library during the coding exercise in the next section.

Let's Code It

In this code exercise, you'll write a WeatherRadar class that can fetch either current weather conditions or an extended weekly weather forecast. The WeatherRadar class uses callbacks to return either a weather object or an NSArray of weather objects, depending on what API call is made. This portable library can get dropped into any project that needs to gather weather information.

Getting Started

The first thing you'll want to do is create a new Xcode project for this exercise. To do this, launch Xcode and click Create a new Xcode project. Select Single View Application and press Next. Finally, fill in a project name and match the other settings to the screenshot in Figure 1. Once your settings match Figure 1, click Next. You should now have a blank Xcode project. The only other thing you need to set up before you get coding is AFNetworking.


Add AFNetworking to the project using CocoaPods, one of the simplest methods to add third-party content to any iOS project. For the sake of this walkthrough, I'll assume that you have a working understanding of how to install a library using CocoaPods (if not, please refer to the sidebar for additional information on how to use CocoaPods or where you can locate additional installation instructions). Once you've created a Podfile, add the following line to the file and save it:

pod "AFNetworking", "~> 2.0"

Once saved, launch terminal and navigate to the root folder for your project. Make sure that the project is closed in
Xcode and run the following command in terminal:
pod install

Listing 4: WeatherRadar.m: send GET request to specified URL


- (void)fetchWeatherFromProvider:(NSString*)URL completionBlock:
    (void (^)(NSDictionary *))completionBlock {
    AFHTTPRequestOperationManager *manager =
        [AFHTTPRequestOperationManager manager];
    [manager GET:URL parameters:nil success:
        ^(AFHTTPRequestOperation* operation, id responseObject) {
            if (responseObject) {
                completionBlock(responseObject);
            } else {
                // handle no results
            }
        } failure:^(AFHTTPRequestOperation* operation,
                    NSError *error) {
            // handle error
        }
    ];
}

Figure 2: Settings when creating the weather object model to store weather data


This automatically downloads the AFNetworking library and creates an .xcworkspace file. Re-open your project by double-clicking that new project file.

Weather Model

The next thing you want to add to the project is a weather object model that holds the information you receive from the OpenWeatherMap API. Click File->New->File and then choose Cocoa Touch Class. Next, choose NSObject and name the object Weather, as shown in Figure 2. Once that has been created, you'll need to add various properties to correspond to the weather data you wish to collect. For the purpose of this example, only the basic weather attributes will get captured in order to demonstrate how to interface with the API. You can follow the same structure to store any of the remaining data points that aren't used in this sample.

Open the Weather.h file you just generated. Add the properties for the data you'll track, as shown in Listing 3. These properties map to the various data points that get returned by the API; the comments in Listing 3 offer more detail on what each property represents.

The next step involves creating the radar class that gathers the information needed to populate this weather object. That radar class retrieves the weather data as a JSON object. Once you code that (in the next section), you'll loop back and create an initializer method on this weather model to take the JSON data and parse it into the appropriate properties that you just created.

Using CocoaPods

The official CocoaPods website can be found at http://cocoapods.org/. Additionally, a comprehensive tutorial on how to use CocoaPods can be found at http://www.raywenderlich.com/64546/introduction-to-cocoapods-2.

Figure 3: The sample application using the weather library from this article
Listing 5: WeatherRadar.h: declare weather radar function headers

/**
 * Returns weekly forecasted weather conditions
 * for the specified lat/long
 *
 * @param latitude        Location latitude
 * @param longitude       Location longitude
 * @param completionBlock Array of weather results
 */
- (void)getWeeklyWeather:(float)latitude longitude:(float)longitude
completionBlock:(void (^)(NSArray *))completionBlock;


/**
 * Returns realtime weather conditions
 * for the specified lat/long
 *
 * @param latitude        Location latitude
 * @param longitude       Location longitude
 * @param completionBlock Weather object
 */
- (void)getCurrentWeather:(float)latitude longitude:(float)longitude
completionBlock:(void (^)(Weather *))completionBlock;


Listing 6: WeatherRadar.m: weather radar function implementations


- (void)getWeeklyWeather:(float)latitude longitude:(float)longitude
completionBlock:(void (^)(NSArray *))completionBlock {
    // formulate the url to query the api to get the 7 day
    // forecast. cnt=7 asks the api for 7 days. units=imperial
    // will return temperatures in Fahrenheit
    NSString* url = [NSString stringWithFormat:
        @"http://api.openweathermap.org/data/2.5/forecast/daily?units=imperial&cnt=7&lat=%f&lon=%f",
        latitude, longitude];

    // escape the url to avoid any potential errors
    url = [url stringByAddingPercentEscapesUsingEncoding:
        NSUTF8StringEncoding];

    // call the fetch function from Listing 4
    [self fetchWeatherFromProvider:url completionBlock:
        ^(NSDictionary * weatherData) {
        // create an array of weather objects (one for each day)
        // initialize them using the function from Listing 7
        // and return the results to the calling controller
        NSMutableArray *weeklyWeather =
            [[NSMutableArray alloc] init];
        for(NSDictionary* weather in weatherData[@"list"]) {
            // pass false since the weather is a future forecast
            // this lets the init function know which format of
            // data to parse
            Weather* day = [[Weather alloc]
                initWithDictionary:weather isCurrentWeather:FALSE];
            [weeklyWeather addObject:day];
        }
        completionBlock(weeklyWeather);
    }];
}

- (void)getCurrentWeather:(float)latitude longitude:(float)longitude
completionBlock:(void (^)(Weather *))completionBlock {
    // formulate the url to query the api to get current weather
    NSString* url = [NSString stringWithFormat:
        @"http://api.openweathermap.org/data/2.5/weather?units=imperial&cnt=7&lat=%f&lon=%f",
        latitude, longitude];

Weather Radar

Just like you previously did with the weather model, go to File->New->File and select NSObject to create a new class. Name it WeatherRadar and click Next and then Save. This WeatherRadar class directly integrates with OpenWeatherMap to get the weather information. As mentioned previously, AFNetworking handles the call for data, so you need to write a function using that library to make a GET request to the specified API URL. You use an AFHTTPRequestOperationManager, a component of AFNetworking, to make that request. Listing 4 demonstrates the function implementation in WeatherRadar.m and how to use the operation manager to formulate the query.

Notice that the function definition takes a URL parameter. This URL either points to the API endpoint for today's real-time weather or to the endpoint for the weekly forecast, depending on which information you request. By making the URL a parameter rather than part of the function, you're able to reuse this function to make both API calls, rather than needing to duplicate code. Also notice that there's a callback block that passes an NSDictionary to the calling function. That dictionary contains the JSON response from the API.

You've now primed the radar class to communicate with the API. Next, you need methods that formulate the appropriate URLs to feed to the fetchWeatherFromProvider call defined in Listing 4. To do this, you write one method for each API call that you want to format, which is two in this case. Before doing that, switch over to

    // escape the url to avoid any potential errors
    url = [url stringByAddingPercentEscapesUsingEncoding:
        NSUTF8StringEncoding];

    // call the fetch function from Listing 4
    [self fetchWeatherFromProvider:url completionBlock:
        ^(NSDictionary * weatherData) {
        // create a weather object by initializing it with
        // data from the API using the init func from Listing 7
        completionBlock([[Weather alloc]
            initWithDictionary:weatherData isCurrentWeather:TRUE]);
    }];
}

WeatherRadar.h and add the import in the next snippet to the top of the file. You need this import because the functions you write return instances of the Weather model that you previously created.

#import "Weather.h"

The main functions of the radar class need to be publicly accessible so you can call them from anywhere in the application that you need weather data. Therefore, define the functions in WeatherRadar.h as shown in Listing 5 so other classes can access them. Listing 5 defines the function headers but you still need to write the implementation for each of the corresponding functions. You do this in WeatherRadar.m and it should resemble the code from Listing 6 when completed.

Notice that both of the functions from Listing 6 receive data from a call to fetchWeatherFromProvider and return it using a completion block callback. The callbacks use corresponding functions located in the weather model that transpose the JSON data from the API into weather objects that the application can better work with. The next section covers the implementation of that functionality.

Back to the Weather Model

You've almost completed the circle. You started by creating a weather model object, wrote a radar class to get the data to populate that object, and now you need the final logic that maps that data from the API results into the model. This logic needs to account for the variable formatting between the results of the two API calls you implemented (the variation seen by comparing the results from Listing 1 and Listing 2). Move back to your Weather.h file and declare the following initialization function header:
/**
 * initialize this object using the JSON data
 * passed in the dictionary parameter
 *
 * @param dictionary       JSON data from API
 * @param isCurrentWeather BOOL (FALSE=Weekly)
 *
 * @return instance of self
 */
- (instancetype)initWithDictionary:
(NSDictionary *) dictionary
isCurrentWeather:(BOOL)isCurrentWeather;

You probably noticed from the code you wrote in Listing 6 that both getCurrentWeather and getWeeklyWeather use the same initialization function (the one you just declared in the previous snippet). However, as just mentioned, the data gathered by those calls has different formats. To compensate, the isCurrentWeather Boolean is added as a function parameter. Pass TRUE and the initialization function parses the JSON dictionary using the current weather format (Listing 1). Pass FALSE and it adheres to the weekly weather data format (Listing 2). This initialization breakdown and full function implementation are demonstrated in Listing 7. In order for the code in Listing 7 to function properly, you also need to add the following helper function to convert the data to local time.

Listing 7: Weather.m: weather object initialization function


- (instancetype)initWithDictionary:(NSDictionary *)dictionary
isCurrentWeather:(BOOL)isCurrentWeather{
    self = [super init];
    if (self) {
        /*
         * Parse weather data from the API into this weather
         * object. Error check each field as there is no guarantee
         * that the same data will be available for every location
         */
        _dateOfForecast = [self utcToLocalTime:[NSDate
            dateWithTimeIntervalSince1970:
            [dictionary[@"dt"] doubleValue]]];

        // use the bool to determine which data format to parse
        if (isCurrentWeather) {
            int temperatureMin =
                [dictionary[@"main"][@"temp_min"] intValue];
            if(temperatureMin) {
                _temperatureMin = temperatureMin;
            }

            int temperatureMax =
                [dictionary[@"main"][@"temp_max"] intValue];
            if (temperatureMax) {
                _temperatureMax = temperatureMax;
            }

            int humidity =
                [dictionary[@"main"][@"humidity"] intValue];
            if (humidity) {
                _humidity = humidity;
            }

            float windSpeed =
                [dictionary[@"wind"][@"speed"] floatValue];
            if (windSpeed) {
                _windSpeed = windSpeed;
            }
        }
        else {
            int temperatureMin =
                [dictionary[@"temp"][@"min"] intValue];
            if (temperatureMin) {
                _temperatureMin = temperatureMin;
            }

            int temperatureMax =
                [dictionary[@"temp"][@"max"] intValue];
            if (temperatureMax) {
                _temperatureMax = temperatureMax;
            }

            int humidity =
                [dictionary[@"humidity"] intValue];
            if (humidity) {
                _humidity = humidity;
            }

            float windSpeed =
                [dictionary[@"speed"] floatValue];
            if (windSpeed) {
                _windSpeed = windSpeed;
            }
        }

        /*
         * weather section of the response is an array of
         * dictionary objects. The first object in the array
         * contains the desired weather information.
         * this JSON is formatted the same for both requests
         */
        NSArray* weather = dictionary[@"weather"];
        if ([weather count] > 0) {
            NSDictionary* weatherData = [weather objectAtIndex:0];
            if (weatherData) {
                NSString *status = weatherData[@"main"];
                if (status) {
                    _status = status;
                }

                int statusID = [weatherData[@"id"] intValue];
                if (statusID) {
                    _statusID = statusID;
                }

                NSString *condition = weatherData[@"description"];
                if (condition) {
                    _condition = condition;
                }
            }
        }
    }
    return self;
}


/**
 * Takes a UNIX UTC timestamp and converts it
 * to an NSDate formatted in the device's local
 * timezone
 *
 * @param date Date to be converted
 *
 * @return Converted date
 */
-(NSDate *)utcToLocalTime:(NSDate*)date {
    NSTimeZone *currentTimeZone =
        [NSTimeZone defaultTimeZone];
    NSInteger secondsOffset =
        [currentTimeZone secondsFromGMTForDate:date];
    return [NSDate dateWithTimeInterval:
        secondsOffset sinceDate:date];
}

You've now completed both the WeatherRadar and Weather classes. Simply initialize a WeatherRadar object from anywhere in your application and call either getCurrentWeather or getWeeklyWeather to retrieve the desired weather information.
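For example, a view controller might request the weekly forecast like this (a minimal sketch; the coordinates are arbitrary and the completion block simply logs each day):

```objectivec
WeatherRadar *radar = [[WeatherRadar alloc] init];

// Fetch the seven-day forecast for a lat/long of your choosing
[radar getWeeklyWeather:30.27 longitude:-97.74
       completionBlock:^(NSArray *forecast) {
    for (Weather *day in forecast) {
        NSLog(@"%@: %@, high %d low %d",
              day.dateOfForecast, day.status,
              day.temperatureMax, day.temperatureMin);
    }
}];
```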

Practical Application

The code you just wrote is a portable library that can be dropped into any application; just add the WeatherRadar and Weather classes to the project and import the AFNetworking library. To demonstrate the practicality of the library, I've used it to create a demo application that visually displays the weekly forecast. Figure 3 depicts the application UI. The center of the screen highlights today's forecast and the bottom of the screen uses a UIScrollView to show the remainder of the week. Users can slide back and forth on the bottom portion of the screen to scan the days. It has a visual icon that correlates to the weather condition and is accompanied by the high and low forecasted temperatures for the corresponding day. The code included with this article not only encompasses the weather library covered previously, but it also includes all code for this sample application. This should help you understand exactly how to call the library and how to apply the results.

AFNetworking on GitHub

The official GitHub repository for AFNetworking can be found at https://github.com/AFNetworking/AFNetworking. Full documentation for AFNetworking can be found at http://cocoadocs.org/docsets/AFNetworking/2.6.0/.

Wrapping Up

I hope that over the course of this article, you gained a basic understanding of the services offered by OpenWeatherMap's API and how to use those to add weather forecasting to any iOS application. If you need a more complex weather breakdown, be sure to visit OpenWeatherMap.com for full API documentation. The sidebars for this article have links to additional resources that may be of use as well.



Jason Bender

ONLINE QUICK ID 1511081

Visual Studio 2015: Ushering in a New Paradigm
Microsoft just released version 14 of its flagship Integrated Development Environment (IDE), Visual Studio 2015. This version retains fantastic backward compatibility with the majority of development technologies and frameworks that have come before it, since .NET was first created. In addition to supporting past development, Visual Studio 2015 makes giant leaps forward in capabilities and integration. This article covers:

The new Microsoft development philosophy
Visual Studio support for modern development
The new CoreCLR and DNX runtime
Moving from VS 2013 to VS 2015

Jeffrey Palermo
jeffrey@clear-measure.com
JeffreyPalermo.com
@JeffreyPalermo

Jeffrey Palermo is a Managing Partner and CEO of Clear Measure, an expert software engineering firm based in Austin, TX with employees across North America. Jeffrey has been recognized as a Microsoft MVP for 10 consecutive years and has spoken at national conferences such as Tech Ed, VS Live, and DevTeach. He's the purveyor of the popular Party with Palermo events and is the author of several books, articles, and hundreds of pithy Twitter quips. A graduate of Texas A&M University, an Eagle Scout, and an Iraq war veteran, Jeffrey likes to spend time with his family of five out at the motocross track.

Microsoft Adopts a New Philosophy

Microsoft has always been very good at providing world-class developer tooling. Going back to the 1990s when developers were largely creating client-server applications on Windows 3.1 and Windows 95, Microsoft has worked hard to make it easy for developers to automate business processes and to make useful applications for customers and management. Visual Studio 2015 is no exception to that history, but this release marks a huge departure from a long-held truth: Microsoft wanted you to build applications for Windows.

In fact, many conferences were devoted to topics such as cross-browser, which really meant that a Web application would work on any version of Internet Explorer. And some spoke about cross-platform, which really meant that an application would work on multiple versions of Windows. With Visual Studio 2015, Microsoft supports developing for platforms other than Windows.

Any Developer, Any Platform, Any Kind of App

The slogan published on some prominent Visual Studio Web properties proclaims "any developer, any platform, any kind of app." This is a big departure from just a few short years ago when company executives were pushing Windows to be able to run any kind of application. The push of the past encouraged developers to deploy any kind of application to Windows.

Visual Studio 2015 supports development for iOS, Android, Mac, Linux, and Windows. It also continues support for the many flavors of Web development that have come and gone over the past twenty years, such as ASP, ASP.NET, Python, Node.js, and more. In addition to the broad platform support, Visual Studio 2015 introduces new and specific tooling for Windows 10 with project types called Universal Windows applications. Figure 1 shows a new

Figure 1: The Universal Windows project options


Tool | Description | Location
.NET Execution Environment (DNX) | Contains the code required to bootstrap and run an application including the compilation system, SDK tools, and the native CLR hosts | OSS/GitHub
.NET SDK Manager | A set of command line utilities to update and configure which runtime (DNX) to use | OSS/GitHub
Entity Framework | Microsoft's recommended data access technology for new applications in .NET | OSS/GitHub
Visual Studio 2015 Tools for Docker | Enables the developer to build and publish an ASP.NET 5 Web or console application to a Docker container running on a Linux or Windows virtual machine | VSIX/Extension
TypeScript | A superset of JavaScript that compiles to clean JavaScript output | OSS/GitHub

Table 1: Microsoft tools and frameworks provided as extensions or Nuget packages

Tool | Description | Location
Community | Has all the features of Express and more, and is still free for individual developers, open source projects, academic research, education, and small professional teams | Community
Express for Desktop | Supports the creation of desktop applications for Windows | Community
Express for Web | Creates standard-based, responsive websites, Web APIs, or real-time online experiences using ASP.NET | Community
Express for Windows 10 | Provides the core tools for building compelling, innovative apps for Universal Windows Platform. Windows 10 is required. | Community
Team Foundation Server 2015 Express | Free source-code-control, project management, and team collaboration platform | Community
Professional w/ MSDN | Professional developer tools and services for individual developers or small teams | Professional
Enterprise w/ MSDN | Enterprise-grade solution with advanced capability for teams working on projects of any size or complexity, including advanced testing and DevOps | Enterprise
Code | Code editing redefined. Build and debug modern Web and cloud applications. Code is free and available on your favorite platform: Windows, Mac OS X, or Linux | Non-Windows Web

Table 2: Visual Studio 2015 comes in an edition for everyone


Universal node within the Windows node in the New Project dialog of Visual Studio 2015.

In addition to support for the modern platforms and the new Windows, clearly Microsoft is preparing for the tectonic shift that is the DevOps movement. Although this release doesn't provide continuous delivery or automated operations out of the box, the feature set and guidance on www.visualstudio.com clearly show us that Microsoft understands the movement in the industry away from a world where we name servers and maintain them, to a world where we provision a server for use and then destroy the server when it's time for the next minor release of the application.

Visual Studio 2015 is for "any developer, any platform, any kind of app," proclaims Microsoft.

Visual Studio has long provided features around deployments and the management of server environments. Visual Studio Online even keeps analytics and record-keeping around deployment processes moving through an organization. These packages have histories going back ten years or more, but much of this historical tooling requires the Visual Studio GUI to be useful. Very little of it was supported for execution from the command line.

With Visual Studio 2015, the provisioning and deployment tooling executes through the output windows with chained PowerShell commands. And tooling planned for release early next year also makes use of PowerShell for the actual work. With this change to how Microsoft composes automated tooling, developers have every tool available to them for inclusion in continuous integration builds and in continuous delivery deployment pipelines.

Microsoft has also changed its mindset over the years regarding third-party tools. Visual Studio has long provided some extensibility points for other companies to develop add-ons for the IDE, but seven years ago, Microsoft first shipped some open-source libraries directly with Visual Studio. The 2015 edition takes this approach to the extreme by moving some of its own products to GitHub.com as the official source repository and distributing them as extensions to Visual Studio. Table 1 shows some of the Microsoft products that are fully open-sourced and shipped via a Visual Studio extension or as a library through Nuget. The descriptions come straight from the relevant documentation for the tool.

A Visual Studio for Every Developer

With Visual Studio 2015, Microsoft has branched out and broadened the availability of the Visual Studio editions. Professional developers have two choices based on need, but the big news is the story for students, non-profit developers, hobbyists, and website developers on Linux and Mac. Table 2 shows the Visual Studio 2015 editions. The descriptions come from the respective documentation.


A quick trip to https://www.visualstudio.com/ reveals the options for downloading, installing, using, and purchasing the appropriate edition for you. With eight editions to choose from, it's easy to be confused at first. However, Microsoft is transitioning the Express products to the background with the launch of the Community edition that includes most of the features of Professional, including the ability to support extensions like JetBrains ReSharper. The Express editions still don't support extensions and aren't even outlined in detail in some documentation describing feature differences. Figure 2 shows the feature matrix in Microsoft's documentation outlining the depth of feature areas.

Some of the attention-getting features like CodeLens have moved to the Professional edition in 2015. I'll cover that feature, and others, in detail below. Those integrating with Visual Studio Online to use the automated load testing capabilities will need to use the Enterprise edition, but most types of applications can be easily developed with the Professional edition.

Visual Studio Support for Modern Development

Software development in the mid 2010s is very different than it was in 2005 when Agile software development principles were prominent in the discussion. Extreme Programming ushered in many advances in software automation and Scrum brought large shifts in thinking to the project-management community. Visual Studio 2015 embraces the changes that have become known as modern development practices and modern architectures.

Figure 2: The detailed feature matrix for the main Visual Studio 2015 editions

Ten years ago, business applications had largely moved from desktop client-server applications to Web applications running on large and small Web server farms and used through Web browsers. Responsive Web techniques were young, and the entirety of the application ran in the website processes save for some heavy off-line processing that had to be accomplished in the form of a batch job or series of batch jobs.

Figure 3: Two students use Continuous Integration in a training class

Today, developers understand that the architecture has to support any number of user interfaces, be online almost 24/7 under stressful load, and allow any part of the application to be fixed, upgraded, or revved at any hour of the day. In this environment, monolithic applications fall down on the job. Modern software systems have online and offline processes, multiple Web sites, multiple databases, multiple integrations, and multiple production

Figure 4: ASP.NET 5 provides new project templates

The eventual destination of a modern software system is cloud infrastructure, whether it's Amazon AWS, Microsoft Azure, or Windows Azure Pack hosted by Rackspace.

and pre-production server environments. Continuous Integration, a technique that was new ten years ago, is an expected staple of the development process now. Strenuous suites of automated tests pound on every build of the application, which now must be called a software system because it's made up of multiple autonomous application components, each running in a separate process and able to be run on separate servers or cloud infrastructure. The ability to change and deploy any part of the software system is the hallmark of modern development. Visual Studio has improved integration across the board with tools that help make this need a reality. Although continuous delivery is a new concept for most teams, the entire industry is moving toward it out of necessity, and Visual Studio 2015 plays well in that environment. The eventual destination of a modern software system is cloud infrastructure, whether it's Amazon AWS, Microsoft Azure, or Windows Azure Pack hosted by Rackspace.

Continuous Delivery Workflow

The last mile of Agile software development is Continuous Delivery. Continuous Integration and short development iterations are the normal staples of developer education in 2015. Figure 3 shows a developer training class where Continuous Integration is mentioned briefly on the way to deeper topics. Developers routinely work side-by-side in close collaboration while developing software.

Figure 5: After signing in to Azure, configure the subscription for this application.


Figure 6: The new ASP.NET 5 application running locally


Continuous delivery is an extension of continuous integration as agile practices move toward production operations. The popular umbrella term, DevOps, now is often used to describe this group of practices. DevOps is the

We treat server infrastructure as cattle rather than as pets that we name and maintain until some big end-of-life event.

Figure 7: The publish command works as expected.


process of automating software system operations. Monitoring, alerting, and issue resolution are all tasks that operations professionals have long performed. The DevOps movement quickly automated all of these, creating automated systems of operating complex software systems. Deploying is one task that falls under the umbrella of operations. Continuous Delivery is the process of deploying so frequently to production that releasing new revisions of the software becomes a concept that is less and less interesting. It removes the notion of a maintenance window or a release event. Organizations all release software at various frequencies, even if all of them employ Continuous Delivery.

"We're excited to be working with Docker, enabling our existing .NET customer base looking to adopt Windows Containers with Visual Studio, as well as expanding our customer base using Linux Containers with Visual Studio Code on the Mac."

Steve Lasker, Program Manager, Microsoft Cloud Platform Tools

Infrastructure as Code is an important concept in Continuous Delivery. This concept is the tag line for automatically provisioning servers from code stored in source control right beside the application code. Rather than deploying new application components to existing servers, Infrastructure as Code calls for new servers to be Just-In-Time provisioned, fresh for the new application components. Then, when smoke testing confirms that the new application version is stable, networking changes can take the existing production components offline and swap them out with the new servers. After a cautionary period, the old production servers can be decommissioned and deleted. In this fashion, we treat server infrastructure as cattle rather than as pets that are named and maintained until some big end-of-life event.
Visual Studio 2015 and the corresponding extensions provide support for the upcoming Container technology that promises to usher in Continuous Delivery for the early and late majority of adopters. Steve Lasker is a program manager at Microsoft who works on Container tooling support for Visual Studio and Visual Studio Code. He writes about Visual Studio Tools for Docker on his blog at http://blogs.msdn.com/stevelasker/. The tooling walkthrough that follows covers a product on which he and his team are focusing. I'll now walk through a major new Visual Studio capability.

Figure 8: Docker Containers are now options for publish destinations.

The next section walks you through deploying an ASP.NET 5 application to a Docker Container running in Microsoft Azure. First, you'll create a new ASP.NET 5 application using the New Project dialog you're familiar with. Figure 4 shows the new project types.

Because you've chosen to host in the cloud, configure the Azure component as shown in Figure 5.
When you run the application with CTRL+F5, the newly created sample application runs, as shown in Figure 6. These initial screens will be very familiar to those used to Visual Studio 2013. The next part of the workflow shows the striking difference in integration technique with Azure and the new philosophy being baked into the tooling. Figure 7 and Figure 8 show how the Publish functions branch into Docker container provisioning.
Once you select that the destination for the application will
be a Docker Container, select either an existing Docker Container Host or provision a new one. Figure 9 shows the dialog that helps provision a new Docker Container Host using
Windows Server 2016 Technical Preview with Containers.
Looking at the Output window, you can see that under
the covers, Visual Studio is merely calling PowerShell
commands for you. The Output is rather voluminous, but
Figure 10 shows some of it toward the end of the process.
On subsequent publish steps, you choose the existing
Docker Container Host, build the image, and run the new
container that includes the application and the environment. Figure 11 and Figure 12 show this process.
Although Visual Studio has a nice friendly user interface to guide you through the steps, including creating the certificates for a secured connection and creating the Dockerfile for the target OS, it's calling the Docker CLI, which can be called directly using the commands echoed in the output window.
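The generated Dockerfile varies by project and target OS, but a sketch of the kind of file the tooling produces for an ASP.NET 5 beta application looks something like the following. The base image tag, paths, and port are illustrative assumptions, not values taken from this walkthrough.

```dockerfile
# Hypothetical Dockerfile for an ASP.NET 5 beta app; the image tag,
# paths, and port are assumptions for illustration only.
FROM microsoft/aspnet:1.0.0-beta6

# Copy the published application into the image.
ADD . /app
WORKDIR /app/approot

# Start the app through the "web" command defined in project.json.
ENTRYPOINT ["./web"]
EXPOSE 80
```

Because this is an ordinary Dockerfile, the same `docker build` and `docker run` commands echoed in the Output window can be replayed from any build server.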


Figure 9: Provision a new Azure VM for Docker Containers.


That's all it takes for this process. The big shift for Visual Studio 2015 is that all of this was accomplished by using the Docker CLI. This can't be discounted. In prior Visual Studio versions, much of the tooling was built right into Visual Studio itself, making it difficult or impossible to reuse the tooling in a build automation environment. With this new way of integration, every one of these commands can be taken from this tooling and placed in a proper build environment. The Visual Studio tooling provides a way to get started quickly, but it also serves as a good demonstration of which commands to run and in what order. From this starting point, a proper Continuous Delivery pipeline can be created so that a development team has a repeatable process for pushing changes to the software system while keeping each autonomous application component separate from the others. In this fashion, each can be replaced without disturbing the others.

Figure 10: Docker is now provisioned in Azure with a container for the application.

Figure 11: You can publish an application in a new container on an existing Docker server.

Once this process is complete, you'll surely be interested in logging into the Microsoft Azure portal to see the cloud resources that were provisioned and configured based on these commands and scripts that were run. Figure 13 shows the portal.

As you can already see, Visual Studio 2015 blurs the lines between the code and the infrastructure. Its tight integration with Azure and PowerShell causes the boundaries between the technologies to blur.

Packing More Meaning into the Code

Previous editions of Visual Studio placed a great deal of emphasis on designers and tooling that made it easier to write code. Visual Studio 2015 is a demonstration that the code is the center of the universe. Instead of trying to write the code for you, Visual Studio 2015 tries to help you work with and understand the code better. This starts with source control. Here in late 2015, Git has become the de facto standard for modern source control. Figure 14 shows how to use the GitHub extension to connect to GitHub from within the IDE. You can then clone the repository from GitHub, as shown in Figure 15.

Once you have the repository cloned, and for every new project, you can use the GitFlow Visual Studio extension to initialize the GitFlow source control workflow and use it for every feature, release, and hotfix over time. Figure 16 shows how access to the GitFlow capability is right where it belongs, inside the Team Explorer window.
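Under the hood, GitFlow is just a branch-naming convention layered over plain git. A minimal sketch of what the extension automates, using ordinary git commands in a throwaway repository (the branch and commit names are illustrative, not from the article's sample application):

```shell
# Create a throwaway repository to demonstrate GitFlow's branch layout.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo
git commit -q --allow-empty -m "initial commit"

# GitFlow keeps ongoing work on a long-lived develop branch...
git checkout -q -b develop

# ...and cuts a short-lived branch per feature, release, or hotfix.
git checkout -q -b feature/expense-report
git rev-parse --abbrev-ref HEAD
```

The extension performs the same branch creation and merging for you, so the team gets the convention without anyone typing the commands.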
Beyond modern source control integration, Visual Studio 2015 Professional edition includes the CodeLens feature that was previously only available in the Enterprise or Premium editions of Visual Studio. CodeLens has also been enhanced to provide:

• References to the line of code
• Passing/failing tests that use the line of code
• The last time the line of code was changed
• How many authors have modified a particular line of code

Figure 17 shows the CurrentUser property being investigated through tests, references, and the commit history.

In Figure 17, you see some automated tests in the Test Explorer. This demonstrates the NUnit Visual Studio extension that allows the seamless integration of NUnit tests into the Visual Studio test runner and CodeLens. In addition to these great features, IntelliTrace also provides performance tips while debugging code. You can see a few tests that take over 100 milliseconds to run. By debugging into these tests, the performance tips pop up and tell you exactly how many milliseconds a line of code took to run. I remember the days of coded timers and logging what time it was down to the millisecond in order to get this type of insight into hot spots within the code.

Figure 12: PowerShell scripts copy the application and complete the deployment.

Figure 13: The Azure portal shows the resources created by the Docker provisioning.

Figure 14: Team Explorer supports connecting to modern source control systems.

The enabler for this type of functionality is Roslyn. The new code editor integrates with the new Roslyn compiler and provides a way for the editor to participate in and invoke the compilation process. There are literally too many small features and enhancements to cover in an article like this, but the ones covered here are huge milestones.

Figure 15: Selecting the repository to clone is very simple.

Figure 16: GitFlow is available from within Visual Studio.


CoreCLR and DNX

Visual Studio 2015 builds on the .NET Framework 4.6, which might be one of the last versions of the .NET Framework as we know it, because Microsoft is transitioning from separate .NET Frameworks to execution environments. Going back to .NET 1.0, the .NET Framework has leveraged Win32 for its capabilities. Windows 10 is a bridge operating system that supports the Win32 runtime and everything built on it since Windows 95. It's twenty years later, and Win32 is on its way out for a large class of computers. Phones, tablets, and next-generation specialized computers have no need for Win32, the Windows Registry, and all the other things that come with Win32. The Internet of Things movement is also bringing robotics and small machines onto networks. These devices need small application components that run in small computing form factors, and many times use low computing resources. Windows RT was a test of the waters
Listing 1: The Visual Studio configuration used in this article includes some useful extensions

Microsoft Visual Studio Enterprise 2015
Version 14.0.23107.0 D14REL

Microsoft .NET Framework
Version 4.6.00079

Installed Version: Enterprise

GitHub.VisualStudio 1.0
A Visual Studio Extension that brings the GitHub Flow into Visual Studio.

JetBrains ReSharper Ultimate 2015.1.2 Build 102.0.20150721.105606
JetBrains ReSharper Ultimate package for Microsoft Visual Studio.

Visual C# 2015 00322-90150-00747-AA350
Microsoft Visual C# 2015

Microsoft Azure Tools 2.7
Microsoft Azure Tools for Microsoft Visual Studio 2015 - v2.7.30818.1601

Application Insights Tools for Visual Studio Package 1.0
Application Insights Tools for Visual Studio

ASP.NET and Web Tools 2015 (Beta6) 14.0.20723.0
ASP.NET and Web Tools 2015 (Beta6)

ASP.NET Web Frameworks and Tools 2013 5.2.30624.0
For additional information, visit http://www.asp.net/

Microsoft Code Digger 0.9
Microsoft Code Digger

Microsoft.Pex.VisualStudio 1.0
Pex

NuGet Package Manager 3.0.0
NuGet Package Manager in Visual Studio. For more information about NuGet, visit http://docs.nuget.org/.

Common Azure Tools 1.6
Provides common services for use by Azure Mobile Services and Microsoft Azure Tools.

SQL Server Data Tools 14.0.50730.0
Microsoft SQL Server Data Tools

GenerateUnitTest 1.0
Generates unit test code for methods in classes under test.

Visual Studio 2015 Tools for Docker - Preview 0.6
Visual Studio 2015 Tools for Docker - Preview

GitFlow.VS.Extension 1.0
Visual Studio extension that integrates GitFlow

Visual Studio Tools for Universal Windows Apps 14.0.23121.00 D14OOB

Listing 2: ASP.NET 5 uses a very different project file than previous versions

{
  "commands": {
    "web": "Microsoft.AspNet.Hosting --config hosting.ini"
  },
  "webroot": "wwwroot",
  "userSecretsId": "aspnet5-ClearMeasure.Bootcamp.UI-559290c5. . . ",
  "version": "1.0.0-*",
  "dependencies": {
    "Microsoft.AspNet.Mvc": "6.0.0-beta6",
    "Microsoft.AspNet.Mvc.TagHelpers": "6.0.0-beta6",
    "Microsoft.AspNet.Authentication.Cookies": "1.0.0-beta6",
    "Microsoft.AspNet.Diagnostics": "1.0.0-beta6",
    "Microsoft.AspNet.Diagnostics.Entity": "7.0.0-beta6",
    "Microsoft.AspNet.Server.IIS": "1.0.0-beta6",
    "Microsoft.AspNet.Server.WebListener": "1.0.0-beta6",
    "Microsoft.AspNet.StaticFiles": "1.0.0-beta6",
    "Microsoft.AspNet.Tooling.Razor": "1.0.0-beta6",
    "Microsoft.Framework.Configuration.Abstractions": "1.0.0-beta6",
    "Microsoft.Framework.Configuration.Json": "1.0.0-beta6",
    "Microsoft.Framework.Configuration.UserSecrets": "1.0.0-beta6",
    "Microsoft.Framework.Logging": "1.0.0-beta6",
    "Microsoft.Framework.Logging.Console": "1.0.0-beta6",
    "Microsoft.VisualStudio.Web.BrowserLink.Loader": "14.0.0-beta6",
    "ClearMeasure.Bootcamp.Core": "1.0.0-*",
    "ClearMeasure.Bootcamp.Dnx.DependencyInjection": "1.0.0-*"
  },
  "frameworks": {
    "dnx46": { }
  },
  "exclude": [
    "wwwroot",
    "node_modules",
    "bower_components"
  ],
  "publishExclude": [
    "node_modules",
    "bower_components",
    "**.xproj",
    "**.user",
    "**.vspscc"
  ],
  "scripts": {
    "prepublish": [ "npm install", "bower install", "gulp clean", "gulp min" ]
  }
}

for computers that didn't support Win32. Although the execution of that operating system left something to be desired, Microsoft is preparing every framework and tool for future operating systems that are a hard break from the past twenty years of Windows.

ASP.NET 5 Runs Anywhere

ASP.NET 5 is the next generation of ASP.NET. It runs on any modern operating system. Although only the beta series is supported in Visual Studio 2015 now, many expect the RTM of ASP.NET 5 to happen at the beginning of 2016.

Stack | Server | Req/sec | Load Params | Impl | Observations
ASP.NET 4.6 | perfsvr | 65,383 | 8 threads, 512 connections | Generic reusable handler, unused IIS modules removed | CPU is 100%, almost exclusively in user mode
IIS Static File (kernel cached) | perfsvr | 276,727 | 8 threads, 512 connections | hello.html containing HelloWorld | CPU is 36%, almost exclusively in kernel mode
IIS Static File (non-kernel cached) | perfsvr | 231,609 | 8 threads, 512 connections | hello.html containing HelloWorld | CPU is 100%, almost exclusively in user mode
ASP.NET 5 on WebListener (kernel cached) | perfsvr | 264,117 | 8 threads, 512 connections | Just app.Run() | CPU is 36%, almost exclusively in kernel mode
ASP.NET 5 on WebListener (non-kernel cached) | perfsvr | 107,315 | 8 threads, 512 connections | Just app.Run() | CPU is 100%, mostly in user mode
ASP.NET 5 on IIS (Helios) (non-kernel cached) | perfsvr | 109,560 | 8 threads, 512 connections | Just app.Run() | CPU is 100%, mostly in user mode
NodeJS | perfsvr | 96,558 | 8 threads, 1024 connections | The actual TechEmpower NodeJS app | CPU is 100%, almost exclusively in user mode
Scala | perfsvr | 204,009 | 8 threads, 1024 connections | The actual TechEmpower Scala plain text app | CPU is 68%, mostly in kernel mode
libuv C# | perfsvr | 300,507 | 12 threads, 1024 connections | Simple TCP server, not real HTTP yet, load spread across 12 ports (port/thread/CPU) | CPU is 54%, mostly in kernel mode

Table 3: ASP.NET 5 shows that it has massive performance gains over previous versions.

Figure 17: CodeLens provides a great deal of useful information about your code.


This new project system and runtime are very, very different from the way Visual Studio has assembled code for compilation and packaging since Visual Studio .NET in 2002. Seasoned .NET developers are used to a project producing a .dll file called an Assembly. Although assemblies are still the unit of code at runtime, they're no longer the sole unit of packaging at compile time. This new project system produces NuGet packages from each project. These NuGet packages are appropriate for deployment because they can contain multiple assemblies, which allows each project to produce assemblies targeted at different environments, all in the same compilation step.

What the Heck is DevOps?

DevOps is a term representing the convergence of development and operations. Both the development and operations communities like to own the term, so it has accumulated multiple meanings. From the developer perspective, software development has become more mature. With the Agile Manifesto, developers have jettisoned annual software releases for processes that allow releasing software multiple times per day. To accomplish this, teams use rigorous suites of automated tests to ensure that every change to the software is a solid step forward rather than an accidental step backward.

Here in late 2015, Git has won as the de facto standard source-control technology for Internet-connected development teams.

DevOps, from an operational perspective, represents the next step in the maturity of IT operations for complex software systems. Over the years, the maturity of Storage Area Networks (SAN) and the advent of virtualization technology have transformed the daily work of operations departments from one of maintaining physical hardware to one where they expect reliability from the infrastructure.

If you're working on an existing .NET application, Visual Studio 2015 is an easy upgrade for your existing code. Keep an eye on DNX as it comes to RTM in the coming months. It's still in beta and still changing, but it will be worth the wait.

ASP.NET 5 also provides support for the ever-changing world of Web frameworks. It includes task runners like Grunt and Gulp for processing LESS or TypeScript files into forms suitable for production deployments. ASP.NET 5 also supports Bower, which is a package manager for JavaScript and CSS libraries as well as other source-deployed libraries that will come along over time. Listing 2 shows the full project.json file that outlines the dependencies and other information important to this Web application.

With maturity increasing in both departments, each is proud of the advances made and is ready to move toward the other to help the other side improve. Sophisticated companies, like Facebook and Spotify, have enlisted new techniques that enable development and operations to merge into a single work group where those who develop an application are also the ones who operate it. To achieve this end, teams must cross-train, adapt to change, and demand adoption of a new normal, where servers are not something to be named and cared for, like pets, but rather are resources to be numbered and used, like cattle, to use the analogy from Microsoft's Jeffrey Snover at his famous Ignite 2015 presentation.

Another file that will grab your attention is config.json. This file is interesting, given the complete absence of web.config. In ASP.NET 5, configuration files are completely application-dependent. And, because ASP.NET 5 is about being operating-system independent, configuration values can be set or overridden using environment variables. Mac and Linux both support environment variables just like Windows; therefore, environment variables provide the perfect cross-platform store for key/value pairs. Using environment variables also allows you to deploy settings to production environments without being tempted to store these values in the source-control repository of the software system itself, which is a bad practice.

Figure 18: CSProj files have been replaced with project.json files.
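As a sketch of what an environment-variable override looks like in practice: a deployment script exports a setting before the application starts, and the configuration system picks it up at startup. The key name below is an illustrative assumption, not one from the Bootcamp application, and the exact convention for naming hierarchical keys varied across the ASP.NET 5 betas.

```shell
# Hypothetical configuration override via an environment variable.
# A value set here wins over the one in config.json, so the secret
# never needs to be committed to source control.
export Data__DefaultConnection__ConnectionString="Server=prod;Database=Bootcamp;Trusted_Connection=True"

# The application would read this at startup; here we just confirm
# that the shell carries the value for child processes.
echo "$Data__DefaultConnection__ConnectionString"
```

The same script works unchanged on Windows (via its own shell), Mac, and Linux, which is exactly the cross-platform property the configuration system is designed around.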

ASP.NET 5 also has some lofty performance goals. Table 3 shows some preliminary performance metrics from the ASP.NET team at Microsoft.

And because ASP.NET 5 is an open source framework being released from GitHub, it's set up to come into Visual Studio 2015 without issue.

A very interesting result from these metrics (from https://github.com/aspnet/benchmarks) is that ASP.NET 5, running in DNX, is more than twice as fast as the older ASP.NET stack and more than twice as fast as NodeJS.

ASP.NET 5 is very different from ASP.NET MVC 5. It has a completely new Solution and Project system. Figure 18 shows the Solution Explorer with the ClearMeasureBootcamp Web application that uses ASP.NET 5.

Universal Windows Apps Leave Win32 Behind

Visual Studio 2015, when running on Windows 10, also allows you to create Universal Windows Platform applications. These applications can be developed with XAML, JavaScript, or C++, in a very similar fashion to Windows 8 applications. Universal Windows applications are distributed via the Windows Store, but they can also be installed directly, called Side-Loading, if developer options are turned on within Windows.

Figure 19: Universal Windows applications are available when running on Windows 10.

Creating a Universal Windows application is very simple and can be done through the New Project dialog, as shown in Figure 19.

If you have created a Windows 8 application, you'll feel right at home in the code; however, you still must adapt to the new Visual Studio project system for the new CoreCLR runtime. Figure 20 shows a new Windows 10 application in the Solution Explorer.


Universal Windows applications are a huge topic in and of themselves and can't possibly be addressed within the bounds of this article. Rest assured that Visual Studio 2015 fully supports development of these applications, both for the Windows Store and for the Windows Store for Business.

DNX Internals

DNX, or the .NET Execution Environment, is the CoreCLR runtime for ASP.NET 5, just as Windows 10 has runtimes like win10-x64 and win10-x64-aot. DNX is currently in beta, with a go-live license that will be available before the end of 2015 and the RTM in early 2016. DNX, DNVM, and DNU are the major components of the Web runtime. DNVM is the version manager, necessary because there will be multiple versions of DNX available on any computer depending on how many applications run on it. DNU is important tooling for package management with NuGet going forward.

Figure 20: Universal Windows applications use the new project system.


Figure 21: DNX and CoreCLR architecture.

DNX is the environment that can bootstrap and execute an ASP.NET 5 application. It includes the compiler, SDK tools, and multiple CLR hosts that must be native to the hardware and operating system. All dependencies for applications running in DNX are resolved from NuGet, and DNX has the raw ability to run an application straight from the full source code without a prior compilation step, if that's desirable. In addition, DNX can execute commands from your code if you expose them as configured commands. When automating your builds, tests, and deployment pipelines, you can integrate functionality from your application by exposing it to DNX as command-line-callable.

It's still early for DNX, but in early 2016, expect to start developing applications using it. Figure 21 shows the architecture diagram from the .NET Foundation demonstrating how the various pieces of the runtime fit together. This graphic, just like DNX, is released under the Apache 2.0 open-source license, and the original can be found at https://github.com/aspnet/Home/wiki/DNX-structure.

It's unclear if Microsoft will release any automated code migration tools to move existing applications from ASP.NET and .NET 4.5.1 to DNX and CoreCLR. Many libraries will work without much modification. For example, NHibernate 4 already supports ASP.NET 5 and DNX, but NUnit doesn't yet. Each library author will have to evaluate and test under DNX to ensure compatibility.

How to Move to Visual Studio 2015

For applications needing to move to .NET 4.6, Visual Studio 2015 provides an easy migration path, excluding the DNX changes. The folks on the Visual Studio team have been making it increasingly easier to move from version to version with less impact on projects out in the real world. Let's examine the process of moving from Visual Studio 2013 to Visual Studio 2015.

Figure 22: The expense reporting application is an ASP.NET MVC 5.2 app.

Figure 23: The Onion Architecture dependency structure of the application.

"Every now and then a major shift comes along that frees us to approach things in a new way. The new version of ASP.NET is that shift, as evidenced by DNVM, DNU, DNX, and related tooling in the Visual Studio product line."

James Chambers, ASP.NET MVP, @canadianjames

The application I'll upgrade as an example is an expense-reporting application. It's very simple, and it includes many concepts of much larger applications. First, clone the source code repository found at https://github.com/ClearMeasureLabs/ClearMeasureBootcamp. Figure 22 shows the application running.

Our expense report application is a little on the light side feature-wise. It includes elements like the Onion Architecture layers and other elements that you'll find in your own applications, such as:

• Separate projects for separate concerns
• A UI project that stands with only a single reference (to Core)
• An older version of NHibernate
• Unit tests
• HTTP Handlers
• The Bootstrap CSS/JS library
• StructureMap for dependency injection
• A database migrations library
• A custom workflow engine

Figure 23 shows the structure of the application, as established by the project references within the Visual Studio solution.
When you open the solution in Visual Studio 2015, it will be in 2013 mode. In the past, there was an automatic conversion step that modified some source code, an event that caused every developer on the team to have to upgrade to Visual Studio 2015 at the same time. In this new version, no code is changed, and other members of the team don't have to synchronize to the version of Visual Studio used. You'll notice a small change in the repository, as shown in Figure 24.

Figure 24: Visual Studio 2015 creates a new applicationhost.config file.

This applicationhost.config file controls IIS Express, and with Visual Studio 2015, the local Web server runs in a more application-specific way. In previous versions, IIS Express had global, computer-wide configuration that was problematic if the Web server settings needed to vary too much for multiple Web applications.

Visual Studio 2015 also creates a .vs directory to place the applicationhost.config file. There's no need to commit this directory to source control unless you want to use it for shared development settings. If you choose not to commit it to source control, you'll want to add the following line to your .gitignore file:

**/src/.vs/

If you want to mark the solution file for the new Visual Studio version so that it opens in version 2015 by default, you'll want to update the first few lines of your .sln file using this version number:

# Visual Studio 14
VisualStudioVersion = 14.0.23107.0

Visual Studio 2015 works very well side-by-side with Visual Studio 2013 and Visual Studio Code. Installing it will not cause any adverse side effects.

What Open Source Means for the Industry

Books and music are written down and have been for centuries. To learn to write or compose, you study the works of the greats. Now that works of software are written down in the public square and shared publicly on a large scale, developers in the industry can study the works of the greats. Just like with literature and music, the development industry is starting to stand on the shoulders of giants, a process that takes a while, but its cumulative effect will have profound results in the software of future generations.

Wrapping Up

Visual Studio 2015 is both a massive IDE release and one that sets the stage for a big revision in the CLR that we're all familiar with. In this article, I've covered several key concepts, including Microsoft's change in philosophy and how it has become more open and componentized with developer tooling. I looked at how this release of Visual Studio supports modern development, including DevOps topics like Continuous Delivery and Infrastructure as Code with Docker Container support. I explored ASP.NET 5, CoreCLR, and the DNX execution environment. Finally, I showed you how easy it is to bring existing Visual Studio 2013 applications into Visual Studio 2015 without forcing a cumbersome upgrade process. I'm excited about Visual Studio 2015, and you will be too, because this release contains something for everyone.

Jeffrey Palermo

ONLINE QUICK ID 1511091

Telerik Kendo UI Outside the Box


Telerik Kendo UI offers a powerful toolset for building mobile and Web applications with HTML5 and JavaScript.
The toolset has over 70 jQuery widgets that are easily customizable according to your needs and requirements. Moreover,
the widgets integrate seamlessly with AngularJS and Twitter Bootstrap. Also, Telerik delivers its widgets in several flavors such as
ASP.NET MVC, PHP, and JSP controls. Throughout this article, the focus is on the native Kendo UI HTML5 widgets.

Bilal Haidar
bhaidar@gmail.com
http://blog.bilalhaidar.com
@bhaidar
Bilal Haidar, a PMP holder and ASP.NET Insider, works as a senior software developer at CCC, a multinational construction company located in Athens, Greece. Bilal has been a software developer for over 10 years, developing Web applications on top of VBC, an in-house document management system. Today, Bilal builds client-centric Web applications and services with HTML5 and JavaScript Web technologies, using AngularJS on the front-end and the ASP.NET stack on the back-end. Bilal has published over 60 articles in CODE Magazine and other online magazines. In 2008, Bilal published his first book, Professional ASP.NET 3.5 Security, Membership, and Role Management in C# and VB.NET, from Wrox Press (http://goo.gl/gMu4AK). You can find Bilal's blog at http://blog.bilalhaidar.com or contact him directly at bhaidar@gmail.com.

If you're a Web dev, you've probably noticed the number of jQuery toolsets available in the market. Few of them are complete and offer good technical support (blogs, documentation, etc.); others are incomplete and are missing major widgets; yet others offer only a limited set of widgets. What distinguishes Telerik's Kendo UI is that it offers almost all of the widgets you might need to develop a Web or mobile app, so there's no need to install multiple libraries. Also, the technical support they offer is incredible, ranging from demos, rich documentation, blogs, forums, and interactive dojos to, of course, submitting tickets (available only for users with a purchased Telerik Kendo UI license).

Telerik Kendo UI Components

The components offered by Telerik Kendo UI are categorized by their type of usage. For instance, the Data Management widgets include Grid, ListView, and others. The Editors widgets include AutoComplete, ComboBox, DropDownList, DatePicker, and many more, not to mention the Charting, Scheduling, Layout, and Navigation categories. One distinguished category is the Framework, which offers the DataSource component, Drawing API, MVVM framework (Model View ViewModel), Single Page Apps, PDF Export, and others. The DataSource plays a major role in developing Web apps. The Telerik demos are rich and are available at http://demos.telerik.com/kendo-ui/.

Telerik Kendo UI offers a rich toolset of highly usable HTML5 Web components to develop a modern Web application.

In almost every data-oriented Web app, a Grid is needed


to display records stored either in the database or some
other storage medium, and sometimes users edit the records too. The Grid widget offers two editing modes. One
is inline editing and the other is pop-up editing. Figure 1
shows the behavior of the Grid when inline editing mode
is enabled.

When the user clicks the Edit button, the Grid opens a
modal pop-up box so that they can edit the records details.
In the complicated scenarios required by Web apps, the
aforementioned editing methods are not enough. In
fact, you're limited in designing the layout of the app by
having only these two options. For instance, an app I'm
working on requires the user to edit a record and some
other related records all in one screen. The form to edit
the records contains more than 30 placeholder controls!
Neither of the above Grid editing modes is an option for this
kind of traffic. Using the inline mode, you have to keep
scrolling right and left across all of the columns. The
pop-up editing mode isn't an option either, given that
the modal box is large, covering almost the whole screen.
This is the right time for you to think outside the box!
Kendo UI offers a rich set of widgets that will seem like a
necessity in almost every Web app you develop, because they make functionality like this easier. In addition
to those widgets, Telerik offers a rich and extensive JavaScript API for all of its components. You gain the upper
hand in making direct JavaScript calls to communicate
with and configure those widgets.
With the API in hand, you aren't limited anymore to what
the widgets originally offered. You can extend the use of
such widgets to satisfy complex, real-life scenarios.
You can read more about the API here: http://docs.telerik.com/KENDO-UI/introduction.

DataSource, the Brains

Telerik's DataSource is a building block that plays a central role in every Web app built with Kendo UI widgets.
The DataSource serves as an abstraction layer for data
and is used as a data source for several widgets, such
as the Grid, DropDownList, ListView, and others. Every widget needs a DataSource to display and render a set of
records. Not only is a DataSource used as a storage medium for widgets, it's also used as a read-write source
to add new records, edit existing ones, and even delete
them. For instance, the Grid widget has a tight coupling
with the DataSource. When the user adds a new row on
the Grid, the Grid internally adds the new record to the
DataSource it's configured to work with. Also, when the
user edits or deletes a row, the DataSource is updated to
reflect the changes accordingly. Bottom line: The Grid and
DataSource are in sync!
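The sync between a widget and its data store can be sketched in plain JavaScript. The following is a hypothetical, heavily simplified illustration of the idea (not Kendo's actual implementation): a data source keeps a list of listeners, and notifies each one whenever a record is added or removed, which is how a bound Grid can re-render automatically. All names here (MiniDataSource, bind, notify) are invented for the sketch.

```javascript
// Hypothetical, simplified sketch of the widget/DataSource sync
// idea: a store that notifies subscribed listeners on every change.
function MiniDataSource() {
    this.records = [];
    this.listeners = [];
}

// a widget registers a callback to be told about changes
MiniDataSource.prototype.bind = function (listener) {
    this.listeners.push(listener);
};

// invoke every registered listener with the action and record
MiniDataSource.prototype.notify = function (action, record) {
    this.listeners.forEach(function (listener) {
        listener(action, record);
    });
};

// add a record and broadcast the change
MiniDataSource.prototype.add = function (record) {
    this.records.push(record);
    this.notify("add", record);
};

// remove a record and broadcast the change
MiniDataSource.prototype.remove = function (record) {
    var idx = this.records.indexOf(record);
    if (idx >= 0) {
        this.records.splice(idx, 1);
        this.notify("remove", record);
    }
};
```

In this sketch, a "grid" would simply bind a listener that re-renders its rows on every notification; that is the general shape of the coupling the article describes.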
The DataSource provides:

• Data storage for local data (coming out of a JavaScript array)
• Data storage for remote data (received from a Web Service as JSON, JSONP, OData, or XML)
• Storage for the data internally in a structured manner, with a dictated schema describing the record and its fields' data types
• Reading data remotely from a Web Service, provided that the DataSource is configured with all the information needed to handle such a task
• Sending all new, edited, or deleted records to the server to save the changes, provided that the DataSource is configured properly with all URLs, etc.
• The capability of calculating aggregates, sorting, and filtering on both the client-side and the server-side
• A rich filtering API to work with the stored data (locally and remotely)

Figure 1: The Grid widget with inline editing.
For an overview of the Telerik Kendo UI DataSource, see
http://docs.telerik.com/kendo-ui/framework/datasource/overview.
Combining the Grid widget, the DataSource, and the DataSource
API gives you the upper hand in developing complicated
Web applications.

DataSource is a major building block for offline and single page applications.

Getting Started

This article uses the Grid widget, the DataSource, and its API
to develop a Web app screen, and shows how to use the API
to get around the widget's preset functionalities and extend them.

The Web app you'll build uses the Grid widget to display and
list Employee records. If you recall from the sections above,
the Grid widget offers two methods for editing/creating/
deleting data records: inline and pop-up modes. For the
sake of this demonstration, the Grid is responsible for displaying all the stored records, and the user enlists a separate
Form to edit an existing record or create a new one.

The Grid widget is bound to a DataSource component.
The DataSource is configured to create/update records
through direct communication with a remote Web Service. You'll see shortly that the Web app takes a different
road to load data into the DataSource, a road that's independent of the DataSource component itself. The technique
strives to optimize performance by loading all of the data
needed by all screen widgets in a single call to a remote
Web Service. The Service returns the Employee records and
any other reference data that the screen needs to display
and use. For instance, the Marital Status list of records is
an example of additional data records loaded inside a single
remote Web Service call, side by side with the Employee records.

The Web App

Figure 3 shows the screen used to manage Employee records (edit/create).

To add a new Employee record, the user clicks the Add new
employee button to display a Form. You use the Form to
fill in the Employee details to save later. When you select
an existing record on the Grid to edit, the Web app uses
the same Form to render and display the Employee record
details. You then edit and save those details accordingly.
Figure 4 shows the Form displayed when either a Grid row
is selected or the Add new employee button is clicked.

Click the Cancel button to reset and hide the Form.
Click the Save button to reset, hide, and submit all changes
made locally to the server.

Armed with the knowledge of how the user uses the Web app,
let's switch gears and take a walk through the source code.

codemag.com

Telerik Kendo UI Outside the Box

65

Figure 2: The Grid widget with pop-up editing.

Figure 3: The Employee Register Grid lists all employee records.

Walk Through the Source Code

The Web App Visual Studio Solution contains two projects:
a class library holding the business logic classes, Web API
2 classes, and other helper classes; and the Web project
containing the HTML and client-side resources (images,
CSS, and JavaScript). Figure 5 shows the Web app structure.

For the sake of this demonstration, a single HTML page,
index.html, plays the role of the Shell page hosting all widgets
and components. The source code for the index.html page
can be found in the downloadable source code package.
The HTML page uses the CSS resource files as shown in
the code snippet.
<link href="assets/css/bootstrap/
bootstrap.min.css"
rel="stylesheet" />
<link href="assets/css/kendo/
kendo.common-bootstrap.min.css"
rel="stylesheet" />
<link href="assets/css/kendo/
kendo.bootstrap.min.css"
rel="stylesheet" />
<link href="assets/css/kendo/reset.css"
rel="stylesheet" />
<link href="assets/css/main.css"
rel="stylesheet" />

The HTML page uses the Telerik Kendo UI Bootstrap
theme. The theme is configured by adding a reference to
the Twitter Bootstrap CSS resource file. After that, you
add a reference to both the kendo.common-bootstrap.min.css
and kendo.bootstrap.min.css CSS resource files, representing
the Bootstrap theme files. Telerik recommends adding a
reset.css CSS file to reset some HTML elements so that
Kendo UI widgets behave properly when nested inside the
Twitter Bootstrap Grid layout. (See
http://docs.telerik.com/kendo-ui/using-kendo-with-twitter-bootstrap.
A complete example of using Telerik Kendo UI together with
Twitter Bootstrap is offered for free and is available at
http://demos.telerik.com/kendo-ui/bootstrap/.)

The Index.html page adds a few references to some JavaScript files. (The source code for the index.html page can
be found in the downloadable source code package on
the CODE Magazine website.) This code snippet shows the
JavaScript files.

<script src="assets/js/kendo/jquery.min.js">
</script>
<script src="assets/js/kendo/kendo.all.min.js">
</script>
<script src="assets/js/app.js"></script>

Telerik offers a set of ready-made themes that you can choose from, including the Twitter Bootstrap theme.

The snippet shows the jQuery file at the beginning of the
Scripts section. The Kendo UI source code (minified) follows
the jQuery file. Finally, the snippet shows the entry for the
app.js JavaScript module. The module stores the Web app
interaction and the logic required for it to function properly. The following section dissects the app.js module and
illustrates how the Web app screen logic is developed.
The source code for the app.js module can be found
in the downloadable source code package. The code is
embedded inside a jQuery Document Ready function, as
shown in this next code snippet.

$(function () {
    // code omitted
});

Following best practices in writing JavaScript code,
the app.js module uses a self-executing anonymous
function. All of the code that runs inside this function
lives in a closure, which provides privacy and variable
scoping throughout the lifetime of the application. You're
invited to use this pattern, especially when multiple
JavaScript frameworks are used at the same time in
one application.
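The pattern described above can be sketched in a few lines of plain JavaScript. This is a minimal illustration of the idea, not the article's actual app.js code; the app and configMap names below are chosen to echo the article, but the example itself is hypothetical.

```javascript
// A minimal sketch of the module pattern: an immediately-invoked
// function expression (IIFE) keeps configMap private and exposes
// only the functions you choose as a public API.
var app = (function () {
    // private state; invisible to other scripts on the page
    var configMap = { employees: [] };

    function addEmployee(record) {
        configMap.employees.push(record);
        return configMap.employees.length;
    }

    function count() {
        return configMap.employees.length;
    }

    // public API: everything else stays inside the closure
    return { addEmployee: addEmployee, count: count };
})();
```

Code outside the closure can call app.addEmployee() and app.count(), but it cannot reach configMap directly, which is exactly the isolation the article recommends when several frameworks share one page.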
The app.js module contains several JavaScript functions.
Toward the end, the module calls a few of these functions, as shown in this code snippet.

function configureDataSources() { }
function configureForm() { }
function configureGrid() { }
function configureEventHandling(e) { }

Figure 4: The Employee Register Form is used to edit and create employee records.


The configureDataSources() function configures the
Kendo UI DataSources used by the Web app. The function starts by defining the models the DataSources use
throughout the source code, as shown in Listing 1.

The app.js module uses the internal object configMap to
hold the state of the variables that the module uses throughout the code. For now, two variables are defined on the
configMap to hold the models: ReferenceModel and
EmployeeModel. You define a model in Kendo UI by stating
the ID of the model (used for uniqueness of records)
and by listing the fields attached to the model itself. Each
field owns a different data type as per its use. To read
more on how to define models, see Telerik's very good
and rich documentation on their website
(http://docs.telerik.com/kendo-ui/api/javascript/data/model).

Configuring the DataSources

Now that the models are defined, you need to define the
DataSources to use in the code. The module uses two such
DataSources: maritalStatusDs and employeesDs, as
shown in Listing 2.

You configure a DataSource in several different ways,
depending on how the code uses the DataSource. For a
complete reference on configuring a DataSource, Telerik
offers documentation here:
http://docs.telerik.com/kendo-ui/api/javascript/data/datasource.
The code configures the DataSource transport.read setting
for both DataSources as a function that accepts an options
parameter. The transport.read function executes when the
DataSource needs to read data from the remote Web Service.
Following a successful request to the server, the transport.read()
function calls the options.success() function, passing to it
the raw data read from the server, which notifies the
DataSource that the request to read data was successful.
In this example, the configMap.maritalStatus JavaScript
array holds the Marital Status records received from the
server. Keep reading to see how the app.js module calls
the remote Web Service to return the data and store it in
the aforementioned JavaScript array.
In addition to configuring transport.read, the code
defines the schema setting and configures it accordingly.
In this case, the code defines the Model object with which
the DataSource parses and formats the raw data. In other
words, the DataSource uses the schema.model to parse
the raw data received and convert it into a collection of
objects. The objects' fields then get populated as per the
model definition. As the DataSource parses a raw data
record into a Model object, it assigns a uid field to the
newly created object that uniquely identifies the
record inside the DataSource data collection.
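Conceptually, the schema/model step does two things: it coerces each raw field to the type the model declares, and it stamps the parsed object with a unique uid. The following is a plain-JavaScript sketch of that idea only; it is not Kendo's implementation, and the parseRecord name and its fieldTypes parameter are invented for the illustration.

```javascript
// Conceptual sketch of schema.model parsing: coerce raw field
// values to their declared types and stamp each record with a uid.
var nextUid = 0;

function parseRecord(raw, fieldTypes) {
    // every parsed record gets a fresh, unique identifier
    var record = { uid: "uid-" + (++nextUid) };
    Object.keys(fieldTypes).forEach(function (field) {
        var value = raw[field];
        switch (fieldTypes[field]) {
            case "number": record[field] = Number(value); break;
            case "date":   record[field] = new Date(value); break;
            default:       record[field] = String(value);
        }
    });
    return record;
}
```

With a declaration like { FirstName: "string", Salary: "number" }, a raw record whose Salary arrived as the string "1200" comes out with the numeric value 1200 plus a uid, which mirrors how the DataSource turns raw JSON into typed Model objects.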

Figure 5: The Visual Studio solution project structure.

The Marital Status DropDownList uses the maritalStatusDs DataSource to populate its elements. Whenever
possible, use DropDownLists for fields to make data entry consistent and controlled. The Grid widget, on the
other hand, uses the employeesDs DataSource to populate itself with Employee records. The Web app allows
the user to add new employee records and edit existing
ones. Therefore, the code configures the employeesDs
DataSource with the proper settings to use when saving
new employee records or updating existing records on
the server. For this to work, the code configures both
the transport.create and transport.update settings
as custom JavaScript functions. For the sake of brevity,
the code that configures the transport.update setting
is shown only in the app.js module (and the source code for
Listing 1: Configuring the model objects used in the app

configMap.ReferenceModel = kendo.data.Model.define({
    id: "Id",
    fields: {
        Id: { editable: false, defaultValue: 0 },
        Code: { type: "string" }
    }
});

configMap.EmployeeModel = kendo.data.Model.define({
    id: "EmployeeId",
    fields: {
        EmployeeId: {
            editable: false,
            defaultValue: 0
        },
        FirstName: { type: "string" },
        LastName: { type: "string" },
        MaritalStatus: { defaultValue: {} },
        JobTitle: { type: "string" },
        HireDate: { type: "date" },
        BirthDate: { type: "date" },
        VacationDays: { type: "number" },
        SickLeaveDays: { type: "number" },
        Salary: { type: "number" }
    }
});

Listing 2: Configuring the data sources

configMap.maritalStatusDs = new kendo.data.DataSource({
    transport: {
        read: function (options) {
            options.success(configMap.maritalStatus);
        }
    },
    schema: {
        model: configMap.ReferenceModel
    }
});

configMap.employeesDs = new kendo.data.DataSource({
    transport: {
        read: function (options) {
            options.success(configMap.employees);
            // update cid
            $.each(configMap.employeesDs.data(),
                function (idx, record) {
                    if (!record.hasOwnProperty("cid") ||
                        !record["cid"]) {
                        record["cid"] = record["uid"];
                    }
                });
        },
        create: function (options) {
            $.ajax({
                cache: false,
                type: "POST",
                url: "/api/Employees/Save",
                contentType: "application/json; charset=utf-8",
                dataType: "json",
                data: JSON.stringify(options.data),
                success: function (result) {
                    console.log("employees saved successfully");
                    options.success(result);
                },
                error: function (result) {
                    options.error(result);
                }
            });
        }
    },
    schema: {
        model: configMap.EmployeeModel
    }
});

the app.js module can be found in the downloadable source
code package). More or less, the configuration of the
transport.create and transport.update settings is almost
the same.

The transport.create() function contains a single jQuery
AJAX call to POST the employee record that the user creates
on the client side to the remote Web Service. The server
responds to the create() request by assigning an ID to the
record it creates on the storage medium. In my example,
the DataSource uses the EmployeeModel to parse the
raw data received from the server, and the employee record
sent to the server gets its EmployeeId property populated
with a valid EmployeeId. As a result, upon a successful request,
the DataSource updates its internal records for that
specific employee record accordingly, by populating the
EmployeeId field, and reflects the changes onto the Grid
widget. The DataSource sets the employee record's dirty flag
to false to signal that the employee record is now considered
an old, existing one. Once again, upon a successful request,
the code notifies the DataSource that the request succeeded.

Always strive to optimize the loading of data from the server.

Listing 3: Updating a single employee record

function updateEmployee(uid) {
    var model;
    if (uid) {
        // get current data item
        model =
            configMap.employeesDs.getByUid(uid);
    }
    // new record
    if (typeof model === "undefined" ||
        model === null) {
        model = addEmployeeToDataSource();
    }

    if (typeof model === "undefined" ||
        model === null) return;

    model.set("FirstName",
        $(configMap.$form)
            .find("input[id='FirstName']")
            .val());

    // ...
}
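The transport contract described above boils down to this: the function receives an options object carrying the data to send, and it must call options.success(result) on success or options.error(result) on failure. The sketch below illustrates that contract with an in-memory stand-in for the remote service instead of $.ajax; the fakeServer object and createTransport name are hypothetical, but the options shape follows the article's description.

```javascript
// Sketch of the transport.create contract with a fake in-memory
// "server" standing in for the remote Web Service.
var fakeServer = {
    nextId: 100,
    save: function (employee) {
        // the server assigns the real ID, as the article explains
        employee.EmployeeId = this.nextId++;
        return employee;
    }
};

function createTransport(options) {
    try {
        var saved = fakeServer.save(options.data);
        options.success(saved); // notify the DataSource: request succeeded
    } catch (err) {
        options.error(err);     // notify the DataSource: request failed
    }
}
```

When the DataSource invokes this function, the success callback hands the saved record (now carrying a server-assigned EmployeeId) back to the DataSource, which is how the client-side record loses its placeholder ID of 0.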

Loading Data from the Server

The rest of the configureDataSources() function sends a GET
request to the server to read the data needed by the Web
app; in this case, the Employees and Marital Status data
records. As mentioned earlier, the code uses a technique
to minimize the number of requests sent to the server, in
such a way that it loads all of the needed data at once.
This technique varies depending on the context of the application under study; in some applications, a portion of
the data might need to be loaded at the beginning and the
rest later on. In other cases, the screen requires a large
bucket of data from the server and it might not be feasible
to retrieve it all at once. The option then might be to split
the requests for better performance.
The following code snippet shows a typical jQuery $.ajax
function call that sends a GET request to the server and
retrieves the Employees data.
$.ajax({
    type: "GET",
    cache: false,
    url: "/api/Employees/",
    contentType: "application/json; charset=utf-8",
    dataType: "json",
    success: onSuccessRead,
    error: onErrorRead
});

The next code snippet defines the onSuccessRead()
function.

function onSuccessRead(data) {
    if (data.Employees) {
        configMap.employees =
            $.map(data.Employees,
                function (record, idx) {
                    return record;
                });
        configMap.employeesDs.read();
    }
    if (data.MaritalStatus) {
        configMap.maritalStatus =
            $.map(data.MaritalStatus,
                function (record, idx) {
                    return record;
                });
        configMap.maritalStatusDs.read();
    }
}

The server responds with content that consists of two
properties: the Employees property and the MaritalStatus
property. The code checks whether the Employees property
exists and, accordingly, maps all the included data records
into the configMap.employees JavaScript array. Right
after that, the code calls the employeesDs.read() function
which, in turn, calls the transport.read() function configured
previously. As a result, the employeesDs DataSource
gets populated with employee records in the form of
EmployeeModel records. Similarly, the maritalStatusDs
DataSource gets populated with the marital status records
received from the server.

The configureGrid() function configures the Grid widget.
It specifies the back-end DataSource that the Grid widget
uses and configures/lists the columns that appear on the
Grid itself.

Finally, the configureEventHandling() function hooks a
few buttons up with their corresponding event handlers.
The buttons are: Add new employee, Hide/Show Grid,
Save, and Cancel.

The code handles a Grid row selection, as shown in the
next code snippet. The code is responsible for displaying
a populated Form with the details of the selected row.

function showEmployeeDetails(data) {
    if (!data) return;

    reset(false); // reset first
    setUid(data.uid); // store uid on form

    // populate form with employee data
    populateEmployeeForm(data);
}

Most importantly, the code stores the selected record's uid
in the .data() attribute of the Form. The uid value is
used later to distinguish whether the user is editing an
existing record or saving a new one. Right after that, the
form gets populated with the Employee record details.
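The one-round-trip technique can be shown without jQuery at all: a single response object carries several record sets, and each set is copied into its own local array. This is a plain-JavaScript sketch that mirrors the shape of onSuccessRead() above; the onSuccessReadSketch name and the local configMap are illustrative only.

```javascript
// Plain-JavaScript sketch of the single-call loading technique:
// one response object carries both record sets, and each is
// copied into its own local array.
var configMap = { employees: [], maritalStatus: [] };

function onSuccessReadSketch(data) {
    if (data.Employees) {
        // copy the employee records into the local cache
        configMap.employees = data.Employees.slice();
    }
    if (data.MaritalStatus) {
        // copy the reference data received in the same response
        configMap.maritalStatus = data.MaritalStatus.slice();
    }
}
```

One GET request fills both caches; in the article's app, each cache then feeds its own DataSource through that DataSource's transport.read() function.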
Listing 4: Filtering data on the fly

function handleFilterChange(e) {
    var value = $(e.target).val();
    if (value.length > 0) {
        getKendoGrid().dataSource.filter({
            logic: "or",
            filters: [
                {
                    field: "FirstName",
                    operator: "contains",
                    value: value
                },
                {
                    field: "LastName",
                    operator: "contains",
                    value: value
                }
            ]
        });
    } else {
        getKendoGrid().dataSource.filter({});
    }
}

Make use of JavaScript self-executing anonymous functions to isolate your code and protect it.

Configuring the Form

The code defines another function, the configureForm()
function. This function initializes some Kendo UI widgets
with the proper settings, as shown in the app.js module. (The
source code for the app.js module can be found in the
downloadable source code package.) The section that
configures the Marital Status DropDownList is remarkable
and worth a discussion. The next code snippet shows how
the DropDownList widget gets bound to the maritalStatusDs
DataSource. The user can now select any Marital Status
record received from the server and displayed by the
Marital Status DropDownList.

$container.find("select[id='MaritalStatus']")
    .kendoDropDownList({
        dataTextField: "Code",
        dataValueField: "Id",
        dataSource: configMap.maritalStatusDs,
        optionLabel: {
            Code: "Select Marital Status",
            Id: -1
        }
    });

Saving and Editing Employee Records

The user clicks the Add new employee button and the
form appears. The user fills in the Employee information (as
you saw back in Figure 4) and clicks the Save button. The
user uses the same form to edit an existing record: the user
selects a row on the Grid, and the form shows and renders
the selected row's information. When the user clicks the
Save button, the code handles saving the Employee record,
as shown in the next code snippet. Once again, the user
either edits an existing record or creates a new record from
scratch.

function handleSaveChanges(e) {
    // get uid for currently displayed row
    var currentUid = getUid();

    // update employee record in DataSource
    updateEmployee(currentUid);

    reset(true);

    // save changes to server using DataSource
    getKendoGrid().saveChanges();
}

The function reads the uid field attached to the Form's
.data() attribute. If the user edits an existing record, the
uid field contains the uid of the currently edited record.
The uid field's value is undefined for a newly added
Employee record. The updateEmployee() function is
then called, passing the value of the uid variable to it as
a parameter, as shown in Listing 3.

The function checks the uid parameter for a valid value. If
it exists, it uses the value of the uid parameter to retrieve
the corresponding Employee record from the employeesDs
DataSource. Otherwise, it initializes a new model and adds
it to the employeesDs DataSource. In both cases, the end
result is a valid Model object, either newly initialized or
populated with an existing Employee record. The form now
binds to the aforementioned model. The addEmployeeToDataSource() function uses the DataSource API to manipulate its data, as shown in the next code snippet.

function makeEmployeeModel() {
    var model = new configMap.EmployeeModel({
        EmployeeId: 0,
        MaritalStatus:
            new configMap.ReferenceModel({})
    });
    return model;
}

function addEmployeeToDataSource() {
    var model = makeEmployeeModel();
    configMap.employeesDs.add(model);
    return model;
}

The addEmployeeToDataSource() function calls the
makeEmployeeModel() function, which returns a newly
initialized Employee record. The code then calls the
employeesDs.add() function to add the newly created
record. As a result, the employeesDs DataSource manages
the new record and assigns it a new uid value to uniquely
identify it. The Grid widget automatically reflects the changes
of its corresponding DataSource by rendering a new
row for the newly added record. You manipulate the Grid
widget and its corresponding DataSource with a single
and simple API call to implement the intended behavior.
The function finishes creating/updating the record inside
the employeesDs DataSource and then resets the screen.
Finally, the code calls the Grid's saveChanges() function,
as shown in the next code snippet.

getKendoGrid().saveChanges();

The getKendoGrid() function retrieves the instance of
the Grid widget found on the HTML page. It then calls
the Grid's saveChanges() function. Internally, this
function triggers the bound DataSource to execute either
the transport.create() or transport.update() function.
For instance, if the user adds a new record, the
transport.create() function executes; otherwise, the
transport.update() function executes.
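The get-or-create decision at the heart of updateEmployee() can be sketched with a plain array standing in for the Kendo DataSource. The getByUid and getOrCreate names below are invented for this illustration; the point is only the branching logic: look the record up by uid when one is supplied, otherwise create and register a new one.

```javascript
// Sketch of the get-or-create decision in updateEmployee(),
// using a plain array in place of the Kendo DataSource.
var employees = [];
var uidCounter = 0;

function getByUid(uid) {
    // find the record carrying this uid, or null if none exists
    return employees.filter(function (r) {
        return r.uid === uid;
    })[0] || null;
}

function getOrCreate(uid) {
    var model = uid ? getByUid(uid) : null;
    if (!model) {
        // new record: add it and assign a fresh uid,
        // as employeesDs.add() does in the article
        model = { uid: "uid-" + (++uidCounter), FirstName: "" };
        employees.push(model);
    }
    return model;
}
```

Calling getOrCreate(undefined) adds a new record, while calling it again with that record's uid returns the same object without growing the collection, mirroring how the form binds to either an existing or a freshly added Model.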

Grid Filtering

Besides the many extended and custom functionalities that
can be added, the code implements another feature using the
Kendo UI API: it allows the user to filter the Grid widget's
data based on keywords. The user types a few keywords
into a textbox and the Grid gets its rows filtered accordingly,
as shown in Figure 3. The figure shows a free-input textbox
that says "filter by first or last name." The code in Listing 4
shows how this feature is implemented using simple API calls.
The function starts by reading the text that the user types
in, and then it calls the Grid's DataSource .filter() function,
passing some filtering settings to it. In this case,
the code uses a filter to find all records whose FirstName or
LastName field contains the text entered by the user.
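The "or" logic with two "contains" clauses is easy to picture as plain JavaScript over an array. This sketch is the conceptual equivalent of the filter Listing 4 hands to the DataSource, not Kendo's code; the filterEmployees name is hypothetical.

```javascript
// Plain-JavaScript equivalent of the Listing 4 filter: keep
// records whose FirstName OR LastName contains the typed text.
function filterEmployees(records, value) {
    if (value.length === 0) {
        return records.slice(); // empty filter: everything passes
    }
    var needle = value.toLowerCase();
    return records.filter(function (r) {
        return r.FirstName.toLowerCase().indexOf(needle) >= 0 ||
               r.LastName.toLowerCase().indexOf(needle) >= 0;
    });
}
```

An empty string clears the filter, just as the article's handler calls dataSource.filter({}) when the textbox is emptied.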

Telerik offers a rich API that gives you the upper hand in building customized and advanced solutions using their Kendo UI Web components and widgets.

It would take many more pages to go through all of the code
details in the app.js module, which is why the source code
for the app.js module is included in the downloadable
source code package.

Conclusion
I introduced some of the most usable and needed widgets
offered by Telerik Kendo UI. I also showed how the API can
be used together with the widgets to enable advanced
scenarios that are valuable and much more realistic than what
Telerik demos online. I recommend that you go through their
documentation for a much deeper knowledge and understanding of the power of the API and the functionality it offers.



Bilal Haidar


CODE COMPILERS
(Continued from 74)
the cultural differences between Japan and the
US simply made their model impractical. Right or
wrong, by 2000, much of the fascination with the
Japanese model had disappeared.)

Theory X offered a very basic (and emotionally infantile) view of motivating employees, but Theory
Y didn't offer much in the way of guidance. "Cross
your fingers and pray" was not well-received. As a
new manager with a growing number of employees looking to me for guidance and direction, it
was a little disappointing to find so little in the
way of actionable points.
What's worse, psychology then began to discover
that some of the traditional tools of motivation
were, in fact, demotivating.

The Candle in the Box

As Daniel Pink describes in the book Drive, psychologists conducted a simple experiment: Given
a box of matches, a box of thumbtacks, and a
candle, participants were asked to figure out a
way to attach the candle to the wall. The original
problem was a cognitive performance test, to
see how well participants could figure out how to
make use of all of the resources given to them, but
successive studies made an interesting discovery.
When participants were offered no incentive, they
would spend about twice as long working on the
problem as when they were offered a cash incentive.
That's not a typo: Offering cash reduced the
amount of time participants were willing to work
on the problem before giving up.
Many managers have long understood that offering a cash
bonus that translates to a ridiculously low figure relative
to an employee's effort ("Thanks for putting in those two
months of overtime; as a gesture of our appreciation, here's a
$25 gift card") is considered more insult than reward.
What we didn't realize is that offering any
sort of cash incentive, regardless of size, actually
demotivates people to work on a problem.

Suddenly, one of the classic tools of motivation,
offering cash bonuses, promotions, and other
cash-like benefits, wasn't nearly as effective as
believed. What was the newly-minted manager to
do?

Motivation: Not an Amount

In 2014, Susan Fowler published Why Motivating
People Doesn't Work...and What Does, in which she
addresses this question much more holistically,
starting with the basic question: What does
motivation even mean?

Back to my opening question. Are you motivated
to read this book? This is simply the wrong question.
Instead of asking if you are motivated, I
need to ask a different question to reveal your
reasons for acting. Thinking of motivation as having
the energy or impetus to act fails to convey the
essential nature of human motivation. It does nothing
to help you understand the reasons behind the action.

An important truth emerges when we explore
the nature of motivation. People are always motivated.
The question is not if, but why, they are
motivated.
In other words, people always have some kind
of motivation to do the things they do, either
positive or negative. The employee who's cranking
out thousands of lines of code every day?
They're obviously motivated, sure, but why? What
drives them? More importantly, the employee
who doesn't seem to be getting any code checked
in for days on end? They're motivated too, but
they're obviously motivated to be doing something
other than what you're expecting them to
be doing. Why? Susan Fowler says:

One of the primary reasons motivating people
doesn't work is our naïve assumption that motivation
is something a person has or doesn't have.
This leads to the erroneous conclusion that the
more motivation a person has, the more likely she
will achieve her goals and be successful. As with
friends, it isn't how many friends you have; it is
the quality and type of friendships that matter.

A whole new world around motivation is suddenly
opening up here. It's not about trying to force
some kind of emotional state into your employees;
instead, it's about understanding what their
motivation is, and how to tap into it.

Summary
There's more to explore here; it's a 200-page
book, after all. But even just the core above
helps begin a long-overdue correction around
how we manage software developers. I'll use the
next column to talk about the six different forms
of motivation (three optimal, three sub-optimal),
but for the moment, if you're struggling with the
basic question of "How do I motivate my team to
do better?" you're asking the wrong question.
Instead, ask, "Why are they motivated to perform as
they do?" and be prepared to listen to the answer.



Ted Neward

Nov/Dec 2015
Volume 16 Issue 6
Group Publisher
Markus Egger
Associate Publisher
Rick Strahl
Editor-in-Chief
Rod Paddock
Managing Editor
Ellen Whitney
Content Editor
Melanie Spiller
Writers In This Issue
Jason Bender
Bilal Haidar
Ted Neward
John Petersen
Mike Yeager

Kevin S. Goff
Sahil Malik
Jeffrey Palermo
Paul Sheriff

Technical Reviewers
Markus Egger
Rod Paddock
Art & Layout
King Laurin GmbH
info@raffeiner.bz.it
Production
Franz Wimmer
King Laurin GmbH
39057 St. Michael/Eppan, Italy
Printing
Fry Communications, Inc.
800 West Church Rd.
Mechanicsburg, PA 17055
Advertising Sales
Tammy Ferguson
832-717-4445 ext 026
tammy@codemag.com
Circulation & Distribution
General Circulation: EPS Software Corp.
International Bonded Couriers (IBC)
Newsstand: Ingram Periodicals, Inc.

Media Solutions
Subscriptions
Subscription Manager
Colleen Cade
832-717-4445 ext 028
ccade@codemag.com
US subscriptions are US $29.99 for one year. Subscriptions
outside the US are US $44.99. Payments should be made
in US dollars drawn on a US bank. American Express,
MasterCard, Visa, and Discover credit cards are accepted.
The "Bill me" option is available only for US subscriptions. Back
issues are available. For subscription information, email
subscriptions@codemag.com
or contact customer service at
832-717-4445 ext 028.
Subscribe online at
www.codemag.com
CODE Developer Magazine
6605 Cypresswood Drive, Ste 300, Spring, Texas 77379
Phone: 832-717-4445
Fax: 832-717-4460



MANAGED CODER

On Motivation
For an industry that prides itself on its analytical ability and abstract mental processing, we often
don't do a great job applying that mental skill to the most important element of the programmer's
tool chest, that is, ourselves. A while back, I wrote a column entitled "On Passion." In it, I said:

"For a few years now, as I've interviewed at various companies, whether for a full-time position or as a consultant or contractor, one of the common questions that repeatedly comes at me is 'What are you passionate about? What's your passion?' It's everywhere, it seems, and it's the secret sauce to success in the profession. It's what defines us. It's what motivates us. If you lack it, you should quit and take up something else."

This is part of the reason why I think companies are spending so much effort trying to hire people with passion: it means that they're self-motivating. It means that they'll be willing to work through whatever obstacles arise. It means that they'll learn whatever they need to learn, explore whatever they need to explore, master whatever they need to master to get the job done. And, more importantly, it means that I, their manager, won't have to figure out how to do the most difficult task in management: motivate them.

The Silent (Career) Killer


Ask any manager, recent or long-time, regardless of industry, what the most difficult task they face on a daily basis is, and it's very likely (85%, anecdotally speaking) that the response will be "Motivating my team." Browse through the Management section of the bookstore, and you'll find dozens of titles with "motivation" or "motivating" somewhere in the title or subtitle. Hundreds more will have it in the description on the back cover or inside flap. Thousands touch on the subject in a more indirect fashion.

It's not too surprising that this is a subject of so much study, because in so many ways, it represents the difference between a high-performing team and a substandard one. A team that fails to measure up to expectations is often assumed to be the manager's fault. Failing to motivate the team has killed many a manager's career.

Or, paradoxically, made one. After all, motivation solves so many problems at once. Many, if not all, of the problems that management will face with a team that isn't performing adequately go away when that team is highly motivated (dare I say, passionate). They'll train themselves, they'll work overtime, they'll go the extra mile, and they'll hold themselves to the higher standard of quality that so many companies try to establish via metrics.

Yet, for all these words and pages sacrificed to the subject, motivation remains stubbornly nondeterministic: tactics that worked with one team fall flat with another, and ideas that seem to resonate in one organization never even get past a cursory glance in another. It's the never-ending nightmare of management: "How do I motivate my people?"

Theory

Not surprisingly, theories of how to motivate employees run a wide range across the psychological spectrum. Many people start from Abraham Maslow's Pyramid of Needs, which states that human beings act to fulfill their needs in a very specific order: physiological (air, water, food, sleep), safety (both from physical harm, such as protection from the weather, and sociological harm, such as protection from poverty), social (friends, a sense of connectedness or belonging, receiving love), esteem (feeling important and respected), and then self-actualization (the instinctive need to fulfill your full potential). Supposedly, as the theory goes, a job helps fill several levels on the Maslow pyramid: safety (a job brings money, protecting against poverty), social (a job brings people around you, creating friends and connectedness), and esteem (a good job means you receive respect). Early theories of motivation therefore centered pretty strongly around Maslow's pyramid.

Much of this Maslow-driven motivational thinking was driven by the 1960 book The Human Side of Enterprise, by Douglas McGregor. He set out two competing and contrasting theories of motivation which, in a fit of stunning originality, he referred to as "Theory X" and "Theory Y," and which we can summarize somewhat colloquially as "Bad Cop" and "Good Cop."

Theory X. Under Theory X, people only work to satisfy their basic physiological needs. They dislike work and seek to avoid it. They want direction, not responsibility. They want to feel secure at work, but given their intrinsic desire to avoid it, they need to be controlled, and sometimes threatened with punishment, to be coerced to work hard.


Under this theory of work, it becomes apparent that the manager's job now basically whipsaws between carrot and stick. The manager offers incentives to get the employees to work hard (cash bonuses, benefits, pay increases, and so forth) and sometimes puts their jobs on the line by threatening to terminate them, either directly or indirectly ("If we don't ship this project on time, I don't know if our department can stick around for much longer").

It's a simple theory, but as McGregor pointed out, it has a built-in expiration date. As soon as employees feel their physiological and safety needs being met (they make enough money), there's no tool left to manage them. And in the face of repeated threats to that safety, most employees will learn very quickly to build their own cushion, in the form of savings, or start looking for other safe places to hide, in the form of a new job somewhere else.

Theory Y. This is a more nuanced model. It assumes that employees actually want to work, and can be self-directing along lines that align with the company's own interests. They'll commit if their higher needs (social and esteem) are met, and actively embrace responsibility. In other words, employees can be expected to bring imagination and creativity to the job because they are somehow motivated to do so.

Trying to create this environment became the goal of a large number of companies, but it was a devilishly difficult goal to achieve; after all, it's not easy to look around the team room and say, "OK, I need all of you to socially accept and respect one another, or you're all fired!" In some ways, the only thing management felt it could do was step away from the more authoritarian Theory X model, offer up some greater independence (calling it "empowerment" or "horizontal team structure"), cross their fingers, and pray.

(When the Japanese industrial resurgence of the 1980s showed up, they demonstrated a new model of employee/employer relations that threw both Theory X and Theory Y out the window. Sometimes called "Theory Z," it was the subject of much study before many management gurus concluded that

(Continued on page 73)
