What a Nightmare!


Ever heard of NightmareJS? Me neither, until today. Just in time, too: I was beginning to pull my hair out using PhantomJS to run a suite of JavaScript unit tests.

The problem with PhantomJS

During my Visual Studio Team Services build I was experiencing quite a few sporadic failures where PhantomJS failed to connect. As a result, none of my unit tests were run and my build failed (a good thing, since the tests didn’t run). After many hours of trying to figure out which combination of Node, Karma, Jasmine, and PhantomJS could be causing this, I decided to look for another solution.

The PhantomJS Error of Death (PEoD)

Here is the exact error that we were receiving:

09 03 2017 10:28:54.632:INFO [karma]: Karma v0.13.22 server started at http://localhost:22200/
2017-03-09T16:28:54.6322872Z 09 03 2017 10:28:54.632:INFO [launcher]: Starting browser PhantomJS
2017-03-09T16:28:56.3259069Z 09 03 2017 10:28:56.325:INFO [PhantomJS 2.1.1 (Windows 8 0.0.0)]: Connected on socket -I7m1ca0zjoJo8roAAAA with id 12617843
2017-03-09T16:29:26.3329205Z 09 03 2017 10:29:26.332:WARN [PhantomJS 2.1.1 (Windows 8 0.0.0)]: Disconnected (1 times), because no message in 30000 ms.
2017-03-09T16:29:26.3509918Z [10:29:26] 'test-minified' errored after 32 s
2017-03-09T16:29:26.3509918Z [10:29:26] Error: 1
2017-03-09T16:29:26.3509918Z     at formatError (D:\A1\_work\22\s\Source\WebUI\node_modules\gulp\bin\gulp.js:169:10)
2017-03-09T16:29:26.3509918Z     at Gulp.<anonymous> (D:\A1\_work\22\s\Source\WebUI\node_modules\gulp\bin\gulp.js:195:15)
2017-03-09T16:29:26.3509918Z     at emitOne (events.js:82:20)
2017-03-09T16:29:26.3509918Z     at Gulp.emit (events.js:169:7)
2017-03-09T16:29:26.3509918Z     at Gulp.Orchestrator._emitTaskDone (D:\A1\_work\22\s\Source\WebUI\node_modules\orchestrator\index.js:264:8)
2017-03-09T16:29:26.3509918Z     at D:\A1\_work\22\s\Source\WebUI\node_modules\orchestrator\index.js:275:23
2017-03-09T16:29:26.3509918Z     at finish (D:\A1\_work\22\s\Source\WebUI\node_modules\orchestrator\lib\runTask.js:21:8)
2017-03-09T16:29:26.3509918Z     at cb (D:\A1\_work\22\s\Source\WebUI\node_modules\orchestrator\lib\runTask.js:29:3)
2017-03-09T16:29:26.3509918Z     at removeAllListeners (D:\A1\_work\22\s\Source\WebUI\node_modules\karma\lib\server.js:336:7)
2017-03-09T16:29:26.3509918Z     at Server.<anonymous> (D:\A1\_work\22\s\Source\WebUI\node_modules\karma\lib\server.js:347:9)
2017-03-09T16:29:26.3509918Z     at Server.g (events.js:260:16)
2017-03-09T16:29:26.3509918Z     at emitNone (events.js:72:20)
2017-03-09T16:29:26.3509918Z     at Server.emit (events.js:166:7)
2017-03-09T16:29:26.3509918Z     at emitCloseNT (net.js:1537:8)
2017-03-09T16:29:26.3509918Z     at nextTickCallbackWith1Arg (node.js:431:9)
2017-03-09T16:29:26.3509918Z     at process._tickCallback (node.js:353:17)

Nightmare to the rescue

Nightmare is a very nicely documented, human-readable browser automation API. Under the hood it uses Electron, which is supposed to be about twice as fast as PhantomJS. I’m sold!

Why did I use it?

I was previously using PhantomJS through Karma to run all of our JavaScript unit tests, but we kept getting sporadic failures where Phantom couldn’t connect.

How to use Nightmare with karma

  1. Install the Nightmare launcher

yarn add -D karma-nightmare

  2. Add Nightmare as your karma browser, either within your karma config or within your karma API call. Here’s an example using the Karma API directly.
var Server = require('karma').Server;

function runKarma(files, reporters, browser, singleRun, done) {
    log('starting karma');

    var server = new Server({
        port: 9999,
        browsers: ['Nightmare'],
        files: files,
        singleRun: true,
        logLevel: 'info',
        captureTimeout: 30000,
        browserNoActivityTimeout: 30000,
        frameworks: ['jasmine']
    }, function (err) {
        handleKarmaError(err, done);
    });

    server.start();
}

Microsoft //Build 2016

Why //Build

I am fortunate enough to work for an employer that cares deeply about keeping up with technology and about training its developers to do so. My employer is also a Microsoft shop, so we already have an interest in the Microsoft stack and what the future has in store for it.

Personally, I had never been to //Build or any other Microsoft-only conference before, but I’ve definitely kept up with past conferences via the Channel 9 live stream. Knowing that this is the one big conference where Microsoft typically unveils its latest and greatest developer news, I was all in when approached with the opportunity to attend.

Specific Interest

The team that I currently lead uses the following technologies from Microsoft:
– Asp.Net Web API 2
– Asp.Net Web Pages
– EntityFramework v6
– SQL Server 2012
– C# 6

With those tools in hand, I was definitely interested in hearing, seeing, and learning what Microsoft has in mind for Asp.Net Core 1.0 and EntityFramework Core 1.0.

My Schedule

Day 1

Day 2

Day 3


All in all, this was an excellent conference that I would love to attend again if given the opportunity. In particular, I see it as an excellent way to be exposed to upcoming tech and to start building your bank of possible tools for your next projects!

Git – Command Line Cheat Sheet

This is my personal brain dump of common git commands that I forget from time to time. I will continue to add to this list as I come across commands that I need to look up.

Checkout with remote tracking

git checkout -t origin/<branch name>

Squash Commits

git rebase -i HEAD~<number of commits to squash>

Find missing commits

Reference logs, or “reflogs”, record when the tips of branches and other references were updated in the local repository. Reflogs are useful in various Git commands, to specify the old value of a reference. For example, HEAD@{2} means “where HEAD used to be two moves ago”, master@{one.week.ago} means “where master used to point to one week ago in this local repository”, and so on. See gitrevisions[7] for more details.

git reflog

Behavior-Driven Development – Crash Course


Behavior-Driven Development (BDD) all started from a simple blog post by Dan North in 2006. I’ll let you read the post for the detailed description, but the tl;dr is that he was looking for a better way to teach and communicate the practices of Test Driven Development (TDD). So, BDD was formed to bridge the gap between what should be tested and how to better communicate requirements between the business and developers.


The premise of BDD is to simplify the communication between business owners and developers by using examples to explain acceptance criteria. The examples provide clarification about the specific behavior that the business owner expects to see from a feature, and in BDD they also serve as the “Acceptance Criteria” for the story. In its simplest form, I usually break the BDD process into the following 3 steps.

  1. Three Amigos story huddle to discuss examples of what should be built
    • Business Analyst
    • Quality Assurance Engineer
    • Developer
  2. Define each example (aka scenario) using the Gherkin Syntax
    GIVEN I navigate to the insurance policy 1234
    WHEN the policy owner information is shown
    THEN I am able to choose my coverage level
  3. Automation of the scenarios


There are many BDD tools available to parse Gherkin scenarios and create test cases out of them. For example, my development team uses SpecFlow because our acceptance tests are written in .Net. SpecFlow simply parses the Gherkin to generate NUnit tests, which can then be run via the typical NUnit console runner. The beauty of these tools is that they allow non-technical folks to create the acceptance criteria using business (not technical) language.
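As a toy illustration of the very first step such a tool performs, here is a sketch that splits a Gherkin scenario into keyword/text pairs; real tools like SpecFlow do far more than this:

```javascript
// Toy sketch: parse Gherkin lines into (keyword, text) steps that a
// BDD tool could later bind to test code. Illustrative only.
var scenario = [
    'GIVEN I navigate to the insurance policy 1234',
    'WHEN the policy owner information is shown',
    'THEN I am able to choose my coverage level'
];

function parseStep(line) {
    var match = line.match(/^(GIVEN|WHEN|THEN)\s+(.+)$/);
    return { keyword: match[1], text: match[2] };
}

var steps = scenario.map(parseStep);
console.log(steps[0].keyword); // "GIVEN"
```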

Tool Links

Learning TypeScript – To “Type” or not to “Type”?


Two years ago I attended a very good talk about using TypeScript and CoffeeScript to generate your JavaScript code. At the time I had barely written any JavaScript, so even though the talk was very interesting and the tools were awesome, I decided to wait until I was more comfortable with JavaScript.

Fast forward to now: I have a good year of experience with the MEAN stack, which means I have a much better feel for the JS language. I have enough experience to know that I miss the static typing C# provides when I’m not working on a MEAN application. In case you weren’t aware, I’m a .Net developer by day and a JS developer by night.

The tools

As I went through the TypeScript tutorial on TypeScriptLang.org, I noticed there was an npm package for installing the TypeScript compiler (tsc).

The compiler

npm install -g typescript

Definitely Typed d.ts files

One of the most beneficial features of TypeScript is the type definition files. You can check them out at DefinitelyTyped.org, along with a tool to install them very easily from the command line. The tool I’m talking about is called tsd. Stay tuned for a quick-start guide to tsd.

The editor

My three favorite editors for JavaScript are Atom, WebStorm, and now Visual Studio Code. The best news is that each one of these tools has excellent TypeScript support. The newcomer here is Visual Studio Code, but it is gaining a lot of traction in the developer community. And guess what: VS Code was written in TypeScript.

The test app

So after a few tutorials I decided that I needed to start learning TypeScript by writing an Angular app. You can find this very simple app on GitHub. The application includes the following Angular features implemented with TypeScript:

  • Controllers
  • Services
  • Routing

Still to come on this repo are a TypeScript directive and unit tests. And don’t worry, I plan to rewrite the API in Node.

The verdict

After writing the Todo app, I’ve decided that learning TypeScript was a great idea. The main benefit I saw was the excellent IntelliSense when using the Angular libraries and even my own classes. Additionally, the support for ES6 classes and modules is very nice as well.

Structuring an Express MVC API – Part 2


In the first post we learned a bit about the MVC pattern itself. Now that we have the pattern down, let’s see how we can structure our application to follow it. Keep in mind there are many ways to structure an MVC application and this is just one of them. Also, since our application is relatively simple, we will be organizing it by type instead of by feature. In larger projects, organizing your files by feature is a nice alternative, but for a simple project it can be overkill.

However, the most important thing to remember is that whatever structure you choose, you stick to it consistently.

Application Structure

The application structure is best introduced by the image below:


As you can see, the application is nicely laid out into two main folders under the src directory (client and server). Below I will discuss the purpose of each of the server folders so that you have a good understanding of how this all fits together.


The server directory is pretty self-explanatory. It’s the home of all server-side code, and it contains the main server-side entry point, server.js. This file is very small; its purpose is simply to serve as the entry point for our application.


process.env.NODE_ENV = process.env.NODE_ENV || 'development';

var express = require('./config/express');
var mongoose = require('./config/mongoose');

var db = mongoose();
var app = express();

// start accepting requests
app.listen(3000);

module.exports = app;
console.log('Server started at http://localhost:3000');

List of folders under the server directory:


config

This directory is home to all of the application configuration code. Two main configuration items that I like to have in this directory are:

  • express.js
  • mongoose.js

The express.js file contains all of the express configuration code.

Here is a snippet from express.js.

var express = require('express');
var morgan = require('morgan');
var compress = require('compression');
var bodyParser = require('body-parser');
var methodOverride = require('method-override');

module.exports = function () {
    var app = express();

    if (process.env.NODE_ENV === 'development') {
        // log all requests
        app.use(morgan('dev'));
    } else {
        app.use(compress());
    }

    app.use(bodyParser.urlencoded({
        extended: true
    }));
    app.use(bodyParser.json());

    // support for PUT and DELETE verbs
    app.use(methodOverride());

    app.set('views', './src/server/views');
    app.set('view engine', 'ejs');

    return app;
};


As you can see we register all the express middleware and initialize each of our routes from this file.


mongoose.js

There are two responsibilities of this file:

  • Connect to MongoDB
  • Initialize our Model objects

Here is the file in its entirety.

var config = require('./config');
var mongoose = require('mongoose');

module.exports = function () {
    var db = mongoose.connect(config.db);

    // load models here

    return db;
};

The env directory is where the environment-specific server configuration lives. At this time we have a development and a production file, which are loaded based upon the process.env.NODE_ENV variable. As you can see from config.js, we concatenate the node environment with the require path so that when we require('./config') we get the correct config settings for the currently running environment.

module.exports = require('./env/' + process.env.NODE_ENV + '.js');

Having an environment specific config is a best practice and will help out when deploying our application to a production server.
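The same environment-selection idea can be sketched without touching the filesystem; everything below (names and values alike) is illustrative:

```javascript
// Hypothetical sketch: pick a config object by NODE_ENV, with the
// file lookup replaced by an in-memory map (values are made up).
var configs = {
    development: { db: 'mongodb://localhost/todo-dev', port: 3000 },
    production:  { db: 'mongodb://localhost/todo', port: 80 }
};

function loadConfig(env) {
    // fall back to development, mirroring the NODE_ENV default in server.js
    return configs[env] || configs.development;
}

console.log(loadConfig('production').db);
```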


controllers

As I mentioned in the summary section, our application structure is organized by type, so as you have probably figured out, the controllers directory contains all of the controllers. Currently our ToDo application has two controllers: index.controller.js and todo.controller.js. The naming convention that I like to use is to add the word controller to the file name so that I can re-use both index and todo in other file names but still be able to use IDE navigation shortcuts to clearly see which files I’m navigating.


models

You can probably guess where this is going by now: the models directory does just what it says. All models are defined and stored in this directory.

ToDo Model Example:

var mongoose = require('mongoose');
var Schema = mongoose.Schema;

var ToDoSchema = new Schema({
    description: {type: String},
    dueDate: {
        type: Date,
        default: Date.now
    },
    isComplete: {type: Boolean, default: false}
});

mongoose.model('Todo', ToDoSchema);

In another post I will dive a lot deeper into Mongoose, which is what we are using to create our Model objects.


routes

Routes are the core of any API, and one of the conventions that I typically use is to begin each route with /api. The reason that I do this is to easily distinguish between API and non-API routes.

In our MVC API we don’t have any logic in the routing files. Instead, the routes are simply in charge of delegating to the correct controller functions.


var controller = require('../controllers/todo.controller.js');

module.exports = function (app) {
    app.route('/api/todos')
        .get(controller.list)
        .post(controller.create);
    app.route('/api/todos/:id')
        .get(function (req, res) { res.json(req.todo); });
    app.param('id', controller.todoById);
};

As you can see, on the final line we call app.param, which runs before any of the other middleware. By the time our routing code runs, the :id has already been parsed from the request path and the matching document added to the request object as req.todo. Once that is complete, all of the /api/todos/:id routes have the req.todo object available.

Here is the magic behind the controller.todoById function:

exports.todoById = function (req, res, next, id) {
    Todo.findOne({_id: id}, function (err, todo) {
        if (err) {
            return next(err);
        } else {
            req.todo = todo;
            next();
        }
    });
};

In the above snippet we simply use the Mongoose findOne function to find our Todo by _id in Mongo. Once it is successfully found, we add the object to the req object and call the next middleware (which happens to be whichever of the routes required the :id).
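If the middleware-ordering part feels abstract, here is a toy sketch of the same flow using plain functions; none of these names are real Express internals, and the in-memory data is made up:

```javascript
// Minimal sketch of the app.param idea: a param handler runs first and
// attaches the looked-up object to req before the route handler fires.
var todos = { '1': { description: 'Write blog post', isComplete: false } };

// runs first: look up the model and attach it to the request
function todoById(req, next, id) {
    req.todo = todos[id];
    next();
}

// runs second: the route handler just hands the object to the "view"
function readTodo(req, res) {
    res.body = req.todo;
}

// simulate a request for /api/todos/1
var req = { params: { id: '1' } };
var res = {};
todoById(req, function () { readTodo(req, res); }, req.params.id);
console.log(res.body.description); // "Write blog post"
```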


views

The final directory that we have is the views directory, and as you can guess this is where the view code lives. What view code, you might ask? Well, even in our simple API app we still have one static view for the main index page, and this is where that file is held.


<!doctype html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>Todo It!</title>
</head>
<body>
    <h1>It's already <%= now %> better get something done!</h1>
    <div class="todo-container">
        <input type="text" placeholder="What should I get done?"/>
        <input type="submit"/>
    </div>
</body>
</html>
In the above code you may have noticed the <%= now %>. This is simply the syntax required by our server-side view engine, EJS.
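As a toy illustration of what a view engine does with those placeholders (the real EJS compiles templates and HTML-escapes output, among much else), consider:

```javascript
// Toy sketch of <%= … %> interpolation, not the real EJS implementation:
// replace each placeholder with the matching property from the locals.
function render(template, locals) {
    return template.replace(/<%=\s*(\w+)\s*%>/g, function (_, name) {
        return locals[name];
    });
}

console.log(render("It's already <%= now %>", { now: '4:00 PM' }));
// "It's already 4:00 PM"
```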

Final thoughts

This two-part series covered quite a bit of content without going into much depth, because topics like MongoDB, Mongoose, Express routing, and EJS could each be several posts in themselves. Hopefully this still gave you an idea of how to structure a simple Express API using MVC.

Last but not least, we didn’t give AngularJS any love in this post, but rest assured we will be getting into a lot of Angular in future posts.

Structuring an Express MVC API – Part 1


You may be thinking “Huh? MVC for an API?” Well yes, we are discussing MVC for an API. This is part one of a mini-series of posts discussing how to set up a simple Express API using the MVC pattern. In this part I will give you a brief introduction to the MVC pattern itself as well as a sneak peek at some of the code in the Todo sample app.

Why learn about this?

Even though you are creating a RESTful API, it is still very important to have a well-thought-out structure for your application. In today’s post I will discuss an excellent way of separating your Express code so that it is nicely organized and easy to extend.

All code from this post is located on my GitHub page.

What is MVC?

As you may have guessed an MVC application consists of three fundamental parts:
– Model
– View
– Controller

The pattern itself is very common and can be applied at both the project level and the sub-project level. For example, the front-end of a MEAN application is written in AngularJS, which lends itself to using an MVC pattern in the client tier alone. Then, when you implement the back-end, you apply the pattern again, as I will show you.


The Model

Models simply represent the data. An example of a model would be any POJO (Plain Old JavaScript Object). Let’s say that we are creating a ToDo application (Nice! We are creating a Todo app). In our application the Model is a simple Todo object. Since this is a MEAN blog, our model is stored in MongoDB and retrieved via Mongoose. Both Mongoose and Mongo will be discussed in future posts.

Here is an example of a Model from our Todo application.

var mongoose = require('mongoose');
var Schema = mongoose.Schema;

var ToDoSchema = new Schema({
    description: String,
    dueDate: {
        type: Date,
        default: Date.now
    },
    isComplete: {type: Boolean, default: false}
});

mongoose.model('Todo', ToDoSchema);


The View

As I mentioned, we are building an Express API, so in this case the view is simply the HTTP response returned by our API. Although, since we are creating a Todo application, we will also have plain HTML views.

So there you have it. The View in MVC conceptually represents the display of our data whether that is HTML, JSON, or something else.

Here is the JSON view of a Todo:

{
    "__v": 0,
    "description": "Write blog post",
    "_id": "54c269e6f08121ff125cae74",
    "isComplete": false,
    "dueDate": "2015-01-23T15:33:58.078Z"
}
You can disregard the _id and the __v, since these will be discussed in an upcoming MongoDB post.
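To make the “same Model, different Views” idea concrete, here is a tiny sketch; both view functions and the sample data are made up for illustration:

```javascript
// Sketch: one Model rendered by two different "Views"
var todo = { description: 'Write blog post', isComplete: false };

function jsonView(model) {
    return JSON.stringify(model);                 // the API response body
}

function htmlView(model) {
    return '<li>' + model.description + '</li>';  // an HTML fragment
}

console.log(jsonView(todo));
console.log(htmlView(todo));
```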


The Controller

Controllers are the core of the MVC pattern. You can think of them as the glue between the Model and the View; the Controller is the only layer which has knowledge of the other two.

The main purpose of the Controller is to retrieve data from the Model and hand it back over to the View. Additionally if there is any logic that needs to be applied this is the place to do that as well.

Here is a quick snippet of a Controller using Mongoose to retrieve the document data.

var Todo = require('mongoose').model('Todo');

exports.create = function (req, res, next) {
    var todo = new Todo(req.body);
    todo.save(function (err) {
        if (err) {
            return next(err);
        } else {
            res.json(todo);
        }
    });
};

exports.list = function (req, res, next) {
    Todo.find({}, function (err, todos) {
        if (err) {
            return next(err);
        } else {
            res.json(todos);
        }
    });
};
In the above example the Model is represented by the Todo mongoose object, and the View is represented by the JSON sent in the response object.

Up Next

In the next post we will dig into how you can start organizing your application to utilize this pattern.