The Worst Week of My Life

In January I had the worst week of my life. My wife and I joked that we wanted to restart 2016 in February. Within a single week I lost my job and we had a miscarriage. In this post I want to tell you about this traumatic experience, how this massive change has turned out to be positive, and what I learned from it.

Before

I started this year with all the optimism in the world. Throughout the holidays leading up to 2016, I tried to refocus myself. The year started with new habits and influences which improved my outlook on life. I was determined to be more positive and work hard at doing a great job.

This was a stark contrast to the previous several months which were filled with ups and downs.

My time with the Exterminators was a blast. I felt re-energized and excited about software development again. While with the team I found new growth areas and doubled down on existing skills. This left me wanting more.

Returning to my previous team was jarring. When I left I was excited about a new project which would have a huge impact. I incorrectly assumed I would get to dive into it right away. After I had started prototyping portions of it, the whole project was given to another team.

Instead, I was relegated to ongoing maintenance on a different project. My learning slowed. Shortly afterwards our team split in two. Another project I was excited about went with the other half of the team. I felt like I had been punted to the B team. My heart was not in my work. Focus and quality suffered.

In the months leading up to January I was contemplating a larger change. The essential question was whether I could still have an impact in my job or company. Getting things done was frustrating and I did not deal with this well. Over time my morale eroded further as I considered greener pastures elsewhere.

However, during a company-wide developer conference and the holidays I started to see life differently. In my role I might have plateaued, but I realized D2L as a whole still has many great things to offer. I worked with phenomenal people like Craig and Daryl whom I could still learn from. D2L operates on a large scale and has different challenges which are exciting. Lastly, I fully believe in D2L's mission of transforming learning.

A small Lego statue on my desk: a fantastic gift from the D2L developer conference celebrating my 5 years there.

In January I hit the ground running. I reset my perspective. I was excited again. Each task I worked on received my complete attention. I was crushing it again. Coming to work each day was fantastic. We had great projects lined up and we were making a difference.

Terminus

The company mentioned there might be some changes coming later in the month. In the past our team had been completely immune. After all, our team was doing critical work which could dramatically improve how the company operates. I considered myself an important contributing member of the team and in my hubris didn't think anything of it.

On that fateful January 26th, my manager, who seemed a little choked up, came and got me at my desk. This was not like him. I felt a pang of fear, then it happened. My time at D2L was finished. I was given the paperwork, told the terms and then escorted to a conveniently located exit nearby.

I was absolutely stunned. It was difficult to speak. On my way out I bumped into one of my mentors and could barely tell them what happened. It took everything to stop myself from crying.

We had planned our life out around my job. This was one thing I thought I could depend on. I had worked there for over 5.5 years. In the previous few years we bought a house and had our son, Jude. How were we going to afford everything? Even in our wildest contingencies we were not ready for me to lose my job. The news was overwhelming.

Then the second bombshell went off. My wife was expecting our second child. She had been having signs that things were not right. The day after I was let go we lost the baby. Even months later, I am immediately brought back to the devastation we felt. All we could do was hold each other and cry. As I write this I am crying again as the memories come flooding back.

A commemorative necklace my wife wears for the miscarriage.

This was the worst week of my life.

Doubt

You are your own worst enemy. I spent too much of the next few months rethinking everything. Was there something I did? Something I didn’t do? Why was I chosen? Not knowing why is perhaps the hardest part.

Even now I doubt myself. Have I been a phony for all these years? Am I a horrible programmer or worse? There is an interesting cognitive bias known as the Dunning-Kruger effect where the truly incompetent cannot possibly know how bad they are. They lack the meta-thinking to understand their own faults. In short, you can be so bad you don’t realize it. Is that me?

I talked a little bit about my self-doubts in my post ‘Are you your code review?’. They have always been with me. This event fanned the flames of those doubts even more.

Leading up to the exit interview I wrote lists of things which I thought might have led to this outcome. Talking to my former boss helped clear up my lists. He highlighted areas for improvement and made me feel like I was not a failure.

Self-doubt easily becomes a dark line of thinking which only leads to self-destruction. Getting past my self-doubt and focusing on moving forward was the first step.

Acceptance

Our family took the first week to recover and get back to normal. There was no changing what had happened. Daily life was permanently altered. We needed to accept this new reality and move forward.

A very large picture of Josh's face upside down.
Oh Josh, you are so silly.

During those first few days the weirdest thing I needed to accept was that my coworkers were suddenly not there. Waking up every morning I was confronted with the fact that I would not be seeing Travis and Josh that day. Many of my coworkers had grown into close friends on whom I rely a great deal. I was always happy to see them again or beat them at foosball. Our relationships would need to change if they were to survive.

Another odd thought experiment was considering the other choices D2L could have made. Was I chosen instead of other people? Would my staying have meant someone else leaving? In the end I felt it was better that I was chosen instead of someone else. Better I go through this than one of my friends. After our life had normalized a bit, I was optimistic we would be okay and didn't want them to have the same burden.

Although being asked to leave was not great, it did open the door for new opportunities. With our last several projects I had been dipping my toes into different technologies and was really enjoying it. Now I had the chance to make a bigger shift in my career.

Searching

Getting a job in the past was a fairly straightforward process:

  • Write a resume
  • Find jobs
  • Write cover letters
  • Apply
  • Interview
  • Repeat or accept the offer

This time around everything happened at the same time and looked like:

  • Network, find jobs and apply all at once
  • Interview
  • Review and accept offers

During this job hunt I learned how invaluable your connections are in finding new jobs. Over the years I had worked with many different people who had moved on from D2L to other companies. I had great role models at D2L who were willing to be references.

Thanks to my network, finding potential jobs changed dramatically. The first step in my search was to talk to my connections. Right away I was introduced to potential companies. I went for coffee to learn about open positions. Casual meetings turned into interviews.

A graph of my LinkedIn profile traffic: suddenly leaving a company does wonders for your profile.

Finding potential jobs in Kitchener-Waterloo is also extremely easy thanks to services through Communitech. They run an active job board which has hundreds of fantastic postings available. I found many interesting companies with a diverse range of sizes and needs.

A number of recruiters reached out with various opportunities. While I did not use their services, they showed me positions typically outside of my existing network. They helped restore my confidence that I would be able to find a new job and move past my doubts.

Resumes and cover letters are a must. Within the first week I had mine updated and was ready to share it with the world. It slowly got better over time with more and more feedback. Luckily, I had updated it sporadically with different achievements and milestones.

Interviews are intimidating. It had been over 5 years since I last interviewed at all. Co-op terms at university gave me a lot of practice, but without using those skills for years I found I was very rusty. My friends helped me through mock interviews to get me ready. Even still, I bombed a few of them which was tough.

Regardless of how an interview went I always sent a follow up email to:

  • Thank the interviewer
  • Emphasize what went well
  • Address any hiccups
  • Reiterate why I would be a great fit

Due to my background I am weaker at algorithms and get caught up on some standard interview questions. I spent a lot of time practicing interview problems and reading up on algorithms. When that wasn't enough I tried learning Ruby and refreshing my knowledge of JavaScript frameworks/idioms. After coming home from interviews, I implemented the programming questions I was given to double check my solutions.

Success

Things started slowly then picked up pace. At first I just had coffee with former colleagues/mentors. Interviews started to trickle in and then really accelerated. During the last week I had a flurry of final interviews leading up to a difficult choice.

In the final stages I was fortunate to have several offers from great companies. Each presented different opportunities and challenges. I don’t think I could have gone wrong choosing from any of the options. Other interviews were not successful and I will need to try again another time.

Over the weekend I made my decision. Due to the timing I had another week off to relax and decompress before starting work. The end was in sight after the harrowing month which preceded it.

On March 7th I started working at Vidyard full time! Even before the first day I was blown away by the amazing culture. They were so welcoming and helped me get connected right away. On my second day I pushed code to production! I am thrilled to be part of the team.

Lessons

What was, at the time, the worst week of my life has now become the start of a bigger change in my life. This entire experience has taught me so much and helped me understand what matters most.

At D2L, I poured a lot of time, effort and self-worth into my job, only to see it come to an end. I was slowly learning work should not be my #1 priority. This experience brought that realization painfully to the forefront.

My wife and son, the people I care about most.

What really matters is our relationships and loved ones. The quiet moments as a family during the first week will stay with me forever. I cannot thank enough the many friends and family members who supported us through this challenging period. We are so fortunate to have them in our lives.

We needed to accept what had happened to move on. Hopefully some day we will have other children. I found another job. We can learn from our past, but should not dwell on it.

Interviewing is a skill you need to keep practicing. From now on I plan on regularly interviewing and updating my resume so I will not become rusty again. The goal is not to job hop every year. Instead, I want to understand what people are looking for and be able to confidently present myself.

I was due for a change and believe I have found what I needed. While my time at D2L was a great learning experience, I was restless and wanted something else. During several low points while at D2L I had felt like leaving. I was never able to make the choice on my own and now the choice was made for me.

Epilogue

Why write this post now? I am finishing my probation and feeling more comfortable at Vidyard. After the event I was so shaken up I decided to put the blog on hold. Starting a new job I wanted to make sure my first 3 months were solid.

Well, mostly solid. Almost immediately after starting I was sick for a full 3 weeks. In the first month I worked from a bed at home more than I did at the office. Although I was sweating bullets while coughing up lung chunks, my manager was extremely supportive. My coworkers all told me not to worry and how later I could look back and laugh about it. I had their trust to do what I needed to do and the space to get better. The entire experience was yet another testament to why Vidyard is awesome.

Now, I have shipped a few features and am working on something bigger. It has been a blast. I feel like I have learned so much in very little time. Hopefully, in the coming weeks I will be able to share what I have learned with you.


Thanks to my former co-worker Josh for helping me with the grammars. Sorry for turning down your edit about who wins our foosball. Thanks for the many years we spent together at D2L. I miss you canoe buddy.

I would like to thank my lovely wife Angela for helping review this post. I would be lost without you. I love you.

Vanilla JS Tetris - Good Luck, Have Fun

It has been quite the week. Sometimes you need to just relax. Try this simple Tetris clone made with only the best Vanilla JS. Good luck, have fun.

How to Self-Host Nancy Without Locking Your DLLs: Shadow Copying

This is in response to a GitHub issue for Nancy. The user is trying to self-host Nancy without locking their DLLs. One easy way to do this is to create a wrapper program which runs the actual program with shadow copying assemblies. No DLLs are locked and the changes are minimal.

I will first show a simple application which self-hosts Nancy and can serve a request to "/". The program will start listening, wait for Enter to be pressed and then exit. This is the code:

using System;
using Nancy;
using Nancy.Hosting.Self;

class Program {
    static void Main( string[] args ) {

        var url = new Uri( "http://localhost:12345" );
        using ( var host = new NancyHost( new DefaultNancyBootstrapper(), url ) ) {
            host.Start();

            Console.WriteLine( "Now listening, have fun!" );

            Console.ReadLine();
        }

    }
}

We will add a simple NancyModule so we can test the application at http://localhost:12345/:

using System;
using Nancy;

public class HelloWorldService : NancyModule {
    public HelloWorldService() {

        Get["/"] = x => {
            return Response.AsText( "Hello World" );
        };

    }
}

This program will work as is. The problem is that it locks any assemblies it references.

To work around this problem, add another executable to wrap the actual implementation. In order to keep the DLLs unlocked, we will run the implementation from a separate AppDomain with Shadow Copying enabled.

AppDomains are a very powerful feature of the Common Language Runtime for isolating code. They can use different security contexts, modify how assemblies are loaded and be managed independently. A single process can host multiple AppDomains, which achieve some of the same isolation benefits as separate processes.

Using the separate AppDomain allows us to set the ShadowCopyFiles option to "true". This option will cause the assembly loading process to copy each assembly into a different directory and then load them from the new location. The local copies are left unlocked. For more information on Shadow Copying Assemblies refer to MSDN.

The whole solution would look like the diagram below:

The wrapper executable calling the actual program to run it

This is the wrapper program to call the actual executable Implementation.exe:

using System;

class Program {
    static int Main( string[] args ) {

        AppDomainSetup setup = new AppDomainSetup {
            ShadowCopyFiles = "true" // This is key
        };

        var domain = AppDomain.CreateDomain( "Real AppDomain", null, setup );

        // Execute your real application in the new app domain
        int result = domain.ExecuteAssembly(
            "Implementation.exe",
            args
        );

        return result;

    }
}

That is all there is to it. Don’t want your DLLs to be locked? The easy solution is to use another AppDomain with Shadow Copying enabled.

All the code for this blog post can be found in this sample project.

Introduction to Dependency Injection

I had the privilege of mentoring several co-workers in 2015. One of the topics they found confusing was Dependency Injection. We use it everywhere. To them it felt like magic. The code just fits together through mystical Containers. In this post we will break down the powerful concepts surrounding Dependency Injection.

Let’s start with some simple classes:

public class Foo {
    public void Hello( string message ) {
        Console.WriteLine( "Hello {0}", message );
    }
}

public class Bar {
    Foo m_foo;

    public Bar() {
        m_foo = new Foo();
    }

    public void Example() {
        m_foo.Hello( "World" );
    }
}

These are all concrete classes. No fancy dependency magic here. What this code does is very clear. Bar creates a Foo then uses it to print Hello World.

A complete program using this code is also straightforward:

public class Program {
    static void Main() {
        Bar bar = new Bar();

        bar.Example();
    }
}

Create a new Bar then call Example. Hello World!

Dependency Inversion Principle Applied

Let’s spice things up! Instead of creating the Foo in Bar’s constructor we can pass it in. Better yet, we can switch to an interface with all the same methods as Foo.

public interface IFoo {
    void Hello( string message );
}

public class Foo : IFoo {
    public void Hello( string message ) {
        Console.WriteLine( "Hello {0}", message );
    }
}

public class Bar {
    IFoo m_foo;

    public Bar( IFoo foo ) {
        m_foo = foo;
    }

    public void Example() {
        m_foo.Hello( "World" );
    }
}

So what have we gained? Well, the Bar class no longer knows anything about the IFoo implementation it is using. That is now up to callers using Bar. We have switched from a concrete implementation to a higher level abstraction. This decouples the code, making it easier to maintain.

We have applied the Dependency Inversion Principle [1]. As defined by Robert Martin it is:

A. High level modules should not depend upon low level modules. Both should depend upon abstractions.

B. Abstractions should not depend upon details. Details should depend upon abstractions.

The abstraction could be anything. Typically it will be an interface, but can also be a base class, delegate or another abstraction. The key is shifting the code from the implementation to a higher level.
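
For example, here is a purely illustrative variant of Bar (call it DelegateBar, it is not part of the running example) which depends on a delegate instead of an interface and still keeps the principle intact:

using System;

public class DelegateBar {
    // The abstraction here is just a delegate describing "something that can say hello".
    Action<string> m_hello;

    public DelegateBar( Action<string> hello ) {
        m_hello = hello;
    }

    public void Example() {
        m_hello( "World" );
    }
}

A caller could pass in message => Console.WriteLine( "Hello {0}", message ) or anything else matching the delegate.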

What About Dependency Injection?

Another concept closely related to Dependency Inversion is Dependency Injection. In fact, we used it without really knowing it. Where Dependency Inversion was all about the abstractions and layers, Dependency Injection is all about how dependencies are provided.

Don’t worry, it is a really simple idea. Here is the demystified definition by James Shore:

Dependency injection means giving an object its instance variables.

Literally, injecting dependencies into a class. In our previous example, we injected our dependencies using constructor parameters. This is the most common approach, but you can also inject dependencies using properties or specialized methods.
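
For example, a property-injected variant of Bar (purely illustrative, not the version we continue with) exposes the dependency as a settable property which the caller assigns before use:

public class PropertyInjectedBar {
    // The dependency is assigned after construction instead of through the constructor.
    public IFoo Foo { get; set; }

    public void Example() {
        Foo.Hello( "World" );
    }
}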

Look again at the constructor-injected program from the earlier example:

public class Program {
    static void Main() {
        IFoo foo = new Foo();

        // Boom, IFoo injected!
        Bar bar = new Bar( foo );

        bar.Example();
    }
}

We inject the IFoo into the constructor of the Bar. The program can now choose which IFoo to use. This is more flexible than when the choice was buried in the Bar class.

The Benefits

We have inverted our dependencies and injected them into our classes. This is fantastic! Our code is nicely decoupled. We can easily change what is injected for testing or introducing new features.

Implementations are hidden behind abstractions and can be easily replaced. Want an IFoo which writes out to files? No problem. You can change the code to your new FileFoo without ever modifying Bar.

The original Bar is impossible to test in isolation. The direct dependency on Foo forces the two classes to be tested together. Changes to Foo could break the tests for Bar.

By injecting the dependencies, we can use a fake IFoo in tests to do whatever we want. This is a great way to set up specific scenarios and/or avoid external systems (i.e. databases or services).
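
As a rough sketch, a hand-rolled fake (FakeFoo is made up for illustration) can record calls instead of writing to the console, letting a test verify Bar's behaviour in isolation:

using System;

internal class FakeFoo : IFoo {
    // Remembers the last message so a test can check it.
    public string LastMessage { get; private set; }

    public void Hello( string message ) {
        LastMessage = message;
    }
}

public class BarTests {
    public void Example_GreetsTheWorld() {
        FakeFoo fake = new FakeFoo();
        Bar bar = new Bar( fake );

        bar.Example();

        // With your test framework of choice this check becomes an assertion.
        if ( fake.LastMessage != "World" ) {
            throw new Exception( "Bar did not greet the World!" );
        }
    }
}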

Dependencies For Everyone!

Creating objects becomes more challenging when you use Dependency Injection and Dependency Inversion frequently.

We have moved where dependencies are created. This poses a problem for consumers of the original classes. They must both choose what to inject and create all the dependencies. I mean ALL the dependencies.

Your dependencies start to have their own dependencies. While this is not too bad with a few dependencies, once you get into chains of dependencies it gets nasty.

Think about it. One class has a dependency, the dependency has more dependencies and those dependencies have their own dependencies. This is the tip of the iceberg:

SqlConnection connection = new SqlConnection( "connection string" );

FooRepository repository = new FooRepository( connection );

Logger logger = new ConsoleLogger();

BazService controller = new BazService( repository, logger );

If every class repeated setups like this it would be a big problem. Creating anything would be a nightmare. Thankfully there is a better solution, Dependency Injection Containers.

Dependency Injection Containers

With all this dependency madness we need to find a better way to create classes. Don’t worry! There are fantastic libraries to address this problem. They are commonly referred to as Dependency Injection Containers or Containers for short.

Containers contain and seamlessly connect all of your dependencies. Within your application they are used to instantiate dependencies they know about.

Before we get to the real thing I want to walk you through a couple of mental models for Containers.

  • Externally they are like one massive Factory for any type
  • Internally they are like a Dictionary of Factories

Enter the Factories

A Factory is a common creational pattern. They allow you to abstract what is being created and how it is created. Dependency Injection Containers behave like Super Factories which can create any type they know about.

Want an IFoo? Use the FooFactory!

public class FooFactory {
    public IFoo Create() {
        return new ConsoleFoo();
    }
}

internal class ConsoleFoo : IFoo {
    public void Hello( string message ) {
        Console.WriteLine( "Hello {0}", message );
    }
}

The Factory can be used any time an IFoo is needed without any knowledge of which IFoo is created. We could easily update the FooFactory to create a FileFoo.

public class FooFactory {
    public IFoo Create() {
        return new FileFoo();
    }
}

internal class FileFoo : IFoo {
    public void Hello( string message ) {
        File.AppendAllText( "C:\\foo.txt", "Hola " + message );
    }
}

Factories can partially contain the sprawling dependencies. The more dependencies you have the more factories you will need. Factories will need to call other factories to create nested dependencies. This can get complicated when many dependencies are needed. The extra classes and glue code are tedious to maintain.
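
For instance, a hypothetical BarFactory has to lean on the FooFactory just to build Bar's dependency:

public class BarFactory {
    FooFactory m_fooFactory = new FooFactory();

    public Bar Create() {
        // Creating a Bar means first creating its IFoo dependency.
        IFoo foo = m_fooFactory.Create();
        return new Bar( foo );
    }
}

Now imagine the same pattern repeated for every class in the dependency chain above.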

Prior to using IFoo we need to create one using the factory:

public class Program {
    static void Main() {
        FooFactory factory = new FooFactory();
        IFoo foo = factory.Create();

        Bar bar = new Bar( foo );

        bar.Example();
    }
}

Having to use the factory everywhere is not fun. We can do better.

Poor Man’s Dependency Injection Container

What if we could use a single class to get any dependency we wanted? We could use a Dictionary of Factories! The Dictionary would be keyed on types where the values would define how to create their respective types. We could then create any class the Dictionary knows about.

In this section, we are going to create a really simple class to do just that. Our very own simple Dependency Injection Container.

Question: Why is it called a “Container”? It will contain all our application’s dependencies. When our application starts we will give it all the dependencies we want to create and classes we want to inject the dependencies into.

The Container needs to:

  1. Resolve types our application needs
  2. Register types our application provides

Enough with the words! Onto the code!

public interface ISimpleContainer {
    T Resolve<T>();

    void Register<T>( Func<T> factory );
}

Not bad. One method to Register dependencies and another to Resolve them. The methods line up with our “Factory for any type” and “Dictionary of Factories” mental models. We accept Func<T>’s as simple factories for each type. Once all the types have been registered, Resolve behaves like a Factory for any type.

Let’s implement our simple version:

public class SimpleContainer : ISimpleContainer {
    private readonly Dictionary<Type, Delegate> m_registrations =
        new Dictionary<Type, Delegate>();

    public T Resolve<T>() {
        Func<T> factory = (Func<T>)m_registrations[typeof(T)];
        return factory();
    }

    public void Register<T>( Func<T> factory ) {
        m_registrations.Add( typeof(T), factory );
    }
}

Internally we have the “Dictionary of Factories”. Resolve uses the Func<T>’s we registered.

Using the Container is easy. In the next example, we register all the types and then resolve them. Registering Bar means telling the Container how to build one, resolving its IFoo dependency along the way. Once we have resolved Bar we can use it normally!

public class Program {
    static void Main() {
        SimpleContainer container = new SimpleContainer();

        container.Register<IFoo>( () => new ConsoleFoo() );
        container.Register<Bar>(
            () => {
                IFoo foo = container.Resolve<IFoo>();
                return new Bar( foo );
            }
        );

        Bar bar = container.Resolve<Bar>();

        bar.Example();
    }
}

Cool. We have a Container. I would not use it in a real project. It is awkward to use. The classes need to be wired together manually. We had to tell it exactly how to create a Bar even though it already knew how to create an IFoo.

This is where real Dependency Injection Containers are fantastic. They solve the wiring problem and automatically inject dependencies they know about. We can use the Container like glue to bind everything together.

Concrete types are registered against their abstractions. Consumers can resolve and use those abstractions directly. They have no knowledge of the concrete types being injected. Instead they rely on the Container. This preserves the Dependency Inversion Principle and helps decouple our code.

Awesome Containers

Thankfully, there are many great open source Dependency Injection Containers. The following three are my favourites. Our team has used them on different projects.

Autofac

Autofac is new, super clean and powerful. The registration API is fun! They also have great support for controlling lifetimes/scoping and cleaning up for you. This would be my first choice when starting a new application.

StructureMap

StructureMap is battle hardened, having been the original .NET Dependency Injection Container. The latest version of StructureMap was a massive step forward. The authors incorporated many improvements they learned from 10 years of supporting the project. A great choice you should definitely check out.

TinyIoC via Nancy

Lastly, we use Nancy a lot! For the simpler applications, we exclusively use the built-in TinyIoC. It is simpler than the other Containers and is missing some advanced options. We periodically consider switching to one of the other libraries for these features which we believe would simplify our configuration.

More Out of the Box

These libraries greatly enhance how you register and resolve components. Often these capabilities are connected; features used when registering define how objects are resolved.

All the Containers can wire together classes based on what they need injected. You could register Bar and when it is resolved the Container would automatically inject an IFoo based on what was registered for IFoo.

Here is an example of our application using Autofac:

using System;
using Autofac;

public class Program {
    static void Main() {
        ContainerBuilder builder = new ContainerBuilder();

        builder.RegisterType<ConsoleFoo>().As<IFoo>();
        builder.RegisterType<Bar>().AsSelf();

        IContainer container = builder.Build();

        Bar bar = container.Resolve<Bar>();

        bar.Example();
    }
}

It does the right thing and gives Bar the registered ConsoleFoo.

Many Containers have shortcuts for simple transformations, e.g. from T to Lazy<T>. Containers often support resolving/registering open generic types.
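
As a rough Autofac sketch (IRepository<T> and Repository<T> are made up here purely to illustrate):

using System;
using Autofac;

// Hypothetical open generic abstraction and implementation.
public interface IRepository<T> { }
public class Repository<T> : IRepository<T> { }

public class Program {
    static void Main() {
        ContainerBuilder builder = new ContainerBuilder();

        builder.RegisterType<ConsoleFoo>().As<IFoo>();

        // One registration covers every closed version of IRepository<T>.
        builder.RegisterGeneric( typeof( Repository<> ) ).As( typeof( IRepository<> ) );

        IContainer container = builder.Build();

        IRepository<int> numbers = container.Resolve<IRepository<int>>();

        // Autofac understands Lazy<T> out of the box; the ConsoleFoo is only
        // created when lazyFoo.Value is first touched.
        Lazy<IFoo> lazyFoo = container.Resolve<Lazy<IFoo>>();
        lazyFoo.Value.Hello( "Lazy World" );
    }
}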

Containers can offer the ability to register sets of dependencies in Modules or Registries. This provides a simple way to group registrations together or split them apart. For example, you could register all database related classes in one module separate from your logging module.
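
With Autofac, for example, this grouping is done by deriving from Module. Here is a hedged sketch reusing the hypothetical classes from the dependency-chain example above:

using System.Data.SqlClient;
using Autofac;

// A hypothetical module grouping the database-related registrations in one place.
public class DatabaseModule : Module {
    protected override void Load( ContainerBuilder builder ) {
        builder.Register( c => new SqlConnection( "connection string" ) );
        builder.RegisterType<FooRepository>().AsSelf();
    }
}

// A separate module for logging concerns.
public class LoggingModule : Module {
    protected override void Load( ContainerBuilder builder ) {
        builder.RegisterType<ConsoleLogger>().As<Logger>();
    }
}

public class Program {
    static void Main() {
        ContainerBuilder builder = new ContainerBuilder();

        // Each module contributes its registrations when the container is built.
        builder.RegisterModule( new DatabaseModule() );
        builder.RegisterModule( new LoggingModule() );

        IContainer container = builder.Build();
    }
}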

Most Containers provide mechanisms for registering your types based on conventions so you do not need to configure everything by hand. You can register all classes implementing a similar interface name, i.e. Foo would be registered for IFoo. This is cool for people who like convention over configuration, but can be too much magic for other people. We use this approach and only configure classes which violate our simple conventions.
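
A hedged Autofac sketch of what that can look like (scanning the executing assembly is an assumption, not our exact setup):

using System.Reflection;
using Autofac;

public class Program {
    static void Main() {
        ContainerBuilder builder = new ContainerBuilder();

        // Convention: register every class as the interfaces it implements,
        // so Foo automatically becomes the registration for IFoo.
        builder.RegisterAssemblyTypes( Assembly.GetExecutingAssembly() )
            .AsImplementedInterfaces();

        // Classes which break the convention are still registered by hand.
        builder.RegisterType<Bar>().AsSelf();

        IContainer container = builder.Build();
    }
}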

Perhaps the greatest benefit is how they integrate with various frameworks. Containers often have shortcuts to hook into popular web frameworks, like ASP.NET MVC or Nancy. The framework can use the Container to resolve types it needs. We use this to create Controllers and automatically inject their dependencies. This lets you use Dependency Injection while decoupling your code from the Container itself. Everything magically fits together.

The larger our applications become the more benefit we get from using Dependency Injection Containers. We no longer worry about how we are going to wire our classes together. Instead, we can focus on designing our interfaces and classes.

Connecting the Dots

Phew, you made it this far! I hope this helped shed some light on Dependency Injection and the surrounding concepts.

Instead of using concrete classes we switched to higher level dependencies, applying Dependency Inversion. We needed to get those dependencies from somewhere so we used Dependency Injection, via constructor parameters, to inject the dependencies we wanted.

We explored Dependency Injection Containers using these mental models:

  • Externally they are like one massive Factory for any type
  • Internally they are like a Dictionary of Factories

Then we dug into complete Dependency Injection Containers and their extra features.

Have fun decoupling your dependencies!


Further Reading

There is so much more you can read and learn. While writing this post I found these additional resources:

Autofac and StructureMap Documentation

Both these libraries are fantastic and their maintainers have put some serious work into writing comprehensive documentation. They share many recommendations and pitfalls for using their frameworks. The most interesting articles include insights into the decisions they made and why they made them.

Container Guidelines

Dependency Injection Containers impact how you design your application and need to be treated with care. These recommendations will help you avoid problems [2].

DIP in the Wild

Real life applications of Dependency Injection in the wild plus a good recap of the concepts.

Inversion of Control Containers and the Dependency Injection pattern

This is a more in-depth explanation of the concepts. The closely related ideas of “Inversion of Control” and “Service Locators” are explained. There is a review of best practices and trade-offs. Some of the best practices may be a little dated, i.e. using a Service Locator instead of Dependency Injection Containers.


Footnotes

1. I could not get this link to work. I found it via this great article explaining how an abstraction is not synonymous with interfaces.

2. For some applications we intentionally call the Container from our tests. We treat the Container configuration as part of our integration tests. I will agree this is not ideal, but it simplifies creating various types and better mimics what our users will run.


Thanks

Thanks to my gracious 2015 mentees for letting me practice this on you.

Thanks again to my co-worker Josh who helped review this article. He had the great recommendation of renaming the “Poor Man’s DI Container” section to “Man with too much time on his hands’ DI Container”. Maybe I need to go write more code.

Rock-Solid PowerShell Projects

We write a lot of PowerShell. We didn’t realize it when we started, but our projects have gotten much bigger. What seemed like a small bit of glue scripting is now the core project. As the projects grew we learnt some lessons in how to keep them maintainable.

Our first release was really simple. It was just enough to meet our project goals. We didn’t think through how the different pieces would fit together. Fast forward 2 years and the older projects felt very thrown together.

When we started a new project I wanted to make it feel like a professional software project. Instead of the good-enough approach we took with the first project, I wanted everything to fit together just right. I wanted to apply all the best practices we use with any other development. Just being glue was not enough.

This list might seem straightforward and mundane. That is okay! I think these guidelines are essential for any project worth your time to maintain. With a looser language like PowerShell, these conventions helped give our new project even more structure. Ultimately, they led to better code we can all understand.

Consistent Layout

At first, we tossed everything into one big directory full of messy dot sourced files. This was brutal. Finding anything was impossible.

Now we have all entry scripts at the root of the project, all modules in a lib folder and tests on their own. More folders are added as needed. A consistent layout for files makes it easy to find your way around.

We took this even further. We applied the same basic layout to all the projects and libraries we maintain. Every repository now includes a build file, release notes and a README.md in addition to consistent directories. New developers (or you after 4 months away) can open any project and start contributing right away.

Test Thoroughly

We were naive when we started our first project. What the project did was really simple. As a result, we thought all we needed to do was run the code once through the happy path. As the code grew this was no longer enough.

We now diligently test each module independently using Pester then again with more comprehensive integration tests together. Pester is an amazing testing library for PowerShell. We love Pester. If you have not used it, go check it out now.

Pester has greatly improved our unit testing. Our previous testing only validated large scenarios and missed the edge cases. We now test individual functions in isolation, which exposes far more of the permutations in the lower-level code.

Modules for Everything

Initially, any reusable functions were placed in magical library.ps1 scripts, which would be dot sourced in every other script (i.e. . $PsDir\library.ps1 everywhere, boo). Everything was written as a script to either be dot sourced or directly called. This was great when we started but did not scale as the project became more complex.

With newer projects we place everything into independent modules. Each module has a single responsibility, i.e. setting up part of our application. This keeps every module small and focused.

Within each module, we intentionally keep some functions private. This allows us to shrink the module’s surface area without losing functionality. With our included scripts this would not have been possible.

Mandatory Continuous Integration

From the beginning, our projects have had builds which check out the code, run all the tests and publish releases. This allowed us to rapidly add functionality while keeping up our basic hygiene. The code remains clean and we can make sure it always meets our basic requirements.

As we continued to improve, most of our projects have added Preflights. This complemented our existing continuous integration. Now we stop problems from ever reaching our master branch.

Our added emphasis on testing at various levels has improved our confidence. Every build/test run covers even more of the application and edge cases.

Summary

We have learnt a lot from maintaining PowerShell projects over the past few years. Our new projects are rock-solid. We try to have the following in every project:

  • Consistent Layout
  • Thorough Testing
  • Single Responsibility Modules
  • Continuous Integration

This has made our projects easier to understand and update.

Have your glue scripts turned into something bigger? Is it time to take it to the next level?