Unbreakable: The Craft of Code

Slides from KCDC


JavaScript Generators: A Practical Example

Generators: Who Cares?

Generators may be the most misunderstood feature of ES6. Personally, I’ve read at least half a dozen articles purporting to explain them and all I can ever think is “but why would I use them?”

Turns out I’m not alone. No one uses them.

However, that doesn’t mean they don’t matter. It’s possible that soon we will love them and use them all the time. We just need to understand the problem they solve and see how they add value.

A Practical Example

I spend a lot of time reading code. And recently I decided to find someone, anyone, who used generators.

Well, I found one! Khan Academy has a few projects that use generators. Specifically, their project Algebra Tool uses generators in a way that makes a lot of sense.

It’ll take a while to show what they are doing, but I’ll go ahead and give the takeaway. Khan Academy, or rather the developers at Khan Academy, are using generators to turn objects and complex data structures into iterables. In other words, they are using generators to treat objects like arrays.

This makes a lot of sense to me. As I’ll show, they have a rather complex data structure, but they keep the complexity hidden behind a simple interface. This gives them the ability to leverage value from the data structure without requiring other parts of the program to understand the structure.

Now a quick caveat: this is not live production code. It’s code that resulted from a day-long hack-a-thon. However, I wouldn’t be surprised if something similar exists in production. And even if it doesn’t, it easily could. So we’ll dive into it as if it were highly polished and live.

Project Design

First, here’s a look at the project.

They are creating an online tool for students to manipulate and work with algebraic expressions.

The expression in this calculator is built with a class called, wait for it, Expression.

Here’s a sample of the Expression class:

export default class Expression extends Node {
    constructor(...nodes) {
        super();
        this.type = 'Expression';
        this.children = new List(this, ...nodes);
    }

    toString() {
        return `${this.type}:${this.children.toString()}`;
    }

    toJSON() {
        return {
            ...super.toJSON(),
            children: [...f(this.children).map(child => child.toJSON())],
        };
    }

    clone(uniqueId = false) {
        ...

We won’t go through this code line-by-line. All we need to know is that Expression takes a series of nodes and creates a new instance of a List.

Data Structure

For simplicity, we’ll ignore the implementation details; all we need to know is that List creates a data structure known as a linked list.

When you create a new instance of Expression, it will look similar to this:

const e = new Expression({id:1}, {id:2}, {id:3})
e = {
    children: {
        first: {
            id: 1,
            next: {
                id: 2,
                next: {
                    id: 3
                }
            }
        },
        last: {
            id: 3
        }
    }
}

The trick is that every node knows what node is after it. (It also knows the previous node, but we are leaving that out for simplicity).

Now, this is important because a mathematical expression needs to know what symbols are surrounding it. For example, 3 • x + 5 makes sense and is a valid algebraic expression while 3 • + x 5 is not valid and does not make sense.

We can see, then, why the developers would use this data structure. The problem is that now they are locked into certain language constraints. Since they are using an object, and a deeply nested object at that, they cannot use certain methods that they would have on, say, an array.

As a demonstration, let’s return to the toString method on the Expression class.

toString() {
    return `${this.type}:${this.children.toString()}`;
}

Nothing exciting here. Notice, though, that it calls the children.toString method. Since children is an instance of the List class, let’s look at that method.

export default class List extends Node {
...
  toString() {
    let result = "";
    let first = true;
    for (let node of this) {
      if (!first) {
        result += ", ";
      } else {
        first = false;
      }
      result += node.id;
    }
    return result;
  }
}

There’s no need to really understand what’s going on. The trick is in this line: for (let node of this).

Sure. It seems normal. The problem is that this line should not work. The for…of loop does not work on plain objects.

Objects and Iterators

Here’s an example of a simple for ... of loop:

const presentation = [
  'ES6 Patterns in the Wild',
  'Joe Morgan',
]

for(let metadata of presentation) {
  console.log(metadata);
}

// ES6 Patterns in the Wild
// Joe Morgan

It’s so simple it’s almost not worth exploring. It iterates over each element in the array and logs it.

What happens if we were to try the same loop on an object? Well, it won’t work.

const presentation = {
  title: 'ES6 Patterns in the Wild',
  author: 'Joe Morgan',
}

for(let metadata of presentation) {
  console.log(metadata);
}

> TypeError: presentation[Symbol.iterator] is not a function

So what is that mysterious Symbol.iterator?

Well, according to MDN: “The Symbol.iterator well-known symbol specifies the default iterator for an object. Used by for…of.”

If that explanation seems circular to you, don’t worry. It is.

Suffice it to say, the Symbol.iterator tells a for loop how to work. It’s defined for you on arrays (and strings and a couple other things), but not on objects.

However, that’s not a problem. We can define our own! And that’s where generators come in.

Symbols and Generators

Ok. Now we get to the good part.

Let’s look at how a generator is used in Khan Academy:

export default class List {
  ...
    *[Symbol.iterator]() {
      let node = this.first;
      while (node != this.last) {
        let current = node;
        node = node.next;
        yield current;
      }
      if (this.last) {
        yield this.last;
      }
    }
}

We’ll step through this more thoroughly in a moment. For now, notice that they are defining their own Symbol.iterator for the list. That’s how they were able to loop through it at all.

Next, notice the * in front of the function name and the yield keyword. That’s a clue that this is a generator.

Now that we’ve established that it is, in fact, a generator, how does a generator work? There are essentially two ways to use a generator.

1) We can step through it incrementally.

2) We loop through it.

By example:

function* simple() {
    yield 1;
    yield 2;
    yield 3;
}

1) Step through it:

const y = simple();
y.next();
// { value: 1, done: false }

y.next();
// { value: 2, done: false }

y.next();
// { value: 3, done: false }

y.next();
// { value: undefined, done: true }

2) Loop through it:

for (const x of simple()) {
  console.log(x);
}
// 1
// 2
// 3

As a bonus, the spread operator uses the same iteration protocol, so whenever we define something that works with a for...of loop, we also get the spread operator for free.

const z = simple();
[...z];
// [1, 2, 3]

Pretty neat, huh?
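
As one more quick sketch (my own example, not Khan Academy’s code), a generator is also how we could fix the plain object that failed earlier: give it its own Symbol.iterator.

const presentation = {
  title: 'ES6 Patterns in the Wild',
  author: 'Joe Morgan',
  // Our own iterator: now for...of (and spread) know how to walk the object.
  *[Symbol.iterator]() {
    yield this.title;
    yield this.author;
  },
};

for (let metadata of presentation) {
  console.log(metadata);
}
// ES6 Patterns in the Wild
// Joe Morgan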

Back to our example.

*[Symbol.iterator]() {
  let node = this.first;
  while (node != this.last) {
    let current = node;
    node = node.next;
    yield current;
  }
  if (this.last) {
    yield this.last;
  }
}

You do not need to write out every single yield statement; as long as there is something left to yield, the generator will keep returning values.

Looking back at our original data structure:

const e = new Expression({id:1}, {id:2}, {id:3})
e = {
    children: {
        first: {
            id: 1,
            next: {
                id: 2,
                next: {
                    id: 3
                }
            }
        },
        last: {
            id: 3
        }
    }
}

The generator will yield the first node and the toString method will grab the id.

{
    id: 1,
    next: {
        id: 2,
        next: {
            id: 3
        }
    }
}

Then it will yield the next node and the toString method will grab the id.

{
    id: 2,
    next: {
        id: 3
    }
}

Next, it will grab the next node, which is the same as the last node. At this point the while loop exits. Finally, the last node is yielded and the generator is done.

{
    id: 3
}

In essence, the generator turned the linked list into an array.

const e = new Expression({id:1}, {id:2}, {id:3})

// Kinda sorta
e = [{id:1}, {id:2}, {id:3}]

What’s the point?

Now we can see how the generator worked in this example. But that doesn’t answer the question of why.

Really, it comes down to crafting quality code. Khan Academy wanted to use a particular data structure, but they didn’t want to burden the rest of the app with knowledge of the data structure. The generator allows them to utilize the advantages of a data structure while allowing other parts of the app to merely assume it is an array. They can do whatever they might want with the data: loop through it, spread it, and so on.

In other words: the code is as complicated as it needs to be while being as simple as it can be.

And isn’t that the point of any language feature or library? It allows us to do the most work with the most elegance.

As for me, I have read many articles about generators, but it wasn’t until I dug into this example that I really could envision a use case.

If you’re curious, no I haven’t used one in production. I wrote one at one point, but ended up taking it out because there was a simpler and clearer way to do the same thing. So that’s the direction I went.

ES6 Patterns in the Wild

Slides from CodeMash.

It was a lot of fun despite the room malfunction. Thanks to all who attended.


Testing Apis in Angular 2 With MockBackend

Ok, ok. I know Angular 2 is still fairly new, but working with it has been… an experience. There’s so much information that is out of date or just plain wrong. And that is doubly true of testing. No matter how important everyone claims testing is to modern development, it always lags behind.

Recent case in point: I was out interviewing for new jobs, and after a few places gave me a spiel about how their developers are geniuses (aren’t they always) and how their products are amazing (ditto), I asked what testing suite they used. They paused and said, ‘well, we don’t have tests now, but we totally want to do it in the future.’ What?!

Anyway, testing in Angular still has a long way to go in terms of documentation. I found myself in a situation where I needed to test the results of an api call. The api returned some rather weird information that needed to be formatted and translated a bit before I pushed it into the app. Furthermore, I knew the endpoint was going to change soon, which meant there would need to be some refactoring, so tests were an absolute necessity.

Alright, so there’s a need. Seems like a pretty common need. I’ll just poke around the documentation a little and find what I need.

Turns out that’s not so easy. There is something called MockBackend, but it’s marked ‘experimental’ in the documentation, and the documentation is confusing at that. I found a few other blog posts but I couldn’t quite get them to work (more a statement of my ability than of the authors’).

So after some experimentation, I give you my approach. It pulls some methods from the other documents, but I think it is a little simpler.

Project Structure

The project is organized into components. I’ll be pulling data from jsonplaceholder. Specifically, I’ll be pulling users data.

Our component will look like this:

/Users
  users.component.ts
  users.component.spec.ts
  users.service.ts
  users.service.spec.ts
  users.http.service.ts

It’s a pretty standard setup. The only difference is that I’m moving all the api calls to a separate http service. The reasons are:

  1. they are in one clear place if an api endpoint changes
  2. they are a little easier to mock (if I wanted to go that route)
  3. Separation of concerns and all that stuff

Ok. Let’s get rolling.

Code

Here’s a look at the code I want to test. I’ll do the service in this post and a component in a separate post.

First is the http service. This is where only the most basic REST calls are made.
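
A minimal sketch of what it might look like (the class name UsersHttpService is an assumption based on the file name):

import { Injectable } from '@angular/core';
import { Http } from '@angular/http';

@Injectable()
export class UsersHttpService {
  constructor(private http: Http) {}

  // Only the raw REST call lives here.
  getUsers() {
    return this.http.get('https://jsonplaceholder.typicode.com/users');
  }
}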

This is the service. It will call the http service and do any other data manipulations that I need.
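
A sketch along the same lines (again, the names are assumptions; the map step is where any formatting and translating would happen):

import { Injectable } from '@angular/core';
import { UsersHttpService } from './users.http.service';
import 'rxjs/add/operator/map';

@Injectable()
export class UsersService {
  constructor(private usersHttp: UsersHttpService) {}

  // Unwrap the Response body and do any formatting the app needs.
  getUsers() {
    return this.usersHttp.getUsers().map(response => response.json());
  }
}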

Finally, here is the initial test. This one is generated by Angular CLI. Essentially, it is injecting the service into the test suite and then later into each assertion. This mirrors how I would inject it into a component.
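
The generated spec looks roughly like this:

import { TestBed, inject } from '@angular/core/testing';
import { UsersService } from './users.service';
import { UsersHttpService } from './users.http.service';

describe('UsersService', () => {
  beforeEach(() => {
    TestBed.configureTestingModule({
      providers: [UsersService, UsersHttpService],
    });
  });

  it('should create', inject([UsersService], (service: UsersService) => {
    expect(service).toBeTruthy();
  }));
});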

Now the question is how do I test the response of the api call? The first step is creating a stable set of test data.

Mock Data

Creating mock data is important for a few reasons. First, I need to decouple the test from an actual api call. That would be slow and expensive. So I need something that can return data as if it were an api. Thus, I need data.

Second, I need to make sure that the same data is returned every time. It’s hard to write tests against a live api because the data is bound to change and will break tests. Having mock data avoids this issue. The problem, of course, is that if the api changes I will not have the benefit of failing tests, but that is a different issue.

I try to keep all component related material together. I also try to keep tests clean if I can. As a result, I tend to create a separate file just of mock data. In this case, I’ll create users.data.mock.ts.
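
Something like this, with a couple of users in the shape jsonplaceholder returns (trimmed to the fields the app cares about):

export const MockUsers = [
  { id: 1, name: 'Leanne Graham', username: 'Bret', email: 'Sincere@april.biz' },
  { id: 2, name: 'Ervin Howell', username: 'Antonette', email: 'Shanna@melissa.tv' },
];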

First Test

Well, really it’s more like the fourth or fifth test, but I’ll ignore all the other ones created by angular cli.

Our first test will be pretty simple. I just need to make sure that getUsers returns the mock data. There are a couple of tricks, though. Since the http client returns an observable, I need to be sure to subscribe to the data before I can make an assertion.

Here’s how a basic test will look.
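
Roughly this (assuming the getUsers method and MockUsers data from above):

it('should return the mock users', inject([UsersService], (service: UsersService) => {
  // The http client returns an observable, so subscribe before asserting.
  service.getUsers().subscribe(users => {
    expect(users).toEqual(MockUsers);
  });
}));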

It will, of course, fail. But it will not fail for good TDD reasons.

Instead you’ll get some ugly message that says something like this: Error: No provider for Http!

It’s not failing because the assertions do not match. It’s failing because not all the dependencies are injected.

At this point, I could keep injecting dependencies one by one, but it’s probably a good time to think about mocks.

Mock Option 1: Mock a Service

Before I get too far, I know. Mocks are bad. Mocks are code smells and so on. However, there are some good reasons to mock things and a service that would contact something outside the application is a pretty good reason.

In that spirit, let’s mock the users.http.service. There’s only one get method, so the mock will be very short.
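
A first pass might look like this sketch:

import { Observable } from 'rxjs/Observable';
import 'rxjs/add/observable/of';

import { MockUsers } from './users.data.mock';

export class MockUsersHttpService {
  getUsers() {
    return Observable.of(MockUsers);
  }
}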

This seems easy enough. I create an observable and pass that along. Unfortunately, that will not work. The Angular http client is doing a lot more than returning an observable. It is returning a Response object, which is itself built from a ResponseOptions object that contains stringified data.

Phew.

So the final mock file will look like this:
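
Roughly like this, mirroring what Http actually hands back:

import { Response, ResponseOptions } from '@angular/http';
import { Observable } from 'rxjs/Observable';
import 'rxjs/add/observable/of';

import { MockUsers } from './users.data.mock';

export class MockUsersHttpService {
  getUsers() {
    // Http returns an Observable of a Response built from
    // ResponseOptions containing a stringified body.
    const options = new ResponseOptions({ body: JSON.stringify(MockUsers) });
    return Observable.of(new Response(options));
  }
}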

That’s not too bad, but it’s clear that the file will grow and grow. And, of course, that’s just another file mucking up the place.

Fortunately, there is a way around this, and that is the built-in MockBackend class.

Mock Option 2: MockBackend

MockBackend is essentially a built-in class that handles all that mocking for us. It also prevents us from needing to make a lot of extra files.

Here’s the script:
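
This is roughly what mine ended up as (a sketch, with the service and mock names carried over from above):

import { TestBed, inject } from '@angular/core/testing';
import { Http, BaseRequestOptions, Response, ResponseOptions } from '@angular/http';
import { MockBackend, MockConnection } from '@angular/http/testing';

import { UsersService } from './users.service';
import { UsersHttpService } from './users.http.service';
import { MockUsers } from './users.data.mock';

describe('UsersService', () => {
  beforeEach(() => {
    TestBed.configureTestingModule({
      providers: [
        UsersService,
        UsersHttpService,
        MockBackend,
        BaseRequestOptions,
        {
          // Override Http itself and route it through the MockBackend.
          provide: Http,
          useFactory: (backend: MockBackend, options: BaseRequestOptions) =>
            new Http(backend, options),
          deps: [MockBackend, BaseRequestOptions],
        },
      ],
    });
  });

  it('should return the mock users', inject(
    [UsersService, MockBackend],
    (service: UsersService, backend: MockBackend) => {
      // Capture each request and hand back the canned response.
      backend.connections.subscribe((connection: MockConnection) => {
        connection.mockRespond(
          new Response(new ResponseOptions({ body: JSON.stringify(MockUsers) }))
        );
      });

      service.getUsers().subscribe(users => {
        expect(users).toEqual(MockUsers);
      });
    }
  ));
});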

Notice the changes. I’ve included many of the things from the previous mock in our test. These include Response, ResponseOptions, and the MockUsers. I’ve also included MockBackend to capture the http requests and generate an observable, and MockConnection to capture each request and pass it the response I want.

In the providers, instead of overriding UsersHttpService, I am overriding Http.

Finally, in the actual test, I inject MockBackend. Then, I build the Response exactly how I did in the previous example and use MockConnection to capture the request.

Wrapping up

In a funny way, these built-in options create more clutter. At the very least, they require more typing. The connection can be moved to a beforeEach so I can reuse it over and over. However, I think the biggest advantage is that this approach keeps the testing overhead in the test file. It prevents us from needing to create a lot of separate files.

React Animations

Previously, I wrote about React animations for creating a dropdown menu, but after spending more and more time working with them, I decided to explore exactly how animations fit into the React lifecycle.

Two Animations

There are two kinds of component animations in React.

  1. The component is already in the DOM
  2. The component is entering the DOM

The first kind is not that hard to deal with. In the past, I’ve added a CSS class to trigger a transition, as in the sketch below. There are also React specific libraries like radium.
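
For instance, a sketch of that first kind (the component and class names here are made up): toggle a class on a component that is already in the DOM and let a CSS transition do the rest.

// Assumes CSS along the lines of:
// .panel { max-height: 0; overflow: hidden; transition: max-height 2s ease-in; }
// .panel-open { max-height: 500px; }
const Panel = ({ open, children }) => (
  <div className={open ? 'panel panel-open' : 'panel'}>
    {children}
  </div>
);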

The second kind of component requires a bit more work.

To understand why, consider the React lifecycle.

React Lifecycle

In React, you create a series of components that become part of the virtual DOM:

"PotatoHead": {
    "head": {
        "peg": null
    },
    "body": {
        "topPeg": "eyes",
        "middlePeg": null,
        "bottomPeg": null
    },
    "bottom": {
        "peg": "shoes"
    }
}

The virtual DOM renders into the DOM as seen in the browser.

Eventually something happens in the app and that triggers an action:

dispatch(existentialCrisis());

This changes the virtual DOM:

"PotatoHead": {
  "head": {
    "peg": null
  },
  "body": {
    "topPeg": "eyes",
    "middlePeg": null,
    "bottomPeg": "mouth"
  },
  "bottom": {
    "peg": "shoes"
  }
}

Which updates the actual DOM.

The thing that makes React animations difficult is that there is no point in the lifecycle that they fit in.

Think about the lifecycle hooks:

componentWillMount
render
componentDidMount

Where are animations supposed to fit in?

You can’t use the componentWillMount hook because there is no DOM element to animate.

And you can’t use the componentDidMount hook because the element is already there, so you would have to rerender it (causing a potential loop) and you may see a jump as the component gets an orientation change after being added to the DOM.

Solution: More Lifecycle Hooks

The React team recognized this problem and created a higher level component called ReactTransitionGroup that can wrap additional components, giving them more lifecycle hooks.


Any component that is wrapped within a ReactTransitionGroup component will get a couple new lifecycle hooks. One of the most relevant is componentWillEnter which will be fired as soon as the component is mounted (the same time as componentDidMount).

(Diagram: with ReactTransitionGroup, the order becomes componentWillMount → render → componentDidMount → componentWillEnter.)

componentWillEnter creates a lifecycle hook that we can use to animate any components we want. All other animations will be blocked until a callback is called.
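
Here’s a bare-bones sketch of the idea (my own example, not Chang Wang’s): a child of a ReactTransitionGroup defines componentWillEnter and fires the callback once its animation is done.

class Shape extends React.Component {
  componentWillEnter(callback) {
    // Fires at the same time as componentDidMount, so the element exists.
    const el = this.el;
    el.style.opacity = '0';
    el.style.transition = 'opacity 2s';
    setTimeout(() => { el.style.opacity = '1'; }, 0);
    // Other transitions are blocked until we signal completion.
    setTimeout(callback, 2000);
  }

  render() {
    return <div ref={el => { this.el = el; }}>I fade in</div>;
  }
}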

Chang Wang has an excellent example of how to build an animation with Tween using ReactTransitionGroups. If you need fine-grained control or want to use a specific library, then ReactTransitionGroups are the way to go.

If you need something even easier, React made a further abstraction called ReactCSSTransitionGroups that utilize transition groups but allow the developer to use CSS transitions to handle any animations.

ReactCSSTransitionGroups

ReactCSSTransitionGroups work by wrapping components and then adding specific classes to child components for a designated amount of time.

Here’s an example of how you would set it up:

const ShapeContainer = ({elements}) => (
  <div id="shapes">
    <ReactCSSTransitionGroup
      transitionName="shape"
      transitionEnterTimeout={2000}
      transitionLeaveTimeout={2000}
    >
      {elements}
    </ReactCSSTransitionGroup>
  </div>
)

Notice a few things. I gave the transition name of shape and a transition enter timeout of 2000 milliseconds and the same for the transition leave.

This means that a few classes with the base name shape will be added to every child component for 2 seconds before they are automatically removed.

To take advantage of any transitions, we need to define them with CSS:

.shape-enter {
  transform: scale(0);
}

.shape-enter.shape-enter-active {
  transform: scale(1);
  transition: all 2s ease-in;
}

Rendered code will look like this. The span tag is the ReactCSSTransitionGroup (although you can specify other tags like div or ul).

<div id="shapes">
  <span data-reactid="0.1">
  </span>
</div>

Any child component that is added will receive that shape-enter class. This sets up the initial styling that will be animated (in this example it is effectively hidden).

<div id="shapes">
  <span data-reactid="0.1">
      <svg data-reactid=".0.2"
           class="shape-enter">
        <circle data-reactid=".0.2"></circle>
      </svg>
  </span>
</div>

In the next tick, the component will receive the shape-enter-active class which will trigger the CSS transition. In this example, it will scale it up to full size. The timing for the transition should match the timeout on CSS transition group.

<div id="shapes">
  <span data-reactid="0.1">
      <svg data-reactid=".0.2"
           class="shape-enter shape-enter-active">
        <circle data-reactid=".0.2"></circle>
      </svg>
  </span>
</div>

After the timeout is reached, the classes are removed from the component.

<div id="shapes">
  <span data-reactid="0.1">
      <svg data-reactid=".0.2"
           class="">
        <circle data-reactid=".0.2"></circle>
      </svg>
  </span>
</div>

Any subsequent children will go through the same process.

Animating a component leaving is even more important. Without transition groups, a component will disappear before anything can happen to it. However, with ReactCSSTransitionGroups the whole process happens in reverse.

Let’s start with the leaving css. We’ll start at full size and shrink to nothing.

.shape-leave {
  transform: scale(1);
}

.shape-leave.shape-leave-active {
  transform: scale(0);
  transition: all 2s ease-in;
}

A component in the DOM will first receive the shape-leave class:

<div id="shapes">
  <span data-reactid="0.1">
      <svg data-reactid=".0.2"
           class="shape-leave">
        <circle data-reactid=".0.2"></circle>
      </svg>
  </span>
</div>

After that, it will receive the shape-leave-active class:

<div id="shapes">
  <span data-reactid="0.1">
      <svg data-reactid=".0.2"
           class="shape-leave shape-leave-active">
        <circle data-reactid=".0.2"></circle>
      </svg>
  </span>
</div>

And when the timeout set on the ReactCSSTransitionGroup component is reached, the element is removed. Note: The element is not removed after the animation is complete, but when the timeout is reached. So if the CSS transition is longer than the timeout it will just disappear.

<div id="shapes">
  <span data-reactid="0.1">
  </span>
</div>

And that’s all it takes.

Here’s a full demo you can try:

See the Pen ReactCSSTransitionGroup by Joe Morgan (@jsmapr1) on CodePen.

Building an Npm Package Part 4: Publishing

Well, all the hard work is done; it’s time to deploy.

Fortunately, this is one of the easiest parts of npm. Deploying is a simple process, so I will limit myself to summarizing.

Have An Account

Head over to npm and establish a user.

Connect the project to your account with npm login.

Piece of cake. Nothing else to see here.

Update .npmignore

This file is for anything you want to ignore when the package is published.

Recall from the discussion on package.json that you should have a script called prepublish that will do any fancy compiling that needs to be done.

Now that we are compiling everything down, there is no need to include the uncompiled source files. Test files do not need to be included either. These all add extra weight to your package.

So add those to your .npmignore. Here’s a simple one:

test/
src/
.babelrc

Remember npm handles all the dependencies, so ignore node_modules either here or (preferably) in your .gitignore file.

Publishing and Updating

Now all you have to do is run npm publish and your package is live.

Congratulations!

If you’re anything like me, you’ll immediately notice some mistakes. So you’ll need to be able to update and version your package.

Make whatever changes you want. Then update the version with npm version, e.g. npm version 1.0.1. This will change the package.json version number and add a tag to your git repo.

Now all you have to do is run npm publish again and everything is up to date. Very easy.

All Done

Now go forth and make many more commits.

Craftsmanship

About 6 months into my first full-time coding job I was called into my boss’s office. I had been working on a project that was at least a month overdue. I was exhausted and suffering from constant impostor syndrome. Despite all that, the project was ready to ship and I was happy, feeling like maybe I could do this.

That feeling slipped away very quickly. My boss was sitting with the engineering director and they had a bunch of my code open on the screen. Despite all that he was in a good mood and invited me to sit down.

“Feel pretty good to be finished?” he asked.

I nodded. “We were looking through the commits,” he continued. “And it looks like you have a little more to do.”

I’m sure my face dropped at that point. I was worn out and ready to move on.

“Have you looked at the style guide?”

Again I nodded.

“Well, most of your code is not meeting the style requirements.”

He clicked to the first section. “This is indented out for no reason. This variable name doesn’t make any sense.” As he continued he clicked through more code. “You name this function and then never use it. There’s supposed to be space between these parentheses.”

After a few minutes he stopped. “I know you are ready to be finished, but you need to go back and get it right. We all know parts of the code base are better or worse than others, but everything you touch should be better than it was before.”

He paused and finished by saying, “It’s about craftsmanship.”

That was probably the best coding advice I ever got.

Coding is more than control statements and loops

For many beginners, the biggest struggle is getting things to work. I would go further: that struggle never goes away.

Nearly everything is prototyped in some form. Either I sketch something out in the code itself, or I try an idea in a REPL, or, if I’m being very good, I write a test and then go from there. However, mature coders will go back and rework that prototype until it is something coherent and beautiful. One of the best parts of testing code is that once it is passing you can change it as much as you want and still know that the getting-things-to-work part is taken care of.

Probably the most unfortunate part of coding is that poorly crafted code that works, works.

If a carpenter does a poor job of crafting a piece of furniture, that furniture will not work. The joints will snap. The hanging nails will cut someone sitting down. The upholstery will come apart. It won’t work.

Bad code that works will still work. As a result, it takes a step of maturity to reach a level of craftsmanship, to make things that are beautiful.

Now I know that bad code has technical debt and in a sense does not work in that it will be harder to extend or have a difficult time handling change, but that’s not always the case. Bad code can run for years.

The reason this is a problem is that becoming a craftsman is hard work. Very hard work. It’s way harder than learning to make working things, because there are no cues when things are wrong. It’s an intuition, an aesthetic.

Linters can help, but they can only handle very small clear rules. They cannot account for taste. They cannot say a function could be simpler or that a class is poorly constructed.

Building an Intuition

The key to being a craftsman, then, is building that intuition. Unfortunately, I don’t hear many people talking about how to do that. Traditionally, coders become masters of their craft in the same way it’s been done for generations: they work with other masters of the craft who give continual feedback and push them to higher standards.

Another way is to read.

We live in a time where lots of wonderful code is open source. And just as a novelist would spend time studying the works of other novelists, we should be studying code by other people. Most of us do not spend nearly enough time reading it.

At this point, I have a budding intuition. When I write some code I may get a feeling like it could be better. I can’t say why or how, but I just know something is off.

When I get that feeling that something is not quite right, I first try to find something similar. Usually this involves some Stack Overflow, but that’s not a perfect solution. Instead, I like to browse through some open source code by people (or projects) I feel are great. Even if I do not find an exact solution, I get a slightly different sense of style and often that is enough to spark an idea.

Dan Abramov is a great one for me. React and Babel source code are good too. Sometimes I do a search in NPM for something that might be similar and then I browse a few of those projects.

Browsing these projects may not solve the specific problem, but high quality code will give you ideas of how to craft a solution. Stack Overflow gives you a set of tools and construction materials, but open source code gives you a plan. It’s like seeing an architect’s blueprint.

And that is the best part.

All of this helps the intuition grow because I see more and more code with more and more ideas, and I start to learn what good code looks like. So when I do write something that could be much better, it practically has a warning sign attached.

Continual Improvement

Here are my recommendations.

Look at everything you write a second time. The first time you are too consumed with function to notice aesthetics.

Read more. If you find something that you like (a blog post, an NPM package, a tutorial), look up the code for the author. Nearly everyone has something on github. It will give you a better perspective on what’s possible. But be more strategic in what you read. Articles and tutorials are fine, but they are by nature tightly controlled. There’s a lot more insight in projects that grow and evolve in complexity.

Write aesthetically pleasing code. If you can look at a page and it seems simple and clean and neat, you will feel the need to keep that high quality level. If you see code that looks ugly, take some time to refactor. It’s amazing how many bugs are prevented by keeping things neat.

Write code that makes you feel good. I’ll admit sometimes I use a slightly complex one-liner when a more verbose loop would do, but those just don’t feel the same. And when I write something that I enjoy seeing, it makes my day better. If it’s too ridiculous, call me out during a code review. For now, I want to write code that makes me smile.

Craftsmanship and Life

Writing well crafted code won’t necessarily make you a genius. I would argue though that it will help you enjoy the process a lot more. And anyone who reads your code will see the extra care.

A good craftsman brings their high standards to all projects big and small.

Take a look at a piece of Shaker furniture. It’s not a skyscraper. It wasn’t built to be in a museum (although some ended up there). It was created to serve a purpose. But the woodworkers didn’t care about any of that. They only wanted to make the best piece of furniture they could. And their commitment is evident.

Code should be no different.

Building an NPM Package Part 3: Testing Locally

Testing Locally

You have, of course, been writing tests the whole time, right? Of course. Well, even so, you need to test it out in another application.

Fortunately, it’s very simple using npm link.

First, in your project directory, run npm link. This will create a global symlink.

Then in your test project, run npm link [package name].

In my case, npm link frontend-gitlab.

You can combine these into one step by going to the test project and using a relative path: npm link ../frontend-gitlab.

The only trick is that you need to remember to run the build script every time you make a change, because your test project will be looking for the built files just as if the package had been installed from npm. That sounds simple, but I kept confusing myself by making a change and not seeing it reflected in the test project.

That’s it. This was simple, but then that’s what’s great about npm.

Building an NPM Package Part 2: Package.json

Anytime I begin to look deeply into an open source project, I begin at the package file. The reason I do this is that it gives a starting point for nearly everything, and I can easily get a lot of clues about the project from it.

Most modern web projects have something like a package file. Some languages make this easier than others; the package.json or composer.json files hold scripts and dependencies along with other goodies. Other projects split them out a bit; Ruby has a Gemfile, but also a Rakefile with various scripts.

Nonetheless, it is a good starting point no matter the language. And fortunately for us, npm is one of the best, I would say the best, out there. It’s a nice time to be a javascript developer.

Package.json

With that in mind, let’s dive in. I’m not going to go into detail about every single field. The documentation is actually very good, so there’s no reason to duplicate effort. Instead, I will point out a few particulars that I enjoy using and enjoy seeing others use.

main

This is the primary entry point for your script.

When a user types require('your-package') or import yourPackage from 'your-package', the main file is what that import will reference.

As mentioned previously this is something to consider if you want a single entry point or multiple entry points.

There can be only one main file, but the rest of the structure of the project will determine how other pieces can be imported. For example, if you have other chunks of code in the cool directory, then a user can import by referencing that directory (e.g. import {coolaid} from 'cool-package/cool/superCool'). More on this later.

For most projects this will be something like main.js or index.js.

bin

I haven’t used this on any projects, but this is how you specify a project that should be placed in the PATH. In other words, this is how you create a command line utility (think mocha or babel or, well, anything you run from the command line).

The syntax is simple. For a command-line name that is the same as the package name, it is simply bin: './path/to/executable'. You can also specify a name in an object:

"bin": {
  "my": "./path/to/my-package"
}

Again, as an example, here is the bin for mocha

"bin": {
  "_mocha": "./bin/_mocha",
  "mocha": "./bin/mocha"
}

If you are curious: to run a command-line tool that is installed only in a project (not globally), you just have to specify the path to the node_modules bin, e.g. node_modules/.bin/mocha.

You can also reference the module in an npm script and the correct path will be inferred.
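
For example, with a hypothetical script like this, plain mocha resolves to node_modules/.bin/mocha because npm adds that directory to the PATH while scripts run:

"scripts": {
  "test": "mocha"
}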

Speaking of scripts…

scripts

This is by far my favorite part of npm (other than packaging dependencies).

I’ve signed on to the recent trend of using npm scripts in lieu of gulp or grunt or any of those things.

Why? There are two reasons. The first is that it is much easier to play around with scripts at the command line. That is, it’s much easier to experiment with different flags and options from the command line and then, once it is working correctly, I just shove it into an npm script.

Secondly, as I mentioned at the beginning, a package.json file is a great place to get an idea of what a project does and seeing the scripts is a big part of that.

Here’s one that I particularly like from redux-thunk

"scripts": {
  "clean": "rimraf lib dist es",
  "build": "npm run build:commonjs && npm run build:umd && npm run build:umd:min && npm run build:es",
  "prepublish": "npm run clean && npm run test && npm run build",
  "posttest": "npm run lint",
  "lint": "eslint src test",
  "test": "cross-env BABEL_ENV=commonjs mocha --compilers js:babel-core/register --reporter spec test/*.js",
  "build:commonjs": "cross-env BABEL_ENV=commonjs babel src --out-dir lib",
  "build:es": "cross-env BABEL_ENV=es babel src --out-dir es",
  "build:umd": "cross-env BABEL_ENV=commonjs NODE_ENV=development webpack",
  "build:umd:min": "cross-env BABEL_ENV=commonjs NODE_ENV=production webpack"
},

I like this because I can see exactly how a high-quality package is built and maintained. I can see which testing suite is used. The location of the tests (in this case it would have been obvious, but still). I can see how the build is implemented and so on.

Finally, as a potential contributor, it’s fairly obvious what I need to do to make sure that the integrity of the project is maintained. I don’t need to worry about having gulp globally installed or mocha. It’s all packaged and referenced. I just need to run npm test to test and npm run lint to lint. Super easy.

Ok, enough gushing. There is a lot you can do with scripts, but here are some of the best use cases:

  • test: Always have tests. This is a reserved script, so you can run it by simply typing npm test
  • lint: Always lint your code. This is not reserved, so you must run npm run lint
  • start: This is another reserved script, add in whatever you need to get a server up and running. If you have a server.js file, you do not need to write the script, but I think you should so it will be clear to future users.
  • prepublish: This is a script that will run when you publish your project. This matters because you do not need to check in transpiled code. For example, the script "prepublish": "npm run build" will run the build script "build": "babel src -d ./", which will compile the ES6 code into the root directory. I also could have it compile to a build directory with "build": "babel src -d ./build", but that would make importing from non-main files a little less intuitive.
  • anything: I have a rule that if I type something more than once a day (in regards to a package) it should go in a script since future users will likely need to do it too. For example, I had a recent project that I would deploy to production. Since I’m not planning on publishing this as an npm package, I just added a deploy script, so npm run deploy would update the server. Sure, it’s not a full one CI build (i.e. if tests are breaking, it would still deploy), but it worked in my situation.

Learn to love scripts. Your life will be better.

All the other things

There’s lots, lots, lots more, but those are better explored on a case-by-case basis. However, there are best practices, so if you are planning on publishing, be sure to add:

  • name: Obviously.
  • description: Obviously.
  • keywords: Make it easier to be discovered
  • bugs: Location for issues (usually on github/project/issues)
  • license: Part of npm init so it’s easy to remember. Lots of options, though.
  • version: I prefer Major.Minor.Patch. Change the first number on breaking changes or major new features. The second on new features, but no breaks. And the third for, well, patches to existing features.

Again, the documentation is great, so peruse it occasionally or before you publish to make sure you have everything.

Putting it together: Making multiple entry points

To return to a goal from the planning stage, how can we use our package.json file to create an npm package with multiple entry points?

It’s actually fairly simple. The trick is to understand that there will need to be a little structure, but it can be simple and clear.

Let’s start with a file structure such as this:

-- src/
    -- index.js
    -- components/
        -- cool.js
        -- awesome.js

In our package.json file set main to be index.js. Then in the build script, set the output to the root of the project.

  "main": "index.js",
  "scripts": {
    "build": "babel src -d ./",
    "prepublish": "npm run build"
  },

That’s all there is to it. Now a user can import the default export along with other components not in the main file:

import yourPackage from 'your-package';
import {cool} from 'your-package/components/cool';

You can, of course, link everything from your main file:

//inside index.js
import {cool, rad} from './components/cool';
import {awesome} from './components/awesome';

export {
  cool,
  rad,
  awesome,
}

But that has the downside of growing out of control rather quickly. Personally, I like having nested, separate functions, but it’s a choice every author must make.

So there you have it. The package.json file is great and there’s tons that you can do with it. I know I just barely scratched the surface. As a recommendation, whenever you have some time to kill, look up your favorite npm package on github and take a few minutes to explore the package.json file and the project structure. It will give you many sources of ideas for your own projects.

Using Sinon to Test Document Functions

That Sweet, Sweet Code Coverage

On my most recent React project, I’ve really been trying to get that 100% code coverage. It hasn’t been bad for the most part, but I hit a wall when a few functions needed to query the DOM.

For the most part, I isolated the DOM as much as possible and as a result most functions were very easy to test. Still, I got to the point that the functionality required knowing exactly where elements were on the page.

Here’s an example:

export class Details extends React.Component {

  scrollIntoView(info) {
    if(!info) {
      return document.querySelector('.details').scrollIntoView();
    }
    const el = document.getElementById(info.id);
    if(this.isElementInViewport(el)) {
      return false;
    }
    return el.scrollIntoView();
  }

  isElementInViewport(el) {
    const rect = el.getBoundingClientRect();
    return rect.top >= 0 && rect.bottom <= window.innerHeight;
  }

}

Here’s the background. There is a large list of events. When an event is clicked, a sidebar slides out with information. Usually when you click on the event, it is visible, right?

Well, since I’m using react-router, whenever the event details are visible there is a route to that state. A user can copy the url and load the page in a state where the sidebar is open but the related event is off the page, which is confusing to visitors. The fix is easy: if the related event (located with the info.id) is not in the viewport, then scroll down to it.

Here’s the hard part: How do I unit test that function?

I don’t want to go the PhantomJS route and render the whole page. I want it to be fast and part of the, well, unit tests.

Wait, A Second

After puzzling through it for a couple of weeks, and soliciting suggestions from a few other devs, it finally hit me: document is just an object like everything else. In other words, when we are doing this:

document.querySelector('.events');

We are merely calling a method on an object. It seems so painfully obvious now that I don’t know what took so long, but here we are.

Now, if I had told myself that a week ago, I wouldn’t have been too impressed. My problem is getting elements into the DOM, not the query part.

What I didn’t get was that I wasn’t looking at the problem the right way. The problem wasn’t getting elements into the DOM (virtual or otherwise), the problem was getting elements from the DOM.

So, now we have two facts:

  1. document is just an object with methods
  2. To make our tests work, we need document.querySelector to return something predictable

At this point, it was clear that the solution was to mock out the return of document.querySelector:

describe('<Details> In View', () => {

  it('will scroll to event details if no info', () => {
    const selector = sinon.stub(document, 'querySelector');
    selector.returns({
      getBoundingClientRect: () => {
        return {
          top: -10,
          bottom: 100
        }
      },
      scrollIntoView: () => {
        return 'scroll to selector';
      }
    })
    expect(Details.prototype.scrollIntoView(null)).toEqual('scroll to selector');
    selector.restore();
  })
})

In this test we create an object with just enough properties and methods to allow a good test of the method.

We happen to be passing null, so we are hitting the first control structure. Testing the other branches is very simple. We can stub out getElementById to check those situations.

A key point is to restore the standard behavior at the end with stub.restore(); otherwise other tests will not be able to mock the same method. The downside to this approach is that if there is an error in that test, the stub will never be restored, and then any other attempts to mock the same method will also throw an error, causing a domino effect where a single error results in 10 more failed tests.

As a final note, I found a lot of value in stubbing for specific arguments. Doing so looks like this:

describe('Some Test', () => {
    const mockEl = {
      getBoundingClientRect: () => {
        return {
          top: -10,
          bottom: 100
        }
      },
      scrollIntoView: () => {
        return 'scroll to id';
      }
    }

    const mockElInViewport = {
      getBoundingClientRect: () => {
        return {
          top: 0,
          bottom: 10
        }
      },
      scrollIntoView: () => {
        return 'scroll to id';
      }
    }

    let stub;
    before(() => {
        stub = sinon.stub(document, 'getElementById');
        stub.withArgs(10).returns(mockEl);
        stub.withArgs(11).returns(mockElInViewport);
    })

    after(() => {
        stub.restore();
    })
});

I’ve left out the actual assertions because the point seems clear: you can stub a method and return different results by argument.
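
For completeness, one such test might look like this (a sketch, reusing the Details component and the stubs above):

it('will not scroll when the element is already in the viewport', () => {
  // id 11 is stubbed to return mockElInViewport, so scrollIntoView bails out.
  expect(Details.prototype.scrollIntoView({ id: 11 })).toEqual(false);
});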

Repeat

Once I had a method for stubbing out document queries, all sorts of problems faded away.

I could test how to act if something is visible.

I could test how much spacing needed to be added for cases of inline styles.

I could test how elements high on the page differ from elements lower on the page.

All in all, another great tool for the toolbox.