Using Node.js with Docker

I’ve looked at Docker a few times, but I never had much of a need for it in my workflow. Now, though, I’m more interested: we’ve got a few microservices that need to be running for our main application. Each runs on Node.js, and the “simple” method is to open a new terminal window for each and start them up manually. Of course, that’s something I forget to do until the main app’s tests show failures.

Last week, after watching Scott Hanselman’s Computer Stuff They Didn’t Teach You video #8, Containers? So What? Docker 101 Explained, I got to thinking about better ways to keep those Node servers running.

The first thing I did was to put together a very simple Node application, using Express to serve static HTML files. To follow along, install Node.js, then open PowerShell in Windows Terminal (or your favorite console) and create a basic server app:

md node-docker
cd node-docker
npm init
npm install express --save

This creates a bare-bones Node project, to which I added a server.js that serves files from a specific directory. In this case, I created a \content folder at the root of my drive and put a simple index.html file there.

var express = require('express');
var app = express();
var port = process.env.PORT || 5000;

// serve index.html by default, and allow extension-less URLs
var options = {
  index: "index.html",
  extensions: ['htm', 'html']
};

// map the web server's root ('/') to the content directory;
// '/content' resolves to \content on the current drive in Windows,
// or to /content inside a Linux container
app.use('/', express.static('/content', options));

var server = app.listen(port, function () {
  var host = server.address().address;
  console.log('listening at http://%s:%s', host, port);
});

We can then start the little server using the command node server.js. Opening a browser to http://localhost:5000/ displays the index file from the content directory; that’s done by the app.use() call, which maps the root of the web server ('/') to the \content directory and serves files from there. Ctrl+C in the terminal will stop the server.

Next, install Docker. We’ll need to create a file named Dockerfile containing the instructions for building an image.

FROM node:12
# node:12 is a Linux image, so paths inside the container use forward slashes
WORKDIR /temp/dockerwork
# copy and install dependencies first, so this layer caches between builds
COPY package*.json ./
RUN npm install
# copy in the rest of the app
COPY . .
CMD ["node","server.js"]
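
It’s also worth dropping a .dockerignore file next to the Dockerfile, so the COPY . . step doesn’t drag a locally-installed node_modules folder into the image. A minimal version:

node_modules
npm-debug.log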

Docker’s build command will use that to build an image:

docker build --tag nodedock .

You can use docker images to see a list of all the images you have on your system; at this point you should see at least your newly-built nodedock app and the Node version 12 base image.

docker's IMAGES command

If you were to start up that image now, the app would run but it would have no data. If you watched the video I mentioned above and look at the Dockerfile, you’ll see that the image contains only the app, not the \content directory. I created it this way on purpose, because I want a server app running but I want to be able to create data (HTML files) outside the app. This means we need to give our container (the running image) access to our hard disk.

docker run -p 3000:5000 -v /c/content:/content --name nodedock --rm nodedock

This command
* runs our nodedock image in a container named nodedock (--name),
* maps the host’s port 3000 to the container’s port 5000 (-p),
* creates a volume (-v) that points to our c:\content directory and lets our Node app see it as /content,
* and removes the container when we stop it running (--rm).

Docker Desktop

Now, with that image running in a container, we can open a browser to http://localhost:3000/. We can open the Docker Desktop application to see all our containers, look at logs and other stats on each, and stop them when we don’t need them. If we don’t use the --rm flag, stopped containers will stick around.
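
The same things can be done from the command line:

docker ps                # list running containers
docker logs nodedock     # see our app's console output
docker stop nodedock     # stop (and, thanks to --rm, remove) the container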

Code behind this blog post can be found in this GitHub repo.

When Chrome Auto-Updates

Using Selenium WebDriver or similar frameworks to “drive” a browser, usually for UI or end-to-end test automation, you may occasionally get this message:

session not created: This version of ChromeDriver only supports Chrome version 83 (SessionNotCreated)
Exception doesn't have a stacktrace

This means that, perhaps without you even realizing it, an update to Chrome has been installed on your computer. It’s time to update to a matching version of chromedriver:

1. Use Chrome’s Help / About menu (chrome://settings/help) and note your currently-installed version number.

2. Search for [chromedriver downloads] (which will most likely point you to chromedriver.chromium.org/downloads) and download the version appropriate for your Chrome and your computer.

3. Put it in the right place, replacing all existing copies of chromedriver.exe. Where exactly? Open a PowerShell window and enter ls -path c:\ -include chromedriver.* -file -recurse -ErrorAction SilentlyContinue
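
To confirm the versions now match, chromedriver can report its own version, and Chrome’s version can be read from the executable’s metadata. (The Chrome path below assumes a default per-machine install; adjust it if Chrome lives elsewhere on your system.)

chromedriver --version
(Get-Item 'C:\Program Files\Google\Chrome\Application\chrome.exe').VersionInfo.ProductVersion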

grep, in PowerShell

Here is a sample of a PowerShell script that I use for finding text in files, having “grown up” with a more Unix-like syntax. I know this isn’t exactly a clone of grep‘s functionality, but it gets me closer than having to remember exactly how to wrangle PowerShell’s Select-String command to my liking. Note that I’m normally looking for things recursively, so my script does that automatically.

Param(
  [string]$filename,   # file spec to search, e.g. *.txt
  [string]$target      # the text (or regex) to look for
)
# ls is PowerShell's alias for Get-ChildItem, sls for Select-String
ls -r $filename | sls $target

I call that by using an alias, set in my $PROFILE:

Set-Alias grep c:\code\ps\Grep.ps1

Then I can just use a command like one of these:

c:\> grep \code\*.ps1 version
c:\> grep *.txt hobbits
c:\> grep $HOME alpharetta
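
One difference from Unix grep to keep in mind: Select-String matches case-insensitively by default. If you want grep’s case-sensitive behavior, add the -CaseSensitive switch to the last line of the script:

ls -r $filename | sls $target -CaseSensitive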

Also available, with any changes since this was published, in this repo.

Using datatables.js with Get()

It’s been a while since I’ve posted in the Software Development category, but here’s the solution to something that’s been a thorn in my team’s side for a few days. We’re working with datatables.js, which provides a nice interface to tabular data. We’re using it to Get() data from an API we’re building, and out of habit we were just returning the data as a plain JSON object (by the way, if you’re not using Swagger, start doing so now):

"normal" JSON API response

Unfortunately, this just kept showing us a frustrating “No data available in table” message.

After much digging, we realized that datatables really wanted the data in a completely different format:

API Response format for datatables.js

Notice that each data “row” is now basically an array, and there is just one “data” element in the JSON.
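
Roughly, and with field names invented for illustration, we had been returning rows as objects, something like:

[
  { "id": 1, "name": "Frodo" },
  { "id": 2, "name": "Sam" }
]

while datatables.js, by default, looks for a single data element whose rows are arrays of cell values:

{
  "data": [
    [ "1", "Frodo" ],
    [ "2", "Sam" ]
  ]
}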

What Time to Test?

I’ve been working on a new feature with a few other devs, and we’re eager to get it done and into our master branch so that it can be deployed sooner rather than later. To that end, after dinner and some shopping last night, I picked up my laptop and thought I’d get a bit more work in while my wife was busy on a project of her own.

Since we’ve got multiple people on this branch, and multiple teams across the app, I try to always start off the same way: fetch from our ‘origin’ repo on GitHub, merge in any changes to this branch, merge in any changes to master, then run all the unit tests. Since all our devs subscribe to the same philosophy of fixing failing tests quickly, any problems that come up are usually caused by something new, be it in the branch or someone else’s recent changes to master.

I was surprised, then, to find a failing test in a section of code that wasn’t new. How had that cropped up? I checked our CI server, and it showed green on previous builds.

I pointed it out to another dev on my team, as this was a section of code I wasn’t as familiar with, and he quickly realized what the problem was. We were fortunate enough to have found it purely because of timing — if we’d finished eating dinner more quickly, we may not have found it!

In our applications, all date/time values are stored in UTC. In one place in our backend code, where we decide which course of action to take based on the number of days a task is overdue, a dev had accidentally used DateTime.Today.Date instead of DateTime.UtcNow.Date. During the day, no problem. In this case, though, I happened to run the unit tests after midnight UTC but before midnight local time, and UTC’s “now” was no longer our “today.” The app was trying to send a “30 days overdue” message when in reality the task was still only 29 days late.
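
A minimal sketch of the kind of bug, using a hypothetical helper (our real code differs; due dates here are assumed to be stored in UTC, as described above):

// hypothetical helper; dueDateUtc is stored in UTC
static int DaysOverdue(DateTime dueDateUtc)
{
    // the bug: DateTime.Today is the machine's *local* date, so this
    // result changes at local midnight rather than at UTC midnight
    // return (DateTime.Today - dueDateUtc.Date).Days;

    // the fix: compare UTC "now" against the UTC due date
    return (DateTime.UtcNow.Date - dueDateUtc.Date).Days;
}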

Two lessons. The first, of course, is to have good unit tests that can be run easily, quickly, and often. The second, the one we learned last night, is that it might be a really good idea to run them at various times throughout the day, not just while people are in the office.

Atlanta Code Camp 2013

This past weekend, over 400 developers, testers, and others involved in software development got together on the campus of SPSU at the Atlanta Code Camp for a day of training and networking. Code Camps are community events, volunteer-organized and staffed, with sessions presented for free by local (or not-quite-local) individuals. This was my third year to attend, and this year I was also selected as a speaker; I presented a session entitled Keeping Your Sanity With User Interface Automation.

From a quick hand-raise survey, most of the attendees at my session were developers, as expected, with a handful of testers and one architect. About ten percent of the room indicated that they had tried some form of user interface automation, with varying levels of success.

We discussed some of the common reasons for frustration, starting with the brittleness of tests – tests that work one day but fail the next – which is often caused by changes in the structure of pages and a too-specific strategy for locating elements. There is no single “silver bullet” answer; we talked about using various forms of XPath, having testers and developers work together on a good strategy for element IDs, and using CSS selectors.

Dynamic content is something we all have come to know and love as users of the web, but it throws some wrinkles into automation and testing. I talked about developing tests that wait for the right event rather than sleeping for a specific time period.

Lastly, we spent some time talking about data setup and cleanup, the need for non-dependent tests, and the use of test runner and/or tool capabilities.

The code and presentation slides I used are available on GitHub, and I welcome feedback from those who attended the session via SurveyMonkey and SpeakerRate.

I’d like to thank all the organizers and volunteers for putting on another great event this year. This was a lot of fun, and I hope to see you at next year’s event.

The First Three Months

Since starting my new job, I’ve often been asked how I like it and what I’m doing. The short answer is that I’m really enjoying it and that this is the greatest company I’ve ever worked for. I’m busy meeting people, talking and writing about testing, and learning more new things each week than I have in a long time. Here’s a quick list of my output (I’m not even going to try to list everything I’ve learned or all the great people I’ve met):

That’s just the first three months. Exciting times, indeed.

Specification By Example (ATDD at AQAA and ATLScrum)

Note: Andrew presented this workshop a second time last night. As with any presentation or workshop, it has evolved slightly over time. I’m incorporating my additional notes into this post. -Sjv 23-May-2013

Over the past week or so, Andrew Fuqua (@andrewmfuqua) has given workshops on Acceptance Test Driven Development for both the Atlanta Quality Assurance Association — an organization that, I’m embarrassed to say, I didn’t know existed until I heard about it via Twitter (of course) — and the Atlanta Scrum Users Group.

Andrew comes to the topic from his role as an Agile Coach and emphasized communication, communication, communication. ATDD, as we discussed this evening, is all about getting “the three amigos” — Product Owner (a role he asked me to fill for the AQAA workshop), Developers and Testers — together to communicate. The intent is to discuss the details of requirements (often written as User Stories these days) and distill them down into a minimum set of examples in order to provide clarity. Another, perhaps better, term would be Specification By Example.

Here are a few of my scribbled notes from the two evenings:

  • Why does software have bugs? Many reasons, but most often because of miscommunication between humans – especially around requirements.
  • We humans have a tendency to assume ill intent (why is this?) where often misunderstanding is more likely the cause.
  • A feature or product request often starts with some sort of concrete example, which gets thrown away as more general requirements documents are written. Let’s get back to examples as part of the requirements.
  • Business rules are more likely to be stable than user interfaces. Therefore examples should be in business language.
  • Cognitive dissonance (discussion between people with different viewpoints, different skill sets, different ways of thinking) can facilitate exploration and improve understanding. Involve product owner, developer, tester, business analyst, customer if possible.
  • Don’t get lost in the details.
  • We all have too many meetings. Don’t create another one for this discussion/distillation activity. Hold a specification workshop instead.
  • Discuss, Distill, Develop, Demo (explore). Lather, rinse, repeat.
  • Distill the list of examples down to the bare minimum, a minimal set of both “passing” and “failing” examples. One example per business rule.
  • There is value in the discussion and distillation even if the examples are never codified into automated testing. Communication is the goal.
  • ATDD != TDD. TDD guides design of code. ATDD guides clarification of requirements.
  • ATDD is done before — and throughout — coding/testing.
  • Best captured in some sort of living document (no specific tool recommended, but a wiki was mentioned)
  • The results should be owned by the product owner, not developers or testers.
  • It’s not about testing, it’s about communication.
  • “Don’t invest in something that nobody gets value from.” – Claire
  • “Design a level of testing that is commensurate with risk tolerance. Don’t dabble in automation. Do it well to keep it – or toss it.” – Sellers
  • Understand Brian Marick‘s Agile Testing Quadrant model. (see this and this) – Alex

And some of the resources Andrew mentioned:

The discussions also touched on the topic of test automation. Again, no specific tools or technologies were covered or recommended, but a couple of good points were raised. I was especially pleased to see many heads nodding in agreement when Andrew said that “test automation is software development, and should be treated as such.”

This was a good workshop, and I’d like to thank Andrew for his time (and for letting me help where I could be of assistance).

Book Recommendations by a Dozen

I was fortunate to spend two days last week with some very smart people as my company hosted a completely non-company-specific, non-tool-specific, non-technology-specific peer conference: twelve people in a room discussing the craft and profession of software testing, what changes we see happening and would like to see, and how we might be able to influence them.

On day two the question was raised: “what are you reading or do you recommend?” The following is the list that was produced. It is completely unedited; everyone was welcome to post their recommendations and talk a bit about them. I am not endorsing all of these; many I’ve not read and a few I’d not even heard of before. Heck, there wasn’t agreement among all the participants on every book; some resulted in quite a discussion.

Note to the participants – I went from our hand-written notes on the wall; if I’ve mis-read something or found the wrong book, please let me know and I’ll update this.

disclosure: all these links are tagged with my Amazon Affiliate code — if you purchase through these links I’ll get a small percentage (which will undoubtedly go toward more book purchases), and I’ll be able to see what books were purchased (but not by whom; Amazon respects your privacy at least that much).

Enjoy!

Why not sleep?

In my last post, about test automation, I wrote about using sleep: “Bad, bad, bad. Don’t do this.” But why not? Well, the way I was doing it there – until d.exists? – really wasn’t that horrible. What you really want to stay away from, and what I’ve seen people start out with, is sleep with a hard-coded time value. “But I know the app’s going to take a few seconds to be ready,” they say, “so I just put in a 5-second delay.”

Let’s look at — in human terms — what we’re talking about. Imagine for a moment that your extremely strict boss wants to know what colour taxi he’ll be taking to the airport, and sends you outside to look.
Yellow Cab
He knows the car’s due to arrive in the next five minutes, so his instruction is a simple one: “Go outside and wait five minutes with your eyes closed. At the end of that time, open your eyes. Look at the car at the curb and come tell me what colour it is.”

Can you see any problems with that task? I see two right away. First, it’s potentially a waste of your time and his. What are you to do if the taxi arrives before the five-minute mark? Nothing. He expressly told you to wait five minutes, then look at the car. If the driver’s having a good day and shows up early, too bad. You wait uselessly at the curb; he waits impatiently for an answer that you could have delivered earlier.

Secondly, he gave you no instructions on what to do if traffic is bad and the taxi isn’t there on time. You’ll have to go back and deliver a non-answer. Next time, he may decide that five minutes wasn’t enough, so he’ll try giving you a ten-minute wait time.

This is decidedly non-optimal.

“But wait,” you say to your manager, “Why don’t I just wait until I see a taxi, then come and let you know.” That is what you’d say, right? Of course it is. That’s exactly what we want our computers to say, too.

That’s the purpose of until. Give the computer a specific condition, and the flexibility to wait just long enough until that condition becomes true. In my previous Ruby/Watir example, we waited until an element existed, or until one contained a particular text string. Other language/testing frameworks have similar syntax. Using Selenium WebDriver, for example, you’d use wait.Until():
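
Here’s a minimal C# sketch (the element ID is invented for illustration, and it assumes the Selenium.WebDriver and Selenium.Support packages):

using System;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;
using OpenQA.Selenium.Support.UI;

class TaxiCheck
{
    static void Main()
    {
        IWebDriver driver = new ChromeDriver();
        var wait = new WebDriverWait(driver, TimeSpan.FromSeconds(10));

        // waits only as long as needed, up to the 10-second limit;
        // returns as soon as the element exists -- no fixed sleep
        IWebElement taxi = wait.Until(d => d.FindElement(By.Id("taxi")));
        Console.WriteLine(taxi.Text);

        driver.Quit();
    }
}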

Until our computers get a bit more intelligent and understand what we mean rather than what we say (and given the potential state of computer intelligence maybe that’s not a good idea), we need to be explicit in our instructions while making those instructions flexible enough to work well in the real world.