grep, in PowerShell

Here is a sample PowerShell script that I use for finding text in files, having “grown up” with a more Unix-like syntax. I know, this isn’t exactly a clone of grep’s functionality, but it gets me closer than having to remember exactly how to wrangle PowerShell’s Select-String cmdlet to my liking. Since I’m normally looking for things recursively, my script does that automatically.

param($filename, $target)
ls -r $filename | sls $target

I call that by using an alias, set in my $PROFILE

Set-Alias grep c:\code\ps\Grep.ps1

Then I can just use a command like one of these

c:\> grep \code\*.ps1 version
c:\> grep *.txt hobbits
c:\> grep $HOME alpharetta

Also available, with any changes since this was published, in this repo.

Using datatables.js with Get()

It’s been a while since I’ve posted in the Software Development category, but here’s the solution to something that’s been a thorn in my team’s side for a few days. We’re working with datatables.js, which provides a nice interface to tabular data. We’re using it to Get() data from an API we’re building, and out of habit we were just returning data in a JSON object (btw, if you’re not using Swagger, start doing so now):

"normal" JSON API response

Unfortunately, this just kept showing us a frustrating “No data available in table” message.

After much digging, we realized that datatables really wanted the data in a completely different format:

API Response format for datatables.js

Notice that each data “row” is now basically an array, and there is just one “data” element in the JSON.


What Time to Test?

I’ve been working on a new feature with a few other devs, and we’re eager to get it done and into our master branch so that it can be deployed sooner rather than later. To that end, after dinner and some shopping last night, I picked up my laptop and thought I’d get a bit more work in while my wife was busy on a project of her own.

Since we’ve got multiple people on this branch, and multiple teams across the app, I try to always start off the same way: fetch from our ‘origin’ repo on github, merge in any changes to this branch, merge in any changes to master, then run all the unit tests. Since all our devs subscribe to the same philosophy of fixing failing tests quickly, any problems that come up are usually caused by something new, be it in the branch or someone else’s recent changes to master.

I was surprised, then, to find a failing test in a section of code that wasn’t new. I checked our CI server, and it showed green on previous builds. How had this cropped up?

I pointed it out to another dev on my team, as this was a section of code I wasn’t as familiar with, and he quickly realized what the problem was. We were fortunate enough to have found it purely because of timing — if we’d finished eating dinner more quickly, we may not have found it!

In our applications, all date/time values are stored in UTC. In one place in our backend code, where we decide which course of action to take based on how many days overdue a task is, a dev had accidentally used DateTime.Today.Date instead of DateTime.UtcNow.Date. During the day, no problem. In this case, I had happened to run the unit tests after midnight UTC but before midnight local time, and UTC’s “now” was no longer our “today.” The app was trying to send a “30 days overdue” message when in reality the task was still only 29 days overdue.

Two lessons. The first, of course, is to have good unit tests that can be run easily, quickly, and often. The second — the one we learned last night — is that it’s a really good idea to run them at various times throughout the day, not just while people are in the office.

Atlanta Code Camp 2013

This past weekend, over 400 developers, testers and others involved in software development got together on the campus of SPSU at the Atlanta Code Camp for a day of training and networking. Code Camps are community events, volunteer-organized and staffed, with sessions presented for free by local (or not-quite-local) individuals. This was my third year to attend, and this year I was also selected as a speaker; I presented a session entitled Keeping Your Sanity With User Interface Automation.

From a quick hand-raise survey, most of the attendees at my session were developers, as expected, with a handful of testers and one architect. About ten percent of the room indicated that they had tried some form of user interface automation, with varying levels of success.

We discussed some of the common reasons for frustration, starting with the brittleness of tests – tests that work one day but fail the next – which is often caused by changes in the structure of pages and a too-specific strategy for locating elements. There is no single “silver bullet” answer; we talked about using various forms of XPath, having testers and developers work together on a good strategy for element IDs, and using CSS selectors.

Dynamic content is something we all have come to know and love as users of the web, but it throws some wrinkles into automation and testing. I talked about developing tests that wait for the right event rather than sleeping for a specific time period.

Lastly, we spent some time talking about data setup and cleanup, the need for non-dependent tests, and the use of test runner and/or tool capabilities.

The code and presentation slides I used are available on github, and I welcome feedback from those who attended the session via surveyMonkey and SpeakerRate.

I’d like to thank all the organizers and volunteers for putting on another great event this year. This was a lot of fun, and I hope to see you at next year’s event.

The First Three Months

Since starting my new job, I’ve often been asked how I like it and what I’m doing. The short answer is that I’m really enjoying it and that this is the greatest company I’ve ever worked for. I’m busy meeting people, talking and writing about testing, and learning more new things each week than I have in a long time. Here’s a quick list of my output (I’m not even going to try to list everything I’ve learned or all the great people I’ve met):

That’s just the first three months. Exciting times, indeed.

Specification By Example (ATDD at AQAA and ATLScrum)

Note: Andrew presented this workshop a second time last night. As with any presentation or workshop, it has evolved slightly over time. I’m incorporating my additional notes into this post. -Sjv 23-May-2013

Over the past week or so, Andrew Fuqua (@andrewmfuqua) has given workshops on Acceptance Test Driven Development for both the Atlanta Quality Assurance Association — an organization that, I’m embarrassed to say, I didn’t know existed until I heard about it via Twitter (of course) — and the Atlanta Scrum Users Group.

Andrew comes to the topic from his role as an Agile Coach and emphasized communication, communication, communication. ATDD, as we discussed this evening, is all about getting “the three amigos” — Product Owner (a role he asked me to fill for the AQAA workshop), Developers and Testers — together to communicate. The intent is to discuss the details of requirements (often written as User Stories these days) and distill them down into a minimum set of examples in order to provide clarity. Another, perhaps better, term would be Specification By Example.

Here are a few of my scribbled notes from the two evenings:

  • Why does software have bugs? Many reasons, but most often because of miscommunication between humans – especially around requirements.
  • We humans have a tendency to assume ill intent (why is this?) where often misunderstanding is more likely the cause.
  • A feature or product request often starts with some sort of concrete example, which gets thrown away as more general requirements documents are written. Let’s get back to examples as part of the requirements.
  • Business rules are more likely to be stable than user interfaces. Therefore examples should be in business language.
  • Cognitive dissonance (discussion between people with different viewpoints, different skill sets, different ways of thinking) can facilitate exploration and improve understanding. Involve product owner, developer, tester, business analyst, customer if possible.
  • Don’t get lost in the details.
  • We all have too many meetings. Don’t create another one for this discussion/distillation activity. Hold a specification workshop instead.
  • Discuss, Distill, Develop, Demo (explore). Lather, rinse, repeat.
  • Distill the list of examples down to the bare minimum, a minimal set of both “passing” and “failing” examples. One example per business rule.
  • There is value in the discussion and distillation even if the examples are never codified into automated testing. Communication is the goal.
  • ATDD != TDD. TDD guides design of code. ATDD guides clarification of requirements.
  • ATDD is done before — and throughout — coding/testing.
  • Best captured in some sort of living document (no specific tool recommended, but a wiki was mentioned).
  • The results should be owned by the product owner, not developers or testers.
  • It’s not about testing, it’s about communication.
  • “Don’t invest in something that nobody gets value from.” – Claire
  • “Design a level of testing that is commensurate with risk tolerance. Don’t dabble in automation. Do it well to keep it – or toss it.” – Sellers
  • Understand Brian Marick‘s Agile Testing Quadrant model. (see this and this) – Alex

And some of the resources Andrew mentioned:

The discussions did also touch on the topic of test automation. Again, no specific tools or technologies were covered or recommended, but a couple of good points were raised. I was especially pleased to see many heads nodding in agreement when Andrew said that “test automation is software development, and should be treated as such.”

This was a good workshop, and I’d like to thank Andrew for his time (and for letting me help where I could be of assistance).

Book Recommendations by a Dozen

I was fortunate to spend two days last week with some very smart people as my company hosted a completely non-company-specific, non-tool-specific, non-technology-specific peer conference; twelve people in a room discussing the craft and profession of software testing, what changes we see happening and would like to see, and how we might be able to influence them.

On day two the question was raised: “What are you reading, or what do you recommend?” The following is the list that was produced. It’s completely unedited; everyone was welcome to post their recommendations and talk a bit about them. I am not endorsing all of these; many I’ve not read and a few I’d not even heard of before. Heck, there wasn’t agreement among all the participants on every book; some prompted quite a discussion.

Note to the participants – I went from our hand-written notes on the wall; if I’ve mis-read something or found the wrong book, please let me know and I’ll update this.

disclosure: all these links are tagged with my Amazon Affiliate code — if you purchase through these links I’ll get a small percentage (which will undoubtedly go toward more book purchases), and I’ll be able to see what books were purchased (but not by whom; Amazon respects your privacy at least that much).


Why not sleep?

In my last post, about test automation, I wrote about using sleep: “Bad, bad, bad. Don’t do this.” But why not? Well, the way I was doing it there — until d.exists? — really wasn’t that horrible. What you really want to stay away from, and what I’ve seen people start out with, is sleep with a hard-coded time value. “But I know the app’s going to take a few seconds to be ready,” they say, “so I just put in a 5-second delay.”

Let’s look at — in human terms — what we’re talking about. Imagine for a moment that your extremely strict boss wants to know what colour taxi he’ll be taking to the airport, and sends you outside to look.
Yellow Cab
He knows the car’s due to arrive in the next five minutes, so his instruction is a simple one: “Go outside and wait five minutes with your eyes closed. At the end of that time, open your eyes. Look at the car at the curb and come tell me what colour it is.”

Can you see any problems with that task? I see two right away. First, it’s potentially a waste of your time and his. What are you to do if the taxi arrives before the five-minute mark? Nothing. He expressly told you to wait five minutes, then look at the car. If the driver’s having a good day and shows up early, too bad. You wait uselessly at the curb; he waits impatiently for an answer that you could have delivered earlier.

Secondly, he gave you no instructions on what to do if traffic is bad and the taxi isn’t there on time. You’ll have to go back and deliver a non-answer. Next time, he may decide that five minutes wasn’t enough, so he’ll try giving you a ten-minute wait time.

This is decidedly non-optimal.

“But wait,” you say to your boss, “why don’t I just wait until I see a taxi, then come and let you know?” That is what you’d say, right? Of course it is. That’s exactly what we want our computers to say, too.

That’s the purpose of until. Give the computer a specific condition, and the flexibility to wait just long enough until that condition becomes true. In my previous Ruby/Watir example, we waited until an element existed, or until one contained a particular text string. Other language/testing frameworks have similar syntax. Using WebDriver, for example, you’d use wait.Until()

Until our computers get a bit more intelligent and understand what we mean rather than what we say (and given the potential state of computer intelligence maybe that’s not a good idea), we need to be explicit in our instructions while making those instructions flexible enough to work well in the real world.

Web Automation, A First Start

written a while back, not posted until today; this was about a year or so ago

My software testing (QA) team has grown over the years, often at a frantic pace. Like many others’, I’m sure, there’s always been more work to do than time to do it in, and so saw-sharpening has never gotten high enough on the priority stack. Instead the answer was to work more hours or hire more people, and continue with manual testing. As you can imagine, that becomes unsustainable and stressful. At one point, while the development team was in the midst of a large re-write project, I carved out a bit of time to get started with something that we had been talking about for years: automating some of our testing. This is the beginning of that tale, a story that — for that team — is not yet complete.

Where to start? The products under development are web applications, all custom-built using Ajax and jQuery, running on home-grown web servers and not created with automation in mind at all. One of the developers had even told me that there was “no way” I’d be able to drive the app from any tool or framework. The developers were as busy as ever, so I started poking around to see if I could find any methods that could work for us.

My goals were a bit loose at this point, but I knew that we needed a way to drive our application and check results for regression testing. We would need to be able to do it across multiple browsers, and it would need to be done — at least initially — by the testing team without much assistance from the developers. Oh, and because this was still a side project, it needed to be free.

I looked at several tools and combinations of tools and decided to start with a combination of Ruby and Watir. My decision was based, I will admit, partially on Watir’s documentation and claims of ease but also on the fact that I wanted an excuse to learn Ruby. Let’s take a look at a few of those first faltering steps together.

The first difficulty was that of determining when the application was actually ready for interaction. This application never loads a new page or changes the URL at all. My first attempts were admittedly ugly and involved sleeping in loops waiting for elements to be created. Bad, bad, bad. Don’t do this. On the upside, here at least the developers had added id properties that I could use to locate the elements I needed on the login page.

require 'watir-webdriver'
username = 'AUserName'
password = 'APassword'
b = Watir::Browser.new :ie
puts 'loading page'
b.goto 'http://ourapp.example/'  # placeholder URL; the real one isn't shown here
puts 'logging on as ' + username
b.text_field(:id => 'UserNameText').set username
b.text_field(:id => 'PasswordText').set password
b.checkbox(:id, 'agreeCheckBox').set
b.button(:id, 'LoginButton').click
d = b.div :id, 'documents'
until d.exists?
  # puts 'waiting for documents page'
  sleep 1
end
puts 'login complete'

The next step was to navigate to a specific page. Here I’m performing some user actions and then waiting for a particular item to become available. This is a slightly better method than the sleep above: looping until a specific element is set to a known text string. Error handling keeps the loop alive until that condition becomes true, which makes it a bit longer, and perhaps uglier, but at least a bit more informative. My debugging-via-stdout lines remain for your entertainment:

# go to Passenger List
# click the menus (locators reconstructed; the original lines were garbled)
b.element(:id, 'navPassenger').click
b.element(:id, 'PassengerList').click
# wait for the page
d = b.div(:class, 'pageTitle')
begin
  until 'Passenger List'.eql?(d.text)
    sleep 1
  end
rescue Watir::Exception::UnknownObjectException
  retry
rescue Selenium::WebDriver::Error::ElementNotVisibleError
  retry
rescue Selenium::WebDriver::Error::StaleElementReferenceError
  retry
end
b.select_list(:id => 'jumpPerPage').select '100'
puts 'List displayed: ' + d.text
d = nil

At this point I was in a place to actually look at what’s important – to make sure that all the rows displayed had a memberID. There’s nothing here to see if the data’s actually correct, just to see that the app’s not returning any records that don’t have data in a particular column. This does assume that this page has all the data displayed, that there’s no pagination. Also, looping through the rows via the .count method isn’t speedy, but for this proof-of-concept that wasn’t a concern.

# Get the rows of the table (assuming there is just one dataTable)
table_trs = b.div(:class, 'dataTable').table.tbody.trs
rows = table_trs.count
puts 'total rows on page: ' + rows.to_s

# Find how many rows have data in the 5th cell (index 4)
rows_with_data = table_trs.count { |tr| tr[4].text != '' }
puts 'rows with MemberID: ' + rows_with_data.to_s

Here’s where I began to run into the second challenge, one that is pretty prevalent not only in this application’s code but wherever developers don’t have testability as an up-front design priority. Since the app’s pages are written to be as multi-purpose as possible — from the developers’ point of view — there are many places where they “just know” the structure of the elements and fill them with the correct data, so the elements aren’t given id properties at all. That means that I also had to know the structure and code to it, and that when the application changes, this test code will need to change as well. This application also lets the user determine which columns are shown, which will further complicate matters, to say the least. Suffice it to say, there’s a lot of room for improvement here.

Note that there’s no actual testing (or checking) being done here; no desired outcomes are defined and the script doesn’t report any sort of failure or success, it just shows some information about what the application’s displaying. At this point, though, I had enough to show my testing team what could be done if they’d dig into the world of automation, and to show our management what could be done if they’d provide some time for us to do so. It was also pretty obvious that I needed to get the development team involved, discussing ways of making the application more testable from the get-go, using elements, controls, and properties in a more well-defined and consistent manner. We also started looking at other tools, expanding our goals and perhaps even adding some funding.

As they say on TV, “to be continued…” — though the rest of that team’s story will be written by others.

Moored Once More

Forms have been completed, anchors are down, and the next phase of my career has begun. This week, I signed aboard my new post as part of Telerik’s software testing tools group.

at the office in Austin, TX

The department I’m joining is in Austin, Texas. No, we’re not moving. I’ll be working out of my home office (and Atlanta Hartsfield-Jackson International Airport). As Telerik’s Test Studio Evangelist, I’ll be engaging with the testing community: attending and speaking at conferences, visiting customer sites, blogging, webcasting, and the like, all with the purpose of raising awareness of methods & techniques of software test automation.

Most of all, I’m looking forward to learning a lot. Fortunately I’m going to be surrounded by some of the best people in software development & testing.

The adventure begins!