Wednesday, 3 August 2016

Things Developers say - But it works for this case

I love my developers, some of the smartest people I've worked with, but every once in a while they say something odd about why a problem isn't a problem.
(I really need to start writing these down when they happen for better posts.)

There's a feature on Skystore where, if you rent a movie but don't play it for 28 days, we'll send you an e-mail reminding you there are only two days left to watch. The subject of the e-mail is something like: "Only 2 days left to watch [movie name]".
As you can imagine, waiting 30 days to test something like this is not feasible, especially for an automation enthusiast such as myself.
Luckily we have ways to trigger these e-mails before the time runs out, which leads to an e-mail with the following subject: "Only 0.0416666666666667 days left to watch".
I didn't bother to look at the code to see how two days turn into less than half a day when you force the service to run, but I got some reassurance that this only happens if you force the service to run early. It's fine, we log a bug on the issue tracker all the same, just not a "fix it now" kind of bug.
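For the curious: a fractional subject line like that usually means the raw time left is being interpolated without rounding. A minimal sketch of how that could happen (an illustration only, not Skystore's actual code; the one-hour figure is my guess):

    # Illustration only: if forcing the service leaves, say, one hour on the
    # clock, dividing the raw seconds by 86400 gives 0.041666... days.
    SECONDS_PER_DAY = 24 * 60 * 60

    def days_left(expires_at, now = Time.now)
      (expires_at - now) / SECONDS_PER_DAY # unrounded Float
    end

    expires_at = Time.now + 3600 # forced run: only one hour remains
    puts "Only #{days_left(expires_at)} days left to watch"
    # => something like "Only 0.0416666... days left to watch"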

A couple days go by, a dev comes back to me:
- You know, that has to do with the time-left calculation; if we wait the full 28 days the e-mail will look fine.
- So you're telling me it's fine because for the current business requirement that calculation happens to give a whole number?
- Yes.
- This value is configurable, so if someone decides it's supposed to be 3 days instead of 2, we change the config value and our customers start receiving mails telling them they have 2.333333333 days to watch?
*blank stare* - Ok we'll take care of it next sprint.
Yes, I did say "3" that many times out loud.

Wednesday, 29 June 2016

Watir Webdriver / Selenium HTTP Authentication - if it's stupid and it works, it ain't stupid

I've taken some time to write a small sanity test pack to run on the web app that uses the platform my team develops.
Although we have very good and thorough automated test coverage, I'm very paranoid and want to make sure we didn't miss anything that breaks basic functionality. Not that it happens often, or in recent memory; I'm just paranoid - it's a good quality for a QA person, right?
So I went about it the correct way: Page Object Model. I have a basic page object with functionality that's relevant across the whole app, like knowing when a page is fully loaded, how to log in, etc. Everything else inherits from this class.
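Something like this, roughly (a minimal sketch, not the real test code; the class names and element locators are made up for illustration):

    # Hypothetical base page object, using watir-webdriver.
    class BasePage
      def initialize(browser)
        @browser = browser
      end

      # generic helpers every page inherits
      def fully_loaded?
        @browser.ready_state == 'complete'
      end

      def login(user, pass)
        @browser.text_field(name: 'username').set user
        @browser.text_field(name: 'password').set pass
        @browser.button(type: 'submit').click
      end
    end

    class PlaybackPage < BasePage
      # page-specific elements and actions live in the subclasses
    end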
First problem: HTTP authentication. Selenium and Watir still (!!!) don't have a proper way of dealing with this, so I went with the usual "https://username:password@url" method - which, by the way, nobody else tells you this: use URI encoding on the username and password.
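The inject_auth helper you'll see in the snippets below isn't shown in the post; here's a guess at what such a helper might look like (the environment variable names are assumptions, and CGI.escape stands in for "URI encoding" of characters like '@' or ':' that would otherwise break the user:pass@host form):

    require 'cgi'

    # Hypothetical helper: stick encoded credentials into the URL.
    def inject_auth(url)
      user = CGI.escape(ENV.fetch('APP_USER'))
      pass = CGI.escape(ENV.fetch('APP_PASS'))
      url.sub(%r{\Ahttps?://}) { |scheme| "#{scheme}#{user}:#{pass}@" }
    end

    inject_auth('https://example.com/')  # => "https://user:p%40ss@example.com/"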
But then I got to the playback page on our app (which is kind of an important page if you're an online movie retailer). For reasons not important to this post, on this page there's an https -> http redirect; the redirect causes our username:password trick to be lost and I got stuck with the authentication popup.
Googling how to get around this today still gives me about the same answers as a few years ago, including this lovely (yet old) Watirmelon post that suggests using the AutoAuth plugin for Firefox. Usually I'd be against this, since it binds my tests to Firefox, but for my particular sanity-check-for-another-team kind of work I don't really mind. However, AutoAuth no longer works with recent versions of Firefox: it stopped working on v39, and currently we're on v47.
After much diving into documentation and Stack Overflow questions, I gave up. I was ready to ask them to turn off HTTP authentication for that page, until a co-worker casually mentioned:
Try opening the http version with username and password in the URL first, at the start of your test, then go to the normal https start page and do the normal test; the browser will remember when you get redirected.
That's a stupid solution... but it works, so it ain't stupid.
Here's how my code changed:

    # before:
    browser.goto inject_auth(url) if url

    # after:
    if url
      # load the normal start page first, so the browser caches the https credentials
      browser.goto inject_auth(url)
      # hit the http version once (deliberately a 404) so the credentials stick for http too
      browser.goto inject_auth(url).sub('https', 'http') + 'playback/fcf7ce62-5695-4d80-be85-662798c9ac7c?token=undefined'
      # now go to the page the test actually starts on
      browser.goto inject_auth(url)
    end

Hacky, I know. I have to load the normal page first because I set up the http load to purposely hit a 404, so it wouldn't cause any unintentional side effects, but that also gives me another redirect; only after this little dance do I get to go to the initial page I intended.
Hope this helps other people trying to automate apps with redirects.

The Watirmelon post I mentioned before solves the problem with NTLM auth, which could be applied to everything; my solution doesn't solve that particular problem, so if you find out how, please let me know.

Monday, 18 April 2016

The importance of using proper naming in your tests

The Skystore project, which I'm the lead tester for, still has a lot of unreleased features. While the reasons for this are beyond the scope of what I intend to talk about here, one particular feature brought home an important, albeit obvious, lesson: naming matters.

One feature that was worked on a while ago, but is still unavailable, is called "playnext". It's a simple feature: when giving out video playback information to the client, the server will include some information about what to suggest the user watch next; some time before the end of the video, the client will display a little box showing that information, with an easily accessible play button.
This feature has several tests along with it, as what should be suggested varies according to different conditions, which all have to be tested to make sure the server is giving out the correct info and structure.
As an automation evangelist I've automated the shit out of this feature: every normal, erroneous, corner and edge case was covered, everything was saved into Git, available to everyone, and it runs every day on every integration environment - good times.

After client applications implemented their side of this feature and found it lacking, it was decided that a new feature would be implemented server side. It would behave just like playnext, but with a different structure that would better satisfy the needs of the clients, giving more complete and extendable information - a "playnext 2.0". Since different teams have different priorities and implementation times, it was decided that the two features would run side by side for a while, to preserve those who had already implemented the first version.
When deciding which tests would be needed, I said to my minion responsible for it: "Look at the playnext tests, copy them, adapt them to the new structure, see if there's anything missing or that needs to be removed, and you should be done quickly."

Later the feature was developed, tested, approved and delivered to the client teams. Then they noticed that there were several missing parts and it was mostly unusable.
How did this pass the tests? It was obviously broken! My minion told me: "I did what you told me, I copied all the tests I found; there weren't many, though."

I looked for playnext in the feature files and found only a couple of tests, but I know I wrote a bunch of them - what happened? I looked into one file I knew had a few of the tests and found the problem:

play next... play next play next....

There it was: in the tests I had named the feature "play next" instead of "playnext", so when searching for the correct name, only a couple of tests had it.
This explains why he thought there weren't many tests.
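A contrived illustration of how easily this bites (the scenario names below are made up, not the real ones):

    # Made-up scenario names, just to show why the search came up short:
    names = [
      'playnext returns the next episode of a series',
      'play next suggestion for a standalone movie',
      'play next box shows up near the credits'
    ]
    names.grep(/playnext/).size # => 1 -- "there weren't many"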

So, when naming and describing your tests and features remember:

  • Describe your tests using the correct feature name.
  • Make sure to use the name consistently (i.e. don't change the name midway).
That's it. After this event the tests were corrected, the missing tests added, and the feature was patched and integrated, and everybody is happy, hoping it will be well received when it's available to a wider audience.

Saturday, 15 August 2015

Using Dropbox to sync saved games (and more) across multiple computers

Over three years ago I made a post on Reddit explaining how to do this; since then Steam introduced cloud saving, so all saved games were synced without a second thought, and I kind of grew used to it. Long gone is the time when I'd back up my saved games on CDs and DVDs for fear of losing all those game hours to a computer failure.
However, I got hit with this problem recently when I bought Banished on GOG. While it's great that they sell all their games DRM free, it's a shame they don't have cloud saving yet, so I went back to the old technique and decided to share it in a more permanent place than Reddit.
Please note that the instructions cover Windows 7, Linux and OSX, not Windows 8 or 10; honestly I don't know if the shell link extension works there, or if there's an alternative, and as of now, I don't care.
The original post:

I started using this technique to sync saved games so I can continue my massive Civilization games when my gf is sleeping at her house. It's the same one I've used for years to sync my Pidgin settings and .vimrc across my Linux machines, but yeah, using it for gaming:
First of all you need [...] Dropbox (why on earth are you not using this yet anyway?), make an account and stuff. If you're on Ubuntu it's available in the software center thingy.
Windows 7 only: Get Shell Link Extension 
(my picture examples are with WoW addons and settings on Windows 7)
Now find what you want to sync and MOVE it to a folder in Dropbox:
In this case I'm moving the Addons folder.
On Windows, drag the folder back to where it was with the right mouse button and select Drop Here -> Symbolic Link; it will pop up an authorization thingy, but it's ok.
On Mac and Linux, pop open a console and use the "ln -s /path/to/dropbox/place/where/you/put/your/stuff /original/path/thing" command.
And you're done! Yey
Bonus: you can share folders with your friends so you can all use each other's computers to play or whatever. For instance, next I'm making sure my gf and I can play WoW on any of our computers with the same addons and addon preferences.
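Back in the present: the same move-then-link dance as a hedged Ruby sketch, for anyone who prefers a script (the paths are made up for illustration; point them at wherever your game keeps its saves):

    require 'fileutils'

    # made-up paths for illustration
    save_dir    = File.expand_path('~/.local/share/SomeGame/Save')
    dropbox_dir = File.expand_path('~/Dropbox/GameSaves/SomeGame')

    FileUtils.mkdir_p(File.dirname(dropbox_dir)) # make sure ~/Dropbox/GameSaves exists
    FileUtils.mv(save_dir, dropbox_dir)          # move the real folder into Dropbox
    FileUtils.ln_s(dropbox_dir, save_dir)        # leave a symlink where the game expects it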

Not that I condone this kind of thing, but if you got your PC games at the special Bay shop and thus don't have Steam, you can use this technique.

Thursday, 26 February 2015

Things Developers say

As a QA person I've been told by developers a myriad of rather amusing things, and had plenty of funny conversations caused by failed tests and the all-too-often "works on my machine".
I've been meaning to compile them, but I'll just start making posts as they happen / as I remember them.

All names and locations have been removed for privacy of those involved.

$Dev: On $ticket you mention that the $behaviour happens when it shouldn't. However on the requirements it says $behaviour is expected.
$Me: Oh, I must have missed that, where does it say so on requirements?
$Dev: Well, some things are common knowledge and shouldn't need to be written anywhere.

$QA2: This doesn't work
$Dev: Of course it works, look *press button on his laptop* it works, you're testing it wrong.
$QA2: But that's your machine.
$Dev: So? It works.
$QA2: "Works on my machine"
$Dev: It works!
$Me: Great, $Dev we should tell $Boss to arrange $Ops to have your laptop sent to $location.
$Dev: Why?
$Me: Your machine will be the new prod server obviously, since it only works on your machine.

At this point $Dev helped $QA2 to find the root cause of the problem, things were indeed broken.

$Me: This doesn't work when you use $resource_type.
$Dev: Can't reproduce, what arguments did you use? Can you please retest in the RC and master branches?
$Me: I used the arguments of a $resource_type, here's the error stack trace on both branches.
$Dev: I used $other_resource_type and it works, are you putting the arguments correctly? Or are you using dummy/broken data?
$Me: I used $resource_type, if I had used bad data I'd have said so in the first place. Just try $resource_type.
*couple minutes go by*
$Dev: Ok this doesn't work, I'll fix it.

Friday, 30 January 2015

Ruby: Custom Errors for Lazy Bums with const_missing

Catching errors is quite a normal thing when writing code; every programming language has some variation of throw/catch - in Ruby's case, it's the elegant begin, rescue and ensure blocks.
At some point in every project you stop catching "other" people's errors and start catching your own.
So you start creating your own error classes. There are a couple of common approaches I've seen over the years:

  • Create an Errors namespace on each class/module that needs it and create the relevant errors there.
  • Create a generic Project::Errors namespace, with its own file, and throw all the errors in there.
I usually go for the second approach, so error classes don't distract me, but that's more of a personal preference anyway.
However, I am a lazy bum and I got tired of specifying my error classes, so I made this Gist as an example:
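(The Gist itself isn't reproduced here; below is a minimal sketch of the same idea, assuming a Project::Errors namespace along the lines of approach two above.)

    # A minimal sketch of the idea (not the original Gist): any constant
    # referenced under Project::Errors that doesn't exist yet gets created
    # on the fly as a StandardError subclass.
    module Project
      module Errors
        def self.const_missing(name)
          const_set(name, Class.new(StandardError))
        end
      end
    end

    # Usage: MissingConfig was never declared anywhere.
    begin
      raise Project::Errors::MissingConfig, 'no config file found'
    rescue Project::Errors::MissingConfig => e
      puts e.message # => "no config file found"
    end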

The main idea behind this is letting you freely type your errors as you need them, both where they're being raised and where they're rescued, and you don't need to worry about them existing, since they'll be created at runtime for you.
This method is probably not very efficient, but I don't care for this case.

Wednesday, 10 December 2014

JUnit test reports not shown on Jenkins for Failed Jobs that never passed


This one cost me a whole morning of work.
I've been reorganizing my Jenkins test jobs so they're friendlier to outside teams. I decided to be cautious and not just delete all the existing jobs; instead I created a branch on my test repository, made a lot of changes, then created new jobs on Jenkins fetching from that branch, and also made a new dashboard that picks up those jobs. Ideally, the new dashboard should show about the same number of tests as the old one.
To my dismay, the new dashboard registered ~200 tests while the old one had over 1k.
All configurations were exactly the same (with regard to publishing results) from job to job, but some had the info right there on the dashboard and some didn't; worse, the ones that didn't had the HTML report just fine.
I noticed that the jobs that didn't have the numbers were failing, while the others were passing/pending.
Digging through the internet yielded nothing particularly useful; I knew the problem lay somewhere between Jenkins and the JUnit test reports.
I ran XML + XSD validators on the generated test reports, but nothing was wrong.
After a coworker joined the struggle, he noticed that if we forced one of the failed builds to pass, all of a sudden all the info would magically show up - including the info from previously not-shown builds.
I started asking folks I knew used a similar setup as I do and got this fantastic insight (paraphrasing):

If you're using Jenkins ver > 1.581 you're screwed. They've split the JUnit reporting into a plugin; it breaks like that after version 1.2, but Jenkins forces you to update it after version 1.581.
So I had two options: either go through my 44 jobs and force them to pass once, and then go through them again to revert, OR downgrade Jenkins and the JUnit plugin.
Once again I'm reminded of a sentence I hate: "If it works, don't update it".
In case that version of Jenkins or the plugin aren't downloadable anymore, I've made a mirror on Dropbox.