Code Coverage for JS UI/UX Applications

What is code coverage?

Code coverage is a measurement of how many lines, blocks, or functions of your code are executed while the automated tests are running.

Code coverage is collected by using a specialized tool to instrument the code with tracing calls, and then running a full set of automated tests against the instrumented product. A good tool will give you not only the percentage of the code that is executed, but will also let you drill into the data and see exactly which lines of code were executed during a particular test.

It is one of the QE best practices.
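As a tiny illustration (hypothetical code, not from the project): if the tests only ever exercise one branch of a function, an instrumented run reports the other branch as uncovered.

```javascript
// A function with two branches. If the test suite only ever calls
// classify() with a non-negative number, the second return statement
// never runs, and an instrumented build reports that line as uncovered.
function classify(n) {
  if (n >= 0) {
    return "non-negative"; // executed by the call below
  }
  return "negative"; // never executed -> shows up as an uncovered line
}

// The only "test" in the suite: it exercises one branch out of two,
// so branch coverage for classify() would be reported as 50%.
console.log(classify(5)); // prints "non-negative"
```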

Problem:

It is pretty straightforward to calculate code coverage when the development code and the test code use the same technology: for example, when both are written in Java, or both in JavaScript. The real challenge arises when the development code and the test code are written in two different technologies.

In our case, the challenges are:

  • the development code is written entirely in JavaScript, while the test code is written entirely in Java
  • the test code is not part of the dev code’s repo
  • the two codebases live in two different repos

Solution: Istanbul

Steps to calculate the Code Coverage:

  1. In your project, create a package.json file and add nyc (https://www.npmjs.com/package/nyc) as a dependency. Example package.json:
    •     "dependencies": {
            "pearson-elements": "^1.0.6",
            "nyc": "^11.2.1",
            "js-beautify": "^1.6.14",
            "@pearson-components/app-header": "^2.0.3"
          }
  2. Run npm install. This will install nyc along with the other dependencies. You can verify it as below, and double-check the node_modules directory in your project workspace:
    • $ nyc --version
      11.2.1
  3. Let’s assume we have a JS file called test.js that contains all the JavaScript logic for your application. Instrument this code:
    • nyc instrument test.js >> test-instrumented.js

      This will write the instrumented code to the file test-instrumented.js, so that the original file is not disturbed or manipulated.

  4. Make all your references point to the test-instrumented.js file instead of test.js.
  5. Do a trial test run to confirm your page loads without any issues.
  6. Start your test automation suite.
  7. When the instrumented JS code is run by the browser, a global object on the browser is populated with coverage data:
    • window.__coverage__ = {};
  8. On the backend, Selenium WebDriver can read this object in the browser, and send the data to our test process, using this bit of code:
    • Object str = js.executeScript("return window.__coverage__;");
  9. Once all the tests have run, we execute the following piece of code. I put it in the @AfterClass method, so it runs at a point where all the tests have finished:
    • @AfterClass(alwaysRun = true)
      public void afterClass() throws IOException {
          js = (JavascriptExecutor) driver;
          Object str = js.executeScript("return window.__coverage__;");
      
          GsonBuilder builder = new GsonBuilder();
          Gson gson = builder.create();
      
          String coverage = gson.toJson(str);
          Files.write(Paths.get("path_to/.nyc_output/coverage.json"), coverage.getBytes());
      }
  10. This will produce a coverage.json file dumped with all the coverage information.
  11. It’s time to generate the HTML report out of it. Go to the directory containing coverage.json and run the nyc report command:
    • nyc report --reporter=html
  12. This will generate a directory called coverage in the root dir.
  13. In the coverage dir, you will see an index.html file; open it to see the results.
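For reference, the window.__coverage__ object mentioned in step 7 maps each instrumented file path to hit counters. This sketch assumes Istanbul's coverage format (`s` for statement counters, plus `f`/`b` and the corresponding `*Map` entries with source locations, trimmed here for brevity) and shows how a statement-coverage percentage falls out of one snapshot:

```javascript
// A trimmed-down snapshot in Istanbul's coverage format: statement hit
// counters in `s`, keyed by statement id. Real snapshots also carry
// statementMap/fnMap/branchMap entries with source locations.
const snapshot = {
  "test.js": { s: { "0": 3, "1": 3, "2": 0, "3": 1 } }
};

// Percentage of statements executed at least once.
function statementCoverage(fileCov) {
  const counts = Object.values(fileCov.s);
  const hit = counts.filter((n) => n > 0).length;
  return (100 * hit) / counts.length;
}

console.log(statementCoverage(snapshot["test.js"])); // 75
```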

Results:

(Screenshot: the generated Istanbul HTML coverage report.)

Here comes the interesting part:

But there is a catch:
  1. When the browser session is closed, we lose the window.__coverage__ object that had collected the data. If you read window.__coverage__ after the session is closed, you will see it returning null.
  2. The same happens when you reload the page, refresh your browser, or navigate to a new page: the window.__coverage__ object loses the previously collected data.
  3. What if we have to run the tests in parallel? Each window has its own window.__coverage__ information, and all of them have to be aggregated.

Technically, we need to collect the data in an aggregated manner. That means that before we close the browser window, before we perform an action that would reset the window.__coverage__ data, or at the end of each test, we need to preserve the data and send it to a dedicated server that collects the incoming coverage information.
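The aggregation itself is conceptually simple. As a simplified sketch (not the library's actual code), merging two snapshots of the same file means summing their counters:

```javascript
// Merge two Istanbul-style statement-counter maps by summing hits.
// (Real merging also handles function and branch counters and
// validates that both snapshots describe the same source file.)
function mergeStatementCounts(a, b) {
  const merged = { ...a };
  for (const id of Object.keys(b)) {
    merged[id] = (merged[id] || 0) + b[id];
  }
  return merged;
}

// A snapshot taken before a page reload, and one taken after it:
const before = { "0": 2, "1": 0 };
const after = { "0": 1, "1": 4 };
console.log(mergeStatementCounts(before, after)); // { '0': 3, '1': 4 }
```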

Istanbul middleware helps us to accomplish this.
We need to follow these steps:
  1. Have a dedicated coverage-server app; clone it and run the commands below to start the server on port 3000
    • npm install
    • node src/index.js
      Go to localhost:3000/coverage to see if it started. You will see an empty page
  2. @AfterMethod: after every test, run this method: postCoverageData();

    postCoverageData() is a custom method containing the logic to send the window.__coverage__ information to localhost:3000 via an HTTP POST request. A 200 OK response confirms that the HTTP POST request was successful.

    public void postCoverageData() {
        js = (JavascriptExecutor) driver;
        // read the coverage object collected by the instrumented code
        Object str = js.executeScript("return window.__coverage__;");
        Gson gson = new GsonBuilder().create();
        String coverage = gson.toJson(str);

        try {
            // set up the HTTP POST request to the coverage server
            URL url = new URL("http://localhost:3000/coverage/client");
            HttpURLConnection connection = (HttpURLConnection) url.openConnection();
            connection.setConnectTimeout(5000); // 5 secs
            connection.setReadTimeout(5000); // 5 secs
            connection.setRequestMethod("POST");
            connection.setDoOutput(true);
            connection.setRequestProperty("Content-Type", "application/json");

            // send the coverage JSON as the request body
            try (OutputStreamWriter out = new OutputStreamWriter(connection.getOutputStream())) {
                out.write(coverage);
                out.flush();
            }

            // 200 here means the server accepted the coverage data
            int res = connection.getResponseCode();
            System.out.println("coverage POST response: " + res);

            // echo the response body, if any
            try (BufferedReader br = new BufferedReader(
                    new InputStreamReader(connection.getInputStream()))) {
                String line;
                while ((line = br.readLine()) != null) {
                    System.out.println(line);
                }
            }
            connection.disconnect();
        } catch (IOException e) {
            // MalformedURLException and ProtocolException are both IOExceptions
            e.printStackTrace();
        }
    }
  3. To download the final consolidated file from the coverage-app:
    • @AfterSuite(alwaysRun = true)
      public void afterSuite() throws Exception {
          URL url = new URL("http://localhost:3000/coverage/download");
          HttpURLConnection connection = (HttpURLConnection) url.openConnection();
          connection.setRequestMethod("GET");
          InputStream in = connection.getInputStream();
          FileOutputStream out = new FileOutputStream(INPUT_ZIP_FILE);
          commonUtils.downloadZip(in, out, 1024);
          out.close();
          commonUtils.unZipIt(INPUT_ZIP_FILE, OUTPUT_FOLDER);
      }
    • The coverage-app should be running while the tests are running. downloadZip() and unZipIt() are custom methods I wrote to download and unzip the coverage directory.

  4. Run the nyc report command again, and open index.html to see the full, consolidated, and accurate report.

Publish it to the SonarQube dashboard:

  1. As you can see, lcov files are also generated. lcov.info can be parsed by SonarQube to show the results on the dashboard.
  2. Have SonarQube configured for your project.
  3. In your pom.xml, add the Sonar properties:
    • (screenshot of the pom.xml Sonar properties)
  4. Run:
    1. mvn clean test
    2. mvn sonar:sonar
  5. Make sure there are no errors during the sonar analysis.
  6. Open your SonarQube server to see the results.
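The pom.xml properties were originally shown as a screenshot. As a rough sketch of the kind of properties involved (assumed names; the exact keys depend on your SonarQube and SonarJS plugin versions, so check their documentation):

```xml
<!-- Illustrative only: property names vary across SonarJS plugin versions -->
<properties>
  <!-- where the JavaScript sources live -->
  <sonar.sources>src</sonar.sources>
  <!-- point the JavaScript analyzer at the lcov report produced by nyc -->
  <sonar.javascript.lcov.reportPath>coverage/lcov.info</sonar.javascript.lcov.reportPath>
</properties>
```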

Happy Code Coverage 😉
