3D Engines: A Comprehensive Guide to Automation

by Valeriia Amelchenia, October 24th, 2023

Why Visual Testing Is Essential for Applications With 3D Engines

When testing applications with embedded 3D engines, or any other application that generates and processes computer graphics, checking regular functionality is not sufficient. It is also necessary to verify the visual appearance of the application, providing a thorough evaluation of its design and layout.


For instance, when working on a game written in Unity 3D, Quality Assurance is expected to assess the logic of all game levels to ensure the absence of contradictions and unexpected behavior. However, none of this touches on how 3D characters and environments are displayed, whether light is reflected as expected, or whether scaled objects retain their values when the level is restarted.


Indeed, there are lots of scenarios that cannot be covered by regular testing.


Everything described above is an example of what visual testing is capable of handling.

But why can’t we simply keep an eye on the visual aspects of such applications manually, instead of spending much more time covering all these scenarios? Well, it’s a fair point.


Considering factors like the high costs of development, maintenance, and additional runs, one might assume it’s not worth the trouble. However, putting it all on humans can also cause a number of problems:


  • Human error: Depending on the configured failure threshold, the automation script compares the actual image with the reference one pixel by pixel – a task hardly feasible for a human.


  • Inconsistent results: Since the application is tested by multiple individuals, visual testing results can vary due to the peculiarities of each tester’s eyesight and their visual perception of a given asset.


  • Repetitive regression scenarios: Just as in regular functional testing, automation allows us to reduce the time spent on repetitive regression checks and focus on other potential scenarios.


  • Costs of manual testing: These costs can often surpass the price of automation and subsequent maintenance.

What Visual Testing Ensures

  • The entire interface page (all UI elements, their position, scale, color, etc.)


  • Visual 2D/3D assets, in terms of geometry: polygonal structure for 3D and vector structure for 2D


  • Lights and shadows (the effect that different light sources have on an object)


  • Materials (how materials are rendered on objects in the current scene)


  • Specific shaders used for rendering


All these checks are better not fully replaced, but at least complemented, by exact calculations of difference percentages.

How to Test Web Applications

First, you will need to configure a Cypress project. Cypress is a multi-purpose automation framework that provides all the necessary tools to cover automation scenarios, including visual testing, which is our agenda for today.


Undoubtedly, there are many other tools that support visual testing, but covering all of them is beyond the scope of this article.

Installing Cypress

To install Cypress and configure your first project, I recommend looking through this article: https://learn.cypress.io/testing-your-first-application/installing-cypress-and-writing-your-first-test
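
In short, assuming Node.js and npm are already installed, the standard commands are:

npm install cypress --save-dev
npx cypress open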


Now, we are good to go.

Installing Cypress Plugin for Comparing Snapshots

Install a cypress-image-snapshot plugin according to these instructions: https://www.npmjs.com/package/cypress-image-snapshot.
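
For reference, here is a minimal sketch of the setup those instructions describe: install the package, register the plugin, and add the matchImageSnapshot command. The file paths assume a classic Cypress project layout; consult the README for your Cypress version.

npm install --save-dev cypress-image-snapshot

// cypress/plugins/index.js – registers the snapshot-comparison task
const { addMatchImageSnapshotPlugin } = require('cypress-image-snapshot/plugin');

module.exports = (on, config) => {
  addMatchImageSnapshotPlugin(on, config);
};

// cypress/support/commands.js – adds cy.matchImageSnapshot()
const { addMatchImageSnapshotCommand } = require('cypress-image-snapshot/command');

addMatchImageSnapshotCommand();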

Adding a Custom Command

In addition to what those instructions say, let’s add a custom command to the ‘commands.js’ file; alternatively, you can do it the same way I did and create a separate file dedicated only to your commands.


commands.js

Cypress.Commands.add('compareSnapshotWithBaseImage', (options) => {
  // The snapshot name/path and the element selector to capture
  const nameDir = options?.nameDir;
  const element = options?.element;
  // Strip the spec folder prefix to keep the snapshot name short
  const fullName = nameDir.replace('cypress/e2e/', '');
  // matchImageSnapshot() comes from the cypress-image-snapshot plugin
  return cy
    .get(element)
    .matchImageSnapshot(fullName);
});


Here, I’ve added a custom command, compareSnapshotWithBaseImage. When executed for the first time, it creates a reference image with the given name. During subsequent runs, it compares the actual image with the previously captured reference and outputs a difference image if they differ by more than the allowed failure threshold.


This command operates on a particular object passed with the options parameter.
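
The failure threshold mentioned above is configurable. As a sketch based on the options the plugin documents (inherited from jest-image-snapshot), the custom command could pass them through like this:

cy.get(element).matchImageSnapshot(fullName, {
  failureThreshold: 0.01,          // tolerate up to 1% of differing pixels
  failureThresholdType: 'percent', // or 'pixel' for an absolute pixel count
});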

Picking Locators

It’s important to pass a locator of the object, not the object itself. In my case, canvas.getCanvasSelector() returns the following string: ‘canvas#3DCanvas’. As a reminder, the locator can be pulled from the Inspector (Firefox) or Elements (Chrome) tab of the dev console (F12).


When you click on the mouse cursor icon in the upper-left corner, you enter the element-picking mode.


Dev console in Firefox



Move the cursor over the targeted element (in my case, it is a 3D canvas), and click on it.

Example of the locator found in the dev console



To let Cypress know about this element, there are two options (both forms are shown in the snippet after this list):

  • We can identify the type of the element (canvas) and use the hash sign (#) to append the element’s id (3DCanvas). The resulting string is ‘canvas#3DCanvas’.


  • Alternatively, you can simply use the element’s id preceded by the hash sign: ‘#3DCanvas’.
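
A quick sketch of both forms, using the element from my example:

// Type + id: more specific; fails if the tag name ever changes
cy.get('canvas#3DCanvas');

// Id only: shorter; matches the element regardless of its tag
cy.get('#3DCanvas');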

Calling a Comparison Function

Therefore, I can invoke the comparison function from any part of the code using the following syntax:


// And() comes from a Cucumber preprocessor (see the BDD article in Resources)
And('I compare 3D scene with the reference image', () => {
  // Use the spec's relative path as the snapshot folder name
  const specName = Cypress.spec.relative + '/';
  // Give the 3D scene time to finish rendering before capturing
  cy.wait(2000);
  cy.compareSnapshotWithBaseImage({
    nameDir: specName + 'canvas_img',
    element: canvas.getCanvasSelector(),
  });
});


Testing Results

Once this function is called, the first run of the test always passes: Cypress cannot find a file with the specified name and path, so it creates a new one.


Here is what I have after my first run:


Reference image captured in the first run


The actual image of the 3D scene, with all its controls and a model of a motorcycle, is captured as the reference. This is a scenario where I check how changing the directional light intensity affects the appearance of the 3D motorcycle.


As you can see, I have multiple sources of light in the scene, so the next run will ensure that changing the directional light of the entire scene does not affect these point lights.


A combination of these verifications is also possible and saves build running time. For example, you can combine changing the lights with setting the object’s transformations (rotation, position, scale), as sketched below.
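
A hypothetical sketch of such a combined step; setDirectionalLightIntensity and setModelRotation are illustrative page-object helpers, not part of any real API here:

And('I verify lights and transformations in one scenario', () => {
  // Hypothetical helper: change the directional light intensity, then compare
  ui.setDirectionalLightIntensity(1.2);
  cy.compareSnapshotWithBaseImage({
    nameDir: Cypress.spec.relative + '/lights_img',
    element: canvas.getCanvasSelector(),
  });
  // Hypothetical helper: rotate the model, then compare again
  ui.setModelRotation({ x: 0, y: 90, z: 0 });
  cy.compareSnapshotWithBaseImage({
    nameDir: Cypress.spec.relative + '/rotation_img',
    element: canvas.getCanvasSelector(),
  });
});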


Do not combine too many features into one test, though: it makes debugging more difficult and failed runs less informative. If the test fails on, say, the lights verification, you’ll never know whether setting transformations works until the lights are fixed.


Now, let’s emulate a situation where setting the light intensity does not work properly. To demonstrate this ‘bug’, I changed the light intensity value from 1.2 to 0.6 to get a darker image of the motorcycle.


However, remember that Cypress will compare it with the reference image captured above – the lighter one. Let’s see what we get after this test run.


Failed run (the left image is the reference, the right one is the actual image, and the middle one is a diff image)


The lighter image on the left is the reference; the image on the right is the actual one captured in the latest run. The middle image is a diff image: it shows exactly which pixels differ between the two.

If you manually compare the left and right images, you can certainly see the difference.


However, this difference is quite subtle. During manual testing, such broken lighting would likely be overlooked, whereas automated tests will catch it.

How to Test Desktop Applications

General Information About Squish Framework

For desktop applications, I prefer to use the Squish framework by Froglogic.


According to the official documentation, one fundamental aspect of Squish’s approach is that the AUT (Application Under Test) and the test script that exercises it are always executed in two separate processes. This ensures that even if the AUT crashes, it should not crash Squish. Squish runs a small server, squishserver, that handles the communication between the AUT and the test script.


The test script is executed by the squishrunner tool, which in turn connects to squishserver. squishserver starts the instrumented AUT on the device, which starts the Squish hook.


With the hook in place, squishserver can query AUT objects regarding their state and can execute commands on behalf of squishrunner. squishrunner directs the AUT to perform whatever actions the test script specifies.


All the communication takes place using network sockets, which means that everything can be done on a single machine, or the test script can be executed on one machine while the AUT is tested over the network on another.


The official documentation includes a diagram illustrating how the individual Squish tools work together.



A step-by-step guide on how to create screenshot verifications can be found here: https://doc.qt.io/squish/how-to-create-and-use-verification-points.html#how-to-do-screenshot-verifications. Meanwhile, I would like to focus more on the interface for comparing screenshots, as it is very informative.
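
For orientation, here is a minimal sketch of what a Squish test script (in its JavaScript flavor) that triggers such a screenshot verification can look like; the AUT name 'my3dapp' and the verification point name 'VP1' are placeholders:

function main() {
    // "my3dapp" stands in for whatever AUT name you registered with squishserver
    startApplication("my3dapp");
    // ... drive the application into the state you want to verify ...
    // Compares a fresh screenshot against the stored verification point "VP1"
    test.vp("VP1");
}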


Working with Verification Point Viewer

I ran a test on 3D desktop software in which my 3D model was moved along the Y-axis, so my screenshot verification failed. Let’s open the Verification Point and see the difference.


To do this, find the line about the verification point failure on the Test Results tab, right-click on it, and select the View Differences option.


Test results tab


The first Differences tab comprises all the available options for analyzing screenshots.


Split View mode


  • Flicker: the actual and reference images are shown alternately, with the difference displayed as a red outline.


  • Subtract: all common pixels are removed, and only the different ones are shown. The way the subtraction works can be influenced by changing the HSV (Hue, Saturation, Value) color settings, and by checking or unchecking the Invert checkbox.


  • Gray Diff: works pretty much the same as the Subtract mode, but instead of highlighting all pixel differences in color, it focuses on differences in grayscale. This means that it considers variations in shades of gray rather than color differences.


  • Red/Green Diff: Absolutely identical pixels are green, the areas that differ between actual and reference images are red.


  • Split View: The mode features a draggable slider that displays the reference image when dragged to the right and the actual image when dragged to the left (as shown in the image above).


Besides the visual representation of screenshot differences, the Verification Point viewer has one more tab, called Comparison Mode. The comparison mode is crucial for specifying how the actual application behavior should be compared to the expected behavior.


It defines the criteria for determining whether the verification point passes or fails, and what the acceptable failure threshold is.


Several other options and settings are available for the Comparison mode, but I would like to talk more about color histograms.


Here you can see the Histogram for my 3D test model.


Histogram interface


It is pretty useful for cases where the color profile hasn’t changed significantly, but the object was rotated, scaled, or transformed – exactly what happened to my object when I moved it along the Y-axis.


How it works:


  • Divides the color range (0-255) of each color component (RGB) of every pixel into the specified number of bins (or buckets) and calculates the number of pixels in each bin.


  • Divides the number of pixels in each bin by the total number of pixels in the image to get normalized values. These values are put back into the corresponding bins.


  • The values of corresponding bins are subtracted from one another, and the absolute differences are summed up. The resulting value represents the difference between the reference and actual images (sketched in code below).
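
To make the algorithm concrete, here is a minimal JavaScript sketch of this kind of bin-based comparison. It illustrates the technique only and is not Squish’s actual implementation; pixels is assumed to be a flat RGBA byte array, such as the data property returned by getImageData() on a canvas 2D context.

function histogram(pixels, bins) {
  const counts = new Array(bins).fill(0);
  const binSize = 256 / bins;
  for (let i = 0; i < pixels.length; i += 4) {
    // Count each RGB component separately; the alpha channel is ignored
    for (let c = 0; c < 3; c += 1) {
      counts[Math.floor(pixels[i + c] / binSize)] += 1;
    }
  }
  const total = (pixels.length / 4) * 3; // three RGB samples per pixel
  return counts.map((n) => n / total);   // normalize so the bins sum to 1
}

function histogramDifference(refPixels, actualPixels, bins = 16) {
  const ref = histogram(refPixels, bins);
  const act = histogram(actualPixels, bins);
  // Sum of absolute bin differences: 0 for identical color profiles.
  // Compare the result against the configured "Allowed failures" value.
  return ref.reduce((sum, value, i) => sum + Math.abs(value - act[i]), 0);
}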

This mode lets you configure the number of Bins and the Allowed failures value, which represents the maximum difference between two images at which they are still considered to be "equal." The interface for setting the number of Bins and Allowed failures is shown in the image:


Histogram settings


Conclusion

Long story short, I hope I helped you understand that visual testing plays a significant role in testing applications with visual assets, such as design platforms, games, game development engines, 3D modeling, and engineering software. Automating the detection of most visual defects can mitigate the impact of the 'human factor' and facilitate further analysis.

Resources:

https://testsigma.com/guides/visual-testing/#What_is_Visual_Testing

https://www.coderskitchen.com/visual-testing-of-a-unity-based-3d-world/

https://learn.cypress.io/testing-your-first-application/installing-cypress-and-writing-your-first-test

https://dev.to/bornfightcompany/cypress-tests-in-bdd-style-52n5

https://doc.qt.io/squish/screenshot-verification-point-dialog.html


Header image: Image by pikisuperstar on Freepik