Now that I have piqued your interest with the previous blog post on how to get started with unit testing, let us dive into some improvements to the module in the Mendix Marketplace that would make building your unit tests much more manageable.
Although the module has received UI updates for Atlas compatibility and implementation changes, its UI and UX have seen little real improvement in recent years. And as long as there were no new features (and there were none), that was fine.
With all the new features that Scott Perkins from eXp Realty and I have built, there was clearly room for improvement.
The test suite header is now denser, making room for the new coverage information and additional buttons. A search field at the top of each suite's unit tests helps you find that one unit test you have been working on. And several weird UI issues on pop-ups and main pages have received some much-needed fixes.
Although it was not advised in previous versions, and still is not in the improved version, it is possible to run your unit tests from the web interface as well as via the API. To keep track of a running unit test execution (and to prevent concurrent executions), a run status has been introduced, which can also be reset for convenience. When you try to execute tests concurrently, the module returns an error message.
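To make the run-status behaviour concrete, here is a minimal Python sketch of the idea, not the module's actual implementation: a single status flag blocks concurrent runs and can be reset manually, for example after a crashed run leaves it set.

```python
# Hypothetical sketch of the run-status guard: one flag tracks whether a
# test run is in progress, refuses a second concurrent run, and can be
# reset for convenience.
class TestRunCoordinator:
    def __init__(self):
        self.running = False

    def start_run(self):
        if self.running:
            # Mirrors the module returning an error on concurrent execution.
            raise RuntimeError("A unit test run is already in progress")
        self.running = True

    def finish_run(self):
        self.running = False

    def reset(self):
        # Manual reset, e.g. after an aborted run left the flag set.
        self.running = False
```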
The custom request handler has been replaced with a native REST service implementation, making it far more transparent. On top of that, the response JSON has been extended significantly, and it is now possible to retrieve the history of previous results, or just the latest result, via separate functions.
With the old method, unit test execution ran on its own thread; this is now handled via the task queue, avoiding custom Java where it is not needed. Given all the new platform developments and the excellent native features, this was the logical next step.
Previously, the Enabled constant only controlled the custom request handler. Now, when set to false, it disables any execution of the Java actions provided by the module. Essentially, this ensures that no unit test will ever run in an environment where it is not supposed to, unless, of course, the constant is set to true. As a safeguard, the constant is false by default to prevent accidents in production environments.
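The kill-switch pattern can be sketched as follows. This is a hypothetical Python illustration, not the module's Java code: every entry point checks the flag before doing anything, and the flag defaults to off.

```python
# Hypothetical sketch of the Enabled kill-switch: a flag that defaults
# to False so nothing runs unless explicitly enabled.
UNIT_TESTING_ENABLED = False  # safe default for production environments

def run_unit_tests(enabled=UNIT_TESTING_ENABLED):
    if not enabled:
        # Mirrors the module refusing to execute its Java actions.
        raise RuntimeError("Unit testing is disabled in this environment")
    return "tests executed"
```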
Suppose you want to vary the data in your unit tests. You then end up either with nearly identical microflows that differ only in input parameters and assertion values, or with one giant microflow that tests your unit of code multiple times. Neither is a great solution. Therefore, I have added data variation functionality, which allows you to specify basic input parameters and assertion values, so that one microflow can test many data variations of the same code.
To keep things simple, the data variation unit test gets executed once to retrieve the data variation JSON, and then once for each variation. This is captured in the results, allowing you to analyse the results per variation or at a high level for that particular unit test.
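The execution flow above can be sketched in a few lines of Python. Note that the JSON structure and field names here are invented for illustration; the module defines its own format.

```python
import json

# Hypothetical illustration of the data-variation flow: the test
# supplies a variation JSON, and the unit of code is then executed once
# per variation, checking each expected value.
variation_json = json.dumps([
    {"input": 2, "expected": 4},
    {"input": 3, "expected": 9},
    {"input": -5, "expected": 25},
])

def unit_under_test(x):
    return x * x  # stand-in for the microflow logic being tested

def run_with_variations(raw_json):
    results = []
    for variation in json.loads(raw_json):
        actual = unit_under_test(variation["input"])
        results.append({
            "input": variation["input"],
            "passed": actual == variation["expected"],
        })
    return results
```

Each entry in the returned list corresponds to one variation, which matches how the module captures per-variation results.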
Even though measuring coverage always sparks a lot of discussion, you need to start somewhere, and as basic coverage goes, this could not be simpler. The module records every microflow (excluding the excluded microflows and modules) executed while running the suite(s), counts the total number of microflows, and the percentage executed is your coverage percentage. It does not consider potential variations, nor does it know whether a microflow is unit-testable. But it gives you a start.
Excluded microflows and modules are, for example, marketplace modules and microflows starting with UT_ and TEST_. You may extend this list with your own exclusions, for example, shared helper modules that are unit-tested in a different application.
The coverage is reported per test suite (module) and as a complete application roll-up. Modules that are not excluded but contain no unit tests report zero percent coverage.
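The calculation itself can be sketched as below. This is a simplified Python illustration under assumed naming (qualified `Module.Microflow` names and the UT_/TEST_ prefixes mentioned above), not the module's implementation.

```python
# Hypothetical sketch of the coverage measure: count microflows executed
# during the run against all microflows, after dropping excluded modules
# and the UT_/TEST_ test microflows themselves.
EXCLUDED_PREFIXES = ("UT_", "TEST_")

def coverage_percentage(all_microflows, executed, excluded_modules=frozenset()):
    def included(name):
        module, _, flow = name.partition(".")
        return (module not in excluded_modules
                and not flow.startswith(EXCLUDED_PREFIXES))

    candidates = {m for m in all_microflows if included(m)}
    if not candidates:
        return 0.0  # a module with nothing to cover reports zero
    hit = {m for m in executed if m in candidates}
    return 100.0 * len(hit) / len(candidates)
```

Running this per module gives the per-suite figure; running it over all modules gives the application roll-up.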
Previously, when you reran your unit tests (all of them, per suite, or a single test), the previous results were lost. That is unfortunate for quality tracking, progress tracking, or simply gaining insight into what went wrong or right before. The module now stores the results in a history table, allowing you to go back to previous results and even export them (at a high level) to Excel to share with your manager, who loves to update their boss on all the quality work their team is delivering.
While you could add multiple assertions to your unit test, execution would stop instantly at the first failed assertion. This could be circumvented by catching the error and reporting on the result, but those results were only visible in the log messages, not in the UI. Now failed assertions are shown, and the unit test continues executing the remaining assertions unless an uncaught critical failure occurs, allowing for far more flexible execution of your unit tests.
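The non-fatal assertion idea can be sketched as a simple collector. Again a hypothetical Python illustration, not the module's code: failures are recorded and execution carries on, so all assertion results are available at the end.

```python
# Hypothetical sketch of non-fatal assertions: record each failure and
# keep going instead of aborting on the first failed check.
class AssertionCollector:
    def __init__(self):
        self.failures = []

    def check(self, condition, message):
        if not condition:
            self.failures.append(message)  # record and continue

    @property
    def passed(self):
        return not self.failures
```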
The examples in the unit test module were lacking. Without the documentation page, you felt like you were being thrown into the deep end. To make this easier, there are now several new examples, with many annotations explaining how to build your unit tests from start to finish.
What is next?
There is much more to learn about unit testing, why it is essential and how to make it a core component of building your Mendix applications!
Next up, Simplified training!