CompSci 307 Fall 2021 | Software Design and Implementation
In your OOLALA team, use today's lab time to focus only on testing (or refactoring) your project rather than adding new features.
Work on today's lab following these steps:
- Record your team's discussion in a TEST_DISCUSSION.md file and add it to the repository in the doc folder.
- Write your test classes in the test/oolala folder.
- Do all of this work on a separate testingLab branch.
At the end of lab, use Gitlab's Merge Request to submit your team's tests and doc/TEST_DISCUSSION.md file from its separate testingLab branch to the team repository's master branch. Make sure the Title of your Merge Request is of the form "lab_testing - participating NETIDs".
It is fine to accept this Merge Request after lab whenever your team is ready.
Code's functionality is verified manually by running it and automatically by writing tests. But how do you assess the quality of the tests? Do test suite quality and coverage matter? Yes! While all code likely contains bugs, it is vitally important to minimize them because software failures can literally be a matter of life and death.
Think about your (soon to be) existing code and discuss what would be useful to test in order to validate that the model and view classes work as expected. Consider different strategies for thinking of the kinds of values that would represent "happy" and "sad" possible code paths (here are many, many, many specific examples rather than general strategies in case you are having trouble thinking of possible negative tests) and write down as many tests as you can (specific values for the model and specific user actions in the view).
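As a sketch of this kind of brainstorming, consider a hypothetical parser for a turtle-graphics command (the `CommandParser` class and its behavior below are illustrative assumptions, not part of the project spec). The comments list the happy and sad values a team might write down for it:

```java
// Hypothetical example for brainstorming happy/sad code paths.
// CommandParser and its behavior are assumptions for illustration only.
class CommandParser {
    // Returns the distance for a "fd <pixels>" command, or throws on bad input
    double parseForward(String command) {
        if (command == null || command.isBlank()) {
            throw new IllegalArgumentException("Empty command");
        }
        String[] parts = command.trim().split("\\s+");
        if (parts.length != 2 || !parts[0].equals("fd")) {
            throw new IllegalArgumentException("Not a forward command: " + command);
        }
        // NumberFormatException (an IllegalArgumentException) on "fd abc"
        return Double.parseDouble(parts[1]);
    }
}
// Happy paths to test: "fd 50", "fd 0", "fd   50" (extra whitespace)
// Sad paths to test:   null, "", "fd" (missing value), "fd abc", "bk 50"
```

Writing the two lists before writing any test code makes it easier to see which sad paths the current implementation silently mishandles.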
It is impossible to guarantee code is completely free of bugs, but try to come up with a set of tests that checks that every line of code written works as intended.
Test data can come from data files, explicit data Strings, or explicit data Lists, but each test should be documented as to its purpose, i.e., how the input values relate to the written code, both with a descriptive name and with traditional comments that explain what code it is testing and why. Resist the urge to simply create a lot of "random" tests (i.e., the shotgun approach) and focus on making each test as useful as possible (i.e., the sniper approach). Finally, make sure you know the expected output values for each test before you run the code.
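A minimal sketch of this documentation style is below (the model method and names are hypothetical). Note how the test's name states the kind of input and the expected outcome, and the comment records that the expected value was computed by hand before running anything:

```java
class ScoreModelTest {
    // Hypothetical model method under test: sums comma-separated integer scores
    static int totalFrom(String csv) {
        int total = 0;
        for (String s : csv.split(",")) {
            total += Integer.parseInt(s.trim());
        }
        return total;
    }

    // Descriptive name: input kind + expected outcome.
    // Purpose: verifies parsing of an explicit data String (no file needed).
    // Expected value 60 was computed BY HAND (10+20+30) before running the
    // code, so the test checks the code rather than the other way around.
    static void totalFrom_explicitDataString_returnsHandComputedSum() {
        int expected = 60;
        int actual = totalFrom("10, 20, 30");
        if (actual != expected) {
            throw new AssertionError("expected " + expected + " but was " + actual);
        }
    }

    public static void main(String[] args) {
        totalFrom_explicitDataString_returnsHandComputedSum();
    }
}
```

In a real JUnit 5 test the method would be annotated with @Test and the check would use Assertions.assertEquals; the naming and commenting discipline is the same either way.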
Write the tests your team discussed for both the model and view as needed. As you create tests, try to develop a working rhythm by following this general process.
Create test classes with JUnit 5 test methods (code annotated with @Test and checked with one or more assertions, including which Exceptions were thrown) that attempt to demonstrate your code works as intended by setting up specific interesting situations. Additionally, use TestFX to simulate common user "actions", checked with standard JUnit assertions, to verify the UI responds appropriately.
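The setup-then-assert pattern, including a sad path that checks which Exception is thrown, can be sketched as follows. To keep the sketch runnable standalone it uses plain Java checks in place of JUnit's assertions; the Turtle class is a hypothetical stand-in for your model, and the comments show the JUnit 5 equivalents you would actually write:

```java
class TurtleTest {
    // Hypothetical model: a turtle that can move forward along the x-axis
    static class Turtle {
        double x = 0;
        void forward(double pixels) {
            if (Double.isNaN(pixels)) {
                throw new IllegalArgumentException("NaN distance");
            }
            x += pixels;
        }
    }

    // JUnit 5 equivalent:
    //   @Test void forward_positiveDistance_movesTurtle() {
    //       ...; Assertions.assertEquals(50.0, turtle.x); }
    static void forward_positiveDistance_movesTurtle() {
        Turtle turtle = new Turtle();   // set up a specific interesting situation
        turtle.forward(50);
        if (turtle.x != 50.0) {
            throw new AssertionError("expected 50.0, was " + turtle.x);
        }
    }

    // JUnit 5 equivalent:
    //   Assertions.assertThrows(IllegalArgumentException.class,
    //                           () -> turtle.forward(Double.NaN));
    static void forward_nanDistance_throwsIllegalArgument() {
        Turtle turtle = new Turtle();
        try {
            turtle.forward(Double.NaN);
            throw new AssertionError("expected IllegalArgumentException");
        } catch (IllegalArgumentException expected) {
            // sad path behaved as intended
        }
    }

    public static void main(String[] args) {
        forward_positiveDistance_movesTurtle();
        forward_nanDistance_throwsIllegalArgument();
    }
}
```

A TestFX version of the view-side check would follow the same shape, except the "setup" is simulated clicks and key presses (e.g., robot.clickOn on a button) followed by standard JUnit assertions on the resulting scene state.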
After you have written all the tests you can think of, check how many lines of code were actually executed, i.e., covered by tests. While tests that provide 100% code coverage do not guarantee your code is bug free, looking at what lines you missed can help you think of other tests to write.