Add-on, pytest/unittest and TravisCI integration example

add-ons

(mavek) #1

Can anyone point me in the direction of some good test examples for maintaining add-ons?

I was hoping to write a series of tests to allow smoother migration of add-ons. Ideally the tests should be called straight from the command line (not the console) and should run on multiple versions of Blender (2.79 and 2.80 right now), using a continuous integration tool like TravisCI to catch when things break. My ultimate goal is to have a series of tests ready for 2.80; at this point 2.80 is the component changing the most, so I was hoping to get some visibility into it.

I know unittest comes with Blender, but pytest seems to be the tool of the future (I have used pip to install a local copy of pytest). Currently I cannot get either to pick up any tests. I would usually start googling at this point, but this type of problem does not seem to come up often enough for answers to be easy to find.


(Ben H) #2

Due to the way Blender uses python, your best option is to run your test scripts directly through Blender via the command line. You will need to invoke unittest or pytest programmatically from within your test script instead of using the separate command line utilities, but otherwise everything should work fine.

https://docs.pytest.org/en/latest/usage.html#calling-pytest-from-python-code
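
For example, a minimal entry script could look like this (the file and directory names are placeholders, not anything specific to your project):

# run_tests.py -- invoked as: blender --background --python run_tests.py
import pytest

# pytest.main() behaves like running "pytest" on the command line: it
# collects tests from the given path and returns pytest's exit code.
exit_code = pytest.main(["tests", "-v"])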


(mavek) #3

OK @brhumphe that was helpful, but only enough to get me to the next problem.

I did as the first link said and called pytest from inside the file called by Blender. It appears to work … ish.

I am able to get it to set up the environment, which in my case means installing a new add-on, using pytest_sessionstart. And I can get it to tear down the environment, removing the installed add-on, using pytest_sessionfinish. Logging to the screen shows me that it is working.
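
In rough outline, that kind of setup/teardown plugin looks something like the sketch below (this is not my exact code; the add-on name handling and the addon_utils calls are just illustrative):

import addon_utils  # Blender's bundled module for managing add-ons

class SetupPlugin:
    """Enable the add-on before the test session and disable it afterwards."""

    def __init__(self, addon_name):
        self.addon_name = addon_name  # module name of the add-on under test

    def pytest_sessionstart(self, session):
        # called once before collection and test execution start
        addon_utils.enable(self.addon_name, default_set=True)

    def pytest_sessionfinish(self, session, exitstatus):
        # called once after the whole test session has finished
        addon_utils.disable(self.addon_name, default_set=True)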

I start to struggle when I try to get it to run any tests. The test I am trying to get working first, more as a proof of concept, is a read-back of the add-on's version. I am not too sure where this test needs to live: inside the “MyPlugin” object, or outside as its own function to be collected by pytest?

I first put it inside pytest_runtestloop, which is a built-in hook. It does nothing (correctly) when the read-back value matches the expected one. However, when I give it an incorrect expected value I should get a test failure, but instead I get a raw assertion error; it is as if the assertion error is right but is not being linked back to pytest. I suspect that pytest_runtestloop is not the correct place to put an assertion check.

INTERNALERROR> Traceback (most recent call last):
INTERNALERROR>   File "E:\blender-2.79.0-git.1195a4a040b-windows64\2.79\python\lib\site-packages\_pytest\main.py", line 185, in wrap_session
INTERNALERROR>     session.exitstatus = doit(config, session) or 0
INTERNALERROR>   File "E:\blender-2.79.0-git.1195a4a040b-windows64\2.79\python\lib\site-packages\_pytest\main.py", line 225, in _main
INTERNALERROR>     config.hook.pytest_runtestloop(session=session)
INTERNALERROR>   File "E:\blender-2.79.0-git.1195a4a040b-windows64\2.79\python\lib\site-packages\pluggy\hooks.py", line 284, in __call__
INTERNALERROR>     return self._hookexec(self, self.get_hookimpls(), kwargs)
INTERNALERROR>   File "E:\blender-2.79.0-git.1195a4a040b-windows64\2.79\python\lib\site-packages\pluggy\manager.py", line 67, in _hookexec
INTERNALERROR>     return self._inner_hookexec(hook, methods, kwargs)
INTERNALERROR>   File "E:\blender-2.79.0-git.1195a4a040b-windows64\2.79\python\lib\site-packages\pluggy\manager.py", line 61, in <lambda>
INTERNALERROR>     firstresult=hook.spec.opts.get("firstresult") if hook.spec else False,
INTERNALERROR>   File "E:\blender-2.79.0-git.1195a4a040b-windows64\2.79\python\lib\site-packages\pluggy\callers.py", line 208, in _multicall
INTERNALERROR>     return outcome.get_result()
INTERNALERROR>   File "E:\blender-2.79.0-git.1195a4a040b-windows64\2.79\python\lib\site-packages\pluggy\callers.py", line 80, in get_result
INTERNALERROR>     raise ex[1].with_traceback(ex[2])
INTERNALERROR>   File "E:\blender-2.79.0-git.1195a4a040b-windows64\2.79\python\lib\site-packages\pluggy\callers.py", line 187, in _multicall
INTERNALERROR>     res = hook_impl.function(*args)
INTERNALERROR>   File "E:\blender-plugin-test\toast_fake_addon.py", line 37, in pytest_runtestloop
INTERNALERROR>     assert expect_version == ret_version
INTERNALERROR> AssertionError

(Ben H) #4

You shouldn’t have to manually call those pytest functions. Running pytest.main is effectively the same as running pytest in the terminal, so you should be able to have normal test files with the usual fixtures and so forth, and pytest.main should be able to find and execute those tests as normal. I’m not knowledgeable enough about pytest to know what else might be going on, but the main thing is to execute your tests from within blender.
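
For instance, a normal test module that pytest.main would collect might look something like this (the add-on module name and the expected version are placeholders, not taken from your project):

# tests/test_addon_version.py -- an ordinary pytest test module
def test_addon_version():
    # assumes the add-on was enabled during session setup, so its module is importable
    import fake_addon
    assert fake_addon.bl_info["version"] == (0, 0, 1)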


(mavek) #5

@brhumphe Yes! That was it; I was a little too close to the problem to take that cognitive step back.

I went and rewrote my code as you suggested; it turns out I had a file starting with test_ (don’t do that) and a regular function starting with test_ (don’t do that either).

After that I was able to create a flow where pytest reports tests as passing when they should pass and as failing when they are designed to fail.

I am really happy with this result (so far). I plan to capture the output of this work on a webpage somewhere when done, so that others don’t have to bang their heads against the wall.

Thanks for the help.


(Ben H) #6

Why is it a problem to name files and functions with test_*? That’s the standard convention for Python test discovery, including with pytest, so it should work: http://doc.pytest.org/en/latest/goodpractices.html


(mavek) #7

Yeah, it is just a bad habit; I got that cleaned up fairly fast, but it is the type of silly thing that can hold you up for ages.

Thanks again


(mavek) #8

OK, I successfully got pytest running inside Blender and have it deploying onto TravisCI. I created a test that is supposed to fail (the expected result does not match the fetched result), and the log in the TravisCI run does show the failure correctly,

E       assert (1, 0, 1) == (0, 0, 1)
E         At index 0 diff: 1 != 0
E         Use -v to get the full diff
tests/test_pytest.py:11: AssertionError

but TravisCI does not register the pytest failure inside Blender as an overall build failure, probably because Blender exited OK.

https://travis-ci.org/douglaskastle/blender-fake-addon/builds/476605512

The command "blender_build/blender_${BLENDER_VERSION}/blender --background --python "tests/load_pytest.py"" exited with 0.

Is there a way to get TravisCI to collect the results from the Blender run?


(Ben H) #9

Ideally there would be a way to get TravisCI to scan the logs to detect failed tests. I have no idea if that’s possible; you would be better off asking on Stack Overflow.

Failing that I can think of two other options:

  • Set up the tests so that if a test fails, you exit with a non-zero exit code (e.g. via sys.exit) to make the Blender run itself signal failure; see the sketch after this list.
  • Compile Blender as a Python module so you can import bpy into whatever environment TravisCI uses. This would be a lot more work, and you would end up rerunning the tests within Blender anyway to figure out whether a problem comes from your code or from the Python module.
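
A rough sketch of the first option, assuming your entry script already calls pytest.main:

import sys
import pytest

# pytest.main returns a non-zero code when any test fails; exiting the
# process with a non-zero code makes the Blender invocation itself fail,
# which TravisCI can detect.
failed = pytest.main(["tests"])
if failed:
    sys.exit(1)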

(mavek) #10

I am not looking to compile my own Blender; that is not the goal here.

I am not too sure if sys.exit will propagate out; I think Blender catches everything.

Parsing the logs is probably the answer, but it feels like reinventing the wheel, since calling pytest from the command line gives you this natively.

I think you’re right; this now feels more like a question for Stack Overflow.


(mavek) #11

OK, success. It turns out sys.exit was all that was required:

# pytest.main returns a non-zero exit code when any test fails; passing it
# to sys.exit makes the Blender process itself report the failure.
exit_val = pytest.main(["tests"], plugins=[SetupPlugin("fake_addon")])
sys.exit(exit_val)

but I was also calling Blender from a Python script, and that script was also catching and suppressing the error as far as TravisCI was concerned, so it too needed its own sys.exit(1) when Blender reported a failure.
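
For reference, that outer wrapper ends up looking roughly like this (paths and file names are illustrative rather than my exact script):

import os
import subprocess
import sys

# Run Blender headless with the pytest entry script; Blender's exit code is
# non-zero when the tests inside it failed (because of the sys.exit above).
blender = "blender_build/blender_{}/blender".format(os.environ.get("BLENDER_VERSION", "2.79"))
result = subprocess.run([blender, "--background", "--python", "tests/load_pytest.py"])
if result.returncode != 0:
    sys.exit(1)  # propagate the failure so TravisCI marks the build as failed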

Anyway, I got it all working. I have a setup that will run a basic test against an add-on daily on the nightly builds of both 2.79 and 2.80.

The add-on is as simple as it can be while still being called an add-on, and all that is being tested is the reported version.

If anyone is interested in getting something similar going, you can look at my work here:

And here are the Travis logs for comparison:

https://travis-ci.org/douglaskastle/blender-fake-addon