The model for test parameters has been a bit confusing to me. I think that's because we're trying to cater to two different audiences:
1. End users. For them, there should be good defaults, like the hard-coded expected value here. To run the test, a user shouldn't have to think about anything; something reasonable should just happen. Similarly, we should get to a point where we automatically fetch a reasonable test deck and run reasonable tests for users who just want to understand or report on what's going on without learning the intricacies of ooni.
2. OONI developers / researchers debugging interference mechanisms. Here, someone is digging deeper into what's going on through interactive testing, so it's plausible they'd want hooks exposed for changing how tests work, especially across a bunch of remote probes, without needing to redeploy changed code. This specific hook probably wouldn't need to change often, but I could imagine someone modifying this parameter to re-use this test to see which domains special-case lantern, or whether there are issues with lantern requesting back into a country.
My gut feeling is that the easiest way to serve both audiences is to leave a way to set the parameter explicitly, but focus on good, hard-coded defaults that we expect to be used for collected reports, roughly like the sketch below.
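To make that concrete, here's a minimal sketch of the shape I have in mind, assuming a hypothetical `--expected-body` option and a stubbed `fetch_through_lantern` helper (none of these names are the actual ooniprobe API):

```python
import argparse

# Hard-coded defaults: the values we expect collected reports to use.
# These names and values are illustrative, not actual ooniprobe options.
DEFAULT_URL = "http://www.google.com/humans.txt"
DEFAULT_EXPECTED_BODY = "Google is built by a large team"

def fetch_through_lantern(url: str) -> str:
    # Placeholder: the real test would proxy this request through lantern.
    return "Google is built by a large team of engineers..."

def run_lantern_test(url: str = DEFAULT_URL,
                     expected_body: str = DEFAULT_EXPECTED_BODY) -> dict:
    # End users call this with no arguments and get the canonical defaults;
    # a researcher passes an explicit expected_body to, say, look for
    # domains that special-case lantern.
    body = fetch_through_lantern(url)
    return {"url": url, "body_matches": expected_body in body}

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    # The explicit hook: overridable, but defaulted so nobody has to set it.
    parser.add_argument("--url", default=DEFAULT_URL)
    parser.add_argument("--expected-body", default=DEFAULT_EXPECTED_BODY)
    args = parser.parse_args()
    print(run_lantern_test(args.url, args.expected_body))
```

The point is just the structure: one constant that collected reports rely on, plus one escape hatch a researcher can override without redeploying code.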