The quality of the code does worry me. Refactoring the code risks introducing bugs, but the fact that he won't release the original suggests it isn't well written and so may already contain bugs.
The important thing is what steps he took to verify that the code runs the model as expected. This can include subtleties like ensuring the random number generator is adequate for the task (things like sequential correlation can be an issue). I would hope there was a suite of unit tests and some overall system testing to check it does what it claims, but I don't have much faith in the process by which research software at universities is produced.
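As a rough illustration of the sequential-correlation point: a minimal sanity check on a generator is to estimate the lag-1 autocorrelation of its output, which should be near zero for a well-behaved source. This sketch uses Python's standard `random` module purely as a stand-in; it is not a substitute for a proper test battery.

```python
import random
import statistics

def lag1_autocorrelation(xs):
    """Sample lag-1 autocorrelation of a sequence of numbers."""
    mean = statistics.fmean(xs)
    num = sum((a - mean) * (b - mean) for a, b in zip(xs, xs[1:]))
    den = sum((x - mean) ** 2 for x in xs)
    return num / den

rng = random.Random(42)          # seeded so the check is reproducible
xs = [rng.random() for _ in range(100_000)]
r1 = lag1_autocorrelation(xs)

# For an adequate generator, r1 should be close to 0
# (roughly within +/- 3/sqrt(n) for n samples).
assert abs(r1) < 3 / (len(xs) ** 0.5)
```

A unit-test suite for simulation code would wrap checks like this (and stronger ones, e.g. a full statistical test battery) so regressions in the RNG or its use are caught automatically.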
But from a modelling perspective there are questions of how he validated the modelling techniques and the model itself. Models aim to be a sufficient abstraction of the situation without overcomplication, but they essentially play out a set of assumptions (distributions, things like spread values, or structural features built into the model), so there is always the question of whether they capture the situation. In this case it is a model of influenza spread, so are all the assumptions valid for a coronavirus model (for example, the spread patterns)?
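To make the "playing out a set of assumptions" point concrete, here is a deliberately toy SIR-style sketch (not the model under discussion). The parameter values are illustrative assumptions only: swapping influenza-like values for coronavirus-like ones changes the epidemic curve substantially, which is exactly why assumptions carried over from a flu model need revalidation.

```python
def sir_peak(r0, infectious_days, days, n=1_000_000, i0=10):
    """Discrete-time SIR model; returns the peak number infected.

    r0 and infectious_days are the kind of assumptions baked into
    such models; the values passed below are purely illustrative.
    """
    gamma = 1.0 / infectious_days   # recovery rate (assumption)
    beta = r0 * gamma               # transmission rate implied by R0
    s, i, r = n - i0, i0, 0
    peak = i
    for _ in range(days):
        new_inf = beta * s * i / n
        new_rec = gamma * i
        s -= new_inf
        i += new_inf - new_rec
        r += new_rec
        peak = max(peak, i)
    return peak

# Illustrative parameter choices, not measured values:
flu_peak = sir_peak(r0=1.3, infectious_days=4, days=365)
cov_peak = sir_peak(r0=2.5, infectious_days=7, days=365)
assert cov_peak > flu_peak   # higher R0 gives a much larger peak
```

The point is not the specific numbers but that the model's output is driven entirely by such inputs, so validating them for the new disease is part of validating the model.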
I've not seen these things discussed, but I've not had time to really look and dig.