Safety-critical decision-making often occurs in complex, uncertain, and partially observable multi-agent environments, where effective context awareness depends on accurately representing epistemic uncertainty and updating it with observations. This paper introduces a model-checking-based Version Space Learning framework that uses networks of timed automata to over-approximate the plausible hypothesis space and refine it with proof traces. A medical-diagnosis case study shows that the approach can reveal missing rules in traditional rule-based systems while remaining interpretable and scalable.
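To make the core idea concrete, the sketch below illustrates version-space refinement in a toy diagnostic setting: an over-approximated set of candidate rules is pruned by successive observations, and an unexplained observation signals a missing rule. This is only a minimal illustration under simplifying assumptions; the paper's actual framework encodes hypotheses as networks of timed automata and uses model-checking proof traces, which are not reproduced here. All names (`Hypothesis`, `refine`, the example rules and observations) are hypothetical.

```python
from dataclasses import dataclass
from typing import Dict, FrozenSet, List


@dataclass(frozen=True)
class Hypothesis:
    """A candidate diagnostic rule: symptoms it requires and the diagnosis it predicts."""
    name: str
    required_symptoms: FrozenSet[str]
    diagnosis: str

    def fires(self, symptoms: FrozenSet[str]) -> bool:
        # The rule applies when all of its required symptoms are observed.
        return self.required_symptoms <= symptoms


def refine(space: List[Hypothesis], observation: Dict) -> List[Hypothesis]:
    """Eliminate hypotheses contradicted by one observed (symptoms, diagnosis) pair."""
    symptoms = frozenset(observation["symptoms"])
    outcome = observation["diagnosis"]
    # Keep a rule unless it fires on these symptoms but predicts a different diagnosis.
    kept = [h for h in space if not (h.fires(symptoms) and h.diagnosis != outcome)]
    # If no surviving rule explains the observation, the rule base is incomplete
    # (analogous to the "missing rule" finding in the case study).
    if not any(h.fires(symptoms) and h.diagnosis == outcome for h in kept):
        print(f"missing rule for symptoms {set(symptoms)} -> {outcome}")
    return kept


if __name__ == "__main__":
    # Over-approximated initial hypothesis space (toy rules).
    space = [
        Hypothesis("h1", frozenset({"fever", "cough"}), "flu"),
        Hypothesis("h2", frozenset({"fever"}), "flu"),
        Hypothesis("h3", frozenset({"rash"}), "measles"),
    ]
    # Observations play the role of evidence that prunes implausible hypotheses.
    observations = [
        {"symptoms": {"fever"}, "diagnosis": "cold"},
        {"symptoms": {"fever", "cough"}, "diagnosis": "flu"},
    ]
    for obs in observations:
        space = refine(space, obs)
    print("consistent hypotheses:", [h.name for h in space])
```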