Driving is a safety-critical activity, with most accidents caused by human error. Advances in artificial intelligence (AI) offer the potential to reduce these errors, yet AI systems still struggle to make socially acceptable decisions under complex and dynamic circumstances. To overcome these challenges, incorporating ethics-by-design principles is essential when developing autonomous systems. A crucial component of ethics-by-design is liability determination, which remains difficult to automate because it requires retrospective analysis of vague and subjective criteria. In this paper, we propose a model-based liability determination framework that integrates legal doctrines with a driver behavior model. Our framework first formalizes liability determination based on established legal principles. The driver behavior model then enables retrospective analysis by identifying reasonable alternative actions at each decision point, facilitating the assessment of duty breaches and proximate causes. We validate our framework through experiments on highway driving accidents: we selected eight representative simulated accidents and had them evaluated by five senior traffic police officers. The results show that our framework's liability judgments closely align with those of the police, indicating high accuracy. Additionally, we examined the impact of breach-of-duty judgments on liability determination using Large Language Models (LLMs). The findings reveal that, given only vehicle trajectories, the LLMs' liability determinations were inaccurate, whereas incorporating these judgments produced outputs that matched the police assessments. Our proposed framework effectively automates liability determination, providing accurate and interpretable results that support ethics-by-design in autonomous driving systems, thereby enhancing their safety and accountability.