Software Vulnerability Explanation

This research investigates how explainable AI techniques can help software engineers identify the root causes of bugs and understand why suggested fixes are effective. Current automated bug detection and fixing tools provide little explanatory output, so developers often need security expertise to interpret their results. This project will generate developer-centric explanations, tailored to varying levels of expertise, from software execution data. Through studies with software developers, the research will evaluate and refine explanation quality, contributing methodologies for interpretable root-cause analysis and practical insights into developers’ explanatory needs.
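As a minimal sketch of what generating explanations from software execution might involve, the example below records an execution trace of a failing function using Python's built-in sys.settrace hook. The buggy_divide function and the tracer helper are hypothetical illustrations, not part of this project's tooling.

```python
import sys

def buggy_divide(total, count):
    # Hypothetical bug: no guard against count == 0.
    return total / count

trace_log = []

def tracer(frame, event, arg):
    # Record each executed line with a snapshot of its local
    # variables, giving raw material for a root-cause explanation.
    if event == "line":
        trace_log.append((frame.f_lineno, dict(frame.f_locals)))
    return tracer

sys.settrace(tracer)
try:
    buggy_divide(10, 0)
except ZeroDivisionError:
    pass  # Failure observed; the trace holds the evidence.
finally:
    sys.settrace(None)

# The last recorded state shows count == 0 at the failing line.
print(trace_log[-1])
```

The final trace entry pinpoints the variable state at the moment of failure, the kind of concrete evidence a developer-facing explanation could surface and adapt to a developer's expertise level.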

Publications

No publications yet.