Explainable AI for Software Vulnerabilities

Pien Rooijendijk

Current automated tools for detecting and fixing vulnerabilities do not provide software developers with informative explanations, so security expertise is required to interpret their outcomes. This research investigates how explainable AI techniques can help software engineers identify the root causes of vulnerabilities and understand why suggested fixes are effective. Through user studies with software developers, this research aims to evaluate and refine explanation quality, develop methodologies for interpretable root-cause analysis, and provide practical insights into developers’ explanatory needs regarding software vulnerabilities.

People