Postdoctoral Researcher, LASER
Building quality software products, with as few defects as possible, is an important goal for software developers. Static analysis tools can provide quick feedback to developers, helping them eliminate defects early in the development process, when they are cheapest to fix. Despite these potential benefits, developers often spend considerable time figuring out the context of a reported defect and how to fix it, and therefore do not use these tools frequently. Our research aims to find out why developers are or are not using static analysis tools and how we can make these tools easier and more efficient for developers to use. For this project, I conducted 20 interactive, participatory interviews with industry software developers to learn how they use static analysis tools, what they expect from a static analysis tool, and how current tools live up to those expectations.
The goal of this research is to understand how expressive and scalable current tools are, how these traits can be increased, and how increased expressiveness and scalability affect a developer's ability to create software. For this project, I integrated this research with an introductory-level Java programming course at NC State (CSC216) and also conducted one-on-one sessions with students and expert developers. At the conclusion of this research, we hope to have a better understanding of how programmers with varying levels of expertise use and understand program analysis tool notifications, and what can be done to alleviate any difficulties they may encounter.
The goal of this research is to discover how to model programmer knowledge and use such a model to communicate with programmers more effectively. For this research, I am analyzing student and professional developer GitHub repositories, evaluating the effectiveness of notification adaptations, and evaluating the ability of my prototype to determine the appropriate adaptation for a given programmer. At the conclusion of this research, we hope to be able to provide information, frameworks, and even tools that help developers interpret and resolve tool notifications.
Software testing is one method developers use to improve software quality and versatility. Current testing approaches help developers write and run tests, and may even surface correlations with test failures. However, they do not attempt to point out the cause of a given test failure or unexpected result. The goal of this research is to create a new testing discipline that helps developers identify, trace, and compare passing and failing test executions in order to build better, fairer software.