Software analytics is a modern term for the use of empirical (mostly quantitative) research methods on software data.
In this lecture, we will:
Quantitative software engineering is a subset of empirical software engineering, a discipline that applies empirical research methods to the study of software and its development.
D: Can you identify potential applications of quantitative Software Engineering?
Empiricism is a philosophical theory that states that true knowledge can only arise by systematically observing the world.
Types of empirical research:
Empirical research requires the collection of data to answer research questions (RQs).
Qualitative research methods collect non-numerical data
Quantitative methods use mathematical, statistical, or numerical techniques to process numerical data:
A hypothesis proposes an explanation for a phenomenon
Defined in pairs: a null hypothesis (\(H_0\)) and an alternative hypothesis (\(H_1\))
A good hypothesis is readily falsifiable.
Most statistical tests return a probability (\(p\)): the likelihood of observing data at least as extreme as ours, assuming \(H_0\) is true.
To interpret a test, we set a threshold (usually, 0.05) for \(p\)
If \(p <\) threshold, then the null hypothesis is rejected and the alternative one is accepted
We need to know beforehand what each statistical test does (and what it assumes)
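As a minimal illustration of this workflow, with invented data and a permutation test standing in for any particular statistical test:

```python
# Sketch of null-hypothesis testing with a two-sample permutation test.
# H0: the two groups come from the same distribution.
# The data below are invented for illustration only.
import random
import statistics

def permutation_test(a, b, n_iter=10_000, seed=42):
    """p = fraction of label shufflings whose difference in means is
    at least as extreme as the observed one (H0: no real difference)."""
    rng = random.Random(seed)
    observed = abs(statistics.mean(a) - statistics.mean(b))
    pooled = list(a) + list(b)
    extreme = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)
        perm_a, perm_b = pooled[:len(a)], pooled[len(a):]
        if abs(statistics.mean(perm_a) - statistics.mean(perm_b)) >= observed:
            extreme += 1
    return extreme / n_iter

# Hypothetical measurement: code review times (hours) for two teams
team_x = [12, 15, 14, 10, 13, 14, 16]
team_y = [22, 25, 21, 24, 23, 26, 20]
p = permutation_test(team_x, team_y)
print("reject H0" if p < 0.05 else "cannot reject H0")   # prints: reject H0
```

Note that the test only rejects \(H_0\); a large \(p\) would not prove the groups are equal.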
A theory is a proposed explanation for an observed phenomenon. It (usually) specifies entities and prescribes their interactions. Using a theoretical model, we can explain and predict
Q: How can we build or dismantle a theory?
Theories are built by generalizing over consecutive research results.
A single contradicting data point is enough to reject a theory.
Extract samples of data from a running process. Data types:
McCabe’s complexity : Attempt to quantify complexity at the function level by counting the number of branches.
Halstead software science : Attempt to derive empirical laws of software from counts of operators and operands (e.g. volume, difficulty, effort)
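Both metrics can be sketched in a few lines. The following is a simplified illustration, not the original definitions: the set of branch nodes and the operator/operand split are approximations.

```python
# Hedged sketches of two classic code metrics.
import ast
import math

def mccabe(source: str) -> int:
    """Approximate cyclomatic complexity of Python code:
    1 + the number of branch points found in the AST."""
    branch_nodes = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                    ast.BoolOp, ast.IfExp)
    tree = ast.parse(source)
    return 1 + sum(isinstance(n, branch_nodes) for n in ast.walk(tree))

def halstead_volume(operators, operands) -> float:
    """Halstead volume V = N * log2(n): total token occurrences (N)
    times the log of the distinct vocabulary size (n)."""
    N = len(operators) + len(operands)
    n = len(set(operators)) + len(set(operands))
    return N * math.log2(n)

snippet = """
def classify(x):
    if x < 0:
        return "negative"
    elif x == 0:
        return "zero"
    for _ in range(x):
        pass
    return "positive"
"""
print(mccabe(snippet))                                  # 2 ifs + 1 for -> 4
print(halstead_volume(["=", "+"], ["x", "x", "1"]))     # "x = x + 1" -> 10.0
```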
Curtis et al.  found that: “All three metrics (Halstead volume, McCabe complexity, LoCs) correlated with both the accuracy of the modification and the time to completion.”
they just work!
Boehm  defined the COCOMO model, an effort to quantify and predict software cost:
\(a, b, c\) and \(d\) were collected through case studies.
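A minimal sketch of the basic COCOMO equations, using the standard published coefficients for an "organic" (small, in-house) project; the 50 KLOC figure is an invented example:

```python
# Basic COCOMO: effort = a * KLOC^b (person-months),
#               duration = c * effort^d (months).
# Coefficients below are Boehm's published "organic mode" values.
def cocomo_basic(kloc, a=2.4, b=1.05, c=2.5, d=0.38):
    effort = a * kloc ** b
    duration = c * effort ** d
    return effort, duration

effort, months = cocomo_basic(50)   # hypothetical 50 KLOC project
print(f"{effort:.0f} person-months over {months:.1f} months")
```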
Both COCOMO and function points are widely used today for cost estimation.
Manny Lehman  defined a set of laws that characterise how software evolves (and ultimately predict its demise)
Using metrics to define product and process quality
Basili : The Goal-Question-Metric approach:
A goal is stated as follows:
|Object of study||A tool or a practice|
|Purpose||Characterize, improve, predict, etc.|
|Focus||The perspective to study the problem from|
|Stakeholder||Who is concerned with the result?|
|Context||Confounding factors (e.g. company, environment)|
The GQM approach is another way of describing the scientific method.
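As a hedged illustration, a GQM plan can be written down as structured data. The goal, questions, and metrics below are invented for a hypothetical code-review study; they are not taken from the lecture:

```python
# A hypothetical GQM instance: one goal, refined into questions,
# each answered by concrete metrics. All content is illustrative.
gqm = {
    "goal": ("Characterize code review speed from the team lead's "
             "perspective in the context of project X"),
    "questions": {
        "How long do reviews take?":
            ["median time to first comment", "median time to merge"],
        "Is review load concentrated on a few people?":
            ["reviews per reviewer"],
    },
}

for question, metrics in gqm["questions"].items():
    print(question, "->", metrics)
```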
Mockus et al: “Two case studies of open source software development: Apache and mozilla” 
Not the first to use OSS data, but:
von Krogh et al.: “Community, joining, and specialization in open source software innovation: a case study” 
Defined the now-familiar vocabulary of OSS research:
Herbsleb and Mockus: “An empirical study of speed and communication in globally distributed software development” 
Zimmerman et al. “Mining Version Histories to Guide Software Changes” 
Very important work because:
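The core idea, mining which files historically change together in order to recommend likely co-changes ("you changed A, you may also need to change B"), can be sketched with invented commit data:

```python
# Sketch of co-change mining over a version history.
# Commits are modeled as sets of changed files (invented data).
from collections import Counter
from itertools import permutations

commits = [
    {"parser.c", "parser.h"},
    {"parser.c", "parser.h", "lexer.c"},
    {"lexer.c"},
    {"parser.c", "parser.h"},
]

changes = Counter()    # how often each file changed
cochange = Counter()   # how often each ordered pair changed together
for commit in commits:
    for f in commit:
        changes[f] += 1
    for a, b in permutations(commit, 2):
        cochange[(a, b)] += 1

def recommend(changed_file, min_confidence=0.6):
    """Files that co-changed with `changed_file` often enough."""
    return [b for (a, b), n in cochange.items()
            if a == changed_file and n / changes[a] >= min_confidence]

print(sorted(recommend("parser.c")))   # prints: ['parser.h']
```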
Nagappan et al.: “Mining Metrics to Predict Component Failures” 
Heitlager et al.: “A Practical Model for Measuring Maintainability” 
Noteworthy findings (at the file level):
Predicting component failures: Hassan  found a connection between process metrics and bugs
Distributed software development: Bird et al.  found that software quality is not affected by distance
No model to rule them all: Zimmerman et al.  established that software projects are different and therefore models need to be localised and specialised.
Naturalness: Hindle et al.  found that “code is very repetitive, and in fact even more so than natural languages”
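The naturalness finding can be illustrated with a toy bigram language model: a token sequence close to the training distribution gets a much lower per-token cross-entropy (surprise) than a scrambled one. All data below are invented, and this is a drastic simplification of the models used in the paper:

```python
# Toy bigram model with add-one smoothing, to illustrate that
# repetitive "code-like" token streams have low cross-entropy.
import math
from collections import Counter

def cross_entropy(tokens, train_tokens):
    """Per-token cross-entropy of `tokens` under a bigram model
    estimated from `train_tokens` (add-one smoothing)."""
    bigrams = Counter(zip(train_tokens, train_tokens[1:]))
    unigrams = Counter(train_tokens)
    vocab = len(set(train_tokens)) or 1
    total = 0.0
    for prev, cur in zip(tokens, tokens[1:]):
        p = (bigrams[(prev, cur)] + 1) / (unigrams[prev] + vocab)
        total -= math.log2(p)
    return total / max(len(tokens) - 1, 1)

train = "for i in range ( n ) : total += i".split() * 20
repetitive = "for j in range ( m ) : total += j".split()

# Code resembling the corpus is far less "surprising" than scrambled code
print(cross_entropy(repetitive, train) < cross_entropy(train[::-1], train))
```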
In the early 2010s, the velocity of software production increased at a breakneck rate
GitHub revolutionized OSS by centralizing it. Anyone can contribute (and contribute they do!).
AppStores made discoverability and distribution to the end client trivial.
The cloud transformed hardware into software.
Software analytics was coined as a term for helping teams improve their performance
Big Software: GHTorrent (Gousios ) made TBs of GitHub data available to researchers. Inspired TravisTorrent  and SOTorrent 
Big testing: Herzig et al.  developed “a cost model, which dynamically skips tests when the expected cost of running the test exceeds the expected cost of removing it. ”
Big security: Gorla et al.  “after clustering Android apps by their description topics, (we) identified outliers in each cluster with respect to their API usage.”
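The test-selection rule quoted above (Herzig et al.) boils down to a cost comparison. This sketch compresses their model into a single expected-cost inequality; all function names and numbers are invented:

```python
# Hypothetical test-skipping rule: skip a test when the expected cost
# of running it exceeds the expected cost of letting it go unrun.
def should_skip(machine_minutes, cost_per_minute,
                failure_probability, escape_cost):
    cost_of_running = machine_minutes * cost_per_minute
    cost_of_skipping = failure_probability * escape_cost
    return cost_of_running > cost_of_skipping

print(should_skip(30, 2.0, 0.001, 1000))   # slow, rarely-failing test -> True
print(should_skip(1, 0.1, 0.2, 1000))      # fast, risky test -> False
```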
Code summarization Allamanis et al.  use CNNs to automatically give names to methods based on their contents
Code search Gu et al.  search for code snippets using natural language queries
PR Duplicates: Nijessen  used deep learning to find duplicate PRs
An overview can be seen in this taxonomy.
In this course, we will focus on state of the art research in the areas of:
|Hassan||[Software Intelligence] offers software practitioners (not just developers) up-to-date and pertinent information to support their daily decision-making processes.|
|Buse||The idea of analytics is to leverage potentially large amounts of data into real and actionable insights.|
|Zhang||Software analytics is to enable software practitioners to perform data exploration and analysis in order to obtain insightful and actionable information for data-driven tasks around software and services.|
|Menzies||Software analytics is analytics on software data for managers and software engineers with the aim of empowering software development individuals and teams to gain and share insight from their data to make better decisions.|
D: So what is software analytics?
The broader goal of software analytics is to extract value from the data traces residing in software repositories, in order to assist developers in writing better software.