General description

Software repositories archive valuable software engineering data, such as source code, execution traces, historical code changes, mailing lists, and bug reports. This data contains a wealth of information about a project's status and history. By doing data science on software repositories, researchers can gain an empirically based understanding of software development practices, and practitioners can better manage, maintain, and evolve complex software projects.

In recent years, advances in Machine Learning and AI, demonstrated by the successful application of Deep Neural Networks (DNNs) in various domains, have not gone unnoticed in the field of Software Engineering. Researchers have applied DNNs to tackle problems such as automated program repair, code summarization, code completion, and code structure representation.

IN4334 is a seminar course that aims to give students a deep understanding of, and hands-on experience with, how deep neural networks and NLP techniques are used to represent knowledge and solve existing SE problems in novel ways.

Learning Objectives

This course will enable students to:

Before you decide to join the course

Course Organization

Please keep in mind that you are attending this course on a voluntary basis. Coming to the classroom unprepared will not be the best use of your time, so do your homework first!

The project

Lately, machine learning techniques have been successfully tailored to many software engineering problems. For instance, intelligent code completion helps developers finish their programming tasks faster and more efficiently by decreasing typing effort, providing type-correct solutions, and enabling them to explore APIs. InCoder, UniXcoder, and Copilot are among the most recent deep learning-based solutions for an enhanced software development experience. In this project, we aim to tailor pre-trained language models for source code to software engineering tasks including code completion, type completion, and code summarization. Each group will fine-tune a pre-trained model for the specific task at hand. Then, you will evaluate your model on the provided test set. As for the dataset, you will use the benchmark datasets provided by CodeXGLUE, the General Language Understanding Evaluation benchmark for CODE. If you aim to apply your models to more languages or data sources, you should use other publicly available datasets or scrape and preprocess the new data yourself.
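A typical starting point is to load one of the CodeXGLUE task datasets and inspect its fields. The sketch below uses the Hugging Face `datasets` library and the community-hosted CodeXGLUE code-to-text (summarization) data; the dataset id and field names are assumptions based on that mirror, not part of the course materials.

```python
# Minimal sketch: loading a CodeXGLUE task with the Hugging Face `datasets`
# library. Dataset id and field names are assumptions based on the community
# mirror; adjust them for your task and language.
from datasets import load_dataset

ds = load_dataset("code_x_glue_ct_code_to_text", "python")
print(ds)                      # train / validation / test splits
example = ds["train"][0]
print(example["code"])         # a source function
print(example["docstring"])    # its reference summary
```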

You will implement different ML/DL models. You are required to use Python and, more specifically, PyTorch. Check our curated list of tutorials that might help you get started with different NLP, DL, and ML topics.
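As a rough illustration of what fine-tuning looks like in PyTorch, the sketch below adapts a pre-trained seq2seq code model to code summarization via the Hugging Face Transformers API. The model name (CodeT5-small), toy data, sequence lengths, and hyperparameters are illustrative assumptions, not course requirements.

```python
# Minimal sketch: fine-tuning a pre-trained seq2seq code model for code
# summarization with Hugging Face Transformers + PyTorch. Model choice and
# hyperparameters below are illustrative, not prescribed by the course.
import torch
from torch.utils.data import DataLoader
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

device = "cuda" if torch.cuda.is_available() else "cpu"
tokenizer = AutoTokenizer.from_pretrained("Salesforce/codet5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("Salesforce/codet5-small").to(device)

def collate(batch):
    # batch: list of (code, summary) pairs
    codes, summaries = zip(*batch)
    inputs = tokenizer(list(codes), padding=True, truncation=True,
                       max_length=256, return_tensors="pt")
    labels = tokenizer(list(summaries), padding=True, truncation=True,
                       max_length=64, return_tensors="pt").input_ids
    labels[labels == tokenizer.pad_token_id] = -100  # ignore padding in the loss
    inputs["labels"] = labels
    return inputs

train_pairs = [("def add(a, b):\n    return a + b", "Add two numbers.")]  # toy data
loader = DataLoader(train_pairs, batch_size=8, shuffle=True, collate_fn=collate)
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

model.train()
for epoch in range(1):
    for batch in loader:
        batch = {k: v.to(device) for k, v in batch.items()}
        loss = model(**batch).loss  # cross-entropy over the summary tokens
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
```

In practice you would replace the toy pair with the CodeXGLUE training split loaded above and add validation, checkpointing, and the task-specific evaluation metric.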

Required reading for week 1:

Contents

| Date  | Week | Lecture | Reading material | Lecturer |
|-------|------|---------|------------------|----------|
| 6/9   | 1 | 1  | Course Introduction, How to read a paper in a group, DeepBugs | GG |
| 9/9   | 1 | 2  | Representing source code as text, Naturalness of software | GG |
| 13/9  | 2 | 3  | Large language models and alternative representations, Code2Seq | GG |
| 16/9  | 2 | 4  | Graph Neural Networks: Introduction, Learning to represent programs with graphs | MA |
| 20/9  | 3 | 5  | Code Understanding and Generation: CodeT5 | MI |
| 23/9  | 3 | 6  | Code Representation: UniXcoder | MI |
| 27/9  | 4 | 7  | Code Filling: InCoder | GG / MI |
| 30/9  | 4 | 8  | Code summarization: On the Evaluation of Neural Code Summarization | GG / MI |
| 4/10  | 5 | 9  | Type prediction: Type4Py | AM / GG |
| 7/10  | 5 | 10 | Feedback session | GG / MI |
| 11/10 | 6 | 11 | Type prediction: HiTyper | AM |
| 14/10 | 6 | 12 | Reverse engineering: Learning to Find Usages of Library Functions in Optimized Binaries | AS |
| 18/10 | 7 | 13 | Software Effort Estimation: Heterogeneous Graph Neural Networks for Software Effort Estimation | EK / GG |
| 21/10 | 7 | 14 |  | MI / GG |
| 28/10 | 8 | 15 | Presentation day | GG / MI |

Lecturers

Guest lecturers

Assistants

Deadlines

Assessment

The course grade will be calculated as:

The final papers will be peer-reviewed by two other teams.

Online resources

Here are some resources for extra study, if you are interested in the field:

Bibliography

[1] M. Pradel and K. Sen, "DeepBugs: A learning approach to name-based bug detection," Proc. ACM Program. Lang., vol. 2, no. OOPSLA, pp. 147:1–147:25, Oct. 2018.