---
license: apache-2.0
language:
  - en
tags:
  - benchmark
  - code retrieval
  - code generation
  - java
size_categories:
  - 1K<n<10K
---

Mozzarella-0.3.1

Motivation

  • Mozzarella is a dataset matching issues (= problem statements) with the corresponding pull requests (PRs = problem solutions) from a selection of well-maintained Java GitHub repositories. Its original purpose is to serve as training and evaluation data for ML models concerned with fault localization and automated program repair of complex code bases, but other use cases may also benefit from this data.
  • Inspired by the SWE-bench paper (https://arxiv.org/abs/2310.06770), which collected similar data (however, only at file level) for Python code bases.

Author

  • Feedback2Code Bachelor's Project at Hasso Plattner Institute, Potsdam, in cooperation with SAP.

Composition

  • Each instance is called a task and represents a matching of a GitHub issue with the corresponding fix. Each task contains information about the issue/PR (ids, comments, ...), the problem statement, and the solution that was applied by a human developer, including relevant files, relevant methods, and the actual changed code.
  • The dataset currently contains 2734 tasks from 8 repositories. For a repository to be included in the dataset, it has to be written mostly in Java, have a large number of issues and pull requests in English, have good test coverage, and be published under a permissive license.
  • Included in the dataset are three different train/validate/test splits (a loading sketch follows this list):
    • Random split: The tasks are randomly split in 60/20/20 proportions.
    • Repository split: Instead of splitting the individual tasks, the repositories are allocated to train/validate/test in 60/20/20 proportions, and each task receives the same split as the repository it belongs to.
    • Time split: Within each repository, all tasks in the train split were created earlier than tasks in the test split, and all tasks in the test split were created earlier than tasks in the validation split.
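
The sketch below shows one way to load the data and select one of the split schemes. It is a minimal example, not an official loader: the Hub id ("maxruetz/mozzarella") and the assumption that all tasks live in a single default split are guesses; the split columns themselves are documented under "Which columns exist?".

```python
from datasets import load_dataset

# The Hub id below is an assumption -- substitute the actual dataset identifier.
ds = load_dataset("maxruetz/mozzarella", split="train")

# Each split scheme is stored as a column on every task; pick one scheme
# (here the repository split) and filter the rows accordingly.
train = ds.filter(lambda task: task["split_repo"] == "train")
val = ds.filter(lambda task: task["split_repo"] == "val")
test = ds.filter(lambda task: task["split_repo"] == "test")

print(len(train), len(val), len(test))
```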

Repositories

  • mockito/mockito (MIT)
  • square/retrofit (Apache 2.0)
  • iluwatar/java-design-patterns (MIT)
  • netty/netty (Apache 2.0)
  • pinpoint-apm/pinpoint (Apache 2.0)
  • kestra-io/kestra (Apache 2.0)
  • provectus/kafka-ui (Apache 2.0)
  • bazelbuild/bazel (Apache 2.0)

Which columns exist?

  • instance_id: (str) - unique identifier for this task/instance. Format: username__reponame-issueid
  • repo: (str) - The repository owner/name identifier from GitHub.
  • issue_id: (str) - A formatted identifier for an issue/problem, usually as repo_owner/repo_name/issue-number.
  • pr_id: (str) - A formatted instance identifier for the corresponding PR/solution, usually as repo_owner/repo_name/PR-number.
  • linking_methods: (list str) - The methods used to create this task (e.g. timestamp, keyword, ...). See details below.
  • base_commit: (str) - The commit hash representing the HEAD of the repository before the solution PR is applied.
  • merge_commit: (str) - The commit hash representing the HEAD of the repository after the PR is merged.
  • hints_text: (str) - Comments made on the issue.
  • resolved_comments: (str) - Comments made on the PR.
  • created_at: (str) - The creation date of the pull request.
  • labeled_as: (list str) - List of labels applied to the issue.
  • problem_statement: (str) - The issue title and body.
  • gold_files: (list str) - List of paths to the non-test files that were changed in the PR (at the point of the base commit).
  • test_files: (list str) - List of paths to the test files that were changed in the PR (at the point of the base commit).
  • gold_patch: (str) - The gold patch, i.e. the patch generated by the PR (minus test-related code) that resolved the issue, as a diff (see the patch-application sketch after this list).
  • test_patch: (str) - The test-file patch contributed by the solution PR, as a diff.
  • split_random: (str) - The random split this task belongs to ('train'/'test'/'val'). See details above.
  • split_repo: (str) - The repository split this task belongs to ('train'/'test'/'val'). See details above.
  • split_time: (str) - The time split this task belongs to ('train'/'test'/'val'). See details above.
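
To illustrate how base_commit, gold_patch and the file lists fit together, the sketch below checks out a task's base commit in a local clone and applies the gold patch with plain git. The helper function and the use of git apply are illustrative assumptions, not part of the dataset or an official tool.

```python
import subprocess
import tempfile

def apply_gold_patch(task: dict, repo_dir: str) -> None:
    """Check out the task's base commit and apply its gold patch.

    repo_dir must be a local clone of the repository named in task["repo"];
    this helper is an illustrative sketch, not part of the dataset.
    """
    # Move the working tree to the state the PR was based on.
    subprocess.run(["git", "checkout", task["base_commit"]], cwd=repo_dir, check=True)

    # Write the diff to a temporary file and apply it.
    with tempfile.NamedTemporaryFile("w", suffix=".diff", delete=False) as patch_file:
        patch_file.write(task["gold_patch"])
    subprocess.run(["git", "apply", patch_file.name], cwd=repo_dir, check=True)
```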

Collection Process

  • All data is taken from publicly accessible GitHub repositories under MIT or Apache-2.0 licenses.
  • The data is collected by gathering issue and PR information from the GitHub API. To create a task instance, we attempt, for each PR, to find the issues that were solved by that PR using all of the following linking methods:
    • connected: GitHub offers a feature to assign to a PR the issues that the PR addresses. These preexisting links are used as links in our dataset.
    • keyword: Each PR is scanned for mentions of issues and each issue is scanned for mentions of PRs. Then the proximity of those matches is checked for certain keywords indicating a solution relationship (a simplified sketch follows this list).
    • timestamp: Possible matches are determined by looking at issues and PRs that were closed around the same time. Then their titles and descriptions are checked for semantic similarity using OpenAI embeddings.
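
The snippet below is a simplified illustration of the keyword idea: it only looks for solution keywords directly followed by an issue reference in a PR's text. The keyword list and pattern are assumptions; the exact keywords and proximity rules used to build the dataset are not reproduced here.

```python
import re

# Illustrative keyword list and pattern -- the actual rules used to build
# the dataset may differ.
SOLUTION_KEYWORDS = r"(?:close[sd]?|fix(?:e[sd])?|resolve[sd]?)"
ISSUE_REF = re.compile(rf"{SOLUTION_KEYWORDS}\s+#(\d+)", re.IGNORECASE)

def keyword_linked_issues(pr_text: str) -> set[str]:
    """Return issue numbers that directly follow a solution keyword."""
    return set(ISSUE_REF.findall(pr_text))

print(keyword_linked_issues("Refactors the handler and fixes #1234."))
# -> {'1234'}
```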

Preprocessing

  • Tasks that modify more than ten files (because we deem them overly complex for our purposes) and tasks that modify no files or only test files were removed from the dataset (see the sketch below).
  • To improve the accuracy of timestamp linking, tasks linked exclusively by timestamp are removed if there are keyword- or connection-linked tasks that suggest a different matching, or if there are other exclusively timestamp-linked tasks with a higher similarity.
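
A sketch of the file-count filter described above, assuming the tasks have been loaded into a pandas DataFrame with the columns listed earlier. Whether the ten-file limit counts test files is an assumption here; the function is illustrative only.

```python
import pandas as pd

def filter_tasks(tasks: pd.DataFrame) -> pd.DataFrame:
    """Keep tasks that change at least one non-test file and at most ten files."""
    n_gold = tasks["gold_files"].map(len)
    n_total = n_gold + tasks["test_files"].map(len)
    # Assumption: the ten-file limit is applied to all changed files.
    return tasks[(n_gold >= 1) & (n_total <= 10)]
```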

Uses

  • The dataset is currently being used to train and validate models for fault localization at file and method level.
  • The dataset will be used to train and validate models for automatic code generation / bug fixing.
  • Other uses may be possible but have not yet been explored by our project.

Maintenance

  • More repositories will most likely be added fairly soon. All fields are subject to change depending on what we deem sensible.
  • This is the newest version of the dataset as of 18/07/2024.

License

Copyright 2024 Feedback2Code

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.