---
license: apache-2.0
language:
  - en
tags:
  - go
  - golang
  - code-search
  - text-to-code
  - code-to-text
  - source-code
  - backend
dataset_info:
  features:
    - name: code
      dtype: string
    - name: docstring
      dtype: string
    - name: func_name
      dtype: string
    - name: language
      dtype: string
    - name: repo
      dtype: string
    - name: path
      dtype: string
    - name: url
      dtype: string
    - name: license
      dtype: string
  splits:
    - name: train
      num_bytes: 6335534562
      num_examples: 8267193
    - name: validation
      num_bytes: 65055538
      num_examples: 90127
    - name: test
      num_bytes: 360052043
      num_examples: 451958
  download_size: 1275923796
  dataset_size: 6760642143
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: validation
        path: data/validation-*
      - split: test
        path: data/test-*
---

Go CodeSearch Dataset (Shuu12121/go-treesitter-filtered-datasetsV2)

Dataset Description

This dataset contains Go functions and methods paired with their GoDoc comments, extracted from open-source Go repositories on GitHub. It is formatted similarly to the CodeSearchNet challenge dataset.

Each entry includes:

  • code: The source code of a Go function or method.
  • docstring: The GoDoc comment associated with the function/method.
  • func_name: The name of the function/method.
  • language: The programming language (always "go").
  • repo: The GitHub repository from which the code was sourced (e.g., "owner/repo").
  • path: The file path within the repository where the function/method is located.
  • url: A direct URL to the function/method's source file on GitHub (approximated to the repository's master/main branch).
  • license: The SPDX identifier of the license governing the source repository (e.g., "MIT", "Apache-2.0").

Additional metrics, if available (computed with the Lizard tool):

  • ccn: Cyclomatic Complexity Number.
  • params: Number of parameters of the function/method.
  • nloc: Non-commenting lines of code.
  • token_count: Number of tokens in the function/method.
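Schematically, each record is a flat mapping from the field names above to strings. The record below is a hypothetical placeholder built to illustrate the schema; none of its values are taken from the dataset:

```python
# A hypothetical record illustrating the schema above.
# All values are invented placeholders, not real dataset entries.
example = {
    "code": "// Add returns the sum of a and b.\nfunc Add(a, b int) int {\n\treturn a + b\n}",
    "docstring": "Add returns the sum of a and b.",
    "func_name": "Add",
    "language": "go",
    "repo": "example-owner/example-repo",  # hypothetical repository
    "path": "pkg/math/add.go",             # hypothetical file path
    "url": "https://github.com/example-owner/example-repo/blob/main/pkg/math/add.go",
    "license": "Apache-2.0",
}

# Every core field in the schema is a string.
assert all(isinstance(v, str) for v in example.values())
```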

Dataset Structure

The dataset is divided into the following splits:

  • train: 8,267,193 examples
  • validation: 90,127 examples
  • test: 451,958 examples

Data Collection

The data was collected by:

  1. Identifying popular and relevant Go repositories on GitHub.
  2. Cloning these repositories.
  3. Parsing Go files (.go) using tree-sitter to extract functions/methods and their GoDoc comments.
  4. Filtering functions/methods based on code length and the presence of a non-empty doc comment.
  5. Using the lizard tool to calculate code metrics (CCN, NLOC, params).
  6. Storing the extracted data in JSONL format, including repository and license information.
  7. Splitting the data by repository to ensure no data leakage between train, validation, and test sets.
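Step 7 (repository-level splitting) can be sketched as a deterministic hash over the repository name, so that every function from a given repository lands in the same split and near-duplicates cannot leak between train and test. The helper below is an illustrative assumption, not the exact procedure used to build this dataset, and the 94/1/5 ratio is likewise hypothetical:

```python
import hashlib

def assign_split(repo: str) -> str:
    """Deterministically map a repository name to a split.

    Hashing the repo (rather than individual functions) guarantees that
    no repository contributes examples to more than one split.
    The 94/1/5 proportions here are an illustrative assumption.
    """
    bucket = int(hashlib.sha256(repo.encode("utf-8")).hexdigest(), 16) % 100
    if bucket < 94:
        return "train"
    elif bucket < 95:
        return "validation"
    else:
        return "test"

# The mapping is stable: the same repository always gets the same split.
assert assign_split("golang/go") == assign_split("golang/go")
```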

Intended Use

This dataset can be used for tasks such as:

  • Training and evaluating models for code search (natural language to code).
  • Code summarization / docstring generation (code to natural language).
  • Studies on Go code practices and documentation habits.
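For the code-search use case, the docstring and code fields can be paired directly as (natural-language query, code candidate) training examples. The helper below is a minimal sketch over in-memory records; the toy records are hypothetical:

```python
def to_search_pairs(records):
    """Turn dataset records into (query, code) pairs for training a
    text-to-code retrieval model. Records with empty docstrings are
    skipped (the dataset already filters these out, but a defensive
    check costs little)."""
    pairs = []
    for rec in records:
        doc = rec["docstring"].strip()
        if doc:
            pairs.append((doc, rec["code"]))
    return pairs

# Toy records for illustration only:
records = [
    {"docstring": "Add returns the sum of a and b.",
     "code": "func Add(a, b int) int { return a + b }"},
    {"docstring": "", "code": "func noop() {}"},
]
pairs = to_search_pairs(records)  # the empty-docstring record is dropped
```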

Licensing

The code examples within this dataset are sourced from repositories with permissive licenses (typically MIT, Apache-2.0, BSD). Each sample records its original license in the license field. The dataset compilation itself is released under the Apache-2.0 license (see the metadata above), but users must respect the original licenses of the underlying code.
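Because every sample carries its source license, downstream users can restrict themselves to the subset whose terms they accept. A minimal sketch, assuming SPDX identifiers in the license field (the allowed set below is an illustrative choice, not a recommendation):

```python
# Licenses the downstream user has decided to accept (illustrative set).
ALLOWED = {"MIT", "Apache-2.0", "BSD-2-Clause", "BSD-3-Clause"}

def keep(record: dict) -> bool:
    """Return True if the record's source license is in the allowed set."""
    return record.get("license") in ALLOWED

# With the Hugging Face `datasets` library this would typically be applied as:
#   dataset = dataset.filter(keep)
samples = [{"license": "Apache-2.0"}, {"license": "GPL-3.0"}]
flags = [keep(s) for s in samples]  # only the first sample passes
```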

Example Usage

```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("Shuu12121/go-treesitter-filtered-datasetsV2")

# Access a split (e.g., train)
train_data = dataset["train"]

# Print the first example
print(train_data[0])
```