The full dataset viewer is not available; only a preview of the rows is shown below.
The dataset generation failed because of a cast error
Error code:   DatasetGenerationCastError
Exception:    DatasetGenerationCastError
Message:      An error occurred while generating the dataset

All the data files must have the same columns, but at some point there are 2 new columns ({'impl_file', 'test_file'})

This happened while the json dataset builder was generating data using

hf://datasets/AdityaNarayan/HS-Repo-Curriculum-Learning/curriculum_learning_unbroken/phase1_foundation.jsonl (at revision f987110b1822515ffcaab383497c7d95b82d3c97)

Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations)
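One way to apply the first fix is to normalize every JSONL file to the four columns the builder inferred, dropping the stray keys. The following is a minimal sketch, not the dataset author's tooling: it assumes a local checkout with the split files under `curriculum_learning_unbroken/`, and that the extra `impl_file` and `test_file` values can simply be discarded (folding them into `training_content` would preserve the data instead). The second fix, per the linked docs, is to declare each mismatched file as its own configuration in the dataset card.

```python
# Hypothetical normalization sketch (not the author's tooling): rewrite each
# JSONL file so every record carries only the four shared columns; extra keys
# such as 'impl_file' and 'test_file' are dropped.
import json
from pathlib import Path

KEEP = {"type", "path", "size_bytes", "training_content"}

for jsonl_path in Path("curriculum_learning_unbroken").glob("*.jsonl"):
    records = [json.loads(line) for line in jsonl_path.open()]
    with jsonl_path.open("w") as f:
        for record in records:
            # Keep only the shared columns so all files cast to one schema.
            f.write(json.dumps({k: v for k, v in record.items() if k in KEEP}) + "\n")
```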
Traceback:    Traceback (most recent call last):
                File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 1831, in _prepare_split_single
                  writer.write_table(table)
                File "/usr/local/lib/python3.12/site-packages/datasets/arrow_writer.py", line 714, in write_table
                  pa_table = table_cast(pa_table, self._schema)
                             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
                File "/usr/local/lib/python3.12/site-packages/datasets/table.py", line 2272, in table_cast
                  return cast_table_to_schema(table, schema)
                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
                File "/usr/local/lib/python3.12/site-packages/datasets/table.py", line 2218, in cast_table_to_schema
                  raise CastError(
              datasets.table.CastError: Couldn't cast
              type: string
              path: string
              size_bytes: int64
              training_content: string
              test_file: string
              impl_file: string
              to
              {'type': Value('string'), 'path': Value('string'), 'size_bytes': Value('int64'), 'training_content': Value('string')}
              because column names don't match
              
              During handling of the above exception, another exception occurred:
              
              Traceback (most recent call last):
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1339, in compute_config_parquet_and_info_response
                  parquet_operations = convert_to_parquet(builder)
                                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 972, in convert_to_parquet
                  builder.download_and_prepare(
                File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 894, in download_and_prepare
                  self._download_and_prepare(
                File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 970, in _download_and_prepare
                  self._prepare_split(split_generator, **prepare_split_kwargs)
                File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 1702, in _prepare_split
                  for job_id, done, content in self._prepare_split_single(
                                               ^^^^^^^^^^^^^^^^^^^^^^^^^^^
                File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 1833, in _prepare_split_single
                  raise DatasetGenerationCastError.from_cast_error(
              datasets.exceptions.DatasetGenerationCastError: An error occurred while generating the dataset
              
              All the data files must have the same columns, but at some point there are 2 new columns ({'impl_file', 'test_file'})
              
              This happened while the json dataset builder was generating data using
              
              hf://datasets/AdityaNarayan/HS-Repo-Curriculum-Learning/curriculum_learning_unbroken/phase1_foundation.jsonl (at revision f987110b1822515ffcaab383497c7d95b82d3c97)
              
              Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations)
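To locate exactly which files and lines introduce the extra columns, a small diagnostic pass over the split files can help. This is a hypothetical sketch, again assuming a local checkout; only the directory name is taken from the error message above.

```python
# Diagnostic sketch (hypothetical): report every distinct key set per JSONL
# file, so the files that add 'impl_file'/'test_file' stand out.
import json
from collections import Counter
from pathlib import Path

for jsonl_path in sorted(Path("curriculum_learning_unbroken").glob("*.jsonl")):
    key_sets = Counter()
    with jsonl_path.open() as f:
        for line in f:
            key_sets[tuple(sorted(json.loads(line)))] += 1
    print(jsonl_path.name)
    for keys, count in key_sets.most_common():
        print(f"  {count} lines with columns: {', '.join(keys)}")
```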


Preview schema:
  type: string
  path: string
  size_bytes: int64
  training_content: string
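Until the files are made consistent, each split can still be loaded on its own, letting the schema be inferred per file rather than across the whole dataset. A sketch, assuming a recent `datasets` version with `hf://` support in `data_files` and that each individual file is internally consistent:

```python
# Sketch: load a single split so its schema is inferred from that file alone,
# sidestepping the shared-schema cast error. Assumes recent `datasets` with
# hf:// support in data_files; the path comes from the error message above.
from datasets import load_dataset

base = "hf://datasets/AdityaNarayan/HS-Repo-Curriculum-Learning/curriculum_learning_unbroken"
phase1 = load_dataset("json", data_files=f"{base}/phase1_foundation.jsonl", split="train")
print(phase1.column_names)  # may include the extra 'impl_file' and 'test_file'
```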
type: file | path: Cargo.toml | size_bytes: 1,382 | training_content:
// File: Cargo.toml

[workspace]
resolver = "2"
members = ["crates/*"]
package.edition = "2021"
package.rust-version = "1.85.0"
package.license = "Apache-2.0"

[workspace.dependencies]
tracing = { version = "0.1.41" }

# Most of the lint configuration is based on https://github.com/EmbarkStudios/rust-ecosystem/blob/main/lints.toml
[workspace.lints.rust]
unsafe_code = "forbid"
rust_2018_idioms = { level = "warn", priority = -1 } # Remove priority once https://github.com/rust-lang/rust-clippy/pull/12827 is available in stable clippy
unused_qualifications = "warn"
# missing_debug_implementations = "warn"
# missing_docs = "warn"

[workspace.lints.clippy]
as_conversions = "warn"
cloned_instead_of_copied = "warn"
dbg_macro = "warn"
expect_used = "warn"
fn_params_excessive_bools = "warn"
index_refutable_slice = "warn"
indexing_slicing = "warn"
large_futures = "warn"
missing_panics_doc = "warn"
mod_module_files = "warn"
out_of_bounds_indexing = "warn"
panic = "warn"
panic_in_result_fn = "warn"
panicking_unwrap = "warn"
print_stderr = "warn"
print_stdout = "warn"
todo = "warn"
trivially_copy_pass_by_ref = "warn"
unimplemented = "warn"
unnecessary_self_imports = "warn"
unreachable = "warn"
unwrap_in_result = "warn"
unwrap_used = "warn"
use_self = "warn"
wildcard_dependencies = "warn"

# Lints to allow
option_map_unit_fn = "allow"

[profile.release]
strip = true
lto = true
codegen-units = 1
type: file | path: docker-compose-development.yml | size_bytes: 8,700 | training_content:
// File: docker-compose-development.yml version: "3.8" volumes: cargo_cache: pg_data: router_build_cache: scheduler_build_cache: drainer_build_cache: redisinsight_store: networks: router_net: services: ### Dependencies pg: image: docker.io/postgres:latest ports: - "5432:5432" networks: - router_net volumes: - pg_data:/var/lib/postgresql environment: - POSTGRES_USER=db_user - POSTGRES_PASSWORD=db_pass - POSTGRES_DB=hyperswitch_db healthcheck: test: ["CMD-SHELL", "pg_isready -d $${POSTGRES_DB} -U $${POSTGRES_USER}"] interval: 5s retries: 3 start_period: 5s timeout: 5s redis-standalone: image: docker.io/redis:7 networks: - router_net ports: - "6379:6379" healthcheck: test: ["CMD-SHELL", "redis-cli ping | grep '^PONG$'"] interval: 5s retries: 3 start_period: 5s timeout: 5s migration_runner: image: docker.io/debian:trixie-slim pull_policy: always command: > bash -c " apt-get update && apt-get install -y curl xz-utils && curl --proto '=https' --tlsv1.2 -LsSf https://github.com/diesel-rs/diesel/releases/latest/download/diesel_cli-installer.sh | bash && curl --proto '=https' --tlsv1.2 -sSf https://just.systems/install.sh | bash -s -- --to /usr/local/bin && export PATH="$${PATH}:$${HOME}/.cargo/bin" && just migrate" working_dir: /app networks: - router_net volumes: - ./:/app environment: # format -> postgresql://DB_USER:DB_PASSWORD@HOST:PORT/DATABASE_NAME - DATABASE_URL=postgresql://db_user:db_pass@pg:5432/hyperswitch_db superposition: image: ghcr.io/juspay/superposition-demo:latest ports: - "8081:8080" networks: - router_net environment: - API_HOSTNAME=http://localhost:8081 profiles: - superposition ### Application services hyperswitch-server: build: dockerfile_inline: | FROM rust:latest RUN apt-get update && \ apt-get install -y protobuf-compiler RUN rustup component add rustfmt clippy command: cargo run --bin router -- -f ./config/docker_compose.toml working_dir: /app ports: - "8080:8080" networks: - router_net volumes: - ./:/app - cargo_cache:/cargo_cache - router_build_cache:/cargo_build_cache environment: - CARGO_HOME=/cargo_cache - CARGO_TARGET_DIR=/cargo_build_cache depends_on: pg: condition: service_healthy redis-standalone: condition: service_healthy labels: logs: "promtail" healthcheck: test: curl --fail http://localhost:8080/health || exit 1 interval: 120s retries: 4 start_period: 20s timeout: 10s hyperswitch-producer: image: docker.io/rust:latest command: cargo run --bin scheduler -- -f ./config/docker_compose.toml working_dir: /app networks: - router_net profiles: - scheduler volumes: - ./:/app - cargo_cache:/cargo_cache - scheduler_build_cache:/cargo_build_cache environment: - CARGO_HOME=/cargo_cache - CARGO_TARGET_DIR=/cargo_build_cache - SCHEDULER_FLOW=producer depends_on: hyperswitch-consumer: condition: service_healthy labels: logs: "promtail" hyperswitch-consumer: image: docker.io/rust:latest command: cargo run --bin scheduler -- -f ./config/docker_compose.toml working_dir: /app networks: - router_net profiles: - scheduler volumes: - ./:/app - cargo_cache:/cargo_cache - scheduler_build_cache:/cargo_build_cache environment: - CARGO_HOME=/cargo_cache - CARGO_TARGET_DIR=/cargo_build_cache - SCHEDULER_FLOW=consumer depends_on: hyperswitch-server: condition: service_started labels: logs: "promtail" healthcheck: test: (ps -e | grep scheduler) || exit 1 interval: 120s retries: 4 start_period: 30s timeout: 10s hyperswitch-drainer: image: docker.io/rust:latest command: cargo run --bin drainer -- -f ./config/docker_compose.toml working_dir: /app deploy: replicas: 
${DRAINER_INSTANCE_COUNT:-1} networks: - router_net profiles: - full_kv volumes: - ./:/app - cargo_cache:/cargo_cache - drainer_build_cache:/cargo_build_cache environment: - CARGO_HOME=/cargo_cache - CARGO_TARGET_DIR=/cargo_build_cache restart: unless-stopped depends_on: hyperswitch-server: condition: service_started labels: logs: "promtail" ### Clustered Redis setup redis-cluster: image: docker.io/redis:7 deploy: replicas: ${REDIS_CLUSTER_COUNT:-3} command: redis-server /usr/local/etc/redis/redis.conf profiles: - clustered_redis volumes: - ./config/redis.conf:/usr/local/etc/redis/redis.conf networks: - router_net ports: - "6379" - "16379" redis-init: image: docker.io/redis:7 profiles: - clustered_redis depends_on: - redis-cluster networks: - router_net command: "bash -c 'export COUNT=${REDIS_CLUSTER_COUNT:-3} \ if [ $$COUNT -lt 3 ] \ then \ echo \"Minimum 3 nodes are needed for redis cluster\" \ exit 1 \ fi \ HOSTS=\"\" \ for ((c=1; c<=$$COUNT;c++)) \ do \ NODE=$COMPOSE_PROJECT_NAME-redis-cluster-$$c:6379 \ echo $$NODE \ HOSTS=\"$$HOSTS $$NODE\" \ done \ echo Creating a cluster with $$HOSTS \ redis-cli --cluster create $$HOSTS --cluster-yes \ '" ### Monitoring grafana: image: docker.io/grafana/grafana:latest ports: - "3000:3000" networks: - router_net profiles: - monitoring restart: unless-stopped environment: - GF_AUTH_ANONYMOUS_ORG_ROLE=Admin - GF_AUTH_ANONYMOUS_ENABLED=true - GF_AUTH_BASIC_ENABLED=false volumes: - ./config/grafana.ini:/etc/grafana/grafana.ini - ./config/grafana-datasource.yaml:/etc/grafana/provisioning/datasources/datasource.yml promtail: image: docker.io/grafana/promtail:latest volumes: - ./logs:/var/log/router - ./config:/etc/promtail - /var/run/docker.sock:/var/run/docker.sock command: -config.file=/etc/promtail/promtail.yaml profiles: - monitoring networks: - router_net loki: image: docker.io/grafana/loki:latest ports: - "3100" command: -config.file=/etc/loki/loki.yaml networks: - router_net profiles: - monitoring volumes: - ./config:/etc/loki otel-collector: image: docker.io/otel/opentelemetry-collector-contrib:latest command: --config=/etc/otel-collector.yaml networks: - router_net profiles: - monitoring volumes: - ./config/otel-collector.yaml:/etc/otel-collector.yaml ports: - "4317" - "8888" - "8889" prometheus: image: docker.io/prom/prometheus:latest networks: - router_net profiles: - monitoring volumes: - ./config/prometheus.yaml:/etc/prometheus/prometheus.yml ports: - "9090" restart: unless-stopped tempo: image: docker.io/grafana/tempo:latest command: -config.file=/etc/tempo.yaml volumes: - ./config/tempo.yaml:/etc/tempo.yaml networks: - router_net profiles: - monitoring ports: - "3200" # tempo - "4317" # otlp grpc restart: unless-stopped redis-insight: image: docker.io/redislabs/redisinsight:latest networks: - router_net profiles: - full_kv ports: - "8001:8001" volumes: - redisinsight_store:/db hyperswitch-control-center: image: docker.juspay.io/juspaydotin/hyperswitch-control-center:latest pull_policy: always networks: - router_net ports: - "9000:9000" environment: - configPath=/tmp/dashboard-config.toml volumes: - ./config/dashboard.toml:/tmp/dashboard-config.toml hyperswitch-web-sdk: build: dockerfile_inline: | FROM node:lts RUN git clone https://github.com/juspay/hyperswitch-web.git --depth 1 WORKDIR hyperswitch-web RUN npm i --force command: bash -c 'npm run re:build && npx run webpack serve --config webpack.dev.js --host 0.0.0.0' ports: - "9050:9050" environment: sdkEnv: local envSdkUrl: http://localhost:9050 envBackendUrl: http://localhost:8080 
envLoggingUrl: http://localhost:8207
type: file | path: CHANGELOG.md | size_bytes: 1,165,830 | training_content:
"// File: CHANGELOG.md\n\n# Changelog\n\nAll notable changes to HyperSwitch will be documented here.(...TRUNCATED)
type: file | path: diesel_v2.toml | size_bytes: 324 | training_content:
"// File: diesel_v2.toml\n\n# For documentation on how to configure this file,\n# see diesel.rs/guid(...TRUNCATED)
type: file | path: Cargo.lock | size_bytes: 270,680 | training_content:
"// File: Cargo.lock\n\n# This file is automatically @generated by Cargo.\n# It is not intended for (...TRUNCATED)
type: file | path: cog.toml | size_bytes: 859 | training_content:
"// File: cog.toml\n\ntag_prefix = \"v\"\nignore_merge_commits = true\n\n# the HTML comments (`<!-- (...TRUNCATED)
type: file | path: .deepsource.toml | size_bytes: 190 | training_content:
"// File: .deepsource.toml\n\nversion = 1\n\n[[analyzers]]\nname = \"docker\"\nenabled = true\n\n[[a(...TRUNCATED)
type: file | path: README.md | size_bytes: 10,857 | training_content:
"// File: README.md\n\n<p align=\"center\">\n <img src=\"./docs/imgs/hyperswitch-logo-dark.svg#gh-d(...TRUNCATED)
type: file | path: .clippy.toml | size_bytes: 140 | training_content:
"// File: .clippy.toml\n\nallow-dbg-in-tests = true\nallow-expect-in-tests = true\nallow-panic-in-te(...TRUNCATED)
type: file | path: package-lock.json | size_bytes: 47,917 | training_content:
"// File: package-lock.json\n\n{\n \"name\": \"hyperswitch\",\n \"version\": \"0.0.0\",\n \"lockf(...TRUNCATED)
End of preview.