
Data Overview

This repo contains the data of LenghuSky-8. The code for this project is available at https://github.com/ruiyicheng/LenghuSky-8 .

The paper for this dataset is available at https://arxiv.org/abs/2603.16429 .

bkg_mask contains data related to background mask annotation.

bkg_mask/bkg_change.csv contains the start time (in yyyy-mm-dd-HH-MM-SS format) of each background change event in the data. "l" represents lower and "u" represents upper.
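Timestamps throughout this dataset use the yyyy-mm-dd-HH-MM-SS format, which can be parsed with a minimal sketch like the following (the helper name is illustrative, not part of the repo's code):

```python
from datetime import datetime

# Parse the dataset's yyyy-mm-dd-HH-MM-SS timestamps into datetime objects.
# parse_ts is a hypothetical helper, not a function from the repo.
def parse_ts(s: str) -> datetime:
    return datetime.strptime(s, "%Y-%m-%d-%H-%M-%S")

dt = parse_ts("2023-09-27-18-09-48")
print(dt)  # 2023-09-27 18:09:48
```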

bkg_mask/bkg_binary_classification_merged.csv contains the time, class, and probability from the binary classifier for all images captured after 2023-09-27-18-09-48, where 1 means the roof is in the upper part of the image and 0 means the roof is in the lower part. Results are obtained by code/background_classify .

bkg_mask/masks/ contains all the JSON files annotated with labelme. Each JSON file corresponds to one frame where a background change happens.

bkg_mask/mask_mat/ contains the .npy files of background masks for each start time in bkg_change.csv.

bkg_mask/bkg_map.txt provides the mapping between logits files and background mask .npy files in bkg_mask/mask_mat/.
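A minimal sketch of applying one of these masks to an image; the 0/1 mask layout and the synthetic shapes below are assumptions for illustration, not the actual file contents:

```python
import numpy as np

# Sketch of applying a background mask like those stored as .npy files in
# bkg_mask/mask_mat/; a 0/1 uint8 mask is an assumption about the format.
mask = np.zeros((512, 512), dtype=np.uint8)
mask[:256, :] = 1          # hypothetical: upper half sky, lower half roof
image = np.random.rand(512, 512).astype(np.float32)
sky_only = image * mask    # roof pixels (mask == 0) are zeroed out
```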

calibration contains the data related to astrometric calibration.

yyyy-mm-dd-HH-MM-SS_calibration.json contains the calibration polynomial coefficients for the images captured from yyyy-mm-dd-HH-MM-SS onward. Obtained by code/calibration/Jia25_ensemble.py followed by code/calibration/calibrate_and_save.py .

calibration_index.json contains the pointers to the calibration files for each image timestamp. These data are produced by code/calibration/calibrate_and_save.py .
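As a hedged illustration of the lookup-then-evaluate flow described above: the snippet assumes calibration_index.json maps timestamps to calibration filenames and that the coefficients form a simple 1-D polynomial. Both the schema and the numbers are invented for illustration; check the actual files.

```python
import json

# Hypothetical index schema: image timestamp -> calibration filename.
index = json.loads('{"2023-09-27-18-09-48": "2023-09-01-00-00-00_calibration.json"}')
calib_file = index["2023-09-27-18-09-48"]

# Evaluating a polynomial from hypothetical coefficients [c0, c1, c2, ...];
# the real coefficient layout in *_calibration.json may differ.
coeffs = [0.1, 2.0, 0.001]
x = 100.0
value = sum(c * x**k for k, c in enumerate(coeffs))  # c0 + c1*x + c2*x^2
```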

image and logits contain the images for cloud segmentation and the corresponding logits, which are obtained by a linear probe on DINOv3 local features.

The underlying raw sample images were captured by the cloud camera at different timestamps. These raw data will not be published due to their size (~5TB).

Images for cloud segmentation are cropped from the center part of the cloud camera frames, clipped to [mean-1sigma, mean+3sigma], and resized to 512*512, which produces image/. These data are obtained by code/preprocess/preprocess.py. The volume of this data is ~20GB.
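The clip-and-resize step described above can be sketched in plain NumPy. The nearest-neighbour resize here is an assumption; code/preprocess/preprocess.py may crop and interpolate differently.

```python
import numpy as np

def preprocess(frame: np.ndarray) -> np.ndarray:
    """Clip to [mean - 1*sigma, mean + 3*sigma], then resize to 512x512."""
    mu, sigma = frame.mean(), frame.std()
    clipped = np.clip(frame, mu - sigma, mu + 3 * sigma)
    # Nearest-neighbour resize via index selection (interpolation choice
    # is an assumption; the actual pipeline may differ).
    ys = np.linspace(0, frame.shape[0] - 1, 512).round().astype(int)
    xs = np.linspace(0, frame.shape[1] - 1, 512).round().astype(int)
    return clipped[np.ix_(ys, xs)]

out = preprocess(np.random.rand(1024, 1024))
```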

logits contains the corresponding cloud segmentation logits for each sample image in image/. These data are obtained by code/inference_segmentation_dinov3/inference.py. The volume of this data is ~40GB, which is available in the GitHub repo.

interrupt.csv contains data on interrupt events during data collection.

It lists the [start, end) interval of each interrupt event in the data collection, with times in yyyy-mm-dd-HH-MM-SS format. These data are obtained by code/preprocess/find_interrupt.py. Some interrupts end with a time discontinuity, so interrupt duration statistics may be slightly overestimated.
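A sketch of computing interrupt durations from such [start, end) rows; the "start" and "end" column names and the sample row are assumptions about the CSV's layout:

```python
import csv
from datetime import datetime
from io import StringIO

FMT = "%Y-%m-%d-%H-%M-%S"
# Illustrative in-memory CSV; real data lives in interrupt.csv.
sample = "start,end\n2023-09-27-18-09-48,2023-09-27-19-09-48\n"
for row in csv.DictReader(StringIO(sample)):
    duration = (datetime.strptime(row["end"], FMT)
                - datetime.strptime(row["start"], FMT))
    print(duration)  # 1:00:00
```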
