---
license: cc-by-4.0
pretty_name: SDSS 4d data cubes
tags:
  - astronomy
  - compression
  - images
dataset_info:
  config_name: tiny
  features:
    - name: image
      dtype:
        array4_d:
          shape:
            - 5
            - 800
            - 800
          dtype: uint16
    - name: ra
      dtype: float64
    - name: dec
      dtype: float64
    - name: pixscale
      dtype: float64
    - name: ntimes
      dtype: int64
    - name: nbands
      dtype: int64
  splits:
    - name: train
      num_bytes: 558194176
      num_examples: 2
    - name: test
      num_bytes: 352881364
      num_examples: 1
  download_size: 908845172
  dataset_size: 911075540
---

# GBI-16-4D Dataset

GBI-16-4D is a dataset which is part of the AstroCompress project. It contains data assembled from the Sloan Digital Sky Survey (SDSS). Each FITS file contains a series of 800x800 pixel uint16 observations of the same portion of the Stripe 82 field, taken in 5 bandpass filters (u, g, r, i, z) over time. The filenames encode the starting run, field, and camcol of the observations, the number of filtered images per timestep, and the number of timesteps. For example:

`cube_center_run4203_camcol6_f44_35-5-800-800.fits`

contains 35 frames of 800x800 pixel images in 5 bandpasses starting with run 4203, field 44, and camcol 6. The images are stored in the FITS standard.
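If you just want to inspect a single cube without the `datasets` loader, you can open it with `astropy` directly. A minimal sketch, using the example filename above and assuming the file has been pulled into `data/` and the cube is stored in the primary HDU:

```python
from astropy.io import fits

# Open one 4D cube; we assume the uint16 data sits in the primary HDU
# with shape (ntimes, nbands, height, width), e.g. (35, 5, 800, 800).
with fits.open("data/cube_center_run4203_camcol6_f44_35-5-800-800.fits") as hdul:
    cube = hdul[0].data
    print(cube.shape, cube.dtype)
```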

## Usage

You first need to install the `datasets` and `astropy` packages:

```bash
pip install datasets astropy
```

There are two dataset configurations: `tiny` and `full`, each with `train` and `test` splits. The `tiny` configuration has 2 4D images in the train split and 1 in the test split. The `full` configuration contains all the images in the `data/` directory.

## Local Use (RECOMMENDED)

You can clone this repo and use it directly without connecting to Hugging Face:

```bash
git clone https://huggingface.co/datasets/AstroCompress/GBI-16-4D
cd GBI-16-4D
git lfs pull
```

Then start Python like:

```python
from datasets import load_dataset

dataset = load_dataset("./GBI-16-4D.py", "tiny", data_dir="./data/",
                       writer_batch_size=1, trust_remote_code=True)
ds = dataset.with_format("np")
```

Now you should be able to use the `ds` variable like:

ds["test"][0]["image"].shape # -> (55, 5, 800, 800)

Note that it will take a long time to download and convert the images into the local cache for the `full` dataset. Afterward, usage should be quick, as the files are memory-mapped from disk.
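Each record also carries the scalar metadata fields declared in the dataset header above (`ra`, `dec`, `pixscale`, `ntimes`, `nbands`); a quick sketch that reads them alongside the cube:

```python
# Read the per-example scalar metadata next to the image cube
example = ds["test"][0]
print(example["ra"], example["dec"], example["pixscale"])  # pointing and pixel scale
print(example["ntimes"], example["nbands"])                # timesteps and filters
print(example["image"].shape)                              # (ntimes, nbands, 800, 800)
```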

## Use from Hugging Face Directly

This method may only be an option when trying to access the `tiny` version of the dataset.

To use this data directly from Hugging Face, you'll want to log in on the command line before starting Python:

```bash
huggingface-cli login
```

or

```python
import huggingface_hub
huggingface_hub.login(token=token)
```

Then, in your Python script:

```python
from datasets import load_dataset

dataset = load_dataset("AstroCompress/GBI-16-4D", "tiny",
                       writer_batch_size=1, trust_remote_code=True)
ds = dataset.with_format("np")
```
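If you want to avoid materializing a whole split in the local cache, the standard `streaming=True` mode of `datasets` may also work here (a sketch, not verified against this loader):

```python
import numpy as np
from datasets import load_dataset

# Stream examples one at a time instead of caching the split on disk
dset = load_dataset("AstroCompress/GBI-16-4D", "tiny", split="train",
                    streaming=True, trust_remote_code=True)
example = next(iter(dset))
cube = np.asarray(example["image"], dtype=np.uint16)
print(cube.shape)  # e.g. (ntimes, 5, 800, 800)
```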

## Demo Colab Notebook

We provide a demo Colab notebook to get started on using the dataset here.

## Utils scripts

Note that utils scripts such as `eval_baselines.py` must be run from the parent directory of `utils`, i.e. `python utils/eval_baselines.py`.