---
license: cc-by-4.0
pretty_name: SDSS 4d data cubes
tags:
- astronomy
- compression
- images
dataset_info:
  config_name: tiny
  features:
  - name: image
    dtype:
      array4_d:
        shape:
        - null
        - 5
        - 800
        - 800
        dtype: uint16
  - name: ra
    dtype: float64
  - name: dec
    dtype: float64
  - name: pixscale
    dtype: float64
  - name: ntimes
    dtype: int64
  - name: nbands
    dtype: int64
  splits:
  - name: train
    num_bytes: 558194176
    num_examples: 2
  - name: test
    num_bytes: 352881364
    num_examples: 1
  download_size: 908845172
  dataset_size: 911075540
---

# GBI-16-4D Dataset

GBI-16-4D is a dataset which is part of the AstroCompress project. It contains data assembled from the Sloan Digital Sky Survey (SDSS). Each FITS file contains a series of 800x800 pixel uint16 observations of the same portion of the Stripe 82 field, taken in 5 bandpass filters (u, g, r, i, z) over time. The filenames give the starting run, field, and camcol of the observations, the number of filtered images per timestep, and the number of timesteps. For example:

```cube_center_run4203_camcol6_f44_35-5-800-800.fits```

contains 35 frames of 800x800 pixel images in 5 bandpasses starting with run 4203, field 44, and camcol 6. The images are stored in the FITS standard.
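
For a quick look outside of the `datasets` loader, such a cube can be opened directly with `astropy` (a minimal sketch, assuming the example file above is present in the working directory; which HDU holds the cube may vary):

```python
from astropy.io import fits

# Example file named above, assumed downloaded into the working directory.
filename = "cube_center_run4203_camcol6_f44_35-5-800-800.fits"

with fits.open(filename) as hdul:
    hdul.info()           # list the HDUs in the file
    cube = hdul[0].data   # primary HDU assumed; expected shape (35, 5, 800, 800)
    print(cube.dtype, cube.shape)
```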

# Usage

You first need to install the `datasets` and `astropy` packages:

```bash
pip install datasets astropy
```

There are two dataset configurations: `tiny` and `full`, each with `train` and `test` splits. The `tiny` configuration has two 4D images in the `train` split and one in the `test` split. The `full` configuration contains all the images in the `data/` directory.

## Local Use (RECOMMENDED)

You can clone this repo and use it directly without connecting to the Hugging Face Hub:

```bash
git clone https://huggingface.co/datasets/AstroCompress/GBI-16-4D
```

Then pull the LFS-tracked image files from inside the cloned repo:

```bash
cd GBI-16-4D
git lfs pull
```

Then start python like:

```python
from datasets import load_dataset

# Load via the local loading script, pointing at the cloned data/ directory.
dataset = load_dataset("./GBI-16-4D.py", "tiny", data_dir="./data/")
ds = dataset.with_format("np")  # return examples as numpy arrays
```

Now you should be able to use the `ds` variable like:

```python
ds["test"][0]["image"].shape # -> (55, 5, 800, 800)
```
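
Each example also carries the scalar metadata fields listed in the dataset card, which can be read the same way:

```python
example = ds["test"][0]
print(example["ra"], example["dec"])         # sky coordinates of the field
print(example["pixscale"])                   # pixel scale
print(example["ntimes"], example["nbands"])  # number of timesteps and bandpasses
```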

Note that the first load of the `full` dataset will take a long time, since all of the images must be downloaded and converted in the local cache. Afterward, usage should be quick, as the files are memory-mapped from disk.
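
As a quick sanity check after loading, the split sizes can be compared against the counts in the dataset card:

```python
# For the `tiny` configuration this should print 2 and 1.
print(ds["train"].num_rows, ds["test"].num_rows)
```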


## Use from Hugging Face Directly

To use this data directly from the Hugging Face Hub, you'll want to log in on the command line before starting python:

```bash
huggingface-cli login
```

or

```python
import huggingface_hub
huggingface_hub.login(token=token)  # token: your Hugging Face access token
```

Then in your python script:

```python
from datasets import load_dataset
dataset = load_dataset("AstroCompress/GBI-16-4D", "tiny")
ds = dataset.with_format("np")
```
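
If you want to sample examples without materializing the whole `full` configuration on disk, the `datasets` library also supports streaming; a minimal sketch, assuming the loading script supports streamed access:

```python
from datasets import load_dataset

# Iterate over examples on demand instead of downloading everything up front.
dataset = load_dataset("AstroCompress/GBI-16-4D", "full", streaming=True)
example = next(iter(dataset["train"]))
print(example["ntimes"], example["nbands"])
```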