Big Deeper (BigDeeper)

AI & ML interests: Differentiable hashing, orthonormal polynomial language modeling, image compression into language representations.

Organizations: None yet

BigDeeper's activity
Minimum required files/models
#60 opened 2 months ago by BigDeeper

comfyui does not recognize model files in sft format
5 replies · #18 opened 3 months ago by peidong

Are there advantages or disadvantages in changing the format for translation?
3 replies · #10 opened 3 months ago by BigDeeper

Does it use a specific chat template?
1 reply · #4 opened 3 months ago by BigDeeper

Does it need a specific template?
#12 opened 3 months ago by BigDeeper
__ output
1 reply · #19 opened 4 months ago by BigDeeper

What does 120B really mean?
3 replies · #1 opened 6 months ago by BigDeeper

Does anyone know which specific Python library contains the tokenizer that was used to train Llama-3-70b?
2 replies · #11 opened 6 months ago by BigDeeper

15 TeraTokens = 190 Million books
2 replies · #4 opened 6 months ago by Languido

I was trying to fine-tune llama3 8b but got the following error: TypeError: LlamaForCausalLM.forward() got an unexpected keyword argument 'decoder_input_ids'
4 replies · #117 opened 6 months ago by aniiikket11
Llama-3-70b tokenizer.
3 replies · #116 opened 6 months ago by BigDeeper

What does exl2-4bpw-rpcal in the model name mean?
1 reply · #1 opened 6 months ago by BigDeeper

Has anyone tried this gguf with an agentic framework?
3 replies · #6 opened 6 months ago by BigDeeper

Was trying to quantize to 8 bits to reduce VRAM footprint. Got the stuff below.
#3 opened 6 months ago by BigDeeper

gguf
30 replies · #24 opened 6 months ago by LaferriereJC
I have now tried two quantizations, 8_0 and 6_K; both fail as shown below.
3 replies · #2 opened 6 months ago by BigDeeper

Instruct versus non-Instruct
54 replies · #8 opened 6 months ago by BigDeeper

Prompting template
#13 opened 7 months ago by BigDeeper

tokenizer.model
2 replies · #26 opened 7 months ago by BigDeeper

Can't load the model files. The same error whether it is 4 or 8.
1 reply · #24 opened 8 months ago by BigDeeper
4 GPUs with 12.2GiB each. Not all of the required steps are clear from the readme files.
#21 opened 8 months ago by BigDeeper
Anyone else seeing similar behavior? I especially like the start "Death, ..." plus some gobbledygook.
1 reply · #12 opened 10 months ago by BigDeeper
Trying to quantize. Running into the issue below. Any suggestions?
1 reply · #5 opened 11 months ago by BigDeeper

Where is the tokenizer.model file?
#3 opened 11 months ago by BigDeeper

Why isn't the "tokenizer.model" file in this repo?
#3 opened 11 months ago by BigDeeper

Where are your "tokenizer.model" files for all your models?
#2 opened 11 months ago by BigDeeper

invalid magic number 00000000
8 replies · #1 opened about 1 year ago by BigDeeper

Open Assistant with Pythia
#1 opened over 1 year ago by BigDeeper

Even 330GB of host RAM is not enough
1 reply · #8 opened almost 2 years ago by BigDeeper

*.bin files
#7 opened almost 2 years ago by BigDeeper