Akarshan Biswas
qnixsynapse
AI & ML interests
NLP, models, quantization
Organizations
None yet
qnixsynapse's activity
Is this really an Instruct model?
#1 opened about 1 month ago by qnixsynapse
[MODELS] Discussion
380
#372 opened 8 months ago by victor
[TOOLS] Community Discussion
27
#455 opened 5 months ago by victor
Wrong number of tensors; expected 292, got 291
6
#69 opened 3 months ago by KingBadger
[FEATURE] Tools
61
#470 opened 5 months ago by victor
Utterly based
1
#9 opened 3 months ago by llama-anon
Add IQ Quantization support with the help of imatrix and GPUs
8
#35 opened 7 months ago by qnixsynapse
Suggestion: Host Gemma2 using keras_nlp instead of transformers library for the time being
2
#498 opened 4 months ago by qnixsynapse
The best 8B on the planet right now. PERIOD!
2
#22 opened 6 months ago by cyberneticos
How many active parameters does this model have?
3
#6 opened 6 months ago by lewtun
7B or 8B?
4
#24 opened 8 months ago by amgadhasan
Which model is responsible for the naming of the thread?
8
#402 opened 7 months ago by qnixsynapse
Consider adding <start_of_context> and <stop_of_context> or similar special tokens for context ingestion.
#13 opened 7 months ago by qnixsynapse
Number of parameters
7
#9 opened 7 months ago by HugoLaurencon
RMSNorm eps value is wrong
#20 opened 9 months ago by qnixsynapse
RMSNorm eps value is wrong
#19 opened 9 months ago by qnixsynapse
Loading the model
3
#3 opened about 1 year ago by PyrroAiakid
Looking for GGUF format for this model
1
#14 opened about 1 year ago by barha
Help needed to load model
19
#13 opened about 1 year ago by sanjay-dev-ds-28
Running Llama-2-7B-32K-Instruct-GGML with llama.cpp ?
13
#1 opened about 1 year ago by gsimard
How to convert model into GGML format?
54
#13 opened about 1 year ago by zbruceli
GGUF files
#22 opened about 1 year ago by qnixsynapse
Can't load model with llama.cpp commit 519c981f8b65ee6c87c2965539685ced0a17223b
5
#6 opened about 1 year ago by md2
Fix the markdown formatting.
1
#1 opened about 1 year ago by qnixsynapse