mirek190
AI & ML interests
None yet
Organizations
None yet
mirek190's activity
Possibly the provided prompt format is wrong.
12
#1 opened about 1 month ago
by vevi33
prompt template is wrong
9
#2 opened 2 months ago
by mirek190
Can you add Q4_K_M, please?
1
#19 opened 2 months ago
by mirek190
gguf quant please!!
7
#3 opened 3 months ago
by gopi87
Surprising - that model is incredible for its size.
20
#2 opened 3 months ago
by mirek190
Performance
36
#1 opened 3 months ago
by urtuuuu
The first GGUF that works with long context on llama.cpp!
3
#1 opened 3 months ago
by ubergarm
llamacpp prompt template is wrong
#1 opened 3 months ago
by mirek190
And where is the GGUF file itself?
12
#1 opened 3 months ago
by Anonimus12345678902
Gemma-2-9B-it scores
8
#843 opened 3 months ago
by saishf
GGGGGGGGGGGGGGGGGGGGGGGGGGGG
9
#2 opened 3 months ago
by x4k
What does this model do? Is it an uncensored version?
5
#1 opened 3 months ago
by gopi87
For a 9b model, its coding ability is IMPRESSIVE. How can it be so good?
11
#1 opened 3 months ago
by mirek190
That 9b model is INSANELY good at coding. How is that even possible?
#4 opened 3 months ago
by mirek190
prompt template?
3
#1 opened 4 months ago
by mirek190
Hi - are you going to add the new llama 70b version as well?
4
#1 opened 6 months ago
by mirek190
OK, the llama 3 8b model is INSANE. It is almost as good as wizard 2 8x22b!
52
#5 opened 6 months ago
by mirek190
template for llamacpp
1
#4 opened 6 months ago
by mirek190
Template for llama3 using llamacpp
5
#7 opened 6 months ago
by mirek190
That wizardlm 8x22b is very advanced!
18
#8 opened 6 months ago
by mirek190
Yeap
2
#1 opened 7 months ago
by deleted
llama-cpp failed
9
#3 opened 8 months ago
by gptwin
Code LLaMA 70b, run locally on my PC... is bad.
13
#2 opened 9 months ago
by mirek190
WOW - the best open-source LLM I have ever seen!
60
#1 opened 11 months ago
by mirek190
GPTQ model is available
6
#1 opened 9 months ago
by MaziyarPanahi
New Leader!
6
#2 opened 9 months ago
by DKRacingFan
WOW... just wow
19
#1 opened 10 months ago
by mirek190
gguf weights?
5
#2 opened 9 months ago
by wukongai
Update or delete
3
#48 opened 10 months ago
by rombodawg
At reasoning, this model is worse than the original mixtral-8x7b-instruct model.
2
#4 opened 10 months ago
by mirek190
Do not even try to download lower than 8-bit ;P
4
#2 opened 10 months ago
by mirek190
How many tokens per second?
12
#9 opened 11 months ago
by Hoioi
Wait? 4x13b model?
3
#1 opened 10 months ago
by mirek190
Why is the response slower than the 70B model?
7
#9 opened 10 months ago
by shalene
How do you get the reported Arc score of 85.8?
4
#3 opened 10 months ago
by deleted
chatbot giving weird responses
20
#6 opened 11 months ago
by hammad93
Great model
2
#1 opened 11 months ago
by dranger003
Issue with Mixtral-8x7B-Instruct-v0.1-GGUF Model: 'blk.0.ffn_gate.weight' Tensor Not Found
3
#2 opened 11 months ago
by littleworth
Hello - prompt template?
1
#1 opened 10 months ago
by
mirek190
MoE mixtral version? ...That is fast ;)
#1 opened 10 months ago
by mirek190