Any way I can run it on my low-to-mid-tier HP desktop? Specs attached as a .png; I know it's probably a long shot. · 3 replies · #18 opened about 16 hours ago by vgrowhouse
How to run inference on a 40 GB A100 with 80 GB of RAM on Colab Pro? · 1 reply · #17 opened 1 day ago by SadeghPouriyan
nvdiallm · #16 opened 3 days ago by jhaavinash
Update README.md · #15 opened 3 days ago by sabiolobo
Rename README.md to Inquiry Project · #14 opened 3 days ago by Patio21
Update README.md · #12 opened 3 days ago by Delcos
[EVALS] Metrics compared to Meta's Llama 3.1 70B Instruct · 2 replies · #11 opened 5 days ago by ID0M
Congrats to the Nvidia team! · #10 opened 5 days ago by nickandbro
Will a quantised version be available? · 3 replies · #9 opened 5 days ago by angerhang
Adding Evaluation Results · #8 opened 5 days ago by leaderboard-pr-bot
The model is not optimized for inference · 1 reply · #7 opened 5 days ago by Imran1
There are 3 "r"s in the playful "strawrberry"? · 5 replies · #6 opened 5 days ago by JieYingZhang
405B version · 1 reply · #5 opened 6 days ago by nonetrix
Other-language ability · 2 replies · #4 opened 6 days ago by nonetrix
What's the difference between Instruct-HF vs Instruct? · 1 reply · #2 opened 6 days ago by Backup6
Turn inference ON? · 2 replies · #1 opened 6 days ago by victor