Examples of usage

#14
by ernestyalumni - opened

Other than text completion, as shown with the Model card, what else can you do with the open weights once downloaded and local? Could someone point me to code examples of what could be done?

Meta Llama org

Here are some more details!

Let me know if you have any Qs!

@Sanyam NICE! I wouldn’t have known to look (or search) there.

Also, because the Gradio code is available for most Hugging Face Spaces, the Spaces for the 1B and 3B Instruct models have been useful. :)

Meta Llama org

HuggingFace is awesome! 🙏

The Llama website is the new home for all model details and announcements of future ones ;)

In practice, we hope you use both and refer to the Model Cards on our website anytime there is confusion.

@Sanyam I got Llama-3.2-1B-Instruct to run on a GeForce 980 Ti (I'm compute poor). Maybe there's a business case for running Llama-3.2-1B/3B on old, second-hand GPUs, keeping them out of the landfill! In contrast with the Model card example, I ended up using Hugging Face's AutoModelForCausalLM directly instead of a pipeline, since others in the Community discussions were having problems with pipeline.

my code (I'm trying to come up with my own library of wrappers): https://github.com/InServiceOfX/InServiceOfX/blob/master/PythonLibraries/HuggingFace/MoreTransformers/executable_scripts/terminal_only_infinite_loop_instruct.py
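For reference, a minimal sketch of the AutoModelForCausalLM approach described above (this is not the linked script; the model ID, dtype choice, and generation settings are illustrative assumptions):

```python
# Minimal sketch: run Llama-3.2-1B-Instruct via AutoModelForCausalLM
# instead of a pipeline. float16 is assumed here so the weights fit on
# older GPUs like a 980 Ti; adjust dtype/device for your hardware.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-3.2-1B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Build the prompt with the model's chat template, then generate.
messages = [{"role": "user", "content": "Explain beam search in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128, do_sample=False)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

Note the model is gated on the Hub, so you need to accept the license and be logged in (`huggingface-cli login`) before `from_pretrained` will download it.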

Meta Llama org

GeForce 980 Ti (I'm compute poor).

That was the best GPU at one point, and it still counts!

Awesome, thanks for sharing! I will take a look at the MC page and also play with the new models! :)
