Hugging Face Generative AI Services (HUGS) documentation

Frequently Asked Questions (FAQ)

What is HUGS?

HUGS (Hugging Face Generative AI Services) are optimized, zero-configuration inference microservices designed to simplify and accelerate the development of AI applications with open models. For more details, see our Introduction to HUGS.

Which models are supported by HUGS?

HUGS supports a wide range of open AI models, including LLMs, Multimodal Models, and Embedding Models. For a complete list of supported models, check our Supported Models page.

What hardware is compatible with HUGS?

HUGS is optimized for various hardware accelerators, including NVIDIA GPUs, AMD GPUs, AWS Inferentia, and Google TPUs. For more information, visit our Supported Hardware page.

How do I deploy HUGS?

You can deploy HUGS through various methods, including Docker and Kubernetes. For step-by-step deployment instructions, refer to our Deployment Guide.
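As an illustrative sketch only (the registry path, image name, and port mapping below are assumptions, not the actual HUGS registry), a Docker-based deployment typically looks like:

```shell
# Hypothetical image reference; substitute the registry path and model tag
# from your HUGS subscription. Requests all available GPUs, allocates shared
# memory for the inference engine, and exposes the service on port 8080.
docker run -d \
  --gpus all \
  --shm-size 1g \
  -p 8080:80 \
  hugs-registry.example.com/hugs/meta-llama-3.1-8b-instruct:latest
```

Once the container is up, it serves requests on the mapped port; see the Deployment Guide for the exact image names and hardware-specific options.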

Is HUGS available on cloud platforms?

Yes, HUGS is available on major cloud platforms. For platform-specific instructions, check our cloud deployment guides.

How does HUGS pricing work?

HUGS offers on-demand pricing based on the uptime of each container. For detailed pricing information, visit our Pricing page.

How do I run inference using HUGS?

To learn how to run inference with HUGS, check our Inference Guide.
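As a minimal sketch (the localhost URL, port, and model name are assumptions about a locally running container, not guaranteed defaults), HUGS exposes an OpenAI-compatible Chat Completions API, so a request can be built and sent with nothing but the standard library:

```python
import json
from urllib import request

# Hypothetical endpoint for a locally running HUGS container.
HUGS_URL = "http://localhost:8080/v1/chat/completions"

# OpenAI-style chat payload; "model" is typically the single model
# the container serves.
payload = {
    "model": "tgi",
    "messages": [{"role": "user", "content": "What is deep learning?"}],
    "max_tokens": 128,
    "stream": False,
}

def chat(url: str = HUGS_URL) -> dict:
    """POST the chat payload and return the parsed JSON response."""
    req = request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())

# With a container running, chat()["choices"][0]["message"]["content"]
# holds the generated reply.
```

Because the API follows the Chat Completions convention, existing OpenAI-compatible client libraries can usually be pointed at the same endpoint by changing only the base URL.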

What are the key features of HUGS?

HUGS offers several key features, including optimized hardware inference engines, zero-configuration deployment, and industry-standardized APIs. For a complete list of features, see our Introduction to HUGS.

How does HUGS ensure security and compliance?

HUGS allows deployment within your own infrastructure for enhanced security and data control. It also includes the necessary licenses and terms of service to minimize compliance risks. For more information, refer to our Security and Compliance section.

Where can I get help or support for HUGS?

If you need assistance or have questions about HUGS, check our Help & Support page for community forums and contact information.

Can I use HUGS with my existing AI applications?

Yes, HUGS is designed to integrate easily with existing AI applications. It provides industry-standard APIs and is compatible with popular open models.

How does HUGS compare to other AI services?

HUGS offers unique advantages such as optimization for open models, hardware flexibility, and zero-configuration deployment.

Is HUGS suitable for both small startups and large enterprises?

Yes, HUGS is designed to meet the needs of both small startups and large enterprises. Its flexible deployment options and scalability make it suitable for a wide range of use cases.
