zijianhu committed
Commit: 208e4eb
Parent(s): 646b8b0

Update README.md

Add inference endpoint information

Files changed (1): README.md (+9 -1)
README.md CHANGED

@@ -10,7 +10,6 @@ language:
 > This model is an instruction tuned model which requires alignment before it can be used in production. We will release
 > the chat version soon.
 
-
 Fox-1 is a decoder-only transformer-based small language model (SLM) with 1.6B total parameters developed
 by [TensorOpera AI](https://tensoropera.ai/). The model was pre-trained with a 3-stage data curriculum on 3 trillion
 tokens of text and code data in 8K sequence length. Fox-1 uses Grouped Query Attention (GQA) with 4 key-value heads and
@@ -22,6 +21,15 @@ was finetuned with 5B tokens of instruction following and multi-turn conversatio
 For the full details of this model please read
 our [release blog post](https://blog.tensoropera.ai/tensoropera-unveils-fox-foundation-model-a-pioneering-open-source-slm-leading-the-way-against-tech-giants).
 
+## Getting-Started
+
+The model and a live inference endpoint are available on
+the [TensorOpera AI Platform](https://tensoropera.ai/models/1228?owner=tensoropera).
+
+For detailed deployment instructions, refer to
+the [Step-by-Step Guide](https://blog.tensoropera.ai/how-to/how-to-deploy-fox-1-on-tensoropera-ai-a-step-by-step-guide-2/)
+on how to deploy Fox-1-Instruct on the [TensorOpera AI Platform](https://tensoropera.ai/).
+
 ## Benchmarks
 
 We evaluated Fox-1 on ARC Challenge (25-shot), HellaSwag (10-shot), TruthfulQA (0-shot), MMLU (5-shot),
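The Getting-Started text this commit adds points readers to a hosted inference endpoint on the TensorOpera AI Platform. As a minimal sketch of how such an endpoint might be called, the snippet below assumes an OpenAI-style chat-completions API; the endpoint URL, authentication scheme, model id, and response shape are all assumptions for illustration, not details confirmed by this commit — consult the linked Step-by-Step Guide for the actual deployment and invocation flow.

```python
import json
import urllib.request


def build_chat_request(prompt: str, model: str = "tensoropera/Fox-1-1.6B-Instruct") -> dict:
    # Assemble an OpenAI-style chat-completions payload.
    # The model id and payload shape are hypothetical placeholders.
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,
    }


def query_endpoint(url: str, api_key: str, prompt: str) -> str:
    # POST the JSON payload to the hosted endpoint; the URL and
    # bearer-token auth scheme are assumptions for this sketch.
    payload = json.dumps(build_chat_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        url,
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # Assumes an OpenAI-style response structure.
    return body["choices"][0]["message"]["content"]
```

A real deployment would substitute the endpoint URL and credentials issued by the platform for the placeholders above.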