jeiku committed
Commit c7fa202
1 Parent(s): 1f526a2

Update README.md

Files changed (1)
  1. README.md +8 -28
README.md CHANGED
@@ -3,34 +3,14 @@ base_model:
  - ChaoticNeutrals/InfinityNexus_9B
  - jeiku/luna_lora_9B
  library_name: transformers
- tags:
- - mergekit
- - merge
-
+ license: apache-2.0
+ datasets:
+ - ResplendentAI/Luna_Alpaca
+ language:
+ - en
  ---
- # 9B
-
- This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
-
- ## Merge Details
- ### Merge Method
-
- This model was merged using the [linear](https://arxiv.org/abs/2203.05482) merge method.
-
- ### Models Merged
-
- The following models were included in the merge:
- * [ChaoticNeutrals/InfinityNexus_9B](https://huggingface.co/ChaoticNeutrals/InfinityNexus_9B) + [jeiku/luna_lora_9B](https://huggingface.co/jeiku/luna_lora_9B)
-
- ### Configuration
+ # Garbage

- The following YAML configuration was used to produce this model:
+ ![image/png](https://cdn-uploads.huggingface.co/production/uploads/626dfb8786671a29c715f8a9/EX9x2T18il0IKsqP6hKUy.png)

- ```yaml
- models:
-   - model: ChaoticNeutrals/InfinityNexus_9B+jeiku/luna_lora_9B
-     parameters:
-       weight: 1.0
- merge_method: linear
- dtype: bfloat16
- ```
+ This is a finetune of InfinityNexus_9B. This is my first time tuning a frankenmerge, so hopefully it works out. The goal is to improve intelligence and RP ability beyond the 7B original models.
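
For reference, the `ChaoticNeutrals/InfinityNexus_9B+jeiku/luna_lora_9B` entry in the removed mergekit config is mergekit's base-plus-LoRA notation: the LoRA adapter is applied to the base model before the (single-input) linear merge. Below is a minimal sketch of that base+LoRA step using `transformers` and `peft`; it assumes the adapter is architecture-compatible with the base model and is an illustration, not mergekit's internal code path. The output directory name is arbitrary.

```python
# Sketch: apply the jeiku/luna_lora_9B adapter to InfinityNexus_9B and bake it in.
# Assumption: the adapter targets this base architecture; dtype and output path are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "ChaoticNeutrals/InfinityNexus_9B", torch_dtype=torch.bfloat16
)
tokenizer = AutoTokenizer.from_pretrained("ChaoticNeutrals/InfinityNexus_9B")

# Attach the LoRA adapter, then fold its weights into the base model.
merged = PeftModel.from_pretrained(base, "jeiku/luna_lora_9B").merge_and_unload()

# Save the result, roughly the model the linear merge_method sees as its single input.
merged.save_pretrained("InfinityNexus_9B-plus-luna")
tokenizer.save_pretrained("InfinityNexus_9B-plus-luna")
```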
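
The updated card describes a standard `transformers` causal LM, so the finetune can be loaded and sampled the usual way. A minimal usage sketch follows; the repo id is a placeholder for this model's actual Hub path (the card above does not state it), and the prompt is illustrative since no prompt format is specified.

```python
# Sketch: load the finetuned model and generate a short sample.
# "your-namespace/this-model" is a hypothetical placeholder; substitute the real repo id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "your-namespace/this-model"  # placeholder, not a real repo id
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id, torch_dtype=torch.bfloat16, device_map="auto"
)

prompt = "Write a short roleplay scene introducing a curious android named Luna."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```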