Model does not stop at EOS and loops

#2
by alli33 - opened

Hi @mudler ,

I hope this message finds you well.

I have been using this model and ran into an issue where generation does not stop at the end tokens (128001, <|im_end|>, <|end_of_text|>, or <|eot_id|>). When used with exllama2 or lmdeploy, it randomly loops until the max_token limit is reached. When running the model with llama.cpp + LocalAI, it works without issues, although performance is noticeably slower (approximately three times slower), so I suspect the problem is still present there but masked by some post-processing.

Do you have any idea what might be causing this behavior, how it could be resolved, or whether a new version of the model is planned?
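For reference, a minimal way to force the stop words explicitly through lmdeploy's Python API looks like the sketch below (unverified on my side; I'm assuming a recent lmdeploy where GenerationConfig exposes stop_words, and the model path is a placeholder). It does not fix the underlying looping, but it should at least cap the obvious runaway generations:

from lmdeploy import pipeline, GenerationConfig

# Placeholder path; point this at the actual model directory.
pipe = pipeline("path/to/the-model")

# Ask the engine itself to stop on the end tokens instead of
# relying on the model emitting a single EOS id.
gen_config = GenerationConfig(
    max_new_tokens=512,
    stop_words=["<|eot_id|>", "<|end_of_text|>", "<|im_end|>"],
)

response = pipe(["Hello, who are you?"], gen_config=gen_config)
print(response)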

Thank you for your assistance.

Best regards,

Alexandre

Hi @mudler ,

I wanted to update you on the progress I've made with the template for lmdeploy. I managed to create a configuration that is somewhat aligned with the desired behavior. I am trying to migrate this LocalAI template:

  function: |-
    <|begin_of_text|><|start_header_id|>system<|end_header_id|>
    {{$tools:=""}}
    You have access to the following tools:
    {{range .Functions -}}
    > Tool Name: {{.Name}}
    {{ $tools = print $tools .Name " " -}}
    Tool Description: {{.Description}}
    Tool Args:
    {{ range $key,$val:= (index .Parameters "properties") -}}
      - {{$key}} ({{ index $val "type"}}): {{index $val "description" }}
    {{ end -}}
    {{ end -}}Answer only in JSON by using the following format if using a tool:
    {"name": "tool_name", "arguments": { "arg_1": "value" } }
    Function must be one of [{{$tools}}].<|eot_id|>
    {{.Input}}
    <|start_header_id|>assistant<|end_header_id|>

to an LMDeploy chat_template JSON:

{
    "model_name": "internlm2",
    "system": "<|start_header_id|>system\n",
    "meta_instruction": "You are a robot developed by LMDeploy.",
    "eosys": "<|eot_id|>\n",
    "user": "<|start_header_id|>user\n",
    "eoh": "<|start_header_id|>\n",
    "assistant": "<|start_header_id|>assistant\n",
    "eoa": "<|end_of_text|>",
    "separator": "\n",
    "capability": "chat",
    "stop_words": ["<|begin_of_text|>", "<|end_of_text|>", "<|im_end|>","<|eot_id|>"]
}
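I load this file at serve time; if I'm reading the lmdeploy docs correctly, --chat-template accepts a JSON file like the one above (the model path here is a placeholder):

lmdeploy serve api_server path/to/the-model --chat-template chat_template.json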

While this is a step forward, I still have some concerns about the behavior, particularly the stop_words configuration. The model seems to be running more smoothly, but I'm open to any suggestions to further optimize it or address potential hidden issues. I also sometimes get errors, probably related to <|begin_of_text|>; any advice regarding the migration would be appreciated.
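If it helps the discussion, a variant that follows the published Llama 3 prompt format more closely would look like this (untested on my side): it restores <|end_header_id|>, closes every turn with <|eot_id|>, and drops <|begin_of_text|> from the stop words, since it is a BOS rather than a stop token. The meta_instruction is a placeholder:

{
    "model_name": "llama3",
    "system": "<|start_header_id|>system<|end_header_id|>\n\n",
    "meta_instruction": "You are a helpful assistant.",
    "eosys": "<|eot_id|>",
    "user": "<|start_header_id|>user<|end_header_id|>\n\n",
    "eoh": "<|eot_id|>",
    "assistant": "<|start_header_id|>assistant<|end_header_id|>\n\n",
    "eoa": "<|eot_id|>",
    "separator": "",
    "capability": "chat",
    "stop_words": ["<|eot_id|>", "<|end_of_text|>"]
}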

Performance-wise, lmdeploy takes about 1.6 s per response instead of 13 s with LocalAI + llama.cpp.

Thank you for your assistance.

Best regards,

Alexandre
