Your feedback on HuggingChat

#1
by victor (HF staff) - opened
Hugging Chat org • edited Apr 25, 2023

Any constructive feedback is welcome here. Just use the "New Discussion" button (or this link)!

^^ pin it? :)

victor pinned discussion
victor changed discussion title from Feedback! to Your feedback on HuggingChat

HuggingChat can only speak poor Chinese. I told him, 'Let's speak in Chinese.' He said, 'Sure,' but then continued to speak in English or with incorrect pinyin. But this is an interesting project.

Vicuna is a great alternative to Open Assistant as it offers a more advanced and capable language model. Both are open-source solutions, allowing for customization and extension of their functionality. Vicuna's natural language processing capabilities are particularly impressive, making it a more intelligent virtual assistant overall.

Hugging Chat org

> Vicuna is a great alternative to Open Assistant as it offers a more advanced and capable language model. Both are open-source solutions, allowing for customization and extension of their functionality. Vicuna's natural language processing capabilities are particularly impressive, making it a more intelligent virtual assistant overall.

Yes, I answered your post 👍

This needs more prompting than Google's Bard or ChatGPT; they understand quickly what I need. Also, the feeling that you are chatting with a machine is still there.

Sometimes there is no response. Most of the time it finishes halfway or less through an answer. I am using it to program in .NET Core.

Today, Command R+ was replaced with the 08-2024 version on HuggingChat, and I think this is a great improvement. However, conversations that were using the old version of Command R+ can no longer continue because the original model is no longer available.

Instead of this happening, I would like the ability to switch models or assistants in the middle of a conversation. I understand that the intention is to avoid confusion by not allowing model changes during conversations, but it seems more confusing to invalidate all previous conversations every time the model gets updated.

Additional note: It seems that the issue of not being able to continue conversations has been resolved in #565. Thank you for addressing this! I would be even happier if the ability to switch models was available at any time.


Update:
I found that I could change the model by running the following script in Chrome's Developer Tools console, which solved the problem for the time being.

// Send a PATCH request to the conversation endpoint to switch its model.
const xhr = new XMLHttpRequest();
xhr.open("PATCH", "https://huggingface.co/chat/conversation/123456abcdef(Your conversation ID)");
xhr.setRequestHeader("Content-Type", "application/json");
// Log the response so you can confirm the request completed.
xhr.onload = () => {
  console.log(xhr.status, xhr.responseText);
};
xhr.send('{"model":"CohereForAI/c4ai-command-r-plus-08-2024"}');
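
If you prefer fetch, the same request should work written as below. This is just a minimal sketch: the endpoint, header, and JSON body are copied from the script above, and the conversation ID placeholder still needs to be replaced with your own.

// Same PATCH request using fetch; replace the conversation ID placeholder first.
fetch("https://huggingface.co/chat/conversation/123456abcdef(Your conversation ID)", {
  method: "PATCH",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ model: "CohereForAI/c4ai-command-r-plus-08-2024" }),
}).then((res) => console.log(res.status)); // a 2xx status should mean the model was switched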

I'm sorry, but Cohere's new Command R+ 08-2024 version is not good. It does not seem to listen to anything I put into it.

I ask it for a long story, giving it a bit of the detail needed, but it doesn't even give a story long enough to require any scrolling. I tried emphasizing the request when editing the prompt, using caps or a parenthetical like '(please make it detailed / please make sure ___)', but it just ignores it no matter what I put in. I even tried changing the first word from "Start" to "Create", thinking it would give a different type of response. It did not change anything at all; I still got a short story no matter what, even though I asked for a good long story.

What I expect by 'long story': one long enough that the 'Continue' button appears when the response stops. It never produced a story long enough for that button to appear, no matter what I changed in the prompt.

Aaaand as of today, Cohere's Command R+ 08-2024 version has started doing the same thing as its predecessor: loading non-stop when trying to respond.

Bruh

I guess you could just pause generation.

I would really like to see markdown formatting for the user's messages as well. In the LLM Inference Playground this is already the case, but in the actual HuggingChat interface it is not.
I frequently use markdown formatting in my input, as it makes it clearer where, for example, my code starts and ends, what is context and what is my instruction, and it lets me highlight to the model what's important for the given task without screaming in ALLCAPS.
This would be very appreciated.
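
A quick, made-up example of the kind of prompt I mean (the headings, the function name, and the bolding are invented; the point is just that this structure would be visible if user markdown were rendered):

### Context
I'm refactoring the `parse_args()` helper in a small CLI tool.

### Instruction
**Keep the behaviour identical**; only improve the naming, please.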

I'd love to see more data on how much an assistant is used. Right now it says "1-10" or "10+" but what does it mean exactly?

Hugging Chat org

@Turtleback217 that's the number of people who have used your assistant!

Awesome. Thanks. Is there a way to see how many times it's been used? If 150 people have clicked an assistant once, or if just 20 people have used it 600 times, it tells a wildly different story! :D
