Dataset schema:

| column | dtype | stats |
| --- | --- | --- |
| status | stringclasses | 1 value |
| repo_name | stringclasses | 31 values |
| repo_url | stringclasses | 31 values |
| issue_id | int64 | 1 to 104k |
| title | stringlengths | 4 to 233 |
| body | stringlengths | 0 to 186k |
| issue_url | stringlengths | 38 to 56 |
| pull_url | stringlengths | 37 to 54 |
| before_fix_sha | stringlengths | 40 to 40 |
| after_fix_sha | stringlengths | 40 to 40 |
| report_datetime | unknown | |
| language | stringclasses | 5 values |
| commit_datetime | unknown | |
| updated_file | stringlengths | 7 to 188 |
| chunk_content | stringlengths | 1 to 1.03M |
status: closed
repo_name: langchain-ai/langchain
repo_url: https://github.com/langchain-ai/langchain
issue_id: 4331
title: Issue: Model and model_name inconsistency in OpenAI LLMs such as ChatOpenAI
body:

### Issue you'd like to raise

The argument `model_name` is the standard way of specifying a model in LangChain's [ChatOpenAI](https://github.com/hwchase17/langchain/blob/65c95f9fb2b86cf3281f2f3939b37e71f048f741/langchain/chat_models/openai.py#L115). OpenAI's own [API](https://platform.openai.com/docs/api-reference/completions/create), however, uses `model`. To handle this discrepancy, LangChain maps `model_name` to `model` [here](https://github.com/hwchase17/langchain/blob/65c95f9fb2b86cf3281f2f3939b37e71f048f741/langchain/chat_models/openai.py#L202).

The problem is that if you skip `model_name` and pass `model` at instantiation, e.g. `ChatOpenAI(model=...)`, it still works. It works because `model` becomes part of `model_kwargs`, which takes precedence over the default `model_name` ("gpt-3.5-turbo"). This leads to an inconsistency: `model` can be anything (e.g. "gpt-4-0314") while `model_name` keeps the default value. The inconsistency causes no direct failure, but it is a problem when you are trying to understand which model is actually being called. I'm raising this issue because I lost a couple of hours myself trying to understand what was happening.

### Suggestion

There are three ways to solve it:

1. Raise an error or warning if `model` is used as an argument, and suggest using `model_name` instead.
2. Raise a warning if `model` is defined differently from `model_name`.
3. Rename `model_name` to `model`, making it consistent with OpenAI's API.

I think (3) is unfeasible due to the breaking change, but raising a warning seems low-effort and safe enough.
issue_url: https://github.com/langchain-ai/langchain/issues/4331
pull_url: https://github.com/langchain-ai/langchain/pull/4366
before_fix_sha: 02ebb15c4a92a23818c2c17486bdaf9f590dc6a5
after_fix_sha: ba0057c07712e5e725c7c5e14c02d223783b183c
report_datetime: 2023-05-08T10:49:23Z
language: python
commit_datetime: 2023-05-08T23:37:34Z
updated_file: langchain/llms/openai.py
response = "" params["stream"] = True async for stream_resp in await acompletion_with_retry( self, messages=messages, **params ): token = stream_resp["choices"][0]["delta"].get("content", "") response += token if run_manager: await run_manager.on_llm_new_token( token, ) return LLMResult( generations=[[Generation(text=response)]], ) else: full_response = await acompletion_with_retry( self, messages=messages, **params ) llm_output = { "token_usage": full_response["usage"], "model_name": self.model_name, } return LLMResult( generations=[ [Generation(text=full_response["choices"][0]["message"]["content"])] ], llm_output=llm_output, ) @property def _identifying_params(self) -> Mapping[str, Any]:
updated_file: langchain/llms/openai.py
chunk_content:

```python
        """Get the identifying parameters."""
        return {**{"model_name": self.model_name}, **self._default_params}

    @property
    def _llm_type(self) -> str:
        """Return type of llm."""
        return "openai-chat"

    def get_num_tokens(self, text: str) -> int:
        """Calculate num tokens with tiktoken package."""
        # tiktoken is only available for Python 3.8+
        if sys.version_info[1] < 8:
            return super().get_num_tokens(text)
        try:
            import tiktoken
        except ImportError:
            raise ValueError(
                "Could not import tiktoken python package. "
                "This is needed in order to calculate get_num_tokens. "
                "Please install it with `pip install tiktoken`."
            )
        # create an encoder for the gpt-3.5-turbo model
        enc = tiktoken.encoding_for_model("gpt-3.5-turbo")
        # encode the text, honoring the configured special-token handling
        tokenized_text = enc.encode(
            text,
            allowed_special=self.allowed_special,
            disallowed_special=self.disallowed_special,
        )
        # count the tokens in the encoded text
        return len(tokenized_text)
```
updated_file: tests/integration_tests/chat_models/test_openai.py
chunk_content:

```python
"""Test ChatOpenAI wrapper."""

import pytest

from langchain.callbacks.manager import CallbackManager
from langchain.chat_models.openai import ChatOpenAI
from langchain.schema import (
    BaseMessage,
    ChatGeneration,
    ChatResult,
    HumanMessage,
    LLMResult,
    SystemMessage,
)
from tests.unit_tests.callbacks.fake_callback_handler import FakeCallbackHandler


def test_chat_openai() -> None:
    """Test ChatOpenAI wrapper."""
    chat = ChatOpenAI(max_tokens=10)
    message = HumanMessage(content="Hello")
    response = chat([message])
    assert isinstance(response, BaseMessage)
    assert isinstance(response.content, str)


def test_chat_openai_system_message() -> None:
    """Test ChatOpenAI wrapper with system message."""
    chat = ChatOpenAI(max_tokens=10)
    system_message = SystemMessage(content="You are to chat with the user.")
    human_message = HumanMessage(content="Hello")
    response = chat([system_message, human_message])
    assert isinstance(response, BaseMessage)
    assert isinstance(response.content, str)


def test_chat_openai_generate() -> None:
```
updated_file: tests/integration_tests/chat_models/test_openai.py
chunk_content:

```python
    """Test ChatOpenAI wrapper with generate."""
    chat = ChatOpenAI(max_tokens=10, n=2)
    message = HumanMessage(content="Hello")
    response = chat.generate([[message], [message]])
    assert isinstance(response, LLMResult)
    assert len(response.generations) == 2
    for generations in response.generations:
        assert len(generations) == 2
        for generation in generations:
            assert isinstance(generation, ChatGeneration)
            assert isinstance(generation.text, str)
            assert generation.text == generation.message.content


def test_chat_openai_multiple_completions() -> None:
    """Test ChatOpenAI wrapper with multiple completions."""
    chat = ChatOpenAI(max_tokens=10, n=5)
    message = HumanMessage(content="Hello")
    response = chat._generate([message])
    assert isinstance(response, ChatResult)
    assert len(response.generations) == 5
    for generation in response.generations:
        assert isinstance(generation.message, BaseMessage)
        assert isinstance(generation.message.content, str)


def test_chat_openai_streaming() -> None:
```
updated_file: tests/integration_tests/chat_models/test_openai.py
chunk_content:

```python
    """Test that streaming correctly invokes on_llm_new_token callback."""
    callback_handler = FakeCallbackHandler()
    callback_manager = CallbackManager([callback_handler])
    chat = ChatOpenAI(
        max_tokens=10,
        streaming=True,
        temperature=0,
        callback_manager=callback_manager,
        verbose=True,
    )
    message = HumanMessage(content="Hello")
    response = chat([message])
    assert callback_handler.llm_streams > 0
    assert isinstance(response, BaseMessage)


def test_chat_openai_llm_output_contains_model_name() -> None:
    """Test llm_output contains model_name."""
    chat = ChatOpenAI(max_tokens=10)
    message = HumanMessage(content="Hello")
    llm_result = chat.generate([[message]])
    assert llm_result.llm_output is not None
    assert llm_result.llm_output["model_name"] == chat.model_name


def test_chat_openai_streaming_llm_output_contains_model_name() -> None:
    """Test llm_output contains model_name."""
    chat = ChatOpenAI(max_tokens=10, streaming=True)
    message = HumanMessage(content="Hello")
    llm_result = chat.generate([[message]])
    assert llm_result.llm_output is not None
    assert llm_result.llm_output["model_name"] == chat.model_name


def test_chat_openai_invalid_streaming_params() -> None:
```
updated_file: tests/integration_tests/chat_models/test_openai.py
chunk_content:

```python
    """Test that invalid streaming parameters raise a ValueError."""
    with pytest.raises(ValueError):
        ChatOpenAI(
            max_tokens=10,
            streaming=True,
            temperature=0,
            n=5,
        )


@pytest.mark.asyncio
async def test_async_chat_openai() -> None:
    """Test async generation."""
    chat = ChatOpenAI(max_tokens=10, n=2)
    message = HumanMessage(content="Hello")
    response = await chat.agenerate([[message], [message]])
    assert isinstance(response, LLMResult)
    assert len(response.generations) == 2
    for generations in response.generations:
        assert len(generations) == 2
        for generation in generations:
            assert isinstance(generation, ChatGeneration)
            assert isinstance(generation.text, str)
            assert generation.text == generation.message.content


@pytest.mark.asyncio
async def test_async_chat_openai_streaming() -> None:
```
updated_file: tests/integration_tests/chat_models/test_openai.py
chunk_content:

```python
    """Test that streaming correctly invokes on_llm_new_token callback."""
    callback_handler = FakeCallbackHandler()
    callback_manager = CallbackManager([callback_handler])
    chat = ChatOpenAI(
        max_tokens=10,
        streaming=True,
        temperature=0,
        callback_manager=callback_manager,
        verbose=True,
    )
    message = HumanMessage(content="Hello")
    response = await chat.agenerate([[message], [message]])
    assert callback_handler.llm_streams > 0
    assert isinstance(response, LLMResult)
    assert len(response.generations) == 2
    for generations in response.generations:
        assert len(generations) == 1
        for generation in generations:
            assert isinstance(generation, ChatGeneration)
            assert isinstance(generation.text, str)
            assert generation.text == generation.message.content
```
updated_file: tests/integration_tests/llms/test_openai.py
chunk_content:

```python
"""Test OpenAI API wrapper."""

from pathlib import Path
from typing import Generator

import pytest

from langchain.callbacks.manager import CallbackManager
from langchain.llms.loading import load_llm
from langchain.llms.openai import OpenAI, OpenAIChat
from langchain.schema import LLMResult
from tests.unit_tests.callbacks.fake_callback_handler import FakeCallbackHandler


def test_openai_call() -> None:
    """Test valid call to openai."""
    llm = OpenAI(max_tokens=10)
    output = llm("Say foo:")
    assert isinstance(output, str)


def test_openai_extra_kwargs() -> None:
```
updated_file: tests/integration_tests/llms/test_openai.py
chunk_content:

```python
    """Test extra kwargs to openai."""
    llm = OpenAI(foo=3, max_tokens=10)
    assert llm.max_tokens == 10
    assert llm.model_kwargs == {"foo": 3}

    llm = OpenAI(foo=3, model_kwargs={"bar": 2})
    assert llm.model_kwargs == {"foo": 3, "bar": 2}

    with pytest.raises(ValueError):
        OpenAI(foo=3, model_kwargs={"foo": 2})


def test_openai_llm_output_contains_model_name() -> None:
    """Test llm_output contains model_name."""
    llm = OpenAI(max_tokens=10)
    llm_result = llm.generate(["Hello, how are you?"])
    assert llm_result.llm_output is not None
    assert llm_result.llm_output["model_name"] == llm.model_name


def test_openai_stop_valid() -> None:
    """Test openai stop logic on valid configuration."""
    query = "write an ordered list of five items"
    first_llm = OpenAI(stop="3", temperature=0)
    first_output = first_llm(query)
    second_llm = OpenAI(temperature=0)
    second_output = second_llm(query, stop=["3"])
    assert first_output == second_output


def test_openai_stop_error() -> None:
```
updated_file: tests/integration_tests/llms/test_openai.py
chunk_content:

```python
    """Test openai stop logic on bad configuration."""
    llm = OpenAI(stop="3", temperature=0)
    with pytest.raises(ValueError):
        llm("write an ordered list of five items", stop=["\n"])


def test_saving_loading_llm(tmp_path: Path) -> None:
    """Test saving/loading an OpenAI LLM."""
    llm = OpenAI(max_tokens=10)
    llm.save(file_path=tmp_path / "openai.yaml")
    loaded_llm = load_llm(tmp_path / "openai.yaml")
    assert loaded_llm == llm


def test_openai_streaming() -> None:
    """Test streaming tokens from OpenAI."""
    llm = OpenAI(max_tokens=10)
    generator = llm.stream("I'm Pickle Rick")

    assert isinstance(generator, Generator)

    for token in generator:
        assert isinstance(token["choices"][0]["text"], str)


def test_openai_streaming_error() -> None:
    """Test error handling in stream."""
    llm = OpenAI(best_of=2)
    with pytest.raises(ValueError):
        llm.stream("I'm Pickle Rick")


def test_openai_streaming_best_of_error() -> None:
    """Test validation for streaming fails if best_of is not 1."""
    with pytest.raises(ValueError):
        OpenAI(best_of=2, streaming=True)


def test_openai_streaming_n_error() -> None:
```
updated_file: tests/integration_tests/llms/test_openai.py
chunk_content:

```python
    """Test validation for streaming fails if n is not 1."""
    with pytest.raises(ValueError):
        OpenAI(n=2, streaming=True)


def test_openai_streaming_multiple_prompts_error() -> None:
    """Test validation for streaming fails if multiple prompts are given."""
    with pytest.raises(ValueError):
        OpenAI(streaming=True).generate(["I'm Pickle Rick", "I'm Pickle Rick"])


def test_openai_streaming_call() -> None:
    """Test valid call to openai."""
    llm = OpenAI(max_tokens=10, streaming=True)
    output = llm("Say foo:")
    assert isinstance(output, str)


def test_openai_streaming_callback() -> None:
    """Test that streaming correctly invokes on_llm_new_token callback."""
    callback_handler = FakeCallbackHandler()
    callback_manager = CallbackManager([callback_handler])
    llm = OpenAI(
        max_tokens=10,
        streaming=True,
        temperature=0,
        callback_manager=callback_manager,
        verbose=True,
    )
    llm("Write me a sentence with 100 words.")
    assert callback_handler.llm_streams == 10


@pytest.mark.asyncio
async def test_openai_async_generate() -> None:
```
updated_file: tests/integration_tests/llms/test_openai.py
chunk_content:

```python
    """Test async generation."""
    llm = OpenAI(max_tokens=10)
    output = await llm.agenerate(["Hello, how are you?"])
    assert isinstance(output, LLMResult)


@pytest.mark.asyncio
async def test_openai_async_streaming_callback() -> None:
    """Test that streaming correctly invokes on_llm_new_token callback."""
    callback_handler = FakeCallbackHandler()
    callback_manager = CallbackManager([callback_handler])
    llm = OpenAI(
        max_tokens=10,
        streaming=True,
        temperature=0,
        callback_manager=callback_manager,
        verbose=True,
    )
    result = await llm.agenerate(["Write me a sentence with 100 words."])
    assert callback_handler.llm_streams == 10
    assert isinstance(result, LLMResult)


def test_openai_chat_wrong_class() -> None:
```
updated_file: tests/integration_tests/llms/test_openai.py
chunk_content:

```python
    """Test OpenAIChat with wrong class still works."""
    llm = OpenAI(model_name="gpt-3.5-turbo")
    output = llm("Say foo:")
    assert isinstance(output, str)


def test_openai_chat() -> None:
    """Test OpenAIChat."""
    llm = OpenAIChat(max_tokens=10)
    output = llm("Say foo:")
    assert isinstance(output, str)


def test_openai_chat_streaming() -> None:
    """Test OpenAIChat with streaming option."""
    llm = OpenAIChat(max_tokens=10, streaming=True)
    output = llm("Say foo:")
    assert isinstance(output, str)


def test_openai_chat_streaming_callback() -> None:
    """Test that streaming correctly invokes on_llm_new_token callback."""
    callback_handler = FakeCallbackHandler()
    callback_manager = CallbackManager([callback_handler])
    llm = OpenAIChat(
        max_tokens=10,
        streaming=True,
        temperature=0,
        callback_manager=callback_manager,
        verbose=True,
    )
    llm("Write me a sentence with 100 words.")
    assert callback_handler.llm_streams != 0


@pytest.mark.asyncio
async def test_openai_chat_async_generate() -> None:
```
updated_file: tests/integration_tests/llms/test_openai.py
chunk_content:

```python
    """Test async chat."""
    llm = OpenAIChat(max_tokens=10)
    output = await llm.agenerate(["Hello, how are you?"])
    assert isinstance(output, LLMResult)


@pytest.mark.asyncio
async def test_openai_chat_async_streaming_callback() -> None:
    """Test that streaming correctly invokes on_llm_new_token callback."""
    callback_handler = FakeCallbackHandler()
    callback_manager = CallbackManager([callback_handler])
    llm = OpenAIChat(
        max_tokens=10,
        streaming=True,
        temperature=0,
        callback_manager=callback_manager,
        verbose=True,
    )
    result = await llm.agenerate(["Write me a sentence with 100 words."])
    assert callback_handler.llm_streams != 0
    assert isinstance(result, LLMResult)


def test_openai_modelname_to_contextsize_valid() -> None:
    """Test model name to context size on a valid model."""
    assert OpenAI().modelname_to_contextsize("davinci") == 2049


def test_openai_modelname_to_contextsize_invalid() -> None:
    """Test model name to context size on an invalid model."""
    with pytest.raises(ValueError):
        OpenAI().modelname_to_contextsize("foobar")
```
status: closed
repo_name: langchain-ai/langchain
repo_url: https://github.com/langchain-ai/langchain
issue_id: 4153
title: WhatsAppChatLoader doesn't work on chats exported from WhatsApp
body:

### System Info

langchain 0.0.158, macOS (M1), Python 3.11

### Who can help?

@ey

### Information

- [X] The official example notebooks/scripts
- [ ] My own modified scripts

### Related Components

- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction

1. Use the 'Export Chat' feature in WhatsApp.
2. Observe this format in the exported txt file:

```
[11/8/21, 9:41:32 AM] User name: Message text
```

The regular expression used by WhatsAppChatLoader doesn't parse this format successfully.

### Expected behavior

Parsing fails.
issue_url: https://github.com/langchain-ai/langchain/issues/4153
pull_url: https://github.com/langchain-ai/langchain/pull/4420
before_fix_sha: f2150285a495fc530a7707218ea4980c17a170e5
after_fix_sha: 2b1403612614127da4e3bd3d22595ce7b3eb1540
report_datetime: 2023-05-05T05:25:38Z
language: python
commit_datetime: 2023-05-09T22:00:04Z
updated_file: langchain/document_loaders/whatsapp_chat.py
chunk_content:

```python
import re
from pathlib import Path
from typing import List

from langchain.docstore.document import Document
from langchain.document_loaders.base import BaseLoader


def concatenate_rows(date: str, sender: str, text: str) -> str:
    """Combine message information in a readable format ready to be used."""
    return f"{sender} on {date}: {text}\n\n"


class WhatsAppChatLoader(BaseLoader):
    """Loader that loads WhatsApp messages text file."""

    def __init__(self, path: str):
        """Initialize with path."""
        self.file_path = path

    def load(self) -> List[Document]:
        """Load documents."""
        p = Path(self.file_path)
        text_content = ""

        with open(p, encoding="utf8") as f:
            lines = f.readlines()

        message_line_regex = r"""
```
updated_file: langchain/document_loaders/whatsapp_chat.py
chunk_content:

```python
            \[?
            (
                \d{1,2}
                [\/.]
                \d{1,2}
                [\/.]
                \d{2,4}
                ,\s
                \d{1,2}
                :\d{2}
                (?:
                    :\d{2}
                )?
                (?:[ _](?:AM|PM))?
            )
            \]?
            [\s-]*
            ([\w\s]+)
            [:]+
            \s
            (.+)
        """
        for line in lines:
            result = re.match(message_line_regex, line.strip(), flags=re.VERBOSE)
            if result:
                date, sender, text = result.groups()
                text_content += concatenate_rows(date, sender, text)

        metadata = {"source": str(p)}

        return [Document(page_content=text_content, metadata=metadata)]
```
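As a quick self-contained check of the pattern above against the format reported in the issue (the regex is condensed here for brevity; the group layout is unchanged, and the expected groups agree with the test file later in this dump):

```python
import re

# Brackets, seconds, and the AM/PM suffix are all optional in the pattern,
# which is what makes the reported export format parse.
MESSAGE_LINE_REGEX = r"""
    \[?
    (
        \d{1,2}[\/.]\d{1,2}[\/.]\d{2,4}
        ,\s
        \d{1,2}:\d{2}
        (?::\d{2})?
        (?:[ _](?:AM|PM))?
    )
    \]?
    [\s-]*
    ([\w\s]+)
    [:]+
    \s
    (.+)
"""

line = "[11/8/21, 9:41:32 AM] User name: Message text"
match = re.match(MESSAGE_LINE_REGEX, line.strip(), flags=re.VERBOSE)
assert match is not None
print(match.groups())
# ('11/8/21, 9:41:32 AM', 'User name', 'Message text')
```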
updated_file: tests/integration_tests/document_loaders/test_whatsapp_chat.py
chunk_content:

```python
from pathlib import Path

from langchain.document_loaders import WhatsAppChatLoader


def test_whatsapp_chat_loader() -> None:
    """Test WhatsAppChatLoader."""
    file_path = Path(__file__).parent.parent / "examples" / "whatsapp_chat.txt"
    loader = WhatsAppChatLoader(str(file_path))
    docs = loader.load()

    assert len(docs) == 1
    assert docs[0].metadata["source"] == str(file_path)
    assert docs[0].page_content == (
        "James on 05.05.23, 15:48:11: Hi here\n\n"
        "User name on 11/8/21, 9:41:32 AM: Message 123\n\n"
        "User 2 on 1/23/23, 3:19 AM: Bye!\n\n"
        "User 1 on 1/23/23, 3:22_AM: And let me know if anything changes\n\n"
    )
```
status: closed
repo_name: langchain-ai/langchain
repo_url: https://github.com/langchain-ai/langchain
issue_id: 1619
title: ChromaDB does not support filtering when using `similarity_search` or `similarity_search_by_vector`
body:

It should be possible to filter by metadata, but:

- `langchain.vectorstores.chroma.similarity_search` takes a `filter` input parameter yet does not forward it to `langchain.vectorstores.chroma.similarity_search_with_score`
- `langchain.vectorstores.chroma.similarity_search_by_vector` doesn't take this parameter at all, although it could be very useful there without any additional complexity, and adding it would make the syntax coherent with the other two functions
issue_url: https://github.com/langchain-ai/langchain/issues/1619
pull_url: https://github.com/langchain-ai/langchain/pull/1621
before_fix_sha: 28091c21018677355a124dd9c46213db3a229183
after_fix_sha: d383c0cb435273de83595160c14a2cb45dcecf2a
report_datetime: 2023-03-12T23:58:13Z
language: python
commit_datetime: 2023-05-09T23:43:00Z
updated_file: langchain/vectorstores/chroma.py
"""Wrapper around ChromaDB embeddings platform.""" from __future__ import annotations import logging import uuid from typing import TYPE_CHECKING, Any, Dict, Iterable, List, Optional, Tuple, Type import numpy as np from langchain.docstore.document import Document from langchain.embeddings.base import Embeddings from langchain.utils import xor_args from langchain.vectorstores.base import VectorStore from langchain.vectorstores.utils import maximal_marginal_relevance if TYPE_CHECKING: import chromadb import chromadb.config logger = logging.getLogger(__name__) def _results_to_docs(results: Any) -> List[Document]:
updated_file: langchain/vectorstores/chroma.py
chunk_content:

```python
    return [doc for doc, _ in _results_to_docs_and_scores(results)]


def _results_to_docs_and_scores(results: Any) -> List[Tuple[Document, float]]:
    return [
        (Document(page_content=result[0], metadata=result[1] or {}), result[2])
        for result in zip(
            results["documents"][0],
            results["metadatas"][0],
            results["distances"][0],
        )
    ]


class Chroma(VectorStore):
    """Wrapper around ChromaDB embeddings platform.

    To use, you should have the ``chromadb`` python package installed.

    Example:
        .. code-block:: python

                from langchain.vectorstores import Chroma
                from langchain.embeddings.openai import OpenAIEmbeddings

                embeddings = OpenAIEmbeddings()
                vectorstore = Chroma("langchain_store", embeddings.embed_query)
    """

    _LANGCHAIN_DEFAULT_COLLECTION_NAME = "langchain"

    def __init__(
```
updated_file: langchain/vectorstores/chroma.py
chunk_content:

```python
        self,
        collection_name: str = _LANGCHAIN_DEFAULT_COLLECTION_NAME,
        embedding_function: Optional[Embeddings] = None,
        persist_directory: Optional[str] = None,
        client_settings: Optional[chromadb.config.Settings] = None,
        collection_metadata: Optional[Dict] = None,
        client: Optional[chromadb.Client] = None,
    ) -> None:
        """Initialize with Chroma client."""
        try:
            import chromadb
```
updated_file: langchain/vectorstores/chroma.py
chunk_content:

```python
            import chromadb.config
        except ImportError:
            raise ValueError(
                "Could not import chromadb python package. "
                "Please install it with `pip install chromadb`."
            )

        if client is not None:
            self._client = client
        else:
            if client_settings:
                self._client_settings = client_settings
            else:
                self._client_settings = chromadb.config.Settings()
                if persist_directory is not None:
                    self._client_settings = chromadb.config.Settings(
                        chroma_db_impl="duckdb+parquet",
                        persist_directory=persist_directory,
                    )
            self._client = chromadb.Client(self._client_settings)

        self._embedding_function = embedding_function
        self._persist_directory = persist_directory
        self._collection = self._client.get_or_create_collection(
            name=collection_name,
            embedding_function=self._embedding_function.embed_documents
            if self._embedding_function is not None
            else None,
            metadata=collection_metadata,
        )

    @xor_args(("query_texts", "query_embeddings"))
    def __query_collection(
```
updated_file: langchain/vectorstores/chroma.py
chunk_content:

```python
        self,
        query_texts: Optional[List[str]] = None,
        query_embeddings: Optional[List[List[float]]] = None,
        n_results: int = 4,
        where: Optional[Dict[str, str]] = None,
        **kwargs: Any,
    ) -> List[Document]:
        """Query the chroma collection."""
        try:
            import chromadb
        except ImportError:
            raise ValueError(
                "Could not import chromadb python package. "
                "Please install it with `pip install chromadb`."
            )
        for i in range(n_results, 0, -1):
            try:
                return self._collection.query(
                    query_texts=query_texts,
                    query_embeddings=query_embeddings,
                    n_results=i,
                    where=where,
                    **kwargs,
                )
            except chromadb.errors.NotEnoughElementsException:
                logger.error(
                    f"Chroma collection {self._collection.name} "
                    f"contains fewer than {i} elements."
```
updated_file: langchain/vectorstores/chroma.py
chunk_content:

```python
                )
        raise chromadb.errors.NotEnoughElementsException(
            f"No documents found for Chroma collection {self._collection.name}"
        )

    def add_texts(
        self,
        texts: Iterable[str],
        metadatas: Optional[List[dict]] = None,
        ids: Optional[List[str]] = None,
        **kwargs: Any,
    ) -> List[str]:
        """Run more texts through the embeddings and add to the vectorstore.

        Args:
            texts (Iterable[str]): Texts to add to the vectorstore.
            metadatas (Optional[List[dict]], optional): Optional list of metadatas.
            ids (Optional[List[str]], optional): Optional list of IDs.

        Returns:
            List[str]: List of IDs of the added texts.
        """
        if ids is None:
            ids = [str(uuid.uuid1()) for _ in texts]
        embeddings = None
        if self._embedding_function is not None:
            embeddings = self._embedding_function.embed_documents(list(texts))
        self._collection.add(
            metadatas=metadatas, embeddings=embeddings, documents=texts, ids=ids
        )
        return ids

    def similarity_search(
```
        self,
        query: str,
        k: int = 4,
        filter: Optional[Dict[str, str]] = None,
        **kwargs: Any,
    ) -> List[Document]:
        """Run similarity search with Chroma.

        Args:
            query (str): Query text to search for.
            k (int): Number of results to return. Defaults to 4.
            filter (Optional[Dict[str, str]]): Filter by metadata. Defaults to None.

        Returns:
            List[Document]: List of documents most similar to the query text.
        """
        docs_and_scores = self.similarity_search_with_score(query, k, filter=filter)
        return [doc for doc, _ in docs_and_scores]

    def similarity_search_by_vector(
        self,
        embedding: List[float],
        k: int = 4,
        filter: Optional[Dict[str, str]] = None,
        **kwargs: Any,
    ) -> List[Document]:
        """Return docs most similar to embedding vector.

        Args:
            embedding: Embedding to look up documents similar to.
            k: Number of Documents to return. Defaults to 4.
            filter (Optional[Dict[str, str]]): Filter by metadata. Defaults to None.

        Returns:
            List of Documents most similar to the query vector.
        """
        results = self.__query_collection(
            query_embeddings=embedding, n_results=k, where=filter
        )
        return _results_to_docs(results)

    def similarity_search_with_score(
        self,
        query: str,
        k: int = 4,
        filter: Optional[Dict[str, str]] = None,
        **kwargs: Any,
    ) -> List[Tuple[Document, float]]:
        """Run similarity search with Chroma with distance.

        Args:
            query (str): Query text to search for.
            k (int): Number of results to return. Defaults to 4.
            filter (Optional[Dict[str, str]]): Filter by metadata. Defaults to None.

        Returns:
            List[Tuple[Document, float]]: List of documents most similar to the query
                text with distance in float.
        """
        if self._embedding_function is None:
            results = self.__query_collection(
                query_texts=[query], n_results=k, where=filter
            )
        else:
            query_embedding = self._embedding_function.embed_query(query)
            results = self.__query_collection(
                query_embeddings=[query_embedding], n_results=k, where=filter
            )
        return _results_to_docs_and_scores(results)
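
    # A hypothetical call to the score-bearing variant (reusing the `db` built
    # in the sketch after the record header): each hit is returned with its raw
    # Chroma distance, so lower values mean closer matches.
    #
    #   pairs = db.similarity_search_with_score("alpha", k=1, filter={"source": "a.txt"})
    #   doc, distance = pairs[0]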
    def max_marginal_relevance_search_by_vector(
        self,
        embedding: List[float],
        k: int = 4,
        fetch_k: int = 20,
        lambda_mult: float = 0.5,
        filter: Optional[Dict[str, str]] = None,
        **kwargs: Any,
    ) -> List[Document]:
        """Return docs selected using the maximal marginal relevance.

        Maximal marginal relevance optimizes for similarity to query AND diversity
        among selected documents.

        Args:
            embedding: Embedding to look up documents similar to.
            k: Number of Documents to return. Defaults to 4.
            fetch_k: Number of Documents to fetch to pass to MMR algorithm.
            lambda_mult: Number between 0 and 1 that determines the degree
                of diversity among the results with 0 corresponding
                to maximum diversity and 1 to minimum diversity.
                Defaults to 0.5.
            filter (Optional[Dict[str, str]]): Filter by metadata. Defaults to None.

        Returns:
            List of Documents selected by maximal marginal relevance.
        """
        results = self.__query_collection(
            query_embeddings=embedding,
            n_results=fetch_k,
            where=filter,
            include=["metadatas", "documents", "distances", "embeddings"],
        )
        mmr_selected = maximal_marginal_relevance(
            np.array(embedding, dtype=np.float32),
            results["embeddings"][0],
            k=k,
            lambda_mult=lambda_mult,
        )

        candidates = _results_to_docs(results)
        selected_results = [r for i, r in enumerate(candidates) if i in mmr_selected]
        return selected_results

    def max_marginal_relevance_search(
        self,
        query: str,
        k: int = 4,
        fetch_k: int = 20,
        lambda_mult: float = 0.5,
        filter: Optional[Dict[str, str]] = None,
        **kwargs: Any,
    ) -> List[Document]:
        """Return docs selected using the maximal marginal relevance.

        Maximal marginal relevance optimizes for similarity to query AND diversity
        among selected documents.

        Args:
            query: Text to look up documents similar to.
            k: Number of Documents to return. Defaults to 4.
            fetch_k: Number of Documents to fetch to pass to MMR algorithm.
            lambda_mult: Number between 0 and 1 that determines the degree
                of diversity among the results with 0 corresponding
                to maximum diversity and 1 to minimum diversity.
                Defaults to 0.5.
            filter (Optional[Dict[str, str]]): Filter by metadata. Defaults to None.
        Returns:
            List of Documents selected by maximal marginal relevance.
        """
        if self._embedding_function is None:
            raise ValueError(
                "For MMR search, you must specify an embedding function "
                "on creation."
            )

        embedding = self._embedding_function.embed_query(query)
        docs = self.max_marginal_relevance_search_by_vector(
            embedding, k, fetch_k, lambda_mult=lambda_mult, filter=filter
        )
        return docs

    def delete_collection(self) -> None:
        """Delete the collection."""
        self._client.delete_collection(self._collection.name)

    def get(self) -> Any:
        """Gets the collection."""
        return self._collection.get()

    def persist(self) -> None:
        """Persist the collection.

        This can be used to explicitly persist the data to disk. It will also
        be called automatically when the object is destroyed.
        """
        if self._persist_directory is None:
            raise ValueError(
                "You must specify a persist_directory "
                "on creation to persist the collection."
            )
        self._client.persist()

    def update_document(self, document_id: str, document: Document) -> None:
"""Update a document in the collection. Args: document_id (str): ID of the document to update. document (Document): Document to update. """ text = document.page_content metadata = document.metadata self._collection.update_document(document_id, text, metadata) @classmethod def from_texts( cls: Type[Chroma], texts: List[str], embedding: Optional[Embeddings] = None, metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, collection_name: str = _LANGCHAIN_DEFAULT_COLLECTION_NAME, persist_directory: Optional[str] = None, client_settings: Optional[chromadb.config.Settings] = None, client: Optional[chromadb.Client] = None, **kwargs: Any, ) -> Chroma: """Create a Chroma vectorstore from a raw documents.
        If a persist_directory is specified, the collection will be persisted
        there. Otherwise, the data will be ephemeral in-memory.

        Args:
            texts (List[str]): List of texts to add to the collection.
            collection_name (str): Name of the collection to create.
            persist_directory (Optional[str]): Directory to persist the collection.
            embedding (Optional[Embeddings]): Embedding function. Defaults to None.
            metadatas (Optional[List[dict]]): List of metadatas. Defaults to None.
            ids (Optional[List[str]]): List of document IDs. Defaults to None.
            client_settings (Optional[chromadb.config.Settings]): Chroma client
                settings.

        Returns:
            Chroma: Chroma vectorstore.
        """
        chroma_collection = cls(
            collection_name=collection_name,
            embedding_function=embedding,
            persist_directory=persist_directory,
            client_settings=client_settings,
            client=client,
        )
        chroma_collection.add_texts(texts=texts, metadatas=metadatas, ids=ids)
        return chroma_collection

    @classmethod
    def from_documents(
        cls: Type[Chroma],
        documents: List[Document],
        embedding: Optional[Embeddings] = None,
        ids: Optional[List[str]] = None,
        collection_name: str = _LANGCHAIN_DEFAULT_COLLECTION_NAME,
        persist_directory: Optional[str] = None,
        client_settings: Optional[chromadb.config.Settings] = None,
        client: Optional[chromadb.Client] = None,
        **kwargs: Any,
    ) -> Chroma:
        """Create a Chroma vectorstore from a list of documents.

        If a persist_directory is specified, the collection will be persisted
        there. Otherwise, the data will be ephemeral in-memory.

        Args:
            collection_name (str): Name of the collection to create.
            persist_directory (Optional[str]): Directory to persist the collection.
            ids (Optional[List[str]]): List of document IDs. Defaults to None.
            documents (List[Document]): List of documents to add to the vectorstore.
            embedding (Optional[Embeddings]): Embedding function. Defaults to None.
            client_settings (Optional[chromadb.config.Settings]): Chroma client
                settings.

        Returns:
            Chroma: Chroma vectorstore.
        """
        texts = [doc.page_content for doc in documents]
        metadatas = [doc.metadata for doc in documents]
        return cls.from_texts(
            texts=texts,
            embedding=embedding,
            metadatas=metadatas,
            ids=ids,
            collection_name=collection_name,
            persist_directory=persist_directory,
            client_settings=client_settings,
            client=client,
        )
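
A short end-to-end sketch of the constructor paths above; the directory name is made up, and `persist()` only applies when a `persist_directory` was given:

```py
from langchain.docstore.document import Document
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Chroma

docs = [Document(page_content="alpha doc", metadata={"source": "a.txt"})]
db = Chroma.from_documents(docs, OpenAIEmbeddings(), persist_directory="./chroma-demo")
db.persist()  # flush the duckdb+parquet files under ./chroma-demo
```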
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,368
Add distance metric param to redis vectorstore index
### Feature request

Redis vectorstore allows for three different distance metrics: `L2` (flat L2), `COSINE`, and `IP` (inner product). Currently, the `Redis._create_index` method hard codes the distance metric to COSINE.

```py
def _create_index(self, dim: int = 1536) -> None:
    try:
        from redis.commands.search.field import TextField, VectorField
        from redis.commands.search.indexDefinition import IndexDefinition, IndexType
    except ImportError:
        raise ValueError(
            "Could not import redis python package. "
            "Please install it with `pip install redis`."
        )

    # Check if index exists
    if not _check_index_exists(self.client, self.index_name):
        # Constants
        distance_metric = (
            "COSINE"  # distance metric for the vectors (ex. COSINE, IP, L2)
        )
        schema = (
            TextField(name=self.content_key),
            TextField(name=self.metadata_key),
            VectorField(
                self.vector_key,
                "FLAT",
                {
                    "TYPE": "FLOAT32",
                    "DIM": dim,
                    "DISTANCE_METRIC": distance_metric,
                },
            ),
        )
        prefix = _redis_prefix(self.index_name)

        # Create Redis Index
        self.client.ft(self.index_name).create_index(
            fields=schema,
            definition=IndexDefinition(prefix=[prefix], index_type=IndexType.HASH),
        )
```

This should be parameterized.

### Motivation

I'd like to be able to use L2 distance metrics.

### Your contribution

I've already forked and made a branch that parameterizes the distance metric in `langchain.vectorstores.redis`:

```py
def _create_index(
    self, dim: int = 1536, distance_metric: REDIS_DISTANCE_METRICS = "COSINE"
) -> None:
    try:
        from redis.commands.search.field import TextField, VectorField
        from redis.commands.search.indexDefinition import IndexDefinition, IndexType
    except ImportError:
        raise ValueError(
            "Could not import redis python package. "
            "Please install it with `pip install redis`."
        )

    # Check if index exists
    if not _check_index_exists(self.client, self.index_name):
        # Define schema
        schema = (
            TextField(name=self.content_key),
            TextField(name=self.metadata_key),
            VectorField(
                self.vector_key,
                "FLAT",
                {
                    "TYPE": "FLOAT32",
                    "DIM": dim,
                    "DISTANCE_METRIC": distance_metric,
                },
            ),
        )
        prefix = _redis_prefix(self.index_name)

        # Create Redis Index
        self.client.ft(self.index_name).create_index(
            fields=schema,
            definition=IndexDefinition(prefix=[prefix], index_type=IndexType.HASH),
        )

...

@classmethod
def from_texts(
    cls: Type[Redis],
    texts: List[str],
    embedding: Embeddings,
    metadatas: Optional[List[dict]] = None,
    index_name: Optional[str] = None,
    content_key: str = "content",
    metadata_key: str = "metadata",
    vector_key: str = "content_vector",
    distance_metric: REDIS_DISTANCE_METRICS = "COSINE",
    **kwargs: Any,
) -> Redis:
    """Create a Redis vectorstore from raw documents.

    This is a user-friendly interface that:
        1. Embeds documents.
        2. Creates a new index for the embeddings in Redis.
        3. Adds the documents to the newly created Redis index.

    This is intended to be a quick way to get started.

    Example:
        .. code-block:: python

            from langchain.vectorstores import Redis
            from langchain.embeddings import OpenAIEmbeddings

            embeddings = OpenAIEmbeddings()
            redisearch = RediSearch.from_texts(
                texts,
                embeddings,
                redis_url="redis://username:password@localhost:6379"
            )
    """
    redis_url = get_from_dict_or_env(kwargs, "redis_url", "REDIS_URL")
    if "redis_url" in kwargs:
        kwargs.pop("redis_url")
    # Name of the search index if not given
    if not index_name:
        index_name = uuid.uuid4().hex
    # Create instance
    instance = cls(
        redis_url=redis_url,
        index_name=index_name,
        embedding_function=embedding.embed_query,
        content_key=content_key,
        metadata_key=metadata_key,
        vector_key=vector_key,
        **kwargs,
    )
    # Create embeddings over documents
    embeddings = embedding.embed_documents(texts)
    # Create the search index
    instance._create_index(dim=len(embeddings[0]), distance_metric=distance_metric)
    # Add data to Redis
    instance.add_texts(texts, metadatas, embeddings)
    return instance
```

I'll make the PR and link this issue.
https://github.com/langchain-ai/langchain/issues/4368
https://github.com/langchain-ai/langchain/pull/4375
f46710d4087c3f27e95cfc4b2c96956d7c4560e8
f668251948c715ef3102b2bf84ff31aed45867b5
"2023-05-09T00:40:32Z"
python
"2023-05-11T07:20:01Z"
langchain/vectorstores/redis.py
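With the branch described in the issue body, selecting a metric at index-creation time would look roughly like this; it is a sketch against the proposed API rather than the released one, and the URL and texts are placeholders:

```py
from langchain.vectorstores import Redis
from langchain.embeddings import OpenAIEmbeddings

rds = Redis.from_texts(
    ["alpha doc", "beta doc"],
    OpenAIEmbeddings(),
    distance_metric="L2",  # proposed parameter; "COSINE" (default) and "IP" also allowed
    redis_url="redis://localhost:6379",
)
```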
"""Wrapper around Redis vector database.""" from __future__ import annotations import json import logging import uuid from typing import ( TYPE_CHECKING, Any, Callable, Dict, Iterable, List, Mapping, Optional, Tuple, Type, ) import numpy as np from pydantic import BaseModel, root_validator from langchain.docstore.document import Document from langchain.embeddings.base import Embeddings
from langchain.schema import BaseRetriever
from langchain.utils import get_from_dict_or_env
from langchain.vectorstores.base import VectorStore

logger = logging.getLogger(__name__)

if TYPE_CHECKING:
    from redis.client import Redis as RedisType
    from redis.commands.search.query import Query

REDIS_REQUIRED_MODULES = [
    {"name": "search", "ver": 20400},
    {"name": "searchlight", "ver": 20400},
]


def _check_redis_module_exist(client: RedisType, required_modules: List[dict]) -> None:
    """Check if the correct Redis modules are installed."""
    installed_modules = client.module_list()
    installed_modules = {
        module[b"name"].decode("utf-8"): module for module in installed_modules
    }
    for module in required_modules:
        if module["name"] in installed_modules and int(
            installed_modules[module["name"]][b"ver"]
        ) >= int(module["ver"]):
            return
    error_message = (
        "You must add the RediSearch (>= 2.4) module from Redis Stack. "
        "Please refer to Redis Stack docs: https://redis.io/docs/stack/"
    )
    logging.error(error_message)
    raise ValueError(error_message)


def _check_index_exists(client: RedisType, index_name: str) -> bool:
"""Check if Redis index exists.""" try: client.ft(index_name).info() except: logger.info("Index does not exist") return False logger.info("Index already exists") return True def _redis_key(prefix: str) -> str: """Redis key schema for a given prefix.""" return f"{prefix}:{uuid.uuid4().hex}" def _redis_prefix(index_name: str) -> str: """Redis key prefix for a given index.""" return f"doc:{index_name}" def _default_relevance_score(val: float) -> float: return 1 - val class Redis(VectorStore):
"""Wrapper around Redis vector database. To use, you should have the ``redis`` python package installed. Example: .. code-block:: python from langchain.vectorstores import Redis from langchain.embeddings import OpenAIEmbeddings embeddings = OpenAIEmbeddings() vectorstore = Redis( redis_url="redis://username:password@localhost:6379" index_name="my-index", embedding_function=embeddings.embed_query, ) """ def __init__( self, redis_url: str, index_name: str, embedding_function: Callable, content_key: str = "content", metadata_key: str = "metadata", vector_key: str = "content_vector", relevance_score_fn: Optional[ Callable[[float], float]
        ] = _default_relevance_score,
        **kwargs: Any,
    ):
        """Initialize with necessary components."""
        try:
            import redis
        except ImportError:
            raise ValueError(
                "Could not import redis python package. "
                "Please install it with `pip install redis`."
            )

        self.embedding_function = embedding_function
        self.index_name = index_name
        try:
            # Connect to Redis from the URL and verify RediSearch is available.
            redis_client = redis.from_url(redis_url, **kwargs)
            _check_redis_module_exist(redis_client, REDIS_REQUIRED_MODULES)
        except ValueError as e:
            raise ValueError(f"Redis failed to connect: {e}")

        self.client = redis_client
        self.content_key = content_key
        self.metadata_key = metadata_key
        self.vector_key = vector_key
        self.relevance_score_fn = relevance_score_fn

    def _create_index(self, dim: int = 1536) -> None:
        try:
            from redis.commands.search.field import TextField, VectorField
            from redis.commands.search.indexDefinition import IndexDefinition, IndexType
        except ImportError:
            raise ValueError(
                "Could not import redis python package. "
                "Please install it with `pip install redis`."
            )

        # Only create the index if it does not exist yet.
        if not _check_index_exists(self.client, self.index_name):
            distance_metric = "COSINE"  # hard-coded; parameterizing this is the subject of this issue
            schema = (
                TextField(name=self.content_key),
                TextField(name=self.metadata_key),
                VectorField(
                    self.vector_key,
                    "FLAT",
                    {
                        "TYPE": "FLOAT32",
                        "DIM": dim,
                        "DISTANCE_METRIC": distance_metric,
                    },
                ),
            )
            prefix = _redis_prefix(self.index_name)

            # Create the RediSearch index over hashes with the given key prefix.
            self.client.ft(self.index_name).create_index(
                fields=schema,
                definition=IndexDefinition(prefix=[prefix], index_type=IndexType.HASH),
            )

    def add_texts(
        self,
        texts: Iterable[str],
        metadatas: Optional[List[dict]] = None,
        embeddings: Optional[List[List[float]]] = None,
        keys: Optional[List[str]] = None,
        batch_size: int = 1000,
        **kwargs: Any,
    ) -> List[str]:
        """Add more texts to the vectorstore.

        Args:
            texts (Iterable[str]): Iterable of strings/text to add to the vectorstore.
            metadatas (Optional[List[dict]], optional): Optional list of metadatas.
                Defaults to None.
            embeddings (Optional[List[List[float]]], optional): Optional pre-generated
                embeddings. Defaults to None.
            keys (Optional[List[str]], optional): Optional key values to use as ids.
                Defaults to None.
            batch_size (int, optional): Batch size to use for writes. Defaults to 1000.

        Returns:
            List[str]: List of ids added to the vectorstore
        """
        ids = []
        prefix = _redis_prefix(self.index_name)

        # Write data to Redis through a transaction-free pipeline.
        pipeline = self.client.pipeline(transaction=False)
        for i, text in enumerate(texts):
            key = keys[i] if keys else _redis_key(prefix)
            metadata = metadatas[i] if metadatas else {}
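
The write path in `add_texts` batches documents through a transaction-free pipeline of HSET commands. That pattern can be exercised standalone as below; the keys, field names, and stand-in vectors are illustrative, and a running Redis is assumed:

```py
import json

import numpy as np
import redis

client = redis.from_url("redis://localhost:6379")
pipeline = client.pipeline(transaction=False)

texts = ["alpha doc", "beta doc"]
vectors = np.random.rand(2, 4).astype(np.float32)  # stand-in embeddings

for i, text in enumerate(texts):
    # HSET one hash per document: raw text, packed vector bytes, JSON metadata.
    pipeline.hset(
        f"doc:demo:{i}",
        mapping={
            "content": text,
            "content_vector": vectors[i].tobytes(),
            "metadata": json.dumps({"i": i}),
        },
    )
pipeline.execute()  # flush all queued writes at once
```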
            embedding = embeddings[i] if embeddings else self.embedding_function(text)
            pipeline.hset(
                key,
                mapping={
                    self.content_key: text,
                    self.vector_key: np.array(embedding, dtype=np.float32).tobytes(),
                    self.metadata_key: json.dumps(metadata),
                },
            )
            ids.append(key)
            if i % batch_size == 0:
                pipeline.execute()
        pipeline.execute()
        return ids

    def similarity_search(
        self, query: str, k: int = 4, **kwargs: Any
    ) -> List[Document]:
        """
        Returns the most similar indexed documents to the query text.

        Args:
            query (str): The query text for which to find similar documents.
            k (int): The number of documents to return. Default is 4.

        Returns:
            List[Document]: A list of documents that are most similar to the
                query text.
        """
        docs_and_scores = self.similarity_search_with_score(query, k=k)
        return [doc for doc, _ in docs_and_scores]

    def similarity_search_limit_score(
        self, query: str, k: int = 4, score_threshold: float = 0.2, **kwargs: Any
    ) -> List[Document]:
        """
        Returns the most similar indexed documents to the query text within
        the score_threshold range.

        Args:
            query (str): The query text for which to find similar documents.
            k (int): The number of documents to return. Default is 4.
            score_threshold (float): The minimum matching score required for a
                document to be considered a match. Defaults to 0.2.
                Because the similarity calculation algorithm is based on cosine
                similarity, the smaller the angle, the higher the similarity.

        Returns:
            List[Document]: A list of documents that are most similar to the
                query text, including the match score for each document.

        Note:
            If there are no documents that satisfy the score_threshold value,
            an empty list is returned.
        """
        docs_and_scores = self.similarity_search_with_score(query, k=k)
        return [doc for doc, score in docs_and_scores if score < score_threshold]

    def _prepare_query(self, k: int) -> Query:
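        # For k = 4 and the default vector key ("content_vector"), the query
        # string built below reads:
        #   *=>[KNN 4 @content_vector $vector AS vector_score]
        # "$vector" is bound at search time to the serialized query embedding.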
        try:
            from redis.commands.search.query import Query
        except ImportError:
            raise ValueError(
                "Could not import redis python package. "
                "Please install it with `pip install redis`."
            )
        hybrid_fields = "*"
        base_query = (
            f"{hybrid_fields}=>[KNN {k} @{self.vector_key} $vector AS vector_score]"
        )
        return_fields = [self.metadata_key, self.content_key, "vector_score"]
        return (
            Query(base_query)
            .return_fields(*return_fields)
            .sort_by("vector_score")
            .paging(0, k)
            .dialect(2)
        )

    def similarity_search_with_score(
        self, query: str, k: int = 4
    ) -> List[Tuple[Document, float]]:
        """Return docs most similar to query.

        Args:
            query: Text to look up documents similar to.
            k: Number of Documents to return. Defaults to 4.

        Returns:
            List of Documents most similar to the query and score for each
        """
        embedding = self.embedding_function(query)
        redis_query = self._prepare_query(k)
        params_dict: Mapping[str, str] = {
            "vector": np.array(embedding)
            .astype(dtype=np.float32)
            .tobytes()
        }
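        # RediSearch expects the query vector as a raw float32 buffer; for a
        # 1536-dimensional embedding that is 1536 * 4 = 6144 bytes.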
        # Perform the vector search
        results = self.client.ft(self.index_name).search(redis_query, params_dict)

        docs = [
            (
                Document(
                    page_content=result.content, metadata=json.loads(result.metadata)
                ),
                float(result.vector_score),
            )
            for result in results.docs
        ]

        return docs

    def _similarity_search_with_relevance_scores(
        self,
        query: str,
        k: int = 4,
        **kwargs: Any,
    ) -> List[Tuple[Document, float]]:
        """Return docs and relevance scores, normalized on a scale from 0 to 1.

        0 is dissimilar, 1 is most similar.
        """
        if self.relevance_score_fn is None:
            raise ValueError(
                "relevance_score_fn must be provided to"
                " Redis constructor to normalize scores"
            )
        docs_and_scores = self.similarity_search_with_score(query, k=k)
        return [
            (doc, self.relevance_score_fn(score)) for doc, score in docs_and_scores
        ]

    @classmethod
    def from_texts(
        cls: Type[Redis],
        texts: List[str],
        embedding: Embeddings,
        metadatas: Optional[List[dict]] = None,
        index_name: Optional[str] = None,
        content_key: str = "content",
        metadata_key: str = "metadata",
        vector_key: str = "content_vector",
        **kwargs: Any,
    ) -> Redis:
        """Create a Redis vectorstore from raw documents.

        This is a user-friendly interface that:
            1. Embeds documents.
            2. Creates a new index for the embeddings in Redis.
            3. Adds the documents to the newly created Redis index.

        This is intended to be a quick way to get started.

        Example:
            .. code-block:: python

                from langchain.vectorstores import Redis
                from langchain.embeddings import OpenAIEmbeddings

                embeddings = OpenAIEmbeddings()
                redisearch = RediSearch.from_texts(
                    texts,
                    embeddings,
                    redis_url="redis://username:password@localhost:6379"
                )
        """
        redis_url = get_from_dict_or_env(kwargs, "redis_url", "REDIS_URL")
        if "redis_url" in kwargs:
            kwargs.pop("redis_url")
        if not index_name:
            index_name = uuid.uuid4().hex
        instance = cls(
            redis_url=redis_url,
            index_name=index_name,
            embedding_function=embedding.embed_query,
            content_key=content_key,
            metadata_key=metadata_key,
            vector_key=vector_key,
            **kwargs,
        )
        embeddings = embedding.embed_documents(texts)
        instance._create_index(dim=len(embeddings[0]))
        instance.add_texts(texts, metadatas, embeddings)
        return instance

    @staticmethod
    def drop_index(
        index_name: str,
        delete_documents: bool,
        **kwargs: Any,
    ) -> bool:
        """
        Drop a Redis search index.

        Args:
            index_name (str): Name of the index to drop.
            delete_documents (bool): Whether to drop the associated documents.

        Returns:
            bool: Whether or not the drop was successful.
        """
        redis_url = get_from_dict_or_env(kwargs, "redis_url", "REDIS_URL")
        try:
            import redis
        except ImportError:
            raise ValueError(
                "Could not import redis python package. "
                "Please install it with `pip install redis`."
            )
        try:
            # redis.from_url already receives the URL, so remove it from kwargs
            # to avoid passing it twice.
            if "redis_url" in kwargs:
                kwargs.pop("redis_url")
            client = redis.from_url(url=redis_url, **kwargs)
        except ValueError as e:
            raise ValueError(f"Redis failed to connect: {e}")
        try:
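            # dropindex raises if the index does not exist; the except below
            # treats that as an unsuccessful drop and returns False.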
            client.ft(index_name).dropindex(delete_documents)
            logger.info("Drop index")
            return True
        except Exception:
            return False

    @classmethod
    def from_existing_index(
        cls,
        embedding: Embeddings,
        index_name: str,
        content_key: str = "content",
        metadata_key: str = "metadata",
        vector_key: str = "content_vector",
        **kwargs: Any,
    ) -> Redis:
        """Connect to an existing Redis index."""
        redis_url = get_from_dict_or_env(kwargs, "redis_url", "REDIS_URL")
        try:
            import redis
        except ImportError:
            raise ValueError(
                "Could not import redis python package. "
                "Please install it with `pip install redis`."
            )
        try:
            if "redis_url" in kwargs:
                kwargs.pop("redis_url")
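            # Connect, then verify the required modules and the target index
            # exist before constructing the store.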
    client = redis.from_url(url=redis_url, **kwargs)
    _check_redis_module_exist(client, REDIS_REQUIRED_MODULES)
    assert _check_index_exists(
        client, index_name
    ), f"Index {index_name} does not exist"
except Exception as e:
    raise ValueError(f"Redis failed to connect: {e}")
return cls(
    redis_url,
    index_name,
    embedding.embed_query,
    content_key=content_key,
    metadata_key=metadata_key,
    vector_key=vector_key,
    **kwargs,
)

def as_retriever(self, **kwargs: Any) -> BaseRetriever:
    return RedisVectorStoreRetriever(vectorstore=self, **kwargs)


class RedisVectorStoreRetriever(BaseRetriever, BaseModel):
    vectorstore: Redis
    search_type: str = "similarity"
    k: int = 4
    score_threshold: float = 0.4

    class Config:
        """Configuration for this pydantic object."""

        arbitrary_types_allowed = True

    @root_validator()
    def validate_search_type(cls, values: Dict) -> Dict:
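For context, a minimal usage sketch of the parameter this issue proposes — it assumes the `distance_metric` keyword from the linked PR lands on `Redis.from_texts` as shown in the issue body; the URL and embedding model are placeholders:

```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Redis

# Sketch only: `distance_metric` is the kwarg proposed in the PR above.
docsearch = Redis.from_texts(
    ["foo", "bar", "baz"],
    OpenAIEmbeddings(),
    redis_url="redis://localhost:6379",
    distance_metric="L2",  # proposed options: "COSINE" (default), "IP", "L2"
)
docs = docsearch.similarity_search("foo", k=1)
```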
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,368
Add distance metric param to redis vectorstore index
### Feature request Redis vectorstore allows for three different distance metrics: `L2` (flat L2), `COSINE`, and `IP` (inner product). Currently, the `Redis._create_index` method hard-codes the distance metric to COSINE. ```py def _create_index(self, dim: int = 1536) -> None: try: from redis.commands.search.field import TextField, VectorField from redis.commands.search.indexDefinition import IndexDefinition, IndexType except ImportError: raise ValueError( "Could not import redis python package. " "Please install it with `pip install redis`." ) # Check if index exists if not _check_index_exists(self.client, self.index_name): # Constants distance_metric = ( "COSINE" # distance metric for the vectors (ex. COSINE, IP, L2) ) schema = ( TextField(name=self.content_key), TextField(name=self.metadata_key), VectorField( self.vector_key, "FLAT", { "TYPE": "FLOAT32", "DIM": dim, "DISTANCE_METRIC": distance_metric, }, ), ) prefix = _redis_prefix(self.index_name) # Create Redis Index self.client.ft(self.index_name).create_index( fields=schema, definition=IndexDefinition(prefix=[prefix], index_type=IndexType.HASH), ) ``` This should be parameterized. ### Motivation I'd like to be able to use the L2 distance metric. ### Your contribution I've already forked and made a branch that parameterizes the distance metric in `langchain.vectorstores.redis`: ```py def _create_index(self, dim: int = 1536, distance_metric: REDIS_DISTANCE_METRICS = "COSINE") -> None: try: from redis.commands.search.field import TextField, VectorField from redis.commands.search.indexDefinition import IndexDefinition, IndexType except ImportError: raise ValueError( "Could not import redis python package. " "Please install it with `pip install redis`." ) # Check if index exists if not _check_index_exists(self.client, self.index_name): # Define schema schema = ( TextField(name=self.content_key), TextField(name=self.metadata_key), VectorField( self.vector_key, "FLAT", { "TYPE": "FLOAT32", "DIM": dim, "DISTANCE_METRIC": distance_metric, }, ), ) prefix = _redis_prefix(self.index_name) # Create Redis Index self.client.ft(self.index_name).create_index( fields=schema, definition=IndexDefinition(prefix=[prefix], index_type=IndexType.HASH), ) ... @classmethod def from_texts( cls: Type[Redis], texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, index_name: Optional[str] = None, content_key: str = "content", metadata_key: str = "metadata", vector_key: str = "content_vector", distance_metric: REDIS_DISTANCE_METRICS = "COSINE", **kwargs: Any, ) -> Redis: """Create a Redis vectorstore from raw documents. This is a user-friendly interface that: 1. Embeds documents. 2. Creates a new index for the embeddings in Redis. 3. Adds the documents to the newly created Redis index. This is intended to be a quick way to get started. Example: .. code-block:: python from langchain.vectorstores import Redis from langchain.embeddings import OpenAIEmbeddings embeddings = OpenAIEmbeddings() redisearch = RediSearch.from_texts( texts, embeddings, redis_url="redis://username:password@localhost:6379" ) """ redis_url = get_from_dict_or_env(kwargs, "redis_url", "REDIS_URL") if "redis_url" in kwargs: kwargs.pop("redis_url") # Name of the search index if not given if not index_name: index_name = uuid.uuid4().hex # Create instance instance = cls( redis_url=redis_url, index_name=index_name, embedding_function=embedding.embed_query, content_key=content_key, metadata_key=metadata_key, vector_key=vector_key, **kwargs, ) # Create embeddings over documents embeddings = embedding.embed_documents(texts) # Create the search index instance._create_index(dim=len(embeddings[0]), distance_metric=distance_metric) # Add data to Redis instance.add_texts(texts, metadatas, embeddings) return instance ``` I'll make the PR and link this issue
https://github.com/langchain-ai/langchain/issues/4368
https://github.com/langchain-ai/langchain/pull/4375
f46710d4087c3f27e95cfc4b2c96956d7c4560e8
f668251948c715ef3102b2bf84ff31aed45867b5
"2023-05-09T00:40:32Z"
python
"2023-05-11T07:20:01Z"
langchain/vectorstores/redis.py
"""Validate search type.""" if "search_type" in values: search_type = values["search_type"] if search_type not in ("similarity", "similarity_limit"): raise ValueError(f"search_type of {search_type} not allowed.") return values def get_relevant_documents(self, query: str) -> List[Document]: if self.search_type == "similarity": docs = self.vectorstore.similarity_search(query, k=self.k) elif self.search_type == "similarity_limit": docs = self.vectorstore.similarity_search_limit_score( query, k=self.k, score_threshold=self.score_threshold ) else: raise ValueError(f"search_type of {self.search_type} not allowed.") return docs async def aget_relevant_documents(self, query: str) -> List[Document]: raise NotImplementedError("RedisVectorStoreRetriever does not support async") def add_documents(self, documents: List[Document], **kwargs: Any) -> List[str]: """Add documents to vectorstore.""" return self.vectorstore.add_documents(documents, **kwargs) async def aadd_documents( self, documents: List[Document], **kwargs: Any ) -> List[str]: """Add documents to vectorstore.""" return await self.vectorstore.aadd_documents(documents, **kwargs)
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,368
Add distance metric param to redis vectorstore index
### Feature request Redis vectorstore allows for three different distance metrics: `L2` (flat L2), `COSINE`, and `IP` (inner product). Currently, the `Redis._create_index` method hard-codes the distance metric to COSINE. ```py def _create_index(self, dim: int = 1536) -> None: try: from redis.commands.search.field import TextField, VectorField from redis.commands.search.indexDefinition import IndexDefinition, IndexType except ImportError: raise ValueError( "Could not import redis python package. " "Please install it with `pip install redis`." ) # Check if index exists if not _check_index_exists(self.client, self.index_name): # Constants distance_metric = ( "COSINE" # distance metric for the vectors (ex. COSINE, IP, L2) ) schema = ( TextField(name=self.content_key), TextField(name=self.metadata_key), VectorField( self.vector_key, "FLAT", { "TYPE": "FLOAT32", "DIM": dim, "DISTANCE_METRIC": distance_metric, }, ), ) prefix = _redis_prefix(self.index_name) # Create Redis Index self.client.ft(self.index_name).create_index( fields=schema, definition=IndexDefinition(prefix=[prefix], index_type=IndexType.HASH), ) ``` This should be parameterized. ### Motivation I'd like to be able to use the L2 distance metric. ### Your contribution I've already forked and made a branch that parameterizes the distance metric in `langchain.vectorstores.redis`: ```py def _create_index(self, dim: int = 1536, distance_metric: REDIS_DISTANCE_METRICS = "COSINE") -> None: try: from redis.commands.search.field import TextField, VectorField from redis.commands.search.indexDefinition import IndexDefinition, IndexType except ImportError: raise ValueError( "Could not import redis python package. " "Please install it with `pip install redis`." ) # Check if index exists if not _check_index_exists(self.client, self.index_name): # Define schema schema = ( TextField(name=self.content_key), TextField(name=self.metadata_key), VectorField( self.vector_key, "FLAT", { "TYPE": "FLOAT32", "DIM": dim, "DISTANCE_METRIC": distance_metric, }, ), ) prefix = _redis_prefix(self.index_name) # Create Redis Index self.client.ft(self.index_name).create_index( fields=schema, definition=IndexDefinition(prefix=[prefix], index_type=IndexType.HASH), ) ... @classmethod def from_texts( cls: Type[Redis], texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, index_name: Optional[str] = None, content_key: str = "content", metadata_key: str = "metadata", vector_key: str = "content_vector", distance_metric: REDIS_DISTANCE_METRICS = "COSINE", **kwargs: Any, ) -> Redis: """Create a Redis vectorstore from raw documents. This is a user-friendly interface that: 1. Embeds documents. 2. Creates a new index for the embeddings in Redis. 3. Adds the documents to the newly created Redis index. This is intended to be a quick way to get started. Example: .. code-block:: python from langchain.vectorstores import Redis from langchain.embeddings import OpenAIEmbeddings embeddings = OpenAIEmbeddings() redisearch = RediSearch.from_texts( texts, embeddings, redis_url="redis://username:password@localhost:6379" ) """ redis_url = get_from_dict_or_env(kwargs, "redis_url", "REDIS_URL") if "redis_url" in kwargs: kwargs.pop("redis_url") # Name of the search index if not given if not index_name: index_name = uuid.uuid4().hex # Create instance instance = cls( redis_url=redis_url, index_name=index_name, embedding_function=embedding.embed_query, content_key=content_key, metadata_key=metadata_key, vector_key=vector_key, **kwargs, ) # Create embeddings over documents embeddings = embedding.embed_documents(texts) # Create the search index instance._create_index(dim=len(embeddings[0]), distance_metric=distance_metric) # Add data to Redis instance.add_texts(texts, metadatas, embeddings) return instance ``` I'll make the PR and link this issue
https://github.com/langchain-ai/langchain/issues/4368
https://github.com/langchain-ai/langchain/pull/4375
f46710d4087c3f27e95cfc4b2c96956d7c4560e8
f668251948c715ef3102b2bf84ff31aed45867b5
"2023-05-09T00:40:32Z"
python
"2023-05-11T07:20:01Z"
tests/integration_tests/vectorstores/test_redis.py
"""Test Redis functionality.""" from langchain.docstore.document import Document from langchain.vectorstores.redis import Redis from tests.integration_tests.vectorstores.fake_embeddings import FakeEmbeddings TEST_INDEX_NAME = "test" TEST_REDIS_URL = "redis://localhost:6379" TEST_SINGLE_RESULT = [Document(page_content="foo")] TEST_RESULT = [Document(page_content="foo"), Document(page_content="foo")] def drop(index_name: str) -> bool: return Redis.drop_index( index_name=index_name, delete_documents=True, redis_url=TEST_REDIS_URL ) def test_redis() -> None: """Test end to end construction and search.""" texts = ["foo", "bar", "baz"] docsearch = Redis.from_texts(texts, FakeEmbeddings(), redis_url=TEST_REDIS_URL) output = docsearch.similarity_search("foo", k=1) assert output == TEST_SINGLE_RESULT assert drop(docsearch.index_name) def test_redis_new_vector() -> None:
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,368
Add distance metric param to redis vectorstore index
### Feature request Redis vectorstore allows for three different distance metrics: `L2` (flat L2), `COSINE`, and `IP` (inner product). Currently, the `Redis._create_index` method hard-codes the distance metric to COSINE. ```py def _create_index(self, dim: int = 1536) -> None: try: from redis.commands.search.field import TextField, VectorField from redis.commands.search.indexDefinition import IndexDefinition, IndexType except ImportError: raise ValueError( "Could not import redis python package. " "Please install it with `pip install redis`." ) # Check if index exists if not _check_index_exists(self.client, self.index_name): # Constants distance_metric = ( "COSINE" # distance metric for the vectors (ex. COSINE, IP, L2) ) schema = ( TextField(name=self.content_key), TextField(name=self.metadata_key), VectorField( self.vector_key, "FLAT", { "TYPE": "FLOAT32", "DIM": dim, "DISTANCE_METRIC": distance_metric, }, ), ) prefix = _redis_prefix(self.index_name) # Create Redis Index self.client.ft(self.index_name).create_index( fields=schema, definition=IndexDefinition(prefix=[prefix], index_type=IndexType.HASH), ) ``` This should be parameterized. ### Motivation I'd like to be able to use the L2 distance metric. ### Your contribution I've already forked and made a branch that parameterizes the distance metric in `langchain.vectorstores.redis`: ```py def _create_index(self, dim: int = 1536, distance_metric: REDIS_DISTANCE_METRICS = "COSINE") -> None: try: from redis.commands.search.field import TextField, VectorField from redis.commands.search.indexDefinition import IndexDefinition, IndexType except ImportError: raise ValueError( "Could not import redis python package. " "Please install it with `pip install redis`." ) # Check if index exists if not _check_index_exists(self.client, self.index_name): # Define schema schema = ( TextField(name=self.content_key), TextField(name=self.metadata_key), VectorField( self.vector_key, "FLAT", { "TYPE": "FLOAT32", "DIM": dim, "DISTANCE_METRIC": distance_metric, }, ), ) prefix = _redis_prefix(self.index_name) # Create Redis Index self.client.ft(self.index_name).create_index( fields=schema, definition=IndexDefinition(prefix=[prefix], index_type=IndexType.HASH), ) ... @classmethod def from_texts( cls: Type[Redis], texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, index_name: Optional[str] = None, content_key: str = "content", metadata_key: str = "metadata", vector_key: str = "content_vector", distance_metric: REDIS_DISTANCE_METRICS = "COSINE", **kwargs: Any, ) -> Redis: """Create a Redis vectorstore from raw documents. This is a user-friendly interface that: 1. Embeds documents. 2. Creates a new index for the embeddings in Redis. 3. Adds the documents to the newly created Redis index. This is intended to be a quick way to get started. Example: .. code-block:: python from langchain.vectorstores import Redis from langchain.embeddings import OpenAIEmbeddings embeddings = OpenAIEmbeddings() redisearch = RediSearch.from_texts( texts, embeddings, redis_url="redis://username:password@localhost:6379" ) """ redis_url = get_from_dict_or_env(kwargs, "redis_url", "REDIS_URL") if "redis_url" in kwargs: kwargs.pop("redis_url") # Name of the search index if not given if not index_name: index_name = uuid.uuid4().hex # Create instance instance = cls( redis_url=redis_url, index_name=index_name, embedding_function=embedding.embed_query, content_key=content_key, metadata_key=metadata_key, vector_key=vector_key, **kwargs, ) # Create embeddings over documents embeddings = embedding.embed_documents(texts) # Create the search index instance._create_index(dim=len(embeddings[0]), distance_metric=distance_metric) # Add data to Redis instance.add_texts(texts, metadatas, embeddings) return instance ``` I'll make the PR and link this issue
https://github.com/langchain-ai/langchain/issues/4368
https://github.com/langchain-ai/langchain/pull/4375
f46710d4087c3f27e95cfc4b2c96956d7c4560e8
f668251948c715ef3102b2bf84ff31aed45867b5
"2023-05-09T00:40:32Z"
python
"2023-05-11T07:20:01Z"
tests/integration_tests/vectorstores/test_redis.py
"""Test adding a new document""" texts = ["foo", "bar", "baz"] docsearch = Redis.from_texts(texts, FakeEmbeddings(), redis_url=TEST_REDIS_URL) docsearch.add_texts(["foo"]) output = docsearch.similarity_search("foo", k=2) assert output == TEST_RESULT assert drop(docsearch.index_name) def test_redis_from_existing() -> None: """Test adding a new document""" texts = ["foo", "bar", "baz"] Redis.from_texts( texts, FakeEmbeddings(), index_name=TEST_INDEX_NAME, redis_url=TEST_REDIS_URL ) docsearch2 = Redis.from_existing_index( FakeEmbeddings(), index_name=TEST_INDEX_NAME, redis_url=TEST_REDIS_URL ) output = docsearch2.similarity_search("foo", k=1) assert output == TEST_SINGLE_RESULT def test_redis_add_texts_to_existing() -> None: """Test adding a new document""" docsearch = Redis.from_existing_index( FakeEmbeddings(), index_name=TEST_INDEX_NAME, redis_url=TEST_REDIS_URL ) docsearch.add_texts(["foo"]) output = docsearch.similarity_search("foo", k=2) assert output == TEST_RESULT assert drop(TEST_INDEX_NAME)
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,167
User Agent on WebBaseLoader does not set header_template when passing `header_template`
### System Info Hi Team, When using WebBaseLoader and setting header_template, the user agent does not get set and sticks with the default Python user agent. ``` loader = WebBaseLoader(url, header_template={ 'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/102.0.0.0 Safari/537.36', }) data = loader.load() ``` Printing the headers in the `__init__` function shows the headers are passed in the template, BUT in the `load` or `scrape` functions `self.session.headers` still shows the defaults. FIX: apply the header template (falling back to default_header_template) in `__init__` when one is present. NOTE: this came up when loading a page on WPENGINE, which won't allow Python user agents. LangChain 0.0.158 Python 3.11 ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [X] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction `loader = WebBaseLoader(url, header_template={ 'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/102.0.0.0 Safari/537.36', }) data = loader.load()` The headers passed in the template are visible in `__init__`, but in `load`/`scrape` `self.session.headers` still shows the defaults. ### Expected behavior The loader should not throw a 403. Modifying `__init__` to set the session headers from the template fixes the problem.
https://github.com/langchain-ai/langchain/issues/4167
https://github.com/langchain-ai/langchain/pull/4579
372a5113ff1cce613f78d58c9e79e7c49aa60fac
3b6206af49a32d947a75965a5167c8726e1d5639
"2023-05-05T10:04:47Z"
python
"2023-05-15T03:09:27Z"
langchain/document_loaders/web_base.py
"""Web base loader class.""" import asyncio import logging import warnings from typing import Any, List, Optional, Union import aiohttp import requests from langchain.docstore.document import Document from langchain.document_loaders.base import BaseLoader logger = logging.getLogger(__name__) default_header_template = { "User-Agent": "", "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*" ";q=0.8", "Accept-Language": "en-US,en;q=0.5", "Referer": "https://www.google.com/", "DNT": "1", "Connection": "keep-alive", "Upgrade-Insecure-Requests": "1", } def _build_metadata(soup: Any, url: str) -> dict:
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,167
User Agent on WebBaseLoader does not set header_template when passing `header_template`
### System Info Hi Team, When using WebBaseLoader and setting header_template, the user agent does not get set and sticks with the default Python user agent. ``` loader = WebBaseLoader(url, header_template={ 'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/102.0.0.0 Safari/537.36', }) data = loader.load() ``` Printing the headers in the `__init__` function shows the headers are passed in the template, BUT in the `load` or `scrape` functions `self.session.headers` still shows the defaults. FIX: apply the header template (falling back to default_header_template) in `__init__` when one is present. NOTE: this came up when loading a page on WPENGINE, which won't allow Python user agents. LangChain 0.0.158 Python 3.11 ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [X] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction `loader = WebBaseLoader(url, header_template={ 'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/102.0.0.0 Safari/537.36', }) data = loader.load()` The headers passed in the template are visible in `__init__`, but in `load`/`scrape` `self.session.headers` still shows the defaults. ### Expected behavior The loader should not throw a 403. Modifying `__init__` to set the session headers from the template fixes the problem.
https://github.com/langchain-ai/langchain/issues/4167
https://github.com/langchain-ai/langchain/pull/4579
372a5113ff1cce613f78d58c9e79e7c49aa60fac
3b6206af49a32d947a75965a5167c8726e1d5639
"2023-05-05T10:04:47Z"
python
"2023-05-15T03:09:27Z"
langchain/document_loaders/web_base.py
"""Build metadata from BeautifulSoup output.""" metadata = {"source": url} if title := soup.find("title"): metadata["title"] = title.get_text() if description := soup.find("meta", attrs={"name": "description"}): metadata["description"] = description.get("content", None) if html := soup.find("html"): metadata["language"] = html.get("lang", None) return metadata class WebBaseLoader(BaseLoader): """Loader that uses urllib and beautiful soup to load webpages.""" web_paths: List[str] requests_per_second: int = 2 """Max number of concurrent requests to make.""" default_parser: str = "html.parser" """Default parser to use for BeautifulSoup.""" def __init__(
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,167
User Agent on WebBaseLoader does not set header_template when passing `header_template`
### System Info Hi Team, When using WebBaseLoader and setting header_template, the user agent does not get set and sticks with the default Python user agent. ``` loader = WebBaseLoader(url, header_template={ 'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/102.0.0.0 Safari/537.36', }) data = loader.load() ``` Printing the headers in the `__init__` function shows the headers are passed in the template, BUT in the `load` or `scrape` functions `self.session.headers` still shows the defaults. FIX: apply the header template (falling back to default_header_template) in `__init__` when one is present. NOTE: this came up when loading a page on WPENGINE, which won't allow Python user agents. LangChain 0.0.158 Python 3.11 ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [X] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction `loader = WebBaseLoader(url, header_template={ 'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/102.0.0.0 Safari/537.36', }) data = loader.load()` The headers passed in the template are visible in `__init__`, but in `load`/`scrape` `self.session.headers` still shows the defaults. ### Expected behavior The loader should not throw a 403. Modifying `__init__` to set the session headers from the template fixes the problem.
https://github.com/langchain-ai/langchain/issues/4167
https://github.com/langchain-ai/langchain/pull/4579
372a5113ff1cce613f78d58c9e79e7c49aa60fac
3b6206af49a32d947a75965a5167c8726e1d5639
"2023-05-05T10:04:47Z"
python
"2023-05-15T03:09:27Z"
langchain/document_loaders/web_base.py
    self, web_path: Union[str, List[str]], header_template: Optional[dict] = None
):
    """Initialize with webpage path."""
    if isinstance(web_path, str):
        self.web_paths = [web_path]
    elif isinstance(web_path, List):
        self.web_paths = web_path
    self.session = requests.Session()
    try:
        import bs4
    except ImportError:
        raise ValueError(
            "bs4 package not found, please install it with " "`pip install bs4`"
        )
    try:
        from fake_useragent import UserAgent

        headers = header_template or default_header_template
        headers["User-Agent"] = UserAgent().random
        self.session.headers = dict(headers)
    except ImportError:
        logger.info(
            "fake_useragent not found, using default user agent. "
            "To get a realistic header for requests, `pip install fake_useragent`."
        )

@property
def web_path(self) -> str:
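With the `__init__` above, a passed template reaches the session; note that when `fake_useragent` is installed, the template's `User-Agent` is still replaced by a random one. A quick check, with the URL as a placeholder:

```python
from langchain.document_loaders import WebBaseLoader

loader = WebBaseLoader(
    "https://example.com",
    header_template={
        "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
        "AppleWebKit/537.36 (KHTML, like Gecko) Chrome/102.0.0.0 Safari/537.36",
    },
)
# The template (not the default python UA) is now on the session.
print(loader.session.headers)
data = loader.load()
```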
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,167
User Agent on WebBaseLoader does not set header_template when passing `header_template`
### System Info Hi Team, When using WebBaseLoader and setting header_template, the user agent does not get set and sticks with the default Python user agent. ``` loader = WebBaseLoader(url, header_template={ 'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/102.0.0.0 Safari/537.36', }) data = loader.load() ``` Printing the headers in the `__init__` function shows the headers are passed in the template, BUT in the `load` or `scrape` functions `self.session.headers` still shows the defaults. FIX: apply the header template (falling back to default_header_template) in `__init__` when one is present. NOTE: this came up when loading a page on WPENGINE, which won't allow Python user agents. LangChain 0.0.158 Python 3.11 ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [X] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction `loader = WebBaseLoader(url, header_template={ 'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/102.0.0.0 Safari/537.36', }) data = loader.load()` The headers passed in the template are visible in `__init__`, but in `load`/`scrape` `self.session.headers` still shows the defaults. ### Expected behavior The loader should not throw a 403. Modifying `__init__` to set the session headers from the template fixes the problem.
https://github.com/langchain-ai/langchain/issues/4167
https://github.com/langchain-ai/langchain/pull/4579
372a5113ff1cce613f78d58c9e79e7c49aa60fac
3b6206af49a32d947a75965a5167c8726e1d5639
"2023-05-05T10:04:47Z"
python
"2023-05-15T03:09:27Z"
langchain/document_loaders/web_base.py
    if len(self.web_paths) > 1:
        raise ValueError("Multiple webpaths found.")
    return self.web_paths[0]

async def _fetch(
    self, url: str, retries: int = 3, cooldown: int = 2, backoff: float = 1.5
) -> str:
    async with aiohttp.ClientSession() as session:
        for i in range(retries):
            try:
                async with session.get(
                    url, headers=self.session.headers
                ) as response:
                    return await response.text()
            except aiohttp.ClientConnectionError as e:
                if i == retries - 1:
                    raise
                else:
                    logger.warning(
                        f"Error fetching {url} with attempt "
                        f"{i + 1}/{retries}: {e}. Retrying..."
                    )
                    await asyncio.sleep(cooldown * backoff**i)
    raise ValueError("retry count exceeded")

async def _fetch_with_rate_limit(
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,167
User Agent on WebBaseLoader does not set header_template when passing `header_template`
### System Info Hi Team, When using WebBaseLoader and setting header_template, the user agent does not get set and sticks with the default Python user agent. ``` loader = WebBaseLoader(url, header_template={ 'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/102.0.0.0 Safari/537.36', }) data = loader.load() ``` Printing the headers in the `__init__` function shows the headers are passed in the template, BUT in the `load` or `scrape` functions `self.session.headers` still shows the defaults. FIX: apply the header template (falling back to default_header_template) in `__init__` when one is present. NOTE: this came up when loading a page on WPENGINE, which won't allow Python user agents. LangChain 0.0.158 Python 3.11 ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [X] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction `loader = WebBaseLoader(url, header_template={ 'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/102.0.0.0 Safari/537.36', }) data = loader.load()` The headers passed in the template are visible in `__init__`, but in `load`/`scrape` `self.session.headers` still shows the defaults. ### Expected behavior The loader should not throw a 403. Modifying `__init__` to set the session headers from the template fixes the problem.
https://github.com/langchain-ai/langchain/issues/4167
https://github.com/langchain-ai/langchain/pull/4579
372a5113ff1cce613f78d58c9e79e7c49aa60fac
3b6206af49a32d947a75965a5167c8726e1d5639
"2023-05-05T10:04:47Z"
python
"2023-05-15T03:09:27Z"
langchain/document_loaders/web_base.py
    self, url: str, semaphore: asyncio.Semaphore
) -> str:
    async with semaphore:
        return await self._fetch(url)

async def fetch_all(self, urls: List[str]) -> Any:
    """Fetch all urls concurrently with rate limiting."""
    semaphore = asyncio.Semaphore(self.requests_per_second)
    tasks = []
    for url in urls:
        task = asyncio.ensure_future(self._fetch_with_rate_limit(url, semaphore))
        tasks.append(task)
    try:
        from tqdm.asyncio import tqdm_asyncio

        return await tqdm_asyncio.gather(
            *tasks, desc="Fetching pages", ascii=True, mininterval=1
        )
    except ImportError:
        warnings.warn("For better logging of progress, `pip install tqdm`")
        return await asyncio.gather(*tasks)

@staticmethod
def _check_parser(parser: str) -> None:
    """Check that parser is valid for bs4."""
    valid_parsers = ["html.parser", "lxml", "xml", "lxml-xml", "html5lib"]
    if parser not in valid_parsers:
        raise ValueError(
            "`parser` must be one of " + ", ".join(valid_parsers) + "."
        )

def scrape_all(self, urls: List[str], parser: Union[str, None] = None) -> List[Any]:
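The rate limiting in `fetch_all` is a standard asyncio pattern — strictly speaking the semaphore caps concurrency rather than requests per second, which is also true of the loader code above. A self-contained sketch of the same idea:

```python
import asyncio


async def fetch(i: int, semaphore: asyncio.Semaphore) -> int:
    # At most `requests_per_second` coroutines enter this block at once.
    async with semaphore:
        await asyncio.sleep(0.1)  # stand-in for the HTTP request
        return i


async def main() -> None:
    semaphore = asyncio.Semaphore(2)  # mirrors requests_per_second = 2
    results = await asyncio.gather(*(fetch(i, semaphore) for i in range(6)))
    print(results)  # [0, 1, 2, 3, 4, 5]


asyncio.run(main())
```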
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,167
User Agent on WebBaseLoader does not set header_template when passing `header_template`
### System Info Hi Team, When using WebBaseLoader and setting header_template, the user agent does not get set and sticks with the default Python user agent. ``` loader = WebBaseLoader(url, header_template={ 'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/102.0.0.0 Safari/537.36', }) data = loader.load() ``` Printing the headers in the `__init__` function shows the headers are passed in the template, BUT in the `load` or `scrape` functions `self.session.headers` still shows the defaults. FIX: apply the header template (falling back to default_header_template) in `__init__` when one is present. NOTE: this came up when loading a page on WPENGINE, which won't allow Python user agents. LangChain 0.0.158 Python 3.11 ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [X] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction `loader = WebBaseLoader(url, header_template={ 'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/102.0.0.0 Safari/537.36', }) data = loader.load()` The headers passed in the template are visible in `__init__`, but in `load`/`scrape` `self.session.headers` still shows the defaults. ### Expected behavior The loader should not throw a 403. Modifying `__init__` to set the session headers from the template fixes the problem.
https://github.com/langchain-ai/langchain/issues/4167
https://github.com/langchain-ai/langchain/pull/4579
372a5113ff1cce613f78d58c9e79e7c49aa60fac
3b6206af49a32d947a75965a5167c8726e1d5639
"2023-05-05T10:04:47Z"
python
"2023-05-15T03:09:27Z"
langchain/document_loaders/web_base.py
"""Fetch all urls, then return soups for all results.""" from bs4 import BeautifulSoup results = asyncio.run(self.fetch_all(urls)) final_results = [] for i, result in enumerate(results): url = urls[i] if parser is None: if url.endswith(".xml"): parser = "xml" else: parser = self.default_parser self._check_parser(parser) final_results.append(BeautifulSoup(result, parser)) return final_results def _scrape(self, url: str, parser: Union[str, None] = None) -> Any: from bs4 import BeautifulSoup if parser is None: if url.endswith(".xml"): parser = "xml" else: parser = self.default_parser self._check_parser(parser) html_doc = self.session.get(url) html_doc.encoding = html_doc.apparent_encoding return BeautifulSoup(html_doc.text, parser) def scrape(self, parser: Union[str, None] = None) -> Any:
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,167
User Agent on WebBaseLoader does not set header_template when passing `header_template`
### System Info Hi Team, When using WebBaseLoader and setting header_template, the user agent does not get set and sticks with the default Python user agent. ``` loader = WebBaseLoader(url, header_template={ 'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/102.0.0.0 Safari/537.36', }) data = loader.load() ``` Printing the headers in the `__init__` function shows the headers are passed in the template, BUT in the `load` or `scrape` functions `self.session.headers` still shows the defaults. FIX: apply the header template (falling back to default_header_template) in `__init__` when one is present. NOTE: this came up when loading a page on WPENGINE, which won't allow Python user agents. LangChain 0.0.158 Python 3.11 ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [X] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction `loader = WebBaseLoader(url, header_template={ 'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/102.0.0.0 Safari/537.36', }) data = loader.load()` The headers passed in the template are visible in `__init__`, but in `load`/`scrape` `self.session.headers` still shows the defaults. ### Expected behavior The loader should not throw a 403. Modifying `__init__` to set the session headers from the template fixes the problem.
https://github.com/langchain-ai/langchain/issues/4167
https://github.com/langchain-ai/langchain/pull/4579
372a5113ff1cce613f78d58c9e79e7c49aa60fac
3b6206af49a32d947a75965a5167c8726e1d5639
"2023-05-05T10:04:47Z"
python
"2023-05-15T03:09:27Z"
langchain/document_loaders/web_base.py
"""Scrape data from webpage and return it in BeautifulSoup format.""" if parser is None: parser = self.default_parser return self._scrape(self.web_path, parser) def load(self) -> List[Document]: """Load text from the url(s) in web_path.""" docs = [] for path in self.web_paths: soup = self._scrape(path) text = soup.get_text() metadata = _build_metadata(soup, path) docs.append(Document(page_content=text, metadata=metadata)) return docs def aload(self) -> List[Document]: """Load text from the urls in web_path async into Documents.""" results = self.scrape_all(self.web_paths) docs = [] for i in range(len(results)): soup = results[i] text = soup.get_text() metadata = _build_metadata(soup, self.web_paths[i]) docs.append(Document(page_content=text, metadata=metadata)) return docs
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,451
YoutubeLoader.from_youtube_url should handle common YT url formats
### Feature request `YoutubeLoader.from_youtube_url` accepts only a single URL format. It should be able to handle at least the most common types of YouTube URLs out there. ### Motivation The current video id extraction is pretty naive: it doesn't handle anything other than a single specific type of YouTube URL, and any valid but different video address leads to an exception. ### Your contribution I've prepared a PR where I've introduced an `.extract_video_id` method. Under the hood it uses a regex to find the video id in the most popular YouTube URLs. The regex is based on the youtube-dl solution, which can be found here: https://github.com/ytdl-org/youtube-dl/blob/211cbfd5d46025a8e4d8f9f3d424aaada4698974/youtube_dl/extractor/youtube.py#L524
https://github.com/langchain-ai/langchain/issues/4451
https://github.com/langchain-ai/langchain/pull/4452
8b42e8a510d7cafc6ce787b9bcb7a2c92f973c96
c2761aa8f4266e97037aa25480b3c8e26e7417f3
"2023-05-10T11:09:22Z"
python
"2023-05-15T14:45:19Z"
langchain/document_loaders/youtube.py
"""Loader that loads YouTube transcript.""" from __future__ import annotations import logging from pathlib import Path from typing import Any, Dict, List, Optional from pydantic import root_validator from pydantic.dataclasses import dataclass from langchain.docstore.document import Document from langchain.document_loaders.base import BaseLoader logger = logging.getLogger(__name__) SCOPES = ["https://www.googleapis.com/auth/youtube.readonly"] @dataclass class GoogleApiClient:
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,451
YoutubeLoader.from_youtube_url should handle common YT url formats
### Feature request `YoutubeLoader.from_youtube_url` accepts only a single URL format. It should be able to handle at least the most common types of YouTube URLs out there. ### Motivation The current video id extraction is pretty naive: it doesn't handle anything other than a single specific type of YouTube URL, and any valid but different video address leads to an exception. ### Your contribution I've prepared a PR where I've introduced an `.extract_video_id` method. Under the hood it uses a regex to find the video id in the most popular YouTube URLs. The regex is based on the youtube-dl solution, which can be found here: https://github.com/ytdl-org/youtube-dl/blob/211cbfd5d46025a8e4d8f9f3d424aaada4698974/youtube_dl/extractor/youtube.py#L524
https://github.com/langchain-ai/langchain/issues/4451
https://github.com/langchain-ai/langchain/pull/4452
8b42e8a510d7cafc6ce787b9bcb7a2c92f973c96
c2761aa8f4266e97037aa25480b3c8e26e7417f3
"2023-05-10T11:09:22Z"
python
"2023-05-15T14:45:19Z"
langchain/document_loaders/youtube.py
"""A Generic Google Api Client. To use, you should have the ``google_auth_oauthlib,youtube_transcript_api,google`` python package installed. As the google api expects credentials you need to set up a google account and register your Service. "https://developers.google.com/docs/api/quickstart/python" Example: .. code-block:: python from langchain.document_loaders import GoogleApiClient google_api_client = GoogleApiClient( service_account_path=Path("path_to_your_sec_file.json") ) """ credentials_path: Path = Path.home() / ".credentials" / "credentials.json" service_account_path: Path = Path.home() / ".credentials" / "credentials.json" token_path: Path = Path.home() / ".credentials" / "token.json" def __post_init__(self) -> None: self.creds = self._load_credentials() @root_validator def validate_channel_or_videoIds_is_set( cls, values: Dict[str, Any] ) -> Dict[str, Any]: """Validate that either folder_id or document_ids is set, but not both.""" if not values.get("credentials_path") and not values.get( "service_account_path" ): raise ValueError("Must specify either channel_name or video_ids") return values def _load_credentials(self) -> Any:
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,451
YoutubeLoader.from_youtube_url should handle common YT url formats
### Feature request `YoutubeLoader.from_youtube_url` accepts only a single URL format. It should be able to handle at least the most common types of YouTube URLs out there. ### Motivation The current video id extraction is pretty naive: it doesn't handle anything other than a single specific type of YouTube URL, and any valid but different video address leads to an exception. ### Your contribution I've prepared a PR where I've introduced an `.extract_video_id` method. Under the hood it uses a regex to find the video id in the most popular YouTube URLs. The regex is based on the youtube-dl solution, which can be found here: https://github.com/ytdl-org/youtube-dl/blob/211cbfd5d46025a8e4d8f9f3d424aaada4698974/youtube_dl/extractor/youtube.py#L524
https://github.com/langchain-ai/langchain/issues/4451
https://github.com/langchain-ai/langchain/pull/4452
8b42e8a510d7cafc6ce787b9bcb7a2c92f973c96
c2761aa8f4266e97037aa25480b3c8e26e7417f3
"2023-05-10T11:09:22Z"
python
"2023-05-15T14:45:19Z"
langchain/document_loaders/youtube.py
"""Load credentials.""" try: from google.auth.transport.requests import Request from google.oauth2 import service_account from google.oauth2.credentials import Credentials from google_auth_oauthlib.flow import InstalledAppFlow from youtube_transcript_api import YouTubeTranscriptApi except ImportError: raise ImportError( "You must run" "`pip install --upgrade " "google-api-python-client google-auth-httplib2 " "google-auth-oauthlib " "youtube-transcript-api` " "to use the Google Drive loader" ) creds = None if self.service_account_path.exists(): return service_account.Credentials.from_service_account_file( str(self.service_account_path) ) if self.token_path.exists(): creds = Credentials.from_authorized_user_file(str(self.token_path), SCOPES) if not creds or not creds.valid:
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,451
YoutubeLoader.from_youtube_url should handle common YT url formats
### Feature request `YoutubeLoader.from_youtube_url` accepts only a single URL format. It should be able to handle at least the most common types of YouTube URLs out there. ### Motivation The current video id extraction is pretty naive: it doesn't handle anything other than a single specific type of YouTube URL, and any valid but different video address leads to an exception. ### Your contribution I've prepared a PR where I've introduced an `.extract_video_id` method. Under the hood it uses a regex to find the video id in the most popular YouTube URLs. The regex is based on the youtube-dl solution, which can be found here: https://github.com/ytdl-org/youtube-dl/blob/211cbfd5d46025a8e4d8f9f3d424aaada4698974/youtube_dl/extractor/youtube.py#L524
https://github.com/langchain-ai/langchain/issues/4451
https://github.com/langchain-ai/langchain/pull/4452
8b42e8a510d7cafc6ce787b9bcb7a2c92f973c96
c2761aa8f4266e97037aa25480b3c8e26e7417f3
"2023-05-10T11:09:22Z"
python
"2023-05-15T14:45:19Z"
langchain/document_loaders/youtube.py
    if creds and creds.expired and creds.refresh_token:
        creds.refresh(Request())
    else:
        flow = InstalledAppFlow.from_client_secrets_file(
            str(self.credentials_path), SCOPES
        )
        creds = flow.run_local_server(port=0)
    with open(self.token_path, "w") as token:
        token.write(creds.to_json())

return creds


class YoutubeLoader(BaseLoader):
    """Loader that loads Youtube transcripts."""

    def __init__(
        self,
        video_id: str,
        add_video_info: bool = False,
        language: str = "en",
        continue_on_failure: bool = False,
    ):
        """Initialize with YouTube video ID."""
        self.video_id = video_id
        self.add_video_info = add_video_info
        self.language = language
        self.continue_on_failure = continue_on_failure

    @classmethod
    def from_youtube_url(cls, youtube_url: str, **kwargs: Any) -> YoutubeLoader:
        """Given youtube URL, load video."""
        video_id = youtube_url.split("youtube.com/watch?v=")[-1]
        return cls(video_id, **kwargs)

    def load(self) -> List[Document]:
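The `split` in `from_youtube_url` above only works for the canonical watch URL; for any other form the separator never matches and the whole URL comes back as the "video id" — the failure the linked issue describes:

```python
url = "https://youtu.be/dQw4w9WgXcQ"
video_id = url.split("youtube.com/watch?v=")[-1]
print(video_id)  # 'https://youtu.be/dQw4w9WgXcQ' -- the full URL, not an id
```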
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,451
YoutubeLoader.from_youtube_url should handle common YT url formats
### Feature request `YoutubeLoader.from_youtube_url` accepts only a single URL format. It should be able to handle at least the most common types of YouTube URLs out there. ### Motivation The current video id extraction is pretty naive: it doesn't handle anything other than a single specific type of YouTube URL, and any valid but different video address leads to an exception. ### Your contribution I've prepared a PR where I've introduced an `.extract_video_id` method. Under the hood it uses a regex to find the video id in the most popular YouTube URLs. The regex is based on the youtube-dl solution, which can be found here: https://github.com/ytdl-org/youtube-dl/blob/211cbfd5d46025a8e4d8f9f3d424aaada4698974/youtube_dl/extractor/youtube.py#L524
https://github.com/langchain-ai/langchain/issues/4451
https://github.com/langchain-ai/langchain/pull/4452
8b42e8a510d7cafc6ce787b9bcb7a2c92f973c96
c2761aa8f4266e97037aa25480b3c8e26e7417f3
"2023-05-10T11:09:22Z"
python
"2023-05-15T14:45:19Z"
langchain/document_loaders/youtube.py
"""Load documents.""" try: from youtube_transcript_api import ( NoTranscriptFound, TranscriptsDisabled, YouTubeTranscriptApi, ) except ImportError: raise ImportError( "Could not import youtube_transcript_api python package. " "Please install it with `pip install youtube-transcript-api`." ) metadata = {"source": self.video_id} if self.add_video_info: video_info = self._get_video_info() metadata.update(video_info) try: transcript_list = YouTubeTranscriptApi.list_transcripts(self.video_id) except TranscriptsDisabled: return [] try: transcript = transcript_list.find_transcript([self.language]) except NoTranscriptFound: en_transcript = transcript_list.find_transcript(["en"]) transcript = en_transcript.translate(self.language) transcript_pieces = transcript.fetch() transcript = " ".join([t["text"].strip(" ") for t in transcript_pieces]) return [Document(page_content=transcript, metadata=metadata)]
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,451
YoutubeLoader.from_youtube_url should handle common YT url formats
### Feature request `YoutubeLoader.from_youtube_url` accepts only a single URL format. It should be able to handle at least the most common types of YouTube URLs out there. ### Motivation The current video id extraction is pretty naive: it doesn't handle anything other than a single specific type of YouTube URL, and any valid but different video address leads to an exception. ### Your contribution I've prepared a PR where I've introduced an `.extract_video_id` method. Under the hood it uses a regex to find the video id in the most popular YouTube URLs. The regex is based on the youtube-dl solution, which can be found here: https://github.com/ytdl-org/youtube-dl/blob/211cbfd5d46025a8e4d8f9f3d424aaada4698974/youtube_dl/extractor/youtube.py#L524
https://github.com/langchain-ai/langchain/issues/4451
https://github.com/langchain-ai/langchain/pull/4452
8b42e8a510d7cafc6ce787b9bcb7a2c92f973c96
c2761aa8f4266e97037aa25480b3c8e26e7417f3
"2023-05-10T11:09:22Z"
python
"2023-05-15T14:45:19Z"
langchain/document_loaders/youtube.py
def _get_video_info(self) -> dict:
    """Get important video information.

    Components are:
        - title
        - description
        - thumbnail url,
        - publish_date
        - channel_author
        - and more.
    """
    try:
        from pytube import YouTube
    except ImportError:
        raise ImportError(
            "Could not import pytube python package. "
            "Please install it with `pip install pytube`."
        )
    yt = YouTube(f"https://www.youtube.com/watch?v={self.video_id}")
    video_info = {
        "title": yt.title,
        "description": yt.description,
        "view_count": yt.views,
        "thumbnail_url": yt.thumbnail_url,
        "publish_date": yt.publish_date,
        "length": yt.length,
        "author": yt.author,
    }
    return video_info


@dataclass
class GoogleApiYoutubeLoader(BaseLoader):
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,451
YoutubeLoader.from_youtube_url should handle common YT url formats
### Feature request `YoutubeLoader.from_youtube_url` accepts only a single URL format. It should be able to handle at least the most common types of YouTube URLs out there. ### Motivation The current video id extraction is pretty naive: it doesn't handle anything other than a single specific type of YouTube URL, and any valid but different video address leads to an exception. ### Your contribution I've prepared a PR where I've introduced an `.extract_video_id` method. Under the hood it uses a regex to find the video id in the most popular YouTube URLs. The regex is based on the youtube-dl solution, which can be found here: https://github.com/ytdl-org/youtube-dl/blob/211cbfd5d46025a8e4d8f9f3d424aaada4698974/youtube_dl/extractor/youtube.py#L524
https://github.com/langchain-ai/langchain/issues/4451
https://github.com/langchain-ai/langchain/pull/4452
8b42e8a510d7cafc6ce787b9bcb7a2c92f973c96
c2761aa8f4266e97037aa25480b3c8e26e7417f3
"2023-05-10T11:09:22Z"
python
"2023-05-15T14:45:19Z"
langchain/document_loaders/youtube.py
"""Loader that loads all Videos from a Channel To use, you should have the ``googleapiclient,youtube_transcript_api`` python package installed. As the service needs a google_api_client, you first have to initialize the GoogleApiClient. Additionally you have to either provide a channel name or a list of videoids "https://developers.google.com/docs/api/quickstart/python" Example: .. code-block:: python from langchain.document_loaders import GoogleApiClient from langchain.document_loaders import GoogleApiYoutubeLoader google_api_client = GoogleApiClient( service_account_path=Path("path_to_your_sec_file.json") ) loader = GoogleApiYoutubeLoader( google_api_client=google_api_client, channel_name = "CodeAesthetic" ) load.load() """ google_api_client: GoogleApiClient channel_name: Optional[str] = None video_ids: Optional[List[str]] = None add_video_info: bool = True captions_language: str = "en" continue_on_failure: bool = False def __post_init__(self) -> None:
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,451
YoutubeLoader.from_youtube_url should handle common YT url formats
### Feature request `YoutubeLoader.from_youtube_url` accepts only a single URL format. It should be able to handle at least the most common types of YouTube URLs out there. ### Motivation The current video id extraction is pretty naive: it doesn't handle anything other than a single specific type of YouTube URL, and any valid but different video address leads to an exception. ### Your contribution I've prepared a PR where I've introduced an `.extract_video_id` method. Under the hood it uses a regex to find the video id in the most popular YouTube URLs. The regex is based on the youtube-dl solution, which can be found here: https://github.com/ytdl-org/youtube-dl/blob/211cbfd5d46025a8e4d8f9f3d424aaada4698974/youtube_dl/extractor/youtube.py#L524
https://github.com/langchain-ai/langchain/issues/4451
https://github.com/langchain-ai/langchain/pull/4452
8b42e8a510d7cafc6ce787b9bcb7a2c92f973c96
c2761aa8f4266e97037aa25480b3c8e26e7417f3
"2023-05-10T11:09:22Z"
python
"2023-05-15T14:45:19Z"
langchain/document_loaders/youtube.py
    self.youtube_client = self._build_youtube_client(self.google_api_client.creds)

def _build_youtube_client(self, creds: Any) -> Any:
    try:
        from googleapiclient.discovery import build
        from youtube_transcript_api import YouTubeTranscriptApi
    except ImportError:
        raise ImportError(
            "You must run"
            "`pip install --upgrade "
            "google-api-python-client google-auth-httplib2 "
            "google-auth-oauthlib "
            "youtube-transcript-api` "
            "to use the Google Drive loader"
        )
    return build("youtube", "v3", credentials=creds)

@root_validator
def validate_channel_or_videoIds_is_set(
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,451
YoutubeLoader.from_youtube_url should handle common YT url formats
### Feature request `YoutubeLoader.from_youtube_url` accepts a single URL format. It should be able to handle at least the most common types of YouTube URLs out there. ### Motivation The current video id extraction is pretty naive: it doesn't handle anything other than one specific type of YouTube URL, and any valid but different video address leads to an exception. ### Your contribution I've prepared a PR where I've introduced an `.extract_video_id` method. Under the hood it uses a regex to find the video id in the most popular YouTube URL formats. The regex is based on the youtube-dl solution, which can be found here: https://github.com/ytdl-org/youtube-dl/blob/211cbfd5d46025a8e4d8f9f3d424aaada4698974/youtube_dl/extractor/youtube.py#L524
https://github.com/langchain-ai/langchain/issues/4451
https://github.com/langchain-ai/langchain/pull/4452
8b42e8a510d7cafc6ce787b9bcb7a2c92f973c96
c2761aa8f4266e97037aa25480b3c8e26e7417f3
"2023-05-10T11:09:22Z"
python
"2023-05-15T14:45:19Z"
langchain/document_loaders/youtube.py
    cls, values: Dict[str, Any]
) -> Dict[str, Any]:
    """Validate that either channel_name or video_ids is set."""
    if not values.get("channel_name") and not values.get("video_ids"):
        raise ValueError("Must specify either channel_name or video_ids")
    return values

def _get_transcript_for_video_id(self, video_id: str) -> str:
    from youtube_transcript_api import NoTranscriptFound, YouTubeTranscriptApi

    transcript_list = YouTubeTranscriptApi.list_transcripts(video_id)
    try:
        transcript = transcript_list.find_transcript([self.captions_language])
    except NoTranscriptFound:
        # Fall back to translating one of the available transcripts.
        for available_transcript in transcript_list:
            transcript = available_transcript.translate(self.captions_language)
    transcript_pieces = transcript.fetch()
    return " ".join([t["text"].strip(" ") for t in transcript_pieces])

def _get_document_for_video_id(self, video_id: str, **kwargs: Any) -> Document:
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,451
YoutubeLoader.from_youtube_url should handle common YT url formats
### Feature request `YoutubeLoader.from_youtube_url` accepts a single URL format. It should be able to handle at least the most common types of YouTube URLs out there. ### Motivation The current video id extraction is pretty naive: it doesn't handle anything other than one specific type of YouTube URL, and any valid but different video address leads to an exception. ### Your contribution I've prepared a PR where I've introduced an `.extract_video_id` method. Under the hood it uses a regex to find the video id in the most popular YouTube URL formats. The regex is based on the youtube-dl solution, which can be found here: https://github.com/ytdl-org/youtube-dl/blob/211cbfd5d46025a8e4d8f9f3d424aaada4698974/youtube_dl/extractor/youtube.py#L524
https://github.com/langchain-ai/langchain/issues/4451
https://github.com/langchain-ai/langchain/pull/4452
8b42e8a510d7cafc6ce787b9bcb7a2c92f973c96
c2761aa8f4266e97037aa25480b3c8e26e7417f3
"2023-05-10T11:09:22Z"
python
"2023-05-15T14:45:19Z"
langchain/document_loaders/youtube.py
    captions = self._get_transcript_for_video_id(video_id)
    video_response = (
        self.youtube_client.videos()
        .list(
            part="id,snippet",
            id=video_id,
        )
        .execute()
    )
    return Document(
        page_content=captions,
        metadata=video_response.get("items")[0],
    )

def _get_channel_id(self, channel_name: str) -> str:
    request = self.youtube_client.search().list(
        part="id",
        q=channel_name,
        type="channel",
        maxResults=1,
    )
    response = request.execute()
    channel_id = response["items"][0]["id"]["channelId"]
    return channel_id

def _get_document_for_channel(self, channel: str, **kwargs: Any) -> List[Document]:
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,451
YoutubeLoader.from_youtube_url should handle common YT url formats
### Feature request `YoutubeLoader.from_youtube_url` accepts a single URL format. It should be able to handle at least the most common types of YouTube URLs out there. ### Motivation The current video id extraction is pretty naive: it doesn't handle anything other than one specific type of YouTube URL, and any valid but different video address leads to an exception. ### Your contribution I've prepared a PR where I've introduced an `.extract_video_id` method. Under the hood it uses a regex to find the video id in the most popular YouTube URL formats. The regex is based on the youtube-dl solution, which can be found here: https://github.com/ytdl-org/youtube-dl/blob/211cbfd5d46025a8e4d8f9f3d424aaada4698974/youtube_dl/extractor/youtube.py#L524
https://github.com/langchain-ai/langchain/issues/4451
https://github.com/langchain-ai/langchain/pull/4452
8b42e8a510d7cafc6ce787b9bcb7a2c92f973c96
c2761aa8f4266e97037aa25480b3c8e26e7417f3
"2023-05-10T11:09:22Z"
python
"2023-05-15T14:45:19Z"
langchain/document_loaders/youtube.py
try:
    from youtube_transcript_api import (
        NoTranscriptFound,
        TranscriptsDisabled,
    )
except ImportError:
    raise ImportError(
        "You must run "
        "`pip install --upgrade "
        "youtube-transcript-api` "
        "to use the youtube loader"
    )
channel_id = self._get_channel_id(channel)
request = self.youtube_client.search().list(
    part="id,snippet",
    channelId=channel_id,
    maxResults=50,
)
video_ids = []
while request is not None:
    response = request.execute()
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,451
YoutubeLoader.from_youtube_url should handle common YT url formats
### Feature request `YoutubeLoader.from_youtube_url` accepts a single URL format. It should be able to handle at least the most common types of YouTube URLs out there. ### Motivation The current video id extraction is pretty naive: it doesn't handle anything other than one specific type of YouTube URL, and any valid but different video address leads to an exception. ### Your contribution I've prepared a PR where I've introduced an `.extract_video_id` method. Under the hood it uses a regex to find the video id in the most popular YouTube URL formats. The regex is based on the youtube-dl solution, which can be found here: https://github.com/ytdl-org/youtube-dl/blob/211cbfd5d46025a8e4d8f9f3d424aaada4698974/youtube_dl/extractor/youtube.py#L524
https://github.com/langchain-ai/langchain/issues/4451
https://github.com/langchain-ai/langchain/pull/4452
8b42e8a510d7cafc6ce787b9bcb7a2c92f973c96
c2761aa8f4266e97037aa25480b3c8e26e7417f3
"2023-05-10T11:09:22Z"
python
"2023-05-15T14:45:19Z"
langchain/document_loaders/youtube.py
for item in response["items"]: if not item["id"].get("videoId"): continue meta_data = {"videoId": item["id"]["videoId"]} if self.add_video_info: item["snippet"].pop("thumbnails") meta_data.update(item["snippet"]) try: page_content = self._get_transcripe_for_video_id( item["id"]["videoId"] ) video_ids.append( Document( page_content=page_content, metadata=meta_data, ) ) except (TranscriptsDisabled, NoTranscriptFound) as e: if self.continue_on_failure: logger.error( "Error fetching transscript " + f" {item['id']['videoId']}, exception: {e}" ) else: raise e pass request = self.youtube_client.search().list_next(request, response) return video_ids def load(self) -> List[Document]:
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,451
YoutubeLoader.from_youtube_url should handle common YT url formats
### Feature request `YoutubeLoader.from_youtube_url` accepts a single URL format. It should be able to handle at least the most common types of YouTube URLs out there. ### Motivation The current video id extraction is pretty naive: it doesn't handle anything other than one specific type of YouTube URL, and any valid but different video address leads to an exception. ### Your contribution I've prepared a PR where I've introduced an `.extract_video_id` method. Under the hood it uses a regex to find the video id in the most popular YouTube URL formats. The regex is based on the youtube-dl solution, which can be found here: https://github.com/ytdl-org/youtube-dl/blob/211cbfd5d46025a8e4d8f9f3d424aaada4698974/youtube_dl/extractor/youtube.py#L524
https://github.com/langchain-ai/langchain/issues/4451
https://github.com/langchain-ai/langchain/pull/4452
8b42e8a510d7cafc6ce787b9bcb7a2c92f973c96
c2761aa8f4266e97037aa25480b3c8e26e7417f3
"2023-05-10T11:09:22Z"
python
"2023-05-15T14:45:19Z"
langchain/document_loaders/youtube.py
"""Load documents.""" document_list = [] if self.channel_name: document_list.extend(self._get_document_for_channel(self.channel_name)) elif self.video_ids: document_list.extend( [ self._get_document_for_video_id(video_id) for video_id in self.video_ids ] ) else: raise ValueError("Must specify either channel_name or video_ids") return document_list
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,631
[feature] Add support for streaming response output to HuggingFaceTextGenInference LLM
### Feature request Per the title, the request is to add support for streaming the response output, something like this: ```python from langchain.llms.huggingface_text_gen_inference import HuggingFaceTextGenInference from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler llm = HuggingFaceTextGenInference( inference_server_url='http://localhost:8010', max_new_tokens=512, top_k=10, top_p=0.95, typical_p=0.95, temperature=0.01, stop_sequences=['</s>'], repetition_penalty=1.03, stream=True ) print(llm("What is deep learning?", callbacks=[StreamingStdOutCallbackHandler()])) ``` ### Motivation Having streaming response output is useful in chat situations to reduce the perceived latency for the user. The current implementation of the HuggingFaceTextGenInference class, introduced in [PR 4447](https://github.com/hwchase17/langchain/pull/4447), does not support streaming. ### Your contribution Feature added in [PR #4633](https://github.com/hwchase17/langchain/pull/4633)
https://github.com/langchain-ai/langchain/issues/4631
https://github.com/langchain-ai/langchain/pull/4633
435b70da472525bfec4ced38a8446c878af2c27b
c70ae562b466ba9a6d0f587ab935fd9abee2bc87
"2023-05-13T16:16:48Z"
python
"2023-05-15T14:59:12Z"
langchain/llms/huggingface_text_gen_inference.py
"""Wrapper around Huggingface text generation inference API.""" from typing import Any, Dict, List, Optional from pydantic import Extra, Field, root_validator from langchain.callbacks.manager import CallbackManagerForLLMRun from langchain.llms.base import LLM class HuggingFaceTextGenInference(LLM): """ HuggingFace text generation inference API. This class is a wrapper around the HuggingFace text generation inference API. It is used to generate text from a given prompt. Attributes: - max_new_tokens: The maximum number of tokens to generate. - top_k: The number of top-k tokens to consider when generating text. - top_p: The cumulative probability threshold for generating text. - typical_p: The typical probability threshold for generating text. - temperature: The temperature to use when generating text. - repetition_penalty: The repetition penalty to use when generating text. - stop_sequences: A list of stop sequences to use when generating text. - seed: The seed to use when generating text. - inference_server_url: The URL of the inference server to use. - timeout: The timeout value in seconds to use while connecting to inference server.
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,631
[feature] Add support for streaming response output to HuggingFaceTextGenInference LLM
### Feature request Per the title, the request is to add support for streaming the response output, something like this: ```python from langchain.llms.huggingface_text_gen_inference import HuggingFaceTextGenInference from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler llm = HuggingFaceTextGenInference( inference_server_url='http://localhost:8010', max_new_tokens=512, top_k=10, top_p=0.95, typical_p=0.95, temperature=0.01, stop_sequences=['</s>'], repetition_penalty=1.03, stream=True ) print(llm("What is deep learning?", callbacks=[StreamingStdOutCallbackHandler()])) ``` ### Motivation Having streaming response output is useful in chat situations to reduce the perceived latency for the user. The current implementation of the HuggingFaceTextGenInference class, introduced in [PR 4447](https://github.com/hwchase17/langchain/pull/4447), does not support streaming. ### Your contribution Feature added in [PR #4633](https://github.com/hwchase17/langchain/pull/4633)
https://github.com/langchain-ai/langchain/issues/4631
https://github.com/langchain-ai/langchain/pull/4633
435b70da472525bfec4ced38a8446c878af2c27b
c70ae562b466ba9a6d0f587ab935fd9abee2bc87
"2023-05-13T16:16:48Z"
python
"2023-05-15T14:59:12Z"
langchain/llms/huggingface_text_gen_inference.py
    - client: The client object used to communicate with the inference server.

    Methods:
    - _call: Generates text based on a given prompt and stop sequences.
    - _llm_type: Returns the type of LLM.
    """

    """
    Example:
        .. code-block:: python

            llm = HuggingFaceTextGenInference(
                inference_server_url="http://localhost:8010/",
                max_new_tokens=512,
                top_k=10,
                top_p=0.95,
                typical_p=0.95,
                temperature=0.01,
                repetition_penalty=1.03,
            )
    """

max_new_tokens: int = 512
top_k: Optional[int] = None
top_p: Optional[float] = 0.95
typical_p: Optional[float] = 0.95
temperature: float = 0.8
repetition_penalty: Optional[float] = None
stop_sequences: List[str] = Field(default_factory=list)
seed: Optional[int] = None
inference_server_url: str = ""
timeout: int = 120
client: Any

class Config:
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,631
[feature] Add support for streaming response output to HuggingFaceTextGenInference LLM
### Feature request Per the title, the request is to add support for streaming the response output, something like this: ```python from langchain.llms.huggingface_text_gen_inference import HuggingFaceTextGenInference from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler llm = HuggingFaceTextGenInference( inference_server_url='http://localhost:8010', max_new_tokens=512, top_k=10, top_p=0.95, typical_p=0.95, temperature=0.01, stop_sequences=['</s>'], repetition_penalty=1.03, stream=True ) print(llm("What is deep learning?", callbacks=[StreamingStdOutCallbackHandler()])) ``` ### Motivation Having streaming response output is useful in chat situations to reduce the perceived latency for the user. The current implementation of the HuggingFaceTextGenInference class, introduced in [PR 4447](https://github.com/hwchase17/langchain/pull/4447), does not support streaming. ### Your contribution Feature added in [PR #4633](https://github.com/hwchase17/langchain/pull/4633)
https://github.com/langchain-ai/langchain/issues/4631
https://github.com/langchain-ai/langchain/pull/4633
435b70da472525bfec4ced38a8446c878af2c27b
c70ae562b466ba9a6d0f587ab935fd9abee2bc87
"2023-05-13T16:16:48Z"
python
"2023-05-15T14:59:12Z"
langchain/llms/huggingface_text_gen_inference.py
"""Configuration for this pydantic object.""" extra = Extra.forbid @root_validator() def validate_environment(cls, values: Dict) -> Dict: """Validate that python package exists in environment.""" try: import text_generation values["client"] = text_generation.Client( values["inference_server_url"], timeout=values["timeout"] ) except ImportError: raise ValueError( "Could not import text_generation python package. " "Please install it with `pip install text_generation`." ) return values @property def _llm_type(self) -> str: """Return type of llm.""" return "hf_textgen_inference" def _call(
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,631
[feature] Add support for streaming response output to HuggingFaceTextGenInference LLM
### Feature request Per the title, the request is to add support for streaming the response output, something like this: ```python from langchain.llms.huggingface_text_gen_inference import HuggingFaceTextGenInference from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler llm = HuggingFaceTextGenInference( inference_server_url='http://localhost:8010', max_new_tokens=512, top_k=10, top_p=0.95, typical_p=0.95, temperature=0.01, stop_sequences=['</s>'], repetition_penalty=1.03, stream=True ) print(llm("What is deep learning?", callbacks=[StreamingStdOutCallbackHandler()])) ``` ### Motivation Having streaming response output is useful in chat situations to reduce the perceived latency for the user. The current implementation of the HuggingFaceTextGenInference class, introduced in [PR 4447](https://github.com/hwchase17/langchain/pull/4447), does not support streaming. ### Your contribution Feature added in [PR #4633](https://github.com/hwchase17/langchain/pull/4633)
https://github.com/langchain-ai/langchain/issues/4631
https://github.com/langchain-ai/langchain/pull/4633
435b70da472525bfec4ced38a8446c878af2c27b
c70ae562b466ba9a6d0f587ab935fd9abee2bc87
"2023-05-13T16:16:48Z"
python
"2023-05-15T14:59:12Z"
langchain/llms/huggingface_text_gen_inference.py
    self,
    prompt: str,
    stop: Optional[List[str]] = None,
    run_manager: Optional[CallbackManagerForLLMRun] = None,
) -> str:
    if stop is None:
        stop = self.stop_sequences
    else:
        stop += self.stop_sequences

    res = self.client.generate(
        prompt,
        stop_sequences=stop,
        max_new_tokens=self.max_new_tokens,
        top_k=self.top_k,
        top_p=self.top_p,
        typical_p=self.typical_p,
        temperature=self.temperature,
        repetition_penalty=self.repetition_penalty,
        seed=self.seed,
    )
    # Truncate the generated text at the first stop sequence, since the
    # server includes stop sequences in its output.
    for stop_seq in stop:
        if stop_seq in res.generated_text:
            res.generated_text = res.generated_text[
                : res.generated_text.index(stop_seq)
            ]
    return res.generated_text
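A hedged sketch of the streaming branch that PR #4633 layers on top of this `_call`. It assumes `text_generation.Client.generate_stream`, which yields token-level responses; the merged code may differ in naming and defaults.

```python
# Hypothetical sketch: a `stream` field on the class gates this branch in `_call`.
if self.stream:
    text = ""
    for res in self.client.generate_stream(
        prompt,
        stop_sequences=stop,
        max_new_tokens=self.max_new_tokens,
        temperature=self.temperature,
    ):
        # Skip special tokens such as </s> and cut off at stop sequences.
        if res.token.special:
            continue
        if res.token.text in stop:
            break
        text += res.token.text
        if run_manager:
            run_manager.on_llm_new_token(res.token.text)
    return text
```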
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,328
Issue: Can not configure serpapi base url via env
### Issue you'd like to raise. Currently, the base URL of SerpAPI is hard-coded, while some other search services (e.g. Bing, via `BING_SEARCH_URL`) are configurable. In some companies the original endpoint cannot be accessed directly, so requests have to go through an nginx reverse proxy. We therefore need to make the base URL configurable via an environment variable. ### Suggestion: Make the SerpAPI base URL configurable via env
https://github.com/langchain-ai/langchain/issues/4328
https://github.com/langchain-ai/langchain/pull/4402
cb802edf75539872e18068edec8e21216f3e51d2
5111bec54071e42a7865766dc8bb8dc72c1d13b4
"2023-05-08T09:27:24Z"
python
"2023-05-15T21:25:25Z"
langchain/agents/load_tools.py
"""Load tools.""" import warnings from typing import Any, Dict, List, Optional, Callable, Tuple from mypy_extensions import Arg, KwArg from langchain.agents.tools import Tool from langchain.base_language import BaseLanguageModel from langchain.callbacks.base import BaseCallbackManager from langchain.callbacks.manager import Callbacks from langchain.chains.api import news_docs, open_meteo_docs, podcast_docs, tmdb_docs from langchain.chains.api.base import APIChain from langchain.chains.llm_math.base import LLMMathChain from langchain.chains.pal.base import PALChain from langchain.requests import TextRequestsWrapper from langchain.tools.arxiv.tool import ArxivQueryRun from langchain.tools.base import BaseTool from langchain.tools.bing_search.tool import BingSearchRun from langchain.tools.ddg_search.tool import DuckDuckGoSearchRun from langchain.tools.google_search.tool import GoogleSearchResults, GoogleSearchRun from langchain.tools.metaphor_search.tool import MetaphorSearchResults from langchain.tools.google_serper.tool import GoogleSerperResults, GoogleSerperRun from langchain.tools.graphql.tool import BaseGraphQLTool from langchain.tools.human.tool import HumanInputRun from langchain.tools.python.tool import PythonREPLTool from langchain.tools.requests.tool import ( RequestsDeleteTool, RequestsGetTool,
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,328
Issue: Can not configure serpapi base url via env
### Issue you'd like to raise. Currently, the base URL of SerpAPI is hard-coded, while some other search services (e.g. Bing, via `BING_SEARCH_URL`) are configurable. In some companies the original endpoint cannot be accessed directly, so requests have to go through an nginx reverse proxy. We therefore need to make the base URL configurable via an environment variable. ### Suggestion: Make the SerpAPI base URL configurable via env
https://github.com/langchain-ai/langchain/issues/4328
https://github.com/langchain-ai/langchain/pull/4402
cb802edf75539872e18068edec8e21216f3e51d2
5111bec54071e42a7865766dc8bb8dc72c1d13b4
"2023-05-08T09:27:24Z"
python
"2023-05-15T21:25:25Z"
langchain/agents/load_tools.py
    RequestsPatchTool,
    RequestsPostTool,
    RequestsPutTool,
)
from langchain.tools.scenexplain.tool import SceneXplainTool
from langchain.tools.searx_search.tool import SearxSearchResults, SearxSearchRun
from langchain.tools.shell.tool import ShellTool
from langchain.tools.wikipedia.tool import WikipediaQueryRun
from langchain.tools.wolfram_alpha.tool import WolframAlphaQueryRun
from langchain.tools.openweathermap.tool import OpenWeatherMapQueryRun
from langchain.utilities import ArxivAPIWrapper
from langchain.utilities.bing_search import BingSearchAPIWrapper
from langchain.utilities.duckduckgo_search import DuckDuckGoSearchAPIWrapper
from langchain.utilities.google_search import GoogleSearchAPIWrapper
from langchain.utilities.google_serper import GoogleSerperAPIWrapper
from langchain.utilities.metaphor_search import MetaphorSearchAPIWrapper
from langchain.utilities.awslambda import LambdaWrapper
from langchain.utilities.graphql import GraphQLAPIWrapper
from langchain.utilities.searx_search import SearxSearchWrapper
from langchain.utilities.serpapi import SerpAPIWrapper
from langchain.utilities.wikipedia import WikipediaAPIWrapper
from langchain.utilities.wolfram_alpha import WolframAlphaAPIWrapper
from langchain.utilities.openweathermap import OpenWeatherMapAPIWrapper


def _get_python_repl() -> BaseTool:
    return PythonREPLTool()


def _get_tools_requests_get() -> BaseTool:
    return RequestsGetTool(requests_wrapper=TextRequestsWrapper())


def _get_tools_requests_post() -> BaseTool:
    return RequestsPostTool(requests_wrapper=TextRequestsWrapper())


def _get_tools_requests_patch() -> BaseTool:
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,328
Issue: Can not configure serpapi base url via env
### Issue you'd like to raise. Currently, the base URL of SerpAPI is hard-coded, while some other search services (e.g. Bing, via `BING_SEARCH_URL`) are configurable. In some companies the original endpoint cannot be accessed directly, so requests have to go through an nginx reverse proxy. We therefore need to make the base URL configurable via an environment variable. ### Suggestion: Make the SerpAPI base URL configurable via env
https://github.com/langchain-ai/langchain/issues/4328
https://github.com/langchain-ai/langchain/pull/4402
cb802edf75539872e18068edec8e21216f3e51d2
5111bec54071e42a7865766dc8bb8dc72c1d13b4
"2023-05-08T09:27:24Z"
python
"2023-05-15T21:25:25Z"
langchain/agents/load_tools.py
    return RequestsPatchTool(requests_wrapper=TextRequestsWrapper())


def _get_tools_requests_put() -> BaseTool:
    return RequestsPutTool(requests_wrapper=TextRequestsWrapper())


def _get_tools_requests_delete() -> BaseTool:
    return RequestsDeleteTool(requests_wrapper=TextRequestsWrapper())


def _get_terminal() -> BaseTool:
    return ShellTool()


_BASE_TOOLS: Dict[str, Callable[[], BaseTool]] = {
    "python_repl": _get_python_repl,
    "requests": _get_tools_requests_get,
    "requests_get": _get_tools_requests_get,
    "requests_post": _get_tools_requests_post,
    "requests_patch": _get_tools_requests_patch,
    "requests_put": _get_tools_requests_put,
    "requests_delete": _get_tools_requests_delete,
    "terminal": _get_terminal,
}


def _get_pal_math(llm: BaseLanguageModel) -> BaseTool:
    return Tool(
        name="PAL-MATH",
        description="A language model that is really good at solving complex word math problems. Input should be a fully worded hard word math problem.",
        func=PALChain.from_math_prompt(llm).run,
    )


def _get_pal_colored_objects(llm: BaseLanguageModel) -> BaseTool:
    return Tool(
        name="PAL-COLOR-OBJ",
        description="A language model that is really good at reasoning about position and the color attributes of objects. Input should be a fully worded hard reasoning problem. Make sure to include all information about the objects AND the final question you want to answer.",
        func=PALChain.from_colored_object_prompt(llm).run,
    )


def _get_llm_math(llm: BaseLanguageModel) -> BaseTool:
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,328
Issue: Can not configure serpapi base url via env
### Issue you'd like to raise. Currently, the base URL of SerpAPI is hard-coded, while some other search services (e.g. Bing, via `BING_SEARCH_URL`) are configurable. In some companies the original endpoint cannot be accessed directly, so requests have to go through an nginx reverse proxy. We therefore need to make the base URL configurable via an environment variable. ### Suggestion: Make the SerpAPI base URL configurable via env
https://github.com/langchain-ai/langchain/issues/4328
https://github.com/langchain-ai/langchain/pull/4402
cb802edf75539872e18068edec8e21216f3e51d2
5111bec54071e42a7865766dc8bb8dc72c1d13b4
"2023-05-08T09:27:24Z"
python
"2023-05-15T21:25:25Z"
langchain/agents/load_tools.py
return Tool( name="Calculator", description="Useful for when you need to answer questions about math.", func=LLMMathChain.from_llm(llm=llm).run, coroutine=LLMMathChain.from_llm(llm=llm).arun, ) def _get_open_meteo_api(llm: BaseLanguageModel) -> BaseTool: chain = APIChain.from_llm_and_api_docs(llm, open_meteo_docs.OPEN_METEO_DOCS) return Tool( name="Open Meteo API", description="Useful for when you want to get weather information from the OpenMeteo API. The input should be a question in natural language that this API can answer.", func=chain.run, ) _LLM_TOOLS: Dict[str, Callable[[BaseLanguageModel], BaseTool]] = { "pal-math": _get_pal_math, "pal-colored-objects": _get_pal_colored_objects, "llm-math": _get_llm_math, "open-meteo-api": _get_open_meteo_api, } def _get_news_api(llm: BaseLanguageModel, **kwargs: Any) -> BaseTool: news_api_key = kwargs["news_api_key"] chain = APIChain.from_llm_and_api_docs( llm, news_docs.NEWS_DOCS, headers={"X-Api-Key": news_api_key} ) return Tool( name="News API", description="Use this when you want to get information about the top headlines of current news stories. The input should be a question in natural language that this API can answer.", func=chain.run, ) def _get_tmdb_api(llm: BaseLanguageModel, **kwargs: Any) -> BaseTool:
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,328
Issue: Can not configure serpapi base url via env
### Issue you'd like to raise. Currently, the base URL of SerpAPI is hard-coded, while some other search services (e.g. Bing, via `BING_SEARCH_URL`) are configurable. In some companies the original endpoint cannot be accessed directly, so requests have to go through an nginx reverse proxy. We therefore need to make the base URL configurable via an environment variable. ### Suggestion: Make the SerpAPI base URL configurable via env
https://github.com/langchain-ai/langchain/issues/4328
https://github.com/langchain-ai/langchain/pull/4402
cb802edf75539872e18068edec8e21216f3e51d2
5111bec54071e42a7865766dc8bb8dc72c1d13b4
"2023-05-08T09:27:24Z"
python
"2023-05-15T21:25:25Z"
langchain/agents/load_tools.py
    tmdb_bearer_token = kwargs["tmdb_bearer_token"]
    chain = APIChain.from_llm_and_api_docs(
        llm,
        tmdb_docs.TMDB_DOCS,
        headers={"Authorization": f"Bearer {tmdb_bearer_token}"},
    )
    return Tool(
        name="TMDB API",
        description="Useful for when you want to get information from The Movie Database. The input should be a question in natural language that this API can answer.",
        func=chain.run,
    )


def _get_podcast_api(llm: BaseLanguageModel, **kwargs: Any) -> BaseTool:
    listen_api_key = kwargs["listen_api_key"]
    chain = APIChain.from_llm_and_api_docs(
        llm,
        podcast_docs.PODCAST_DOCS,
        headers={"X-ListenAPI-Key": listen_api_key},
    )
    return Tool(
        name="Podcast API",
        description="Use the Listen Notes Podcast API to search all podcasts or episodes. The input should be a question in natural language that this API can answer.",
        func=chain.run,
    )


def _get_lambda_api(**kwargs: Any) -> BaseTool:
    return Tool(
        name=kwargs["awslambda_tool_name"],
        description=kwargs["awslambda_tool_description"],
        func=LambdaWrapper(**kwargs).run,
    )


def _get_wolfram_alpha(**kwargs: Any) -> BaseTool:
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,328
Issue: Can not configure serpapi base url via env
### Issue you'd like to raise. Currently, the base URL of SerpAPI is hard-coded, while some other search services (e.g. Bing, via `BING_SEARCH_URL`) are configurable. In some companies the original endpoint cannot be accessed directly, so requests have to go through an nginx reverse proxy. We therefore need to make the base URL configurable via an environment variable. ### Suggestion: Make the SerpAPI base URL configurable via env
https://github.com/langchain-ai/langchain/issues/4328
https://github.com/langchain-ai/langchain/pull/4402
cb802edf75539872e18068edec8e21216f3e51d2
5111bec54071e42a7865766dc8bb8dc72c1d13b4
"2023-05-08T09:27:24Z"
python
"2023-05-15T21:25:25Z"
langchain/agents/load_tools.py
    return WolframAlphaQueryRun(api_wrapper=WolframAlphaAPIWrapper(**kwargs))


def _get_google_search(**kwargs: Any) -> BaseTool:
    return GoogleSearchRun(api_wrapper=GoogleSearchAPIWrapper(**kwargs))


def _get_wikipedia(**kwargs: Any) -> BaseTool:
    return WikipediaQueryRun(api_wrapper=WikipediaAPIWrapper(**kwargs))


def _get_arxiv(**kwargs: Any) -> BaseTool:
    return ArxivQueryRun(api_wrapper=ArxivAPIWrapper(**kwargs))


def _get_google_serper(**kwargs: Any) -> BaseTool:
    return GoogleSerperRun(api_wrapper=GoogleSerperAPIWrapper(**kwargs))


def _get_google_serper_results_json(**kwargs: Any) -> BaseTool:
    return GoogleSerperResults(api_wrapper=GoogleSerperAPIWrapper(**kwargs))


def _get_google_search_results_json(**kwargs: Any) -> BaseTool:
    return GoogleSearchResults(api_wrapper=GoogleSearchAPIWrapper(**kwargs))


def _get_serpapi(**kwargs: Any) -> BaseTool:
    return Tool(
        name="Search",
        description="A search engine. Useful for when you need to answer questions about current events. Input should be a search query.",
        func=SerpAPIWrapper(**kwargs).run,
        coroutine=SerpAPIWrapper(**kwargs).arun,
    )


def _get_searx_search(**kwargs: Any) -> BaseTool:
    return SearxSearchRun(wrapper=SearxSearchWrapper(**kwargs))


def _get_searx_search_results_json(**kwargs: Any) -> BaseTool:
    wrapper_kwargs = {k: v for k, v in kwargs.items() if k != "num_results"}
    return SearxSearchResults(wrapper=SearxSearchWrapper(**wrapper_kwargs), **kwargs)


def _get_bing_search(**kwargs: Any) -> BaseTool:
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,328
Issue: Can not configure serpapi base url via env
### Issue you'd like to raise. Currently, the base URL of SerpAPI is hard-coded, while some other search services (e.g. Bing, via `BING_SEARCH_URL`) are configurable. In some companies the original endpoint cannot be accessed directly, so requests have to go through an nginx reverse proxy. We therefore need to make the base URL configurable via an environment variable. ### Suggestion: Make the SerpAPI base URL configurable via env
https://github.com/langchain-ai/langchain/issues/4328
https://github.com/langchain-ai/langchain/pull/4402
cb802edf75539872e18068edec8e21216f3e51d2
5111bec54071e42a7865766dc8bb8dc72c1d13b4
"2023-05-08T09:27:24Z"
python
"2023-05-15T21:25:25Z"
langchain/agents/load_tools.py
    return BingSearchRun(api_wrapper=BingSearchAPIWrapper(**kwargs))


def _get_metaphor_search(**kwargs: Any) -> BaseTool:
    return MetaphorSearchResults(api_wrapper=MetaphorSearchAPIWrapper(**kwargs))


def _get_ddg_search(**kwargs: Any) -> BaseTool:
    return DuckDuckGoSearchRun(api_wrapper=DuckDuckGoSearchAPIWrapper(**kwargs))


def _get_human_tool(**kwargs: Any) -> BaseTool:
    return HumanInputRun(**kwargs)


def _get_scenexplain(**kwargs: Any) -> BaseTool:
    return SceneXplainTool(**kwargs)


def _get_graphql_tool(**kwargs: Any) -> BaseTool:
    graphql_endpoint = kwargs["graphql_endpoint"]
    wrapper = GraphQLAPIWrapper(graphql_endpoint=graphql_endpoint)
    return BaseGraphQLTool(graphql_wrapper=wrapper)


def _get_openweathermap(**kwargs: Any) -> BaseTool:
    return OpenWeatherMapQueryRun(api_wrapper=OpenWeatherMapAPIWrapper(**kwargs))


_EXTRA_LLM_TOOLS: Dict[
    str,
    Tuple[Callable[[Arg(BaseLanguageModel, "llm"), KwArg(Any)], BaseTool], List[str]],
] = {
    "news-api": (_get_news_api, ["news_api_key"]),
    "tmdb-api": (_get_tmdb_api, ["tmdb_bearer_token"]),
    "podcast-api": (_get_podcast_api, ["listen_api_key"]),
}

_EXTRA_OPTIONAL_TOOLS: Dict[str, Tuple[Callable[[KwArg(Any)], BaseTool], List[str]]] = {
    "wolfram-alpha": (_get_wolfram_alpha, ["wolfram_alpha_appid"]),
    "google-search": (_get_google_search, ["google_api_key", "google_cse_id"]),
    "google-search-results-json": (
        _get_google_search_results_json,
        ["google_api_key", "google_cse_id", "num_results"],
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,328
Issue: Can not configure serpapi base url via env
### Issue you'd like to raise. Currently, the base URL of SerpAPI is hard-coded, while some other search services (e.g. Bing, via `BING_SEARCH_URL`) are configurable. In some companies the original endpoint cannot be accessed directly, so requests have to go through an nginx reverse proxy. We therefore need to make the base URL configurable via an environment variable. ### Suggestion: Make the SerpAPI base URL configurable via env
https://github.com/langchain-ai/langchain/issues/4328
https://github.com/langchain-ai/langchain/pull/4402
cb802edf75539872e18068edec8e21216f3e51d2
5111bec54071e42a7865766dc8bb8dc72c1d13b4
"2023-05-08T09:27:24Z"
python
"2023-05-15T21:25:25Z"
langchain/agents/load_tools.py
), "searx-search-results-json": ( _get_searx_search_results_json, ["searx_host", "engines", "num_results", "aiosession"], ), "bing-search": (_get_bing_search, ["bing_subscription_key", "bing_search_url"]), "metaphor-search": (_get_metaphor_search, ["metaphor_api_key"]), "ddg-search": (_get_ddg_search, []), "google-serper": (_get_google_serper, ["serper_api_key", "aiosession"]), "google-serper-results-json": ( _get_google_serper_results_json, ["serper_api_key", "aiosession"], ), "serpapi": (_get_serpapi, ["serpapi_api_key", "aiosession"]), "searx-search": (_get_searx_search, ["searx_host", "engines", "aiosession"]), "wikipedia": (_get_wikipedia, ["top_k_results", "lang"]), "arxiv": ( _get_arxiv, ["top_k_results", "load_max_docs", "load_all_available_meta"], ), "human": (_get_human_tool, ["prompt_func", "input_func"]), "awslambda": ( _get_lambda_api, ["awslambda_tool_name", "awslambda_tool_description", "function_name"], ), "sceneXplain": (_get_scenexplain, []), "graphql": (_get_graphql_tool, ["graphql_endpoint"]), "openweathermap-api": (_get_openweathermap, ["openweathermap_api_key"]), } def _handle_callbacks(
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,328
Issue: Can not configure serpapi base url via env
### Issue you'd like to raise. Currently, the base URL of SerpAPI is hard-coded, while some other search services (e.g. Bing, via `BING_SEARCH_URL`) are configurable. In some companies the original endpoint cannot be accessed directly, so requests have to go through an nginx reverse proxy. We therefore need to make the base URL configurable via an environment variable. ### Suggestion: Make the SerpAPI base URL configurable via env
https://github.com/langchain-ai/langchain/issues/4328
https://github.com/langchain-ai/langchain/pull/4402
cb802edf75539872e18068edec8e21216f3e51d2
5111bec54071e42a7865766dc8bb8dc72c1d13b4
"2023-05-08T09:27:24Z"
python
"2023-05-15T21:25:25Z"
langchain/agents/load_tools.py
    callback_manager: Optional[BaseCallbackManager], callbacks: Callbacks
) -> Callbacks:
    if callback_manager is not None:
        warnings.warn(
            "callback_manager is deprecated. Please use callbacks instead.",
            DeprecationWarning,
        )
        if callbacks is not None:
            raise ValueError(
                "Cannot specify both callback_manager and callbacks arguments."
            )
        return callback_manager
    return callbacks


def load_huggingface_tool(
    task_or_repo_id: str,
    model_repo_id: Optional[str] = None,
    token: Optional[str] = None,
    remote: bool = False,
    **kwargs: Any,
) -> BaseTool:
    try:
        from transformers import load_tool
    except ImportError:
        raise ValueError(
            "HuggingFace tools require the libraries `transformers>=4.29.0`"
            " and `huggingface_hub>=0.14.1` to be installed."
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,328
Issue: Can not configure serpapi base url via env
### Issue you'd like to raise. Currently, the base URL of SerpAPI is hard-coded, while some other search services (e.g. Bing, via `BING_SEARCH_URL`) are configurable. In some companies the original endpoint cannot be accessed directly, so requests have to go through an nginx reverse proxy. We therefore need to make the base URL configurable via an environment variable. ### Suggestion: Make the SerpAPI base URL configurable via env
https://github.com/langchain-ai/langchain/issues/4328
https://github.com/langchain-ai/langchain/pull/4402
cb802edf75539872e18068edec8e21216f3e51d2
5111bec54071e42a7865766dc8bb8dc72c1d13b4
"2023-05-08T09:27:24Z"
python
"2023-05-15T21:25:25Z"
langchain/agents/load_tools.py
" Please install it with" " `pip install --upgrade transformers huggingface_hub`." ) hf_tool = load_tool( task_or_repo_id, model_repo_id=model_repo_id, token=token, remote=remote, **kwargs, ) outputs = hf_tool.outputs if set(outputs) != {"text"}: raise NotImplementedError("Multimodal outputs not supported yet.") inputs = hf_tool.inputs if set(inputs) != {"text"}: raise NotImplementedError("Multimodal inputs not supported yet.") return Tool.from_function( hf_tool.__call__, name=hf_tool.name, description=hf_tool.description ) def load_tools( tool_names: List[str], llm: Optional[BaseLanguageModel] = None, callbacks: Callbacks = None, **kwargs: Any, ) -> List[BaseTool]: """Load tools based on their name. Args: tool_names: name of tools to load. llm: Optional language model, may be needed to initialize certain tools. callbacks: Optional callback manager or list of callback handlers.
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,328
Issue: Can not configure serpapi base url via env
### Issue you'd like to raise. Currently, the base URL of SerpAPI is hard-coded, while some other search services (e.g. Bing, via `BING_SEARCH_URL`) are configurable. In some companies the original endpoint cannot be accessed directly, so requests have to go through an nginx reverse proxy. We therefore need to make the base URL configurable via an environment variable. ### Suggestion: Make the SerpAPI base URL configurable via env
https://github.com/langchain-ai/langchain/issues/4328
https://github.com/langchain-ai/langchain/pull/4402
cb802edf75539872e18068edec8e21216f3e51d2
5111bec54071e42a7865766dc8bb8dc72c1d13b4
"2023-05-08T09:27:24Z"
python
"2023-05-15T21:25:25Z"
langchain/agents/load_tools.py
        If not provided, default global callback manager will be used.

Returns:
    List of tools.
"""
tools = []
callbacks = _handle_callbacks(
    callback_manager=kwargs.get("callback_manager"), callbacks=callbacks
)
for name in tool_names:
    if name == "requests":
        warnings.warn(
            "tool name `requests` is deprecated - "
            "please use `requests_all` or specify the requests method"
        )
    if name == "requests_all":
        requests_method_tools = [
            _tool for _tool in _BASE_TOOLS if _tool.startswith("requests_")
        ]
        tool_names.extend(requests_method_tools)
    elif name in _BASE_TOOLS:
        tools.append(_BASE_TOOLS[name]())
    elif name in _LLM_TOOLS:
        if llm is None:
            raise ValueError(f"Tool {name} requires an LLM to be provided")
        tool = _LLM_TOOLS[name](llm)
        tools.append(tool)
    elif name in _EXTRA_LLM_TOOLS:
        if llm is None:
            raise ValueError(f"Tool {name} requires an LLM to be provided")
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,328
Issue: Can not configure serpapi base url via env
### Issue you'd like to raise. Currently, the base URL of SerpAPI is hard-coded, while some other search services (e.g. Bing, via `BING_SEARCH_URL`) are configurable. In some companies the original endpoint cannot be accessed directly, so requests have to go through an nginx reverse proxy. We therefore need to make the base URL configurable via an environment variable. ### Suggestion: Make the SerpAPI base URL configurable via env
https://github.com/langchain-ai/langchain/issues/4328
https://github.com/langchain-ai/langchain/pull/4402
cb802edf75539872e18068edec8e21216f3e51d2
5111bec54071e42a7865766dc8bb8dc72c1d13b4
"2023-05-08T09:27:24Z"
python
"2023-05-15T21:25:25Z"
langchain/agents/load_tools.py
            _get_llm_tool_func, extra_keys = _EXTRA_LLM_TOOLS[name]
            missing_keys = set(extra_keys).difference(kwargs)
            if missing_keys:
                raise ValueError(
                    f"Tool {name} requires some parameters that were not "
                    f"provided: {missing_keys}"
                )
            sub_kwargs = {k: kwargs[k] for k in extra_keys}
            tool = _get_llm_tool_func(llm=llm, **sub_kwargs)
            tools.append(tool)
        elif name in _EXTRA_OPTIONAL_TOOLS:
            _get_tool_func, extra_keys = _EXTRA_OPTIONAL_TOOLS[name]
            sub_kwargs = {k: kwargs[k] for k in extra_keys if k in kwargs}
            tool = _get_tool_func(**sub_kwargs)
            tools.append(tool)
        else:
            raise ValueError(f"Got unknown tool {name}")
    if callbacks is not None:
        for tool in tools:
            tool.callbacks = callbacks
    return tools


def get_all_tool_names() -> List[str]:
    """Get a list of all possible tool names."""
    return (
        list(_BASE_TOOLS)
        + list(_EXTRA_OPTIONAL_TOOLS)
        + list(_EXTRA_LLM_TOOLS)
        + list(_LLM_TOOLS)
    )
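A short usage example for the loader above; the SerpAPI key value is a placeholder and `OPENAI_API_KEY` is assumed to be set in the environment.

```python
from langchain.agents.load_tools import get_all_tool_names, load_tools
from langchain.llms import OpenAI

llm = OpenAI(temperature=0)  # assumes OPENAI_API_KEY is set in the environment

# "serpapi" resolves via _EXTRA_OPTIONAL_TOOLS and accepts serpapi_api_key;
# "llm-math" resolves via _LLM_TOOLS and needs the llm argument.
tools = load_tools(["serpapi", "llm-math"], llm=llm, serpapi_api_key="placeholder-key")

print(get_all_tool_names())  # lists every name the loader understands
```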
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,328
Issue: Can not configure serpapi base url via env
### Issue you'd like to raise. Currently, the base URL of SerpAPI is hard-coded, while some other search services (e.g. Bing, via `BING_SEARCH_URL`) are configurable. In some companies the original endpoint cannot be accessed directly, so requests have to go through an nginx reverse proxy. We therefore need to make the base URL configurable via an environment variable. ### Suggestion: Make the SerpAPI base URL configurable via env
https://github.com/langchain-ai/langchain/issues/4328
https://github.com/langchain-ai/langchain/pull/4402
cb802edf75539872e18068edec8e21216f3e51d2
5111bec54071e42a7865766dc8bb8dc72c1d13b4
"2023-05-08T09:27:24Z"
python
"2023-05-15T21:25:25Z"
langchain/utilities/serpapi.py
"""Chain that calls SerpAPI. Heavily borrowed from https://github.com/ofirpress/self-ask """ import os import sys from typing import Any, Dict, Optional, Tuple import aiohttp from pydantic import BaseModel, Extra, Field, root_validator from langchain.utils import get_from_dict_or_env class HiddenPrints: """Context manager to hide prints.""" def __enter__(self) -> None: """Open file to pipe stdout to.""" self._original_stdout = sys.stdout sys.stdout = open(os.devnull, "w") def __exit__(self, *_: Any) -> None: """Close file that stdout was piped to.""" sys.stdout.close() sys.stdout = self._original_stdout class SerpAPIWrapper(BaseModel):
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,328
Issue: Can not configure serpapi base url via env
### Issue you'd like to raise. Currently, the base URL of SerpAPI is hard-coded, while some other search services (e.g. Bing, via `BING_SEARCH_URL`) are configurable. In some companies the original endpoint cannot be accessed directly, so requests have to go through an nginx reverse proxy. We therefore need to make the base URL configurable via an environment variable. ### Suggestion: Make the SerpAPI base URL configurable via env
https://github.com/langchain-ai/langchain/issues/4328
https://github.com/langchain-ai/langchain/pull/4402
cb802edf75539872e18068edec8e21216f3e51d2
5111bec54071e42a7865766dc8bb8dc72c1d13b4
"2023-05-08T09:27:24Z"
python
"2023-05-15T21:25:25Z"
langchain/utilities/serpapi.py
"""Wrapper around SerpAPI. To use, you should have the ``google-search-results`` python package installed, and the environment variable ``SERPAPI_API_KEY`` set with your API key, or pass `serpapi_api_key` as a named parameter to the constructor. Example: .. code-block:: python from langchain import SerpAPIWrapper serpapi = SerpAPIWrapper() """ search_engine: Any params: dict = Field( default={ "engine": "google", "google_domain": "google.com", "gl": "us", "hl": "en", } ) serpapi_api_key: Optional[str] = None aiosession: Optional[aiohttp.ClientSession] = None class Config: """Configuration for this pydantic object.""" extra = Extra.forbid arbitrary_types_allowed = True @root_validator() def validate_environment(cls, values: Dict) -> Dict:
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,328
Issue: Can not configure serpapi base url via env
### Issue you'd like to raise. Currently, the base URL of SerpAPI is hard-coded, while some other search services (e.g. Bing, via `BING_SEARCH_URL`) are configurable. In some companies the original endpoint cannot be accessed directly, so requests have to go through an nginx reverse proxy. We therefore need to make the base URL configurable via an environment variable. ### Suggestion: Make the SerpAPI base URL configurable via env
https://github.com/langchain-ai/langchain/issues/4328
https://github.com/langchain-ai/langchain/pull/4402
cb802edf75539872e18068edec8e21216f3e51d2
5111bec54071e42a7865766dc8bb8dc72c1d13b4
"2023-05-08T09:27:24Z"
python
"2023-05-15T21:25:25Z"
langchain/utilities/serpapi.py
"""Validate that api key and python package exists in environment.""" serpapi_api_key = get_from_dict_or_env( values, "serpapi_api_key", "SERPAPI_API_KEY" ) values["serpapi_api_key"] = serpapi_api_key try: from serpapi import GoogleSearch values["search_engine"] = GoogleSearch except ImportError: raise ValueError( "Could not import serpapi python package. " "Please install it with `pip install google-search-results`." ) return values async def arun(self, query: str, **kwargs: Any) -> str: """Run query through SerpAPI and parse result async.""" return self._process_response(await self.aresults(query)) def run(self, query: str, **kwargs: Any) -> str: """Run query through SerpAPI and parse result.""" return self._process_response(self.results(query)) def results(self, query: str) -> dict: """Run query through SerpAPI and return the raw result.""" params = self.get_params(query) with HiddenPrints(): search = self.search_engine(params) res = search.get_dict() return res async def aresults(self, query: str) -> dict:
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,328
Issue: Can not configure serpapi base url via env
### Issue you'd like to raise. Currently, the base URL of SerpAPI is hard-coded, while some other search services (e.g. Bing, via `BING_SEARCH_URL`) are configurable. In some companies the original endpoint cannot be accessed directly, so requests have to go through an nginx reverse proxy. We therefore need to make the base URL configurable via an environment variable. ### Suggestion: Make the SerpAPI base URL configurable via env
https://github.com/langchain-ai/langchain/issues/4328
https://github.com/langchain-ai/langchain/pull/4402
cb802edf75539872e18068edec8e21216f3e51d2
5111bec54071e42a7865766dc8bb8dc72c1d13b4
"2023-05-08T09:27:24Z"
python
"2023-05-15T21:25:25Z"
langchain/utilities/serpapi.py
"""Use aiohttp to run query through SerpAPI and return the results async.""" def construct_url_and_params() -> Tuple[str, Dict[str, str]]: params = self.get_params(query) params["source"] = "python" if self.serpapi_api_key: params["serp_api_key"] = self.serpapi_api_key params["output"] = "json" url = "https://serpapi.com/search" return url, params url, params = construct_url_and_params() if not self.aiosession: async with aiohttp.ClientSession() as session: async with session.get(url, params=params) as response: res = await response.json() else: async with self.aiosession.get(url, params=params) as response: res = await response.json() return res def get_params(self, query: str) -> Dict[str, str]: """Get parameters for SerpAPI.""" _params = { "api_key": self.serpapi_api_key, "q": query, } params = {**self.params, **_params} return params @staticmethod def _process_response(res: dict) -> str:
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,328
Issue: Can not configure serpapi base url via env
### Issue you'd like to raise. Currently, the base URL of SerpAPI is hard-coded, while some other search services (e.g. Bing, via `BING_SEARCH_URL`) are configurable. In some companies the original endpoint cannot be accessed directly, so requests have to go through an nginx reverse proxy. We therefore need to make the base URL configurable via an environment variable. ### Suggestion: Make the SerpAPI base URL configurable via env
https://github.com/langchain-ai/langchain/issues/4328
https://github.com/langchain-ai/langchain/pull/4402
cb802edf75539872e18068edec8e21216f3e51d2
5111bec54071e42a7865766dc8bb8dc72c1d13b4
"2023-05-08T09:27:24Z"
python
"2023-05-15T21:25:25Z"
langchain/utilities/serpapi.py
"""Process response from SerpAPI.""" if "error" in res.keys(): raise ValueError(f"Got error from SerpAPI: {res['error']}") if "answer_box" in res.keys() and "answer" in res["answer_box"].keys(): toret = res["answer_box"]["answer"] elif "answer_box" in res.keys() and "snippet" in res["answer_box"].keys(): toret = res["answer_box"]["snippet"] elif ( "answer_box" in res.keys() and "snippet_highlighted_words" in res["answer_box"].keys() ): toret = res["answer_box"]["snippet_highlighted_words"][0] elif ( "sports_results" in res.keys() and "game_spotlight" in res["sports_results"].keys() ): toret = res["sports_results"]["game_spotlight"] elif ( "knowledge_graph" in res.keys() and "description" in res["knowledge_graph"].keys() ): toret = res["knowledge_graph"]["description"] elif "snippet" in res["organic_results"][0].keys(): toret = res["organic_results"][0]["snippet"] else: toret = "No good search result found" return toret
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,720
Add summarization task type for HuggingFace APIs
### Feature request Add a summarization task type for the HuggingFace APIs. This task type is described in the [HuggingFace inference API documentation](https://huggingface.co/docs/api-inference/detailed_parameters#summarization-task) ### Motivation My project uses LangChain to connect multiple LLMs, including various HuggingFace models that support the summarization task. Integrating this task type would be highly convenient and beneficial. ### Your contribution I will submit a PR.
https://github.com/langchain-ai/langchain/issues/4720
https://github.com/langchain-ai/langchain/pull/4721
580861e7f206395d19cdf4896a96b1e88c6a9b5f
3f0357f94acb1e669c8f21f937e3438c6c6675a6
"2023-05-15T11:23:49Z"
python
"2023-05-15T23:26:17Z"
langchain/llms/huggingface_endpoint.py
"""Wrapper around HuggingFace APIs.""" from typing import Any, Dict, List, Mapping, Optional import requests from pydantic import Extra, root_validator from langchain.callbacks.manager import CallbackManagerForLLMRun from langchain.llms.base import LLM from langchain.llms.utils import enforce_stop_tokens from langchain.utils import get_from_dict_or_env VALID_TASKS = ("text2text-generation", "text-generation") class HuggingFaceEndpoint(LLM):
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,720
Add summarization task type for HuggingFace APIs
### Feature request Add a summarization task type for the HuggingFace APIs. This task type is described in the [HuggingFace inference API documentation](https://huggingface.co/docs/api-inference/detailed_parameters#summarization-task) ### Motivation My project uses LangChain to connect multiple LLMs, including various HuggingFace models that support the summarization task. Integrating this task type would be highly convenient and beneficial. ### Your contribution I will submit a PR.
https://github.com/langchain-ai/langchain/issues/4720
https://github.com/langchain-ai/langchain/pull/4721
580861e7f206395d19cdf4896a96b1e88c6a9b5f
3f0357f94acb1e669c8f21f937e3438c6c6675a6
"2023-05-15T11:23:49Z"
python
"2023-05-15T23:26:17Z"
langchain/llms/huggingface_endpoint.py
"""Wrapper around HuggingFaceHub Inference Endpoints. To use, you should have the ``huggingface_hub`` python package installed, and the environment variable ``HUGGINGFACEHUB_API_TOKEN`` set with your API token, or pass it as a named parameter to the constructor. Only supports `text-generation` and `text2text-generation` for now. Example: .. code-block:: python from langchain.llms import HuggingFaceEndpoint endpoint_url = ( "https://abcdefghijklmnop.us-east-1.aws.endpoints.huggingface.cloud" ) hf = HuggingFaceEndpoint( endpoint_url=endpoint_url, huggingfacehub_api_token="my-api-key" ) """ endpoint_url: str = "" """Endpoint URL to use.""" task: Optional[str] = None """Task to call the model with. Should be a task that returns `generated_text`.""" model_kwargs: Optional[dict] = None """Key word arguments to pass to the model.""" huggingfacehub_api_token: Optional[str] = None class Config:
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,720
Add summarization task type for HuggingFace APIs
### Feature request Add a summarization task type for the HuggingFace APIs. This task type is described in the [HuggingFace inference API documentation](https://huggingface.co/docs/api-inference/detailed_parameters#summarization-task) ### Motivation My project uses LangChain to connect multiple LLMs, including various HuggingFace models that support the summarization task. Integrating this task type would be highly convenient and beneficial. ### Your contribution I will submit a PR.
https://github.com/langchain-ai/langchain/issues/4720
https://github.com/langchain-ai/langchain/pull/4721
580861e7f206395d19cdf4896a96b1e88c6a9b5f
3f0357f94acb1e669c8f21f937e3438c6c6675a6
"2023-05-15T11:23:49Z"
python
"2023-05-15T23:26:17Z"
langchain/llms/huggingface_endpoint.py
"""Configuration for this pydantic object.""" extra = Extra.forbid @root_validator() def validate_environment(cls, values: Dict) -> Dict: """Validate that api key and python package exists in environment.""" huggingfacehub_api_token = get_from_dict_or_env( values, "huggingfacehub_api_token", "HUGGINGFACEHUB_API_TOKEN" ) try: from huggingface_hub.hf_api import HfApi try: HfApi( endpoint="https://huggingface.co", token=huggingfacehub_api_token, ).whoami() except Exception as e: raise ValueError( "Could not authenticate with huggingface_hub. " "Please check your API token." ) from e except ImportError: raise ValueError( "Could not import huggingface_hub python package. " "Please install it with `pip install huggingface_hub`." ) values["huggingfacehub_api_token"] = huggingfacehub_api_token return values @property def _identifying_params(self) -> Mapping[str, Any]:
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,720
Add summarization task type for HuggingFace APIs
### Feature request Add a summarization task type for the HuggingFace APIs. This task type is described in the [HuggingFace inference API documentation](https://huggingface.co/docs/api-inference/detailed_parameters#summarization-task) ### Motivation My project uses LangChain to connect multiple LLMs, including various HuggingFace models that support the summarization task. Integrating this task type would be highly convenient and beneficial. ### Your contribution I will submit a PR.
https://github.com/langchain-ai/langchain/issues/4720
https://github.com/langchain-ai/langchain/pull/4721
580861e7f206395d19cdf4896a96b1e88c6a9b5f
3f0357f94acb1e669c8f21f937e3438c6c6675a6
"2023-05-15T11:23:49Z"
python
"2023-05-15T23:26:17Z"
langchain/llms/huggingface_endpoint.py
"""Get the identifying parameters.""" _model_kwargs = self.model_kwargs or {} return { **{"endpoint_url": self.endpoint_url, "task": self.task}, **{"model_kwargs": _model_kwargs}, } @property def _llm_type(self) -> str:
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,720
Add summarization task type for HuggingFace APIs
### Feature request Add a summarization task type for the HuggingFace APIs. This task type is described in the [HuggingFace inference API documentation](https://huggingface.co/docs/api-inference/detailed_parameters#summarization-task) ### Motivation My project uses LangChain to connect multiple LLMs, including various HuggingFace models that support the summarization task. Integrating this task type would be highly convenient and beneficial. ### Your contribution I will submit a PR.
https://github.com/langchain-ai/langchain/issues/4720
https://github.com/langchain-ai/langchain/pull/4721
580861e7f206395d19cdf4896a96b1e88c6a9b5f
3f0357f94acb1e669c8f21f937e3438c6c6675a6
"2023-05-15T11:23:49Z"
python
"2023-05-15T23:26:17Z"
langchain/llms/huggingface_endpoint.py
"""Return type of llm.""" return "huggingface_endpoint" def _call( self, prompt: str, stop: Optional[List[str]] = None, run_manager: Optional[CallbackManagerForLLMRun] = None, ) -> str: """Call out to HuggingFace Hub's inference endpoint. Args: prompt: The prompt to pass into the model. stop: Optional list of stop words to use when generating. Returns: The string generated by the model. Example: .. code-block:: python response = hf("Tell me a joke.") """ _model_kwargs = self.model_kwargs or {} parameter_payload = {"inputs": prompt, "parameters": _model_kwargs} headers = { "Authorization": f"Bearer {self.huggingfacehub_api_token}",
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,720
Add summarization task type for HuggingFace APIs
### Feature request Add a summarization task type for the HuggingFace APIs. This task type is described in the [HuggingFace inference API documentation](https://huggingface.co/docs/api-inference/detailed_parameters#summarization-task) ### Motivation My project uses LangChain to connect multiple LLMs, including various HuggingFace models that support the summarization task. Integrating this task type would be highly convenient and beneficial. ### Your contribution I will submit a PR.
https://github.com/langchain-ai/langchain/issues/4720
https://github.com/langchain-ai/langchain/pull/4721
580861e7f206395d19cdf4896a96b1e88c6a9b5f
3f0357f94acb1e669c8f21f937e3438c6c6675a6
"2023-05-15T11:23:49Z"
python
"2023-05-15T23:26:17Z"
langchain/llms/huggingface_endpoint.py
"Content-Type": "application/json", } try: response = requests.post( self.endpoint_url, headers=headers, json=parameter_payload ) except requests.exceptions.RequestException as e: raise ValueError(f"Error raised by inference endpoint: {e}") generated_text = response.json() if "error" in generated_text: raise ValueError( f"Error raised by inference API: {generated_text['error']}" ) if self.task == "text-generation": text = generated_text[0]["generated_text"][len(prompt) :] elif self.task == "text2text-generation": text = generated_text[0]["generated_text"] else: raise ValueError( f"Got invalid task {self.task}, " f"currently only {VALID_TASKS} are supported" ) if stop is not None: text = enforce_stop_tokens(text, stop) return text
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,720
Add summarization task type for HuggingFace APIs
### Feature request Add a summarization task type for the HuggingFace APIs. This task type is described in the [HuggingFace inference API documentation](https://huggingface.co/docs/api-inference/detailed_parameters#summarization-task) ### Motivation My project uses LangChain to connect multiple LLMs, including various HuggingFace models that support the summarization task. Integrating this task type would be highly convenient and beneficial. ### Your contribution I will submit a PR.
https://github.com/langchain-ai/langchain/issues/4720
https://github.com/langchain-ai/langchain/pull/4721
580861e7f206395d19cdf4896a96b1e88c6a9b5f
3f0357f94acb1e669c8f21f937e3438c6c6675a6
"2023-05-15T11:23:49Z"
python
"2023-05-15T23:26:17Z"
langchain/llms/huggingface_hub.py
"""Wrapper around HuggingFace APIs.""" from typing import Any, Dict, List, Mapping, Optional from pydantic import Extra, root_validator from langchain.callbacks.manager import CallbackManagerForLLMRun from langchain.llms.base import LLM from langchain.llms.utils import enforce_stop_tokens from langchain.utils import get_from_dict_or_env DEFAULT_REPO_ID = "gpt2" VALID_TASKS = ("text2text-generation", "text-generation") class HuggingFaceHub(LLM): """Wrapper around HuggingFaceHub models. To use, you should have the ``huggingface_hub`` python package installed, and the environment variable ``HUGGINGFACEHUB_API_TOKEN`` set with your API token, or pass it as a named parameter to the constructor. Only supports `text-generation` and `text2text-generation` for now. Example: .. code-block:: python from langchain.llms import HuggingFaceHub hf = HuggingFaceHub(repo_id="gpt2", huggingfacehub_api_token="my-api-key") """ client: Any repo_id: str = DEFAULT_REPO_ID """Model name to use.""" task: Optional[str] = None """Task to call the model with. Should be a task that returns `generated_text`.""" model_kwargs: Optional[dict] = None """Key word arguments to pass to the model.""" huggingfacehub_api_token: Optional[str] = None class Config: