Dataset schema (column: dtype, value range):

- status: stringclasses, 1 value
- repo_name: stringclasses, 31 values
- repo_url: stringclasses, 31 values
- issue_id: int64, 1 to 104k
- title: stringlengths, 4 to 233
- body: stringlengths, 0 to 186k
- issue_url: stringlengths, 38 to 56
- pull_url: stringlengths, 37 to 54
- before_fix_sha: stringlengths, 40 to 40
- after_fix_sha: stringlengths, 40 to 40
- report_datetime: unknown
- language: stringclasses, 5 values
- commit_datetime: unknown
- updated_file: stringlengths, 7 to 188
- chunk_content: stringlengths, 1 to 1.03M
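For illustration, one row under this schema can be modeled as a plain dataclass. This is a sketch, not the dataset's own loading code: the class name `BugFixRecord` is invented here, and the field values are taken from the first record shown below, with the long text fields elided.

```python
from dataclasses import dataclass


@dataclass
class BugFixRecord:
    """One dataset row, mirroring the schema listed above (illustrative)."""
    status: str
    repo_name: str
    repo_url: str
    issue_id: int
    title: str
    body: str
    issue_url: str
    pull_url: str
    before_fix_sha: str
    after_fix_sha: str
    report_datetime: str
    language: str
    commit_datetime: str
    updated_file: str
    chunk_content: str


# The first record below, with the long markdown/code fields elided.
record = BugFixRecord(
    status="closed",
    repo_name="langchain-ai/langchain",
    repo_url="https://github.com/langchain-ai/langchain",
    issue_id=5456,
    title="Tools: Inconsistent callbacks/run_manager parameter",
    body="...",
    issue_url="https://github.com/langchain-ai/langchain/issues/5456",
    pull_url="https://github.com/langchain-ai/langchain/pull/6483",
    before_fix_sha="b4fe7f3a0995cc6a0111a7e71347eddf2d61f132",
    after_fix_sha="980c8651743b653f994ad6b97a27b0fa31ee92b4",
    report_datetime="2023-05-30T17:09:02Z",
    language="python",
    commit_datetime="2023-06-23T08:48:27Z",
    updated_file="tests/unit_tests/tools/test_base.py",
    chunk_content="...",
)
```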
status: closed
repo_name: langchain-ai/langchain
repo_url: https://github.com/langchain-ai/langchain
issue_id: 5,456
title: Tools: Inconsistent callbacks/run_manager parameter
body:

### System Info

MacOS Ventura 13.3.1 (a)
python = "^3.9"
langchain = "0.0.185"

### Who can help?

@agola11 @vowelparrot

### Related Components

- Agents / Agent Executors
- Tools / Toolkits
- Callbacks/Tracing

### Reproduction

I want to use the CallbackManager to save some info within a tool. As per [`create_schema_from_function`](https://github.com/hwchase17/langchain/blob/64b4165c8d9b8374295d4629ef57d4d58e9af7c8/langchain/tools/base.py#L99), which is used to create the tool schema, I define the function as:

```python
def get_list_of_products(
    self, profile_description: str, run_manager: CallbackManagerForToolRun
):
```

Nonetheless, once the tool is run, the [expected parameter](https://github.com/hwchase17/langchain/blob/64b4165c8d9b8374295d4629ef57d4d58e9af7c8/langchain/tools/base.py#L493) in the function's signature is `callbacks`:

```python
new_argument_supported = signature(self.func).parameters.get("callbacks")
```

So the tool can't run, and fails with:

```bash
TypeError: get_list_of_products() missing 1 required positional argument: 'run_manager'
```

This behavior applies to both StructuredTool and Tool.

### Expected behavior

Either the expected function parameter is set to `run_manager`, replicating the behavior of the [`run` method](https://github.com/hwchase17/langchain/blob/64b4165c8d9b8374295d4629ef57d4d58e9af7c8/langchain/tools/base.py#L256) of `BaseTool`, or a different function is used instead of [`create_schema_from_function`](https://github.com/hwchase17/langchain/blob/64b4165c8d9b8374295d4629ef57d4d58e9af7c8/langchain/tools/base.py#L99) to create the tool's schema, expecting the `callbacks` parameter.
issue_url: https://github.com/langchain-ai/langchain/issues/5456
pull_url: https://github.com/langchain-ai/langchain/pull/6483
before_fix_sha: b4fe7f3a0995cc6a0111a7e71347eddf2d61f132
after_fix_sha: 980c8651743b653f994ad6b97a27b0fa31ee92b4
report_datetime: "2023-05-30T17:09:02Z"
language: python
commit_datetime: "2023-06-23T08:48:27Z"
updated_file: tests/unit_tests/tools/test_base.py
chunk_content:

```python
    name = "single_arg_tool"
    description = "A single arged tool with kwargs"

    def _run(
        self,
        *args: Any,
        run_manager: Optional[CallbackManagerForToolRun] = None,
        **kwargs: Any,
    ) -> str:
        return "foo"

    async def _arun(
        self,
        *args: Any,
        run_manager: Optional[AsyncCallbackManagerForToolRun] = None,
        **kwargs: Any,
    ) -> str:
        raise NotImplementedError


tool2 = _VarArgToolWithKwargs()
assert tool2.is_single_input


def test_structured_args_decorator_no_infer_schema() -> None:
```
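The mismatch described in the issue body can be reproduced with the standard library alone: the runner checks for a parameter literally named `callbacks`, so a function declaring `run_manager` is never detected. A minimal sketch, where `get_list_of_products` is the reporter's example function stripped of `self` and the LangChain type annotation:

```python
from inspect import signature


def get_list_of_products(profile_description: str, run_manager):
    """The reporter's tool function, simplified: run_manager is required."""
    return ["product-a", "product-b"]


# The check quoted in the issue: the runner looks for a parameter literally
# named "callbacks" before deciding to pass a callback manager in.
new_argument_supported = signature(get_list_of_products).parameters.get("callbacks")
assert new_argument_supported is None  # "run_manager" is not "callbacks"

# Because the check misses, the runner invokes the function without the
# extra argument, producing the TypeError quoted in the report.
try:
    get_list_of_products("a customer profile")
except TypeError as err:
    print(err)
```

The fix merged in the linked PR resolves this inconsistency; the sketch only shows why the original check could not see a `run_manager` parameter.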
chunk_content:

```python
    """Test functionality with structured arguments parsed as a decorator."""

    @tool(infer_schema=False)
    def structured_tool_input(
        arg1: int, arg2: Union[float, datetime], opt_arg: Optional[dict] = None
    ) -> str:
        """Return the arguments directly."""
        return f"{arg1}, {arg2}, {opt_arg}"

    assert isinstance(structured_tool_input, BaseTool)
    assert structured_tool_input.name == "structured_tool_input"
    args = {"arg1": 1, "arg2": 0.001, "opt_arg": {"foo": "bar"}}
    with pytest.raises(ToolException):
        assert structured_tool_input.run(args)


def test_structured_single_str_decorator_no_infer_schema() -> None:
    """Test functionality with structured arguments parsed as a decorator."""

    @tool(infer_schema=False)
    def unstructured_tool_input(tool_input: str) -> str:
        """Return the arguments directly."""
        assert isinstance(tool_input, str)
        return f"{tool_input}"

    assert isinstance(unstructured_tool_input, BaseTool)
    assert unstructured_tool_input.args_schema is None
    assert unstructured_tool_input.run("foo") == "foo"


def test_structured_tool_types_parsed() -> None:
    """Test the non-primitive types are correctly passed to structured tools."""

    class SomeEnum(Enum):
        A = "a"
        B = "b"

    class SomeBaseModel(BaseModel):
```
chunk_content:

```python
        foo: str

    @tool
    def structured_tool(
        some_enum: SomeEnum,
        some_base_model: SomeBaseModel,
    ) -> dict:
        """Return the arguments directly."""
        return {
            "some_enum": some_enum,
            "some_base_model": some_base_model,
        }

    assert isinstance(structured_tool, StructuredTool)
    args = {
        "some_enum": SomeEnum.A.value,
        "some_base_model": SomeBaseModel(foo="bar").dict(),
    }
    result = structured_tool.run(json.loads(json.dumps(args)))
    expected = {
        "some_enum": SomeEnum.A,
        "some_base_model": SomeBaseModel(foo="bar"),
    }
    assert result == expected


def test_base_tool_inheritance_base_schema() -> None:
    """Test schema is correctly inferred when inheriting from BaseTool."""

    class _MockSimpleTool(BaseTool):
        name = "simple_tool"
        description = "A Simple Tool"

        def _run(self, tool_input: str) -> str:
```
chunk_content:

```python
            return f"{tool_input}"

        async def _arun(self, tool_input: str) -> str:
            raise NotImplementedError

    simple_tool = _MockSimpleTool()
    assert simple_tool.args_schema is None
    expected_args = {"tool_input": {"title": "Tool Input", "type": "string"}}
    assert simple_tool.args == expected_args


def test_tool_lambda_args_schema() -> None:
    """Test args schema inference when the tool argument is a lambda function."""
    tool = Tool(
        name="tool",
        description="A tool",
        func=lambda tool_input: tool_input,
    )
    assert tool.args_schema is None
    expected_args = {"tool_input": {"type": "string"}}
    assert tool.args == expected_args


def test_structured_tool_from_function_docstring() -> None:
    """Test that structured tools can be created from functions."""

    def foo(bar: int, baz: str) -> str:
        """Docstring

        Args:
            bar: int
            baz: str
        """
        raise NotImplementedError()

    structured_tool = StructuredTool.from_function(foo)
    assert structured_tool.name == "foo"
```
chunk_content:

```python
    assert structured_tool.args == {
        "bar": {"title": "Bar", "type": "integer"},
        "baz": {"title": "Baz", "type": "string"},
    }

    assert structured_tool.args_schema.schema() == {
        "properties": {
            "bar": {"title": "Bar", "type": "integer"},
            "baz": {"title": "Baz", "type": "string"},
        },
        "title": "fooSchemaSchema",
        "type": "object",
        "required": ["bar", "baz"],
    }

    prefix = "foo(bar: int, baz: str) -> str - "
    assert foo.__doc__ is not None
    assert structured_tool.description == prefix + foo.__doc__.strip()


def test_structured_tool_lambda_multi_args_schema() -> None:
    """Test args schema inference when the tool argument is a lambda function."""
    tool = StructuredTool.from_function(
        name="tool",
        description="A tool",
        func=lambda tool_input, other_arg: f"{tool_input}{other_arg}",
    )
    assert tool.args_schema is not None
    expected_args = {
        "tool_input": {"title": "Tool Input"},
        "other_arg": {"title": "Other Arg"},
    }
    assert tool.args == expected_args


def test_tool_partial_function_args_schema() -> None:
```
chunk_content:

```python
    """Test args schema inference when the tool argument is a partial function."""

    def func(tool_input: str, other_arg: str) -> str:
        assert isinstance(tool_input, str)
        assert isinstance(other_arg, str)
        return tool_input + other_arg

    tool = Tool(
        name="tool",
        description="A tool",
        func=partial(func, other_arg="foo"),
    )
    assert tool.run("bar") == "barfoo"


def test_empty_args_decorator() -> None:
    """Test inferred schema of decorated fn with no args."""

    @tool
    def empty_tool_input() -> str:
        """Return a constant."""
        return "the empty result"

    assert isinstance(empty_tool_input, BaseTool)
    assert empty_tool_input.name == "empty_tool_input"
    assert empty_tool_input.args == {}
    assert empty_tool_input.run({}) == "the empty result"


def test_named_tool_decorator() -> None:
    """Test functionality when arguments are provided as input to decorator."""

    @tool("search")
    def search_api(query: str) -> str:
```
chunk_content:

```python
        """Search the API for the query."""
        assert isinstance(query, str)
        return f"API result - {query}"

    assert isinstance(search_api, BaseTool)
    assert search_api.name == "search"
    assert not search_api.return_direct
    assert search_api.run({"query": "foo"}) == "API result - foo"


def test_named_tool_decorator_return_direct() -> None:
    """Test functionality when arguments and return direct are provided as input."""

    @tool("search", return_direct=True)
    def search_api(query: str, *args: Any) -> str:
        """Search the API for the query."""
        return "API result"

    assert isinstance(search_api, BaseTool)
    assert search_api.name == "search"
    assert search_api.return_direct
    assert search_api.run({"query": "foo"}) == "API result"


def test_unnamed_tool_decorator_return_direct() -> None:
    """Test functionality when only return direct is provided."""

    @tool(return_direct=True)
    def search_api(query: str) -> str:
        """Search the API for the query."""
        assert isinstance(query, str)
        return "API result"

    assert isinstance(search_api, BaseTool)
    assert search_api.name == "search_api"
    assert search_api.return_direct
    assert search_api.run({"query": "foo"}) == "API result"


def test_tool_with_kwargs() -> None:
```
chunk_content:

```python
    """Test functionality when only return direct is provided."""

    @tool(return_direct=True)
    def search_api(
        arg_0: str,
        arg_1: float = 4.3,
        ping: str = "hi",
    ) -> str:
        """Search the API for the query."""
        return f"arg_0={arg_0}, arg_1={arg_1}, ping={ping}"

    assert isinstance(search_api, BaseTool)
    result = search_api.run(
        tool_input={
            "arg_0": "foo",
            "arg_1": 3.2,
            "ping": "pong",
        }
    )
    assert result == "arg_0=foo, arg_1=3.2, ping=pong"

    result = search_api.run(
        tool_input={
            "arg_0": "foo",
        }
    )
    assert result == "arg_0=foo, arg_1=4.3, ping=hi"

    result = search_api.run("foobar")
    assert result == "arg_0=foobar, arg_1=4.3, ping=hi"


def test_missing_docstring() -> None:
```
chunk_content:

```python
    """Test error is raised when docstring is missing."""
    with pytest.raises(AssertionError, match="Function must have a docstring"):

        @tool
        def search_api(query: str) -> str:
            return "API result"


def test_create_tool_positional_args() -> None:
    """Test that positional arguments are allowed."""
    test_tool = Tool("test_name", lambda x: x, "test_description")
    assert test_tool("foo") == "foo"
    assert test_tool.name == "test_name"
    assert test_tool.description == "test_description"
    assert test_tool.is_single_input


def test_create_tool_keyword_args() -> None:
    """Test that keyword arguments are allowed."""
    test_tool = Tool(name="test_name", func=lambda x: x, description="test_description")
    assert test_tool.is_single_input
    assert test_tool("foo") == "foo"
    assert test_tool.name == "test_name"
    assert test_tool.description == "test_description"


@pytest.mark.asyncio
async def test_create_async_tool() -> None:
    """Test that async tools are allowed."""

    async def _test_func(x: str) -> str:
```
chunk_content:

```python
        return x

    test_tool = Tool(
        name="test_name",
        func=lambda x: x,
        description="test_description",
        coroutine=_test_func,
    )
    assert test_tool.is_single_input
    assert test_tool("foo") == "foo"
    assert test_tool.name == "test_name"
    assert test_tool.description == "test_description"
    assert test_tool.coroutine is not None
    assert await test_tool.arun("foo") == "foo"


class _FakeExceptionTool(BaseTool):
```
chunk_content:

```python
    name = "exception"
    description = "an exception-throwing tool"
    exception: Exception = ToolException()

    def _run(self) -> str:
        raise self.exception

    async def _arun(self) -> str:
        raise self.exception


def test_exception_handling_bool() -> None:
    _tool = _FakeExceptionTool(handle_tool_error=True)
    expected = "Tool execution error"
    actual = _tool.run({})
    assert expected == actual


def test_exception_handling_str() -> None:
    expected = "foo bar"
    _tool = _FakeExceptionTool(handle_tool_error=expected)
    actual = _tool.run({})
    assert expected == actual


def test_exception_handling_callable() -> None:
    expected = "foo bar"
    handling = lambda _: expected
    _tool = _FakeExceptionTool(handle_tool_error=handling)
    actual = _tool.run({})
    assert expected == actual


def test_exception_handling_non_tool_exception() -> None:
```
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
5,456
Tools: Inconsistent callbacks/run_manager parameter
### System Info

MacOS Ventura 13.3.1 (a)
python = "^3.9"
langchain = "0.0.185"

### Who can help?

@agola11 @vowelparrot

### Related Components

- Agents / Agent Executors
- Tools / Toolkits
- Callbacks/Tracing

### Reproduction

I want to use the CallbackManager to save some info within a tool. So, as per [`create_schema_from_function`](https://github.com/hwchase17/langchain/blob/64b4165c8d9b8374295d4629ef57d4d58e9af7c8/langchain/tools/base.py#L99), which is used to create the tool schema, I define the function as:

```python
def get_list_of_products(
    self, profile_description: str, run_manager: CallbackManagerForToolRun
):
```

Nonetheless, once the tool is run, the [expected parameter](https://github.com/hwchase17/langchain/blob/64b4165c8d9b8374295d4629ef57d4d58e9af7c8/langchain/tools/base.py#L493) in the function's signature is `callbacks`:

```python
new_argument_supported = signature(self.func).parameters.get("callbacks")
```

So the tool can't run, and the error is:

```bash
TypeError: get_list_of_products() missing 1 required positional argument: 'run_manager'
```

This behavior applies to both StructuredTool and Tool.

### Expected behavior

Either the expected function parameter is set to `run_manager`, replicating the behavior of the [`run` function](https://github.com/hwchase17/langchain/blob/64b4165c8d9b8374295d4629ef57d4d58e9af7c8/langchain/tools/base.py#L256) from `BaseTool`, or a different function is used instead of [`create_schema_from_function`](https://github.com/hwchase17/langchain/blob/64b4165c8d9b8374295d4629ef57d4d58e9af7c8/langchain/tools/base.py#L99) to create the tool's schema, one that expects the `callbacks` parameter.
https://github.com/langchain-ai/langchain/issues/5456
https://github.com/langchain-ai/langchain/pull/6483
b4fe7f3a0995cc6a0111a7e71347eddf2d61f132
980c8651743b653f994ad6b97a27b0fa31ee92b4
"2023-05-30T17:09:02Z"
python
"2023-06-23T08:48:27Z"
tests/unit_tests/tools/test_base.py
    _tool = _FakeExceptionTool(exception=ValueError())
    with pytest.raises(ValueError):
        _tool.run({})


@pytest.mark.asyncio
async def test_async_exception_handling_bool() -> None:
    _tool = _FakeExceptionTool(handle_tool_error=True)
    expected = "Tool execution error"
    actual = await _tool.arun({})
    assert expected == actual


@pytest.mark.asyncio
async def test_async_exception_handling_str() -> None:
    expected = "foo bar"
    _tool = _FakeExceptionTool(handle_tool_error=expected)
    actual = await _tool.arun({})
    assert expected == actual


@pytest.mark.asyncio
async def test_async_exception_handling_callable() -> None:
    expected = "foo bar"
    handling = lambda _: expected
    _tool = _FakeExceptionTool(handle_tool_error=handling)
    actual = await _tool.arun({})
    assert expected == actual


@pytest.mark.asyncio
async def test_async_exception_handling_non_tool_exception() -> None:
    _tool = _FakeExceptionTool(exception=ValueError())
    with pytest.raises(ValueError):
        await _tool.arun({})


def test_structured_tool_from_function() -> None:
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
5,456
Tools: Inconsistent callbacks/run_manager parameter
### System Info

MacOS Ventura 13.3.1 (a)
python = "^3.9"
langchain = "0.0.185"

### Who can help?

@agola11 @vowelparrot

### Related Components

- Agents / Agent Executors
- Tools / Toolkits
- Callbacks/Tracing

### Reproduction

I want to use the CallbackManager to save some info within a tool. So, as per [`create_schema_from_function`](https://github.com/hwchase17/langchain/blob/64b4165c8d9b8374295d4629ef57d4d58e9af7c8/langchain/tools/base.py#L99), which is used to create the tool schema, I define the function as:

```python
def get_list_of_products(
    self, profile_description: str, run_manager: CallbackManagerForToolRun
):
```

Nonetheless, once the tool is run, the [expected parameter](https://github.com/hwchase17/langchain/blob/64b4165c8d9b8374295d4629ef57d4d58e9af7c8/langchain/tools/base.py#L493) in the function's signature is `callbacks`:

```python
new_argument_supported = signature(self.func).parameters.get("callbacks")
```

So the tool can't run, and the error is:

```bash
TypeError: get_list_of_products() missing 1 required positional argument: 'run_manager'
```

This behavior applies to both StructuredTool and Tool.

### Expected behavior

Either the expected function parameter is set to `run_manager`, replicating the behavior of the [`run` function](https://github.com/hwchase17/langchain/blob/64b4165c8d9b8374295d4629ef57d4d58e9af7c8/langchain/tools/base.py#L256) from `BaseTool`, or a different function is used instead of [`create_schema_from_function`](https://github.com/hwchase17/langchain/blob/64b4165c8d9b8374295d4629ef57d4d58e9af7c8/langchain/tools/base.py#L99) to create the tool's schema, one that expects the `callbacks` parameter.
https://github.com/langchain-ai/langchain/issues/5456
https://github.com/langchain-ai/langchain/pull/6483
b4fe7f3a0995cc6a0111a7e71347eddf2d61f132
980c8651743b653f994ad6b97a27b0fa31ee92b4
"2023-05-30T17:09:02Z"
python
"2023-06-23T08:48:27Z"
tests/unit_tests/tools/test_base.py
    """Test that structured tools can be created from functions."""

    def foo(bar: int, baz: str) -> str:
        """Docstring

        Args:
            bar: int
            baz: str
        """
        raise NotImplementedError()

    structured_tool = StructuredTool.from_function(foo)
    assert structured_tool.name == "foo"
    assert structured_tool.args == {
        "bar": {"title": "Bar", "type": "integer"},
        "baz": {"title": "Baz", "type": "string"},
    }

    assert structured_tool.args_schema.schema() == {
        "title": "fooSchemaSchema",
        "type": "object",
        "properties": {
            "bar": {"title": "Bar", "type": "integer"},
            "baz": {"title": "Baz", "type": "string"},
        },
        "required": ["bar", "baz"],
    }

    prefix = "foo(bar: int, baz: str) -> str - "
    assert foo.__doc__ is not None
    assert structured_tool.description == prefix + foo.__doc__.strip()
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
6,610
ChatVertexAI Error: _ChatSessionBase.send_message() got an unexpected keyword argument 'context'
### System Info

langchain version: 0.0.209

### Who can help?

_No response_

### Information

- [X] The official example notebooks/scripts
- [ ] My own modified scripts

### Related Components

- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction

https://python.langchain.com/docs/modules/model_io/models/chat/integrations/google_vertex_ai_palm

### Expected behavior

I get an error saying "TypeError: _ChatSessionBase.send_message() got an unexpected keyword argument 'context'" when I run the `chat(messages)` command mentioned in https://python.langchain.com/docs/modules/model_io/models/chat/integrations/google_vertex_ai_palm. This is probably because ChatSession.send_message does not have a 'context' argument, while ChatVertexAI._generate automatically adds the context argument to params, since chat-bison is a non-code model.
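One defensive pattern (a hypothetical sketch, not necessarily the fix that landed in LangChain) is to forward only the keyword arguments the callee actually declares, so a `context` value intended for chat-session creation never reaches `send_message`. All names below are invented stand-ins for the Vertex AI SDK:

```python
from inspect import signature
from typing import Any


def send_message(message: str, *, temperature: float = 0.0) -> str:
    # Hypothetical stand-in for vertexai's ChatSession.send_message,
    # which accepts no `context` keyword -- the source of the TypeError.
    return f"reply to {message!r} (temperature={temperature})"


def call_with_supported_kwargs(func, *args: Any, **kwargs: Any):
    # Keep only keyword arguments the callee declares, so a `context`
    # meant for chat-session creation is not forwarded here.
    accepted = signature(func).parameters
    filtered = {k: v for k, v in kwargs.items() if k in accepted}
    return func(*args, **filtered)


reply = call_with_supported_kwargs(
    send_message, "hello", context="You are helpful.", temperature=0.2
)
print(reply)
```

The `context` keyword is silently dropped before the call, while supported keywords such as `temperature` still pass through.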
https://github.com/langchain-ai/langchain/issues/6610
https://github.com/langchain-ai/langchain/pull/6652
c2b25c17c5c8d35a7297f665f2327b9671855898
9e52134d30203a9125532621abcd5a102e3f2bfb
"2023-06-22T20:56:38Z"
python
"2023-06-23T20:38:21Z"
langchain/chat_models/vertexai.py
"""Wrapper around Google VertexAI chat-based models."""
from dataclasses import dataclass, field
from typing import Any, Dict, List, Optional

from pydantic import root_validator

from langchain.callbacks.manager import (
    AsyncCallbackManagerForLLMRun,
    CallbackManagerForLLMRun,
)
from langchain.chat_models.base import BaseChatModel
from langchain.llms.vertexai import _VertexAICommon, is_codey_model
from langchain.schema import (
    AIMessage,
    BaseMessage,
    ChatGeneration,
    ChatResult,
    HumanMessage,
    SystemMessage,
)
from langchain.utilities.vertexai import raise_vertex_import_error


@dataclass
class _MessagePair:
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
6,610
ChatVertexAI Error: _ChatSessionBase.send_message() got an unexpected keyword argument 'context'
### System Info

langchain version: 0.0.209

### Who can help?

_No response_

### Information

- [X] The official example notebooks/scripts
- [ ] My own modified scripts

### Related Components

- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction

https://python.langchain.com/docs/modules/model_io/models/chat/integrations/google_vertex_ai_palm

### Expected behavior

I get an error saying "TypeError: _ChatSessionBase.send_message() got an unexpected keyword argument 'context'" when I run the `chat(messages)` command mentioned in https://python.langchain.com/docs/modules/model_io/models/chat/integrations/google_vertex_ai_palm. This is probably because ChatSession.send_message does not have a 'context' argument, while ChatVertexAI._generate automatically adds the context argument to params, since chat-bison is a non-code model.
https://github.com/langchain-ai/langchain/issues/6610
https://github.com/langchain-ai/langchain/pull/6652
c2b25c17c5c8d35a7297f665f2327b9671855898
9e52134d30203a9125532621abcd5a102e3f2bfb
"2023-06-22T20:56:38Z"
python
"2023-06-23T20:38:21Z"
langchain/chat_models/vertexai.py
    """A pair of input (question) and output (answer) messages."""

    question: HumanMessage
    answer: AIMessage


@dataclass
class _ChatHistory:
    """A chat history, stored as a list of question-answer message pairs."""

    history: List[_MessagePair] = field(default_factory=list)
    system_message: Optional[SystemMessage] = None


def _parse_chat_history(history: List[BaseMessage]) -> _ChatHistory:
    """Parse a sequence of messages into history.

    A sequence should be either (SystemMessage, HumanMessage, AIMessage,
    HumanMessage, AIMessage, ...) or (HumanMessage, AIMessage, HumanMessage,
    AIMessage, ...). CodeChat does not support SystemMessage.

    Args:
        history: The list of messages to re-create the history of the chat.
    Returns:
        A parsed chat history.
    Raises:
        ValueError: If the number of messages is odd, or a human message is not
            followed by a message from the AI (e.g., Human, Human, AI or AI, AI, Human).
    """
    if not history:
        return _ChatHistory()
    first_message = history[0]
    system_message = first_message if isinstance(first_message, SystemMessage) else None
    chat_history = _ChatHistory(system_message=system_message)
    messages_left = history[1:] if system_message else history
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
6,610
ChatVertexAI Error: _ChatSessionBase.send_message() got an unexpected keyword argument 'context'
### System Info

langchain version: 0.0.209

### Who can help?

_No response_

### Information

- [X] The official example notebooks/scripts
- [ ] My own modified scripts

### Related Components

- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction

https://python.langchain.com/docs/modules/model_io/models/chat/integrations/google_vertex_ai_palm

### Expected behavior

I get an error saying "TypeError: _ChatSessionBase.send_message() got an unexpected keyword argument 'context'" when I run the `chat(messages)` command mentioned in https://python.langchain.com/docs/modules/model_io/models/chat/integrations/google_vertex_ai_palm. This is probably because ChatSession.send_message does not have a 'context' argument, while ChatVertexAI._generate automatically adds the context argument to params, since chat-bison is a non-code model.
https://github.com/langchain-ai/langchain/issues/6610
https://github.com/langchain-ai/langchain/pull/6652
c2b25c17c5c8d35a7297f665f2327b9671855898
9e52134d30203a9125532621abcd5a102e3f2bfb
"2023-06-22T20:56:38Z"
python
"2023-06-23T20:38:21Z"
langchain/chat_models/vertexai.py
    if len(messages_left) % 2 != 0:
        raise ValueError(
            f"Amount of messages in history should be even, got {len(messages_left)}!"
        )
    for question, answer in zip(messages_left[::2], messages_left[1::2]):
        if not isinstance(question, HumanMessage) or not isinstance(answer, AIMessage):
            raise ValueError(
                "A human message should follow a bot one, "
                f"got {question.type}, {answer.type}."
            )
        chat_history.history.append(_MessagePair(question=question, answer=answer))
    return chat_history


class ChatVertexAI(_VertexAICommon, BaseChatModel):
    """Wrapper around Vertex AI large language models."""

    model_name: str = "chat-bison"

    @root_validator()
    def validate_environment(cls, values: Dict) -> Dict:
        """Validate that the python package exists in environment."""
        cls._try_init_vertexai(values)
        try:
            if is_codey_model(values["model_name"]):
                from vertexai.preview.language_models import CodeChatModel

                values["client"] = CodeChatModel.from_pretrained(values["model_name"])
            else:
                from vertexai.preview.language_models import ChatModel

                values["client"] = ChatModel.from_pretrained(values["model_name"])
        except ImportError:
            raise_vertex_import_error()
        return values

    def _generate(
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
6,610
ChatVertexAI Error: _ChatSessionBase.send_message() got an unexpected keyword argument 'context'
### System Info

langchain version: 0.0.209

### Who can help?

_No response_

### Information

- [X] The official example notebooks/scripts
- [ ] My own modified scripts

### Related Components

- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction

https://python.langchain.com/docs/modules/model_io/models/chat/integrations/google_vertex_ai_palm

### Expected behavior

I get an error saying "TypeError: _ChatSessionBase.send_message() got an unexpected keyword argument 'context'" when I run the `chat(messages)` command mentioned in https://python.langchain.com/docs/modules/model_io/models/chat/integrations/google_vertex_ai_palm. This is probably because ChatSession.send_message does not have a 'context' argument, while ChatVertexAI._generate automatically adds the context argument to params, since chat-bison is a non-code model.
https://github.com/langchain-ai/langchain/issues/6610
https://github.com/langchain-ai/langchain/pull/6652
c2b25c17c5c8d35a7297f665f2327b9671855898
9e52134d30203a9125532621abcd5a102e3f2bfb
"2023-06-22T20:56:38Z"
python
"2023-06-23T20:38:21Z"
langchain/chat_models/vertexai.py
        self,
        messages: List[BaseMessage],
        stop: Optional[List[str]] = None,
        run_manager: Optional[CallbackManagerForLLMRun] = None,
        **kwargs: Any,
    ) -> ChatResult:
        """Generate next turn in the conversation.

        Args:
            messages: The history of the conversation as a list of messages.
                Code chat does not support context.
            stop: The list of stop words (optional).
            run_manager: The CallbackManager for LLM run; it's not used at the moment.

        Returns:
            The ChatResult that contains outputs generated by the model.

        Raises:
            ValueError: if the last message in the list is not from human.
        """
        if not messages:
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
6,610
ChatVertexAI Error: _ChatSessionBase.send_message() got an unexpected keyword argument 'context'
### System Info

langchain version: 0.0.209

### Who can help?

_No response_

### Information

- [X] The official example notebooks/scripts
- [ ] My own modified scripts

### Related Components

- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction

https://python.langchain.com/docs/modules/model_io/models/chat/integrations/google_vertex_ai_palm

### Expected behavior

I get an error saying "TypeError: _ChatSessionBase.send_message() got an unexpected keyword argument 'context'" when I run the `chat(messages)` command mentioned in https://python.langchain.com/docs/modules/model_io/models/chat/integrations/google_vertex_ai_palm. This is probably because ChatSession.send_message does not have a 'context' argument, while ChatVertexAI._generate automatically adds the context argument to params, since chat-bison is a non-code model.
https://github.com/langchain-ai/langchain/issues/6610
https://github.com/langchain-ai/langchain/pull/6652
c2b25c17c5c8d35a7297f665f2327b9671855898
9e52134d30203a9125532621abcd5a102e3f2bfb
"2023-06-22T20:56:38Z"
python
"2023-06-23T20:38:21Z"
langchain/chat_models/vertexai.py
            raise ValueError(
                "You should provide at least one message to start the chat!"
            )
        question = messages[-1]
        if not isinstance(question, HumanMessage):
            raise ValueError(
                f"Last message in the list should be from human, got {question.type}."
            )
        history = _parse_chat_history(messages[:-1])
        context = history.system_message.content if history.system_message else None
        params = {**self._default_params, **kwargs}
        if not self.is_codey_model:
            params["context"] = context
        chat = self.client.start_chat(**params)
        for pair in history.history:
            chat._history.append((pair.question.content, pair.answer.content))
        response = chat.send_message(question.content, **params)
        text = self._enforce_stop_words(response.text, stop)
        return ChatResult(generations=[ChatGeneration(message=AIMessage(content=text))])

    async def _agenerate(
        self,
        messages: List[BaseMessage],
        stop: Optional[List[str]] = None,
        run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,
        **kwargs: Any,
    ) -> ChatResult:
        raise NotImplementedError(
            """Vertex AI doesn't support async requests at the moment."""
        )
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
6,582
Typo
### System Info

latest version

### Who can help?

_No response_

### Information

- [X] The official example notebooks/scripts
- [ ] My own modified scripts

### Related Components

- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction

Typo on: https://github.com/hwchase17/langchain/blob/d50de2728f95df0ffc59c538bd67e116a8e75a53/langchain/vectorstores/weaviate.py#L49

Instal -> install

### Expected behavior

Typo corrected.
https://github.com/langchain-ai/langchain/issues/6582
https://github.com/langchain-ai/langchain/pull/6595
f6fdabd20b3b14f8728f8c74d9711322400f9369
ba256b23f241e1669536f7e70c6365ceba7a9cfa
"2023-06-22T09:34:08Z"
python
"2023-06-23T21:56:54Z"
langchain/vectorstores/weaviate.py
"""Wrapper around weaviate vector database."""
from __future__ import annotations

import datetime
from typing import Any, Callable, Dict, Iterable, List, Optional, Tuple, Type
from uuid import uuid4

import numpy as np

from langchain.docstore.document import Document
from langchain.embeddings.base import Embeddings
from langchain.utils import get_from_dict_or_env
from langchain.vectorstores.base import VectorStore
from langchain.vectorstores.utils import maximal_marginal_relevance


def _default_schema(index_name: str) -> Dict:
    return {
        "class": index_name,
        "properties": [
            {
                "name": "text",
                "dataType": ["text"],
            }
        ],
    }


def _create_weaviate_client(**kwargs: Any) -> Any:
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
6,582
Typo
### System Info

latest version

### Who can help?

_No response_

### Information

- [X] The official example notebooks/scripts
- [ ] My own modified scripts

### Related Components

- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction

Typo on: https://github.com/hwchase17/langchain/blob/d50de2728f95df0ffc59c538bd67e116a8e75a53/langchain/vectorstores/weaviate.py#L49

Instal -> install

### Expected behavior

Typo corrected.
https://github.com/langchain-ai/langchain/issues/6582
https://github.com/langchain-ai/langchain/pull/6595
f6fdabd20b3b14f8728f8c74d9711322400f9369
ba256b23f241e1669536f7e70c6365ceba7a9cfa
"2023-06-22T09:34:08Z"
python
"2023-06-23T21:56:54Z"
langchain/vectorstores/weaviate.py
    client = kwargs.get("client")
    if client is not None:
        return client

    weaviate_url = get_from_dict_or_env(kwargs, "weaviate_url", "WEAVIATE_URL")

    try:
        weaviate_api_key = get_from_dict_or_env(
            kwargs, "weaviate_api_key", "WEAVIATE_API_KEY", None
        )
    except ValueError:
        weaviate_api_key = None

    try:
        import weaviate
    except ImportError:
        raise ValueError(
            "Could not import weaviate python package. "
            "Please install it with `pip instal weaviate-client`"
        )

    auth = (
        weaviate.auth.AuthApiKey(api_key=weaviate_api_key)
        if weaviate_api_key is not None
        else None
    )
    client = weaviate.Client(weaviate_url, auth_client_secret=auth)
    return client


def _default_score_normalizer(val: float) -> float:
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
6,582
Typo
### System Info

latest version

### Who can help?

_No response_

### Information

- [X] The official example notebooks/scripts
- [ ] My own modified scripts

### Related Components

- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction

Typo on: https://github.com/hwchase17/langchain/blob/d50de2728f95df0ffc59c538bd67e116a8e75a53/langchain/vectorstores/weaviate.py#L49

Instal -> install

### Expected behavior

Typo corrected.
https://github.com/langchain-ai/langchain/issues/6582
https://github.com/langchain-ai/langchain/pull/6595
f6fdabd20b3b14f8728f8c74d9711322400f9369
ba256b23f241e1669536f7e70c6365ceba7a9cfa
"2023-06-22T09:34:08Z"
python
"2023-06-23T21:56:54Z"
langchain/vectorstores/weaviate.py
    return 1 - 1 / (1 + np.exp(val))


def _json_serializable(value: Any) -> Any:
    if isinstance(value, datetime.datetime):
        return value.isoformat()
    return value


class Weaviate(VectorStore):
    """Wrapper around Weaviate vector database.

    To use, you should have the ``weaviate-client`` python package installed.

    Example:
        .. code-block:: python

            import weaviate
            from langchain.vectorstores import Weaviate

            client = weaviate.Client(url=os.environ["WEAVIATE_URL"], ...)
            weaviate = Weaviate(client, index_name, text_key)

    """

    def __init__(
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
6,582
Typo
### System Info

latest version

### Who can help?

_No response_

### Information

- [X] The official example notebooks/scripts
- [ ] My own modified scripts

### Related Components

- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction

Typo on: https://github.com/hwchase17/langchain/blob/d50de2728f95df0ffc59c538bd67e116a8e75a53/langchain/vectorstores/weaviate.py#L49

Instal -> install

### Expected behavior

Typo corrected.
https://github.com/langchain-ai/langchain/issues/6582
https://github.com/langchain-ai/langchain/pull/6595
f6fdabd20b3b14f8728f8c74d9711322400f9369
ba256b23f241e1669536f7e70c6365ceba7a9cfa
"2023-06-22T09:34:08Z"
python
"2023-06-23T21:56:54Z"
langchain/vectorstores/weaviate.py
        self,
        client: Any,
        index_name: str,
        text_key: str,
        embedding: Optional[Embeddings] = None,
        attributes: Optional[List[str]] = None,
        relevance_score_fn: Optional[
            Callable[[float], float]
        ] = _default_score_normalizer,
        by_text: bool = True,
    ):
        """Initialize with Weaviate client."""
        try:
            import weaviate
        except ImportError:
            raise ValueError(
                "Could not import weaviate python package. "
                "Please install it with `pip install weaviate-client`."
            )
        if not isinstance(client, weaviate.Client):
            raise ValueError(
                f"client should be an instance of weaviate.Client, got {type(client)}"
            )
        self._client = client
        self._index_name = index_name
        self._embedding = embedding
        self._text_key = text_key
        self._query_attrs = [self._text_key]
        self._relevance_score_fn = relevance_score_fn
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
6,582
Typo
### System Info

latest version

### Who can help?

_No response_

### Information

- [X] The official example notebooks/scripts
- [ ] My own modified scripts

### Related Components

- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction

Typo on: https://github.com/hwchase17/langchain/blob/d50de2728f95df0ffc59c538bd67e116a8e75a53/langchain/vectorstores/weaviate.py#L49

Instal -> install

### Expected behavior

Typo corrected.
https://github.com/langchain-ai/langchain/issues/6582
https://github.com/langchain-ai/langchain/pull/6595
f6fdabd20b3b14f8728f8c74d9711322400f9369
ba256b23f241e1669536f7e70c6365ceba7a9cfa
"2023-06-22T09:34:08Z"
python
"2023-06-23T21:56:54Z"
langchain/vectorstores/weaviate.py
        self._by_text = by_text
        if attributes is not None:
            self._query_attrs.extend(attributes)

    def add_texts(
        self,
        texts: Iterable[str],
        metadatas: Optional[List[dict]] = None,
        **kwargs: Any,
    ) -> List[str]:
        """Upload texts with metadata (properties) to Weaviate."""
        from weaviate.util import get_valid_uuid

        ids = []
        with self._client.batch as batch:
            for i, text in enumerate(texts):
                data_properties = {self._text_key: text}
                if metadatas is not None:
                    for key, val in metadatas[i].items():
                        data_properties[key] = _json_serializable(val)

                _id = get_valid_uuid(uuid4())
                if "uuids" in kwargs:
                    _id = kwargs["uuids"][i]
                elif "ids" in kwargs:
                    _id = kwargs["ids"][i]

                if self._embedding is not None:
                    vector = self._embedding.embed_documents([text])[0]
                else:
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
6,582
Typo
### System Info

latest version

### Who can help?

_No response_

### Information

- [X] The official example notebooks/scripts
- [ ] My own modified scripts

### Related Components

- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction

Typo on: https://github.com/hwchase17/langchain/blob/d50de2728f95df0ffc59c538bd67e116a8e75a53/langchain/vectorstores/weaviate.py#L49

Instal -> install

### Expected behavior

Typo corrected.
https://github.com/langchain-ai/langchain/issues/6582
https://github.com/langchain-ai/langchain/pull/6595
f6fdabd20b3b14f8728f8c74d9711322400f9369
ba256b23f241e1669536f7e70c6365ceba7a9cfa
"2023-06-22T09:34:08Z"
python
"2023-06-23T21:56:54Z"
langchain/vectorstores/weaviate.py
                    vector = None
                batch.add_data_object(
                    data_object=data_properties,
                    class_name=self._index_name,
                    uuid=_id,
                    vector=vector,
                )
                ids.append(_id)
        return ids

    def similarity_search(
        self, query: str, k: int = 4, **kwargs: Any
    ) -> List[Document]:
        """Return docs most similar to query.

        Args:
            query: Text to look up documents similar to.
            k: Number of Documents to return. Defaults to 4.

        Returns:
            List of Documents most similar to the query.
        """
        if self._by_text:
            return self.similarity_search_by_text(query, k, **kwargs)
        else:
            if self._embedding is None:
                raise ValueError(
                    "_embedding cannot be None for similarity_search when "
                    "_by_text=False"
                )
            embedding = self._embedding.embed_query(query)
            return self.similarity_search_by_vector(embedding, k, **kwargs)

    def similarity_search_by_text(
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
6,582
Typo
### System Info

latest version

### Who can help?

_No response_

### Information

- [X] The official example notebooks/scripts
- [ ] My own modified scripts

### Related Components

- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction

Typo on: https://github.com/hwchase17/langchain/blob/d50de2728f95df0ffc59c538bd67e116a8e75a53/langchain/vectorstores/weaviate.py#L49

Instal -> install

### Expected behavior

Typo corrected.
https://github.com/langchain-ai/langchain/issues/6582
https://github.com/langchain-ai/langchain/pull/6595
f6fdabd20b3b14f8728f8c74d9711322400f9369
ba256b23f241e1669536f7e70c6365ceba7a9cfa
"2023-06-22T09:34:08Z"
python
"2023-06-23T21:56:54Z"
langchain/vectorstores/weaviate.py
        self, query: str, k: int = 4, **kwargs: Any
    ) -> List[Document]:
        """Return docs most similar to query.

        Args:
            query: Text to look up documents similar to.
            k: Number of Documents to return. Defaults to 4.

        Returns:
            List of Documents most similar to the query.
        """
        content: Dict[str, Any] = {"concepts": [query]}
        if kwargs.get("search_distance"):
            content["certainty"] = kwargs.get("search_distance")
        query_obj = self._client.query.get(self._index_name, self._query_attrs)
        if kwargs.get("where_filter"):
            query_obj = query_obj.with_where(kwargs.get("where_filter"))
        if kwargs.get("additional"):
            query_obj = query_obj.with_additional(kwargs.get("additional"))
        result = query_obj.with_near_text(content).with_limit(k).do()
        if "errors" in result:
            raise ValueError(f"Error during query: {result['errors']}")
        docs = []
        for res in result["data"]["Get"][self._index_name]:
            text = res.pop(self._text_key)
            docs.append(Document(page_content=text, metadata=res))
        return docs

    def similarity_search_by_vector(
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
6,582
Typo
### System Info

latest version

### Who can help?

_No response_

### Information

- [X] The official example notebooks/scripts
- [ ] My own modified scripts

### Related Components

- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction

Typo on: https://github.com/hwchase17/langchain/blob/d50de2728f95df0ffc59c538bd67e116a8e75a53/langchain/vectorstores/weaviate.py#L49

Instal -> install

### Expected behavior

Typo corrected.
https://github.com/langchain-ai/langchain/issues/6582
https://github.com/langchain-ai/langchain/pull/6595
f6fdabd20b3b14f8728f8c74d9711322400f9369
ba256b23f241e1669536f7e70c6365ceba7a9cfa
"2023-06-22T09:34:08Z"
python
"2023-06-23T21:56:54Z"
langchain/vectorstores/weaviate.py
        self, embedding: List[float], k: int = 4, **kwargs: Any
    ) -> List[Document]:
        """Look up similar documents by embedding vector in Weaviate."""
        vector = {"vector": embedding}
        query_obj = self._client.query.get(self._index_name, self._query_attrs)
        if kwargs.get("where_filter"):
            query_obj = query_obj.with_where(kwargs.get("where_filter"))
        if kwargs.get("additional"):
            query_obj = query_obj.with_additional(kwargs.get("additional"))
        result = query_obj.with_near_vector(vector).with_limit(k).do()
        if "errors" in result:
            raise ValueError(f"Error during query: {result['errors']}")
        docs = []
        for res in result["data"]["Get"][self._index_name]:
            text = res.pop(self._text_key)
            docs.append(Document(page_content=text, metadata=res))
        return docs

    def max_marginal_relevance_search(
        self,
        query: str,
        k: int = 4,
        fetch_k: int = 20,
        lambda_mult: float = 0.5,
        **kwargs: Any,
    ) -> List[Document]:
        """Return docs selected using the maximal marginal relevance.

        Maximal marginal relevance optimizes for similarity to query AND
        diversity among selected documents.

        Args:
            query: Text to look up documents similar to.
            k: Number of Documents to return. Defaults to 4.
            fetch_k: Number of Documents to fetch to pass to MMR algorithm.
            lambda_mult: Number between 0 and 1 that determines the degree
                of diversity among the results with 0 corresponding
                to maximum diversity and 1 to minimum diversity.
                Defaults to 0.5.

        Returns:
            List of Documents selected by maximal marginal relevance.
        """
        if self._embedding is not None:
            embedding = self._embedding.embed_query(query)
        else:
            raise ValueError(
                "max_marginal_relevance_search requires a suitable Embeddings object"
            )
        return self.max_marginal_relevance_search_by_vector(
            embedding, k=k, fetch_k=fetch_k, lambda_mult=lambda_mult, **kwargs
        )

    def max_marginal_relevance_search_by_vector(
        self,
        embedding: List[float],
        k: int = 4,
        fetch_k: int = 20,
        lambda_mult: float = 0.5,
        **kwargs: Any,
    ) -> List[Document]:
        """Return docs selected using the maximal marginal relevance.

        Maximal marginal relevance optimizes for similarity to query AND
        diversity among selected documents.

        Args:
            embedding: Embedding to look up documents similar to.
            k: Number of Documents to return. Defaults to 4.
            fetch_k: Number of Documents to fetch to pass to MMR algorithm.
            lambda_mult: Number between 0 and 1 that determines the degree
                of diversity among the results with 0 corresponding
                to maximum diversity and 1 to minimum diversity.
                Defaults to 0.5.

        Returns:
            List of Documents selected by maximal marginal relevance.
        """
        vector = {"vector": embedding}
        query_obj = self._client.query.get(self._index_name, self._query_attrs)
        if kwargs.get("where_filter"):
            query_obj = query_obj.with_where(kwargs.get("where_filter"))
        results = (
            query_obj.with_additional("vector")
            .with_near_vector(vector)
            .with_limit(fetch_k)
            .do()
        )
        payload = results["data"]["Get"][self._index_name]
        embeddings = [result["_additional"]["vector"] for result in payload]
        mmr_selected = maximal_marginal_relevance(
            np.array(embedding), embeddings, k=k, lambda_mult=lambda_mult
        )

        docs = []
        for idx in mmr_selected:
            text = payload[idx].pop(self._text_key)
            payload[idx].pop("_additional")
            meta = payload[idx]
            docs.append(Document(page_content=text, metadata=meta))
        return docs

    def similarity_search_with_score(
        self, query: str, k: int = 4, **kwargs: Any
    ) -> List[Tuple[Document, float]]:
        """
        Return list of documents most similar to the query
        text and cosine distance in float for each.
        Lower score represents more similarity.
        """
        if self._embedding is None:
            raise ValueError(
                "_embedding cannot be None for similarity_search_with_score"
            )
        content: Dict[str, Any] = {"concepts": [query]}
        if kwargs.get("search_distance"):
            content["certainty"] = kwargs.get("search_distance")
        query_obj = self._client.query.get(self._index_name, self._query_attrs)
        if not self._by_text:
            embedding = self._embedding.embed_query(query)
            vector = {"vector": embedding}
            result = (
                query_obj.with_near_vector(vector)
                .with_limit(k)
                .with_additional("vector")
                .do()
            )
        else:
            result = (
                query_obj.with_near_text(content)
                .with_limit(k)
                .with_additional("vector")
                .do()
            )

        if "errors" in result:
            raise ValueError(f"Error during query: {result['errors']}")

        docs_and_scores = []
        for res in result["data"]["Get"][self._index_name]:
            text = res.pop(self._text_key)
            score = np.dot(
                res["_additional"]["vector"], self._embedding.embed_query(query)
            )
            docs_and_scores.append((Document(page_content=text, metadata=res), score))
        return docs_and_scores

    def _similarity_search_with_relevance_scores(
        self,
        query: str,
        k: int = 4,
        **kwargs: Any,
    ) -> List[Tuple[Document, float]]:
        """Return docs and relevance scores, normalized on a scale from 0 to 1.

        0 is dissimilar, 1 is most similar.
        """
        if self._relevance_score_fn is None:
            raise ValueError(
                "relevance_score_fn must be provided to"
                " Weaviate constructor to normalize scores"
            )
        docs_and_scores = self.similarity_search_with_score(query, k=k, **kwargs)
        return [
            (doc, self._relevance_score_fn(score)) for doc, score in docs_and_scores
        ]

    @classmethod
    def from_texts(
        cls: Type[Weaviate],
        texts: List[str],
        embedding: Embeddings,
        metadatas: Optional[List[dict]] = None,
        **kwargs: Any,
    ) -> Weaviate:
        """Construct Weaviate wrapper from raw documents.

        This is a user-friendly interface that:
            1. Embeds documents.
            2. Creates a new index for the embeddings in the Weaviate instance.
            3. Adds the documents to the newly created Weaviate index.

        This is intended to be a quick way to get started.

        Example:
            .. code-block:: python

                from langchain.vectorstores.weaviate import Weaviate
                from langchain.embeddings import OpenAIEmbeddings
                embeddings = OpenAIEmbeddings()
                weaviate = Weaviate.from_texts(
                    texts,
                    embeddings,
                    weaviate_url="http://localhost:8080"
                )
        """
        client = _create_weaviate_client(**kwargs)

        from weaviate.util import get_valid_uuid

        index_name = kwargs.get("index_name", f"LangChain_{uuid4().hex}")
        embeddings = embedding.embed_documents(texts) if embedding else None
        text_key = "text"
        schema = _default_schema(index_name)
        attributes = list(metadatas[0].keys()) if metadatas else None

        if not client.schema.contains(schema):
            client.schema.create_class(schema)

        with client.batch as batch:
            for i, text in enumerate(texts):
                data_properties = {
                    text_key: text,
                }
                if metadatas is not None:
                    for key in metadatas[i].keys():
                        data_properties[key] = metadatas[i][key]

                if "uuids" in kwargs:
                    _id = kwargs["uuids"][i]
                else:
                    _id = get_valid_uuid(uuid4())
                params = {
                    "uuid": _id,
                    "data_object": data_properties,
                    "class_name": index_name,
                }
                if embeddings is not None:
                    params["vector"] = embeddings[i]

                batch.add_data_object(**params)

            batch.flush()

        relevance_score_fn = kwargs.get("relevance_score_fn")
        by_text: bool = kwargs.get("by_text", False)

        return cls(
            client,
            index_name,
            text_key,
            embedding=embedding,
            attributes=attributes,
            relevance_score_fn=relevance_score_fn,
            by_text=by_text,
        )

    def delete(self, ids: List[str]) -> None:
        """Delete by vector IDs.

        Args:
            ids: List of ids to delete.
        """
        for id in ids:
            self._client.data_object.delete(uuid=id)
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
6,472
DOC: Incorrect type for tags parameter in MLflow callback
### Issue with current documentation:

In the documentation the tag type is string, but in the code it's a dictionary. The proposed fix is to change the following two lines "tags (str):" to "tags (dict):".

https://github.com/hwchase17/langchain/blob/7414e9d19603c962063dd337cdcf3c3168d4b8be/langchain/callbacks/mlflow_callback.py#L120
https://github.com/hwchase17/langchain/blob/7414e9d19603c962063dd337cdcf3c3168d4b8be/langchain/callbacks/mlflow_callback.py#L225

### Idea or request for content:

_No response_
https://github.com/langchain-ai/langchain/issues/6472
https://github.com/langchain-ai/langchain/pull/6473
9187d2f3a97abc6d89daea9b5abfa652a425e1de
fe941cb54a80976bfc7575ce59a518ae428801ee
"2023-06-20T09:57:57Z"
python
"2023-06-26T09:12:23Z"
langchain/callbacks/mlflow_callback.py
import random
import string
import tempfile
import traceback
from copy import deepcopy
from pathlib import Path
from typing import Any, Dict, List, Optional, Union

from langchain.callbacks.base import BaseCallbackHandler
from langchain.callbacks.utils import (
    BaseMetadataCallbackHandler,
    flatten_dict,
    hash_string,
    import_pandas,
    import_spacy,
    import_textstat,
)
from langchain.schema import AgentAction, AgentFinish, LLMResult
from langchain.utils import get_from_dict_or_env


def import_mlflow() -> Any:
    """Import the mlflow python package and raise an error if it is not installed."""
    try:
        import mlflow
    except ImportError:
        raise ImportError(
            "To use the mlflow callback manager you need to have the `mlflow` python "
            "package installed. Please install it with `pip install mlflow>=2.3.0`"
        )
    return mlflow


def analyze_text(
    text: str,
    nlp: Any = None,
) -> dict:
    """Analyze text using textstat and spacy.

    Parameters:
        text (str): The text to analyze.
        nlp (spacy.lang): The spacy language model to use for visualization.

    Returns:
        (dict): A dictionary containing the complexity metrics and
            visualization files serialized to HTML string.
    """
    resp: Dict[str, Any] = {}
    textstat = import_textstat()
    spacy = import_spacy()
    text_complexity_metrics = {
        "flesch_reading_ease": textstat.flesch_reading_ease(text),
        "flesch_kincaid_grade": textstat.flesch_kincaid_grade(text),
        "smog_index": textstat.smog_index(text),
        "coleman_liau_index": textstat.coleman_liau_index(text),
        "automated_readability_index": textstat.automated_readability_index(text),
        "dale_chall_readability_score": textstat.dale_chall_readability_score(text),
        "difficult_words": textstat.difficult_words(text),
        "linsear_write_formula": textstat.linsear_write_formula(text),
        "gunning_fog": textstat.gunning_fog(text),
        "fernandez_huerta": textstat.fernandez_huerta(text),
        "szigriszt_pazos": textstat.szigriszt_pazos(text),
        "gutierrez_polini": textstat.gutierrez_polini(text),
        "crawford": textstat.crawford(text),
        "gulpease_index": textstat.gulpease_index(text),
        "osman": textstat.osman(text),
    }
    resp.update({"text_complexity_metrics": text_complexity_metrics})
    resp.update(text_complexity_metrics)

    if nlp is not None:
        doc = nlp(text)
        dep_out = spacy.displacy.render(
            doc, style="dep", jupyter=False, page=True
        )
        ent_out = spacy.displacy.render(
            doc, style="ent", jupyter=False, page=True
        )
        text_visualizations = {
            "dependency_tree": dep_out,
            "entities": ent_out,
        }
        resp.update(text_visualizations)

    return resp


def construct_html_from_prompt_and_generation(prompt: str, generation: str) -> Any:
    """Construct an html element from a prompt and a generation.

    Parameters:
        prompt (str): The prompt.
        generation (str): The generation.

    Returns:
        (str): The html string."""
    formatted_prompt = prompt.replace("\n", "<br>")
    formatted_generation = generation.replace("\n", "<br>")

    return f"""
    <p style="color:black;">{formatted_prompt}:</p>
    <blockquote>
      <p style="color:green;">
        {formatted_generation}
      </p>
    </blockquote>
    """


class MlflowLogger:
    """Callback Handler that logs metrics and artifacts to mlflow server.

    Parameters:
        name (str): Name of the run.
        experiment (str): Name of the experiment.
        tags (dict): Tags to be attached for the run.
        tracking_uri (str): MLflow tracking server uri.

    This handler implements the helper functions to initialize,
    log metrics and artifacts to the mlflow server.
    """

    def __init__(self, **kwargs: Any):
        self.mlflow = import_mlflow()
        tracking_uri = get_from_dict_or_env(
            kwargs, "tracking_uri", "MLFLOW_TRACKING_URI", ""
        )
        self.mlflow.set_tracking_uri(tracking_uri)

        experiment_name = get_from_dict_or_env(
            kwargs, "experiment_name", "MLFLOW_EXPERIMENT_NAME"
        )
        self.mlf_exp = self.mlflow.get_experiment_by_name(experiment_name)
        if self.mlf_exp is not None:
            self.mlf_expid = self.mlf_exp.experiment_id
        else:
            self.mlf_expid = self.mlflow.create_experiment(experiment_name)

        self.start_run(kwargs["run_name"], kwargs["run_tags"])

    def start_run(self, name: str, tags: Dict[str, str]) -> None:
        """To start a new run, auto generates the random suffix for name"""
        if name.endswith("-%"):
            rname = "".join(random.choices(string.ascii_uppercase + string.digits, k=7))
            name = name.replace("%", rname)
        self.run = self.mlflow.MlflowClient().create_run(
            self.mlf_expid, run_name=name, tags=tags
        )

    def finish_run(self) -> None:
        """To finish the run."""
        with self.mlflow.start_run(
            run_id=self.run.info.run_id, experiment_id=self.mlf_expid
        ):
            self.mlflow.end_run()

    def metric(self, key: str, value: float) -> None:
        """To log metric to mlflow server."""
        with self.mlflow.start_run(
            run_id=self.run.info.run_id, experiment_id=self.mlf_expid
        ):
            self.mlflow.log_metric(key, value)

    def metrics(
        self, data: Union[Dict[str, float], Dict[str, int]], step: Optional[int] = 0
    ) -> None:
        """To log all metrics in the input dict."""
        with self.mlflow.start_run(
            run_id=self.run.info.run_id, experiment_id=self.mlf_expid
        ):
            self.mlflow.log_metrics(data)

    def jsonf(self, data: Dict[str, Any], filename: str) -> None:
        """To log the input data as json file artifact."""
        with self.mlflow.start_run(
            run_id=self.run.info.run_id, experiment_id=self.mlf_expid
        ):
            self.mlflow.log_dict(data, f"{filename}.json")

    def table(self, name: str, dataframe) -> None:
        """To log the input pandas dataframe as a html table"""
        self.html(dataframe.to_html(), f"table_{name}")

    def html(self, html: str, filename: str) -> None:
        """To log the input html string as html file artifact."""
        with self.mlflow.start_run(
            run_id=self.run.info.run_id, experiment_id=self.mlf_expid
        ):
            self.mlflow.log_text(html, f"{filename}.html")

    def text(self, text: str, filename: str) -> None:
        """To log the input text as text file artifact."""
        with self.mlflow.start_run(
            run_id=self.run.info.run_id, experiment_id=self.mlf_expid
        ):
            self.mlflow.log_text(text, f"{filename}.txt")

    def artifact(self, path: str) -> None:
        """To upload the file from given path as artifact."""
        with self.mlflow.start_run(
            run_id=self.run.info.run_id, experiment_id=self.mlf_expid
        ):
            self.mlflow.log_artifact(path)

    def langchain_artifact(self, chain: Any) -> None:
        with self.mlflow.start_run(
            run_id=self.run.info.run_id, experiment_id=self.mlf_expid
        ):
            self.mlflow.langchain.log_model(chain, "langchain-model")


class MlflowCallbackHandler(BaseMetadataCallbackHandler, BaseCallbackHandler):
    """Callback Handler that logs metrics and artifacts to mlflow server.

    Parameters:
        name (str): Name of the run.
        experiment (str): Name of the experiment.
        tags (dict): Tags to be attached for the run.
        tracking_uri (str): MLflow tracking server uri.

    This handler will utilize the associated callback method called and formats
    the input of each callback function with metadata regarding the state of LLM run,
    and adds the response to the list of records for both the {method}_records and
    action. It then logs the response to mlflow server.
    """

    def __init__(
        self,
        name: Optional[str] = "langchainrun-%",
        experiment: Optional[str] = "langchain",
        tags: Optional[Dict] = {},
        tracking_uri: Optional[str] = None,
    ) -> None:
        """Initialize callback handler."""
        import_pandas()
        import_textstat()
        import_mlflow()
        spacy = import_spacy()
        super().__init__()

        self.name = name
        self.experiment = experiment
        self.tags = tags
        self.tracking_uri = tracking_uri

        self.temp_dir = tempfile.TemporaryDirectory()

        self.mlflg = MlflowLogger(
            tracking_uri=self.tracking_uri,
            experiment_name=self.experiment,
            run_name=self.name,
            run_tags=self.tags,
        )

        self.action_records: list = []
        self.nlp = spacy.load("en_core_web_sm")

        self.metrics = {
            "step": 0,
            "starts": 0,
            "ends": 0,
            "errors": 0,
            "text_ctr": 0,
            "chain_starts": 0,
            "chain_ends": 0,
            "llm_starts": 0,
            "llm_ends": 0,
            "llm_streams": 0,
            "tool_starts": 0,
            "tool_ends": 0,
            "agent_ends": 0,
        }

        self.records: Dict[str, Any] = {
            "on_llm_start_records": [],
            "on_llm_token_records": [],
            "on_llm_end_records": [],
            "on_chain_start_records": [],
            "on_chain_end_records": [],
            "on_tool_start_records": [],
            "on_tool_end_records": [],
            "on_text_records": [],
            "on_agent_finish_records": [],
            "on_agent_action_records": [],
            "action_records": [],
        }

    def _reset(self) -> None:
        for k, v in self.metrics.items():
            self.metrics[k] = 0
        for k, v in self.records.items():
            self.records[k] = []

    def on_llm_start(
        self, serialized: Dict[str, Any], prompts: List[str], **kwargs: Any
    ) -> None:
        """Run when LLM starts."""
        self.metrics["step"] += 1
        self.metrics["llm_starts"] += 1
        self.metrics["starts"] += 1

        llm_starts = self.metrics["llm_starts"]

        resp: Dict[str, Any] = {}
        resp.update({"action": "on_llm_start"})
        resp.update(flatten_dict(serialized))
        resp.update(self.metrics)

        self.mlflg.metrics(self.metrics, step=self.metrics["step"])

        for idx, prompt in enumerate(prompts):
            prompt_resp = deepcopy(resp)
            prompt_resp["prompt"] = prompt
            self.records["on_llm_start_records"].append(prompt_resp)
            self.records["action_records"].append(prompt_resp)
            self.mlflg.jsonf(prompt_resp, f"llm_start_{llm_starts}_prompt_{idx}")

    def on_llm_new_token(self, token: str, **kwargs: Any) -> None:
        """Run when LLM generates a new token."""
        self.metrics["step"] += 1
        self.metrics["llm_streams"] += 1

        llm_streams = self.metrics["llm_streams"]

        resp: Dict[str, Any] = {}
        resp.update({"action": "on_llm_new_token", "token": token})
        resp.update(self.metrics)

        self.mlflg.metrics(self.metrics, step=self.metrics["step"])

        self.records["on_llm_token_records"].append(resp)
        self.records["action_records"].append(resp)
        self.mlflg.jsonf(resp, f"llm_new_tokens_{llm_streams}")

    def on_llm_end(self, response: LLMResult, **kwargs: Any) -> None:
        """Run when LLM ends running."""
        self.metrics["step"] += 1
        self.metrics["llm_ends"] += 1
        self.metrics["ends"] += 1

        llm_ends = self.metrics["llm_ends"]

        resp: Dict[str, Any] = {}
        resp.update({"action": "on_llm_end"})
        resp.update(flatten_dict(response.llm_output or {}))
        resp.update(self.metrics)

        self.mlflg.metrics(self.metrics, step=self.metrics["step"])

        for generations in response.generations:
            for idx, generation in enumerate(generations):
                generation_resp = deepcopy(resp)
                generation_resp.update(flatten_dict(generation.dict()))
                generation_resp.update(
                    analyze_text(
                        generation.text,
                        nlp=self.nlp,
                    )
                )
                complexity_metrics: Dict[str, float] = generation_resp.pop(
                    "text_complexity_metrics"
                )
                self.mlflg.metrics(
                    complexity_metrics,
                    step=self.metrics["step"],
                )
                self.records["on_llm_end_records"].append(generation_resp)
                self.records["action_records"].append(generation_resp)
                self.mlflg.jsonf(resp, f"llm_end_{llm_ends}_generation_{idx}")
                dependency_tree = generation_resp["dependency_tree"]
                entities = generation_resp["entities"]
                self.mlflg.html(dependency_tree, "dep-" + hash_string(generation.text))
                self.mlflg.html(entities, "ent-" + hash_string(generation.text))

    def on_llm_error(
        self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any
    ) -> None:
        """Run when LLM errors."""
        self.metrics["step"] += 1
        self.metrics["errors"] += 1

    def on_chain_start(
        self, serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs: Any
    ) -> None:
        """Run when chain starts running."""
        self.metrics["step"] += 1
        self.metrics["chain_starts"] += 1
        self.metrics["starts"] += 1

        chain_starts = self.metrics["chain_starts"]

        resp: Dict[str, Any] = {}
        resp.update({"action": "on_chain_start"})
        resp.update(flatten_dict(serialized))
        resp.update(self.metrics)

        self.mlflg.metrics(self.metrics, step=self.metrics["step"])

        chain_input = ",".join([f"{k}={v}" for k, v in inputs.items()])
        input_resp = deepcopy(resp)
        input_resp["inputs"] = chain_input
        self.records["on_chain_start_records"].append(input_resp)
        self.records["action_records"].append(input_resp)
        self.mlflg.jsonf(input_resp, f"chain_start_{chain_starts}")

    def on_chain_end(self, outputs: Dict[str, Any], **kwargs: Any) -> None:
        """Run when chain ends running."""
        self.metrics["step"] += 1
        self.metrics["chain_ends"] += 1
        self.metrics["ends"] += 1

        chain_ends = self.metrics["chain_ends"]

        resp: Dict[str, Any] = {}
        chain_output = ",".join([f"{k}={v}" for k, v in outputs.items()])
        resp.update({"action": "on_chain_end", "outputs": chain_output})
        resp.update(self.metrics)

        self.mlflg.metrics(self.metrics, step=self.metrics["step"])

        self.records["on_chain_end_records"].append(resp)
        self.records["action_records"].append(resp)
        self.mlflg.jsonf(resp, f"chain_end_{chain_ends}")

    def on_chain_error(
        self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any
    ) -> None:
        """Run when chain errors."""
        self.metrics["step"] += 1
        self.metrics["errors"] += 1

    def on_tool_start(
        self, serialized: Dict[str, Any], input_str: str, **kwargs: Any
    ) -> None:
        """Run when tool starts running."""
        self.metrics["step"] += 1
        self.metrics["tool_starts"] += 1
        self.metrics["starts"] += 1

        tool_starts = self.metrics["tool_starts"]

        resp: Dict[str, Any] = {}
        resp.update({"action": "on_tool_start", "input_str": input_str})
        resp.update(flatten_dict(serialized))
        resp.update(self.metrics)

        self.mlflg.metrics(self.metrics, step=self.metrics["step"])

        self.records["on_tool_start_records"].append(resp)
        self.records["action_records"].append(resp)
        self.mlflg.jsonf(resp, f"tool_start_{tool_starts}")

    def on_tool_end(self, output: str, **kwargs: Any) -> None:
        """Run when tool ends running."""
        self.metrics["step"] += 1
        self.metrics["tool_ends"] += 1
        self.metrics["ends"] += 1

        tool_ends = self.metrics["tool_ends"]

        resp: Dict[str, Any] = {}
        resp.update({"action": "on_tool_end", "output": output})
        resp.update(self.metrics)

        self.mlflg.metrics(self.metrics, step=self.metrics["step"])

        self.records["on_tool_end_records"].append(resp)
        self.records["action_records"].append(resp)
        self.mlflg.jsonf(resp, f"tool_end_{tool_ends}")

    def on_tool_error(
        self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any
    ) -> None:
        """Run when tool errors."""
        self.metrics["step"] += 1
        self.metrics["errors"] += 1

    def on_text(self, text: str, **kwargs: Any) -> None:
        """
        Run when agent is ending.
        """
        self.metrics["step"] += 1
        self.metrics["text_ctr"] += 1

        text_ctr = self.metrics["text_ctr"]

        resp: Dict[str, Any] = {}
        resp.update({"action": "on_text", "text": text})
        resp.update(self.metrics)

        self.mlflg.metrics(self.metrics, step=self.metrics["step"])

        self.records["on_text_records"].append(resp)
        self.records["action_records"].append(resp)
        self.mlflg.jsonf(resp, f"on_text_{text_ctr}")

    def on_agent_finish(self, finish: AgentFinish, **kwargs: Any) -> None:
        """Run when agent ends running."""
        self.metrics["step"] += 1
        self.metrics["agent_ends"] += 1
        self.metrics["ends"] += 1

        agent_ends = self.metrics["agent_ends"]

        resp: Dict[str, Any] = {}
        resp.update(
            {
                "action": "on_agent_finish",
                "output": finish.return_values["output"],
                "log": finish.log,
            }
        )
        resp.update(self.metrics)

        self.mlflg.metrics(self.metrics, step=self.metrics["step"])

        self.records["on_agent_finish_records"].append(resp)
        self.records["action_records"].append(resp)
        self.mlflg.jsonf(resp, f"agent_finish_{agent_ends}")

    def on_agent_action(self, action: AgentAction, **kwargs: Any) -> Any:
        """Run on agent action."""
        self.metrics["step"] += 1
        self.metrics["tool_starts"] += 1
        self.metrics["starts"] += 1

        tool_starts = self.metrics["tool_starts"]

        resp: Dict[str, Any] = {}
        resp.update(
            {
                "action": "on_agent_action",
                "tool": action.tool,
                "tool_input": action.tool_input,
                "log": action.log,
            }
        )
        resp.update(self.metrics)

        self.mlflg.metrics(self.metrics, step=self.metrics["step"])

        self.records["on_agent_action_records"].append(resp)
        self.records["action_records"].append(resp)
        self.mlflg.jsonf(resp, f"agent_action_{tool_starts}")

    def _create_session_analysis_df(self) -> Any:
        """Create a dataframe with all the information from the session."""
        pd = import_pandas()
        on_llm_start_records_df = pd.DataFrame(self.records["on_llm_start_records"])
        on_llm_end_records_df = pd.DataFrame(self.records["on_llm_end_records"])

        llm_input_prompts_df = (
            on_llm_start_records_df[["step", "prompt", "name"]]
            .dropna(axis=1)
            .rename({"step": "prompt_step"}, axis=1)
        )
        complexity_metrics_columns = []
        visualizations_columns = []

        complexity_metrics_columns = [
            "flesch_reading_ease",
            "flesch_kincaid_grade",
            "smog_index",
            "coleman_liau_index",
            "automated_readability_index",
            "dale_chall_readability_score",
            "difficult_words",
            "linsear_write_formula",
            "gunning_fog",
            "fernandez_huerta",
            "szigriszt_pazos",
            "gutierrez_polini",
            "crawford",
            "gulpease_index",
            "osman",
        ]

        visualizations_columns = ["dependency_tree", "entities"]

        llm_outputs_df = (
            on_llm_end_records_df[
                [
                    "step",
                    "text",
                    "token_usage_total_tokens",
                    "token_usage_prompt_tokens",
                    "token_usage_completion_tokens",
                ]
                + complexity_metrics_columns
                + visualizations_columns
            ]
            .dropna(axis=1)
            .rename({"step": "output_step", "text": "output"}, axis=1)
        )
        session_analysis_df = pd.concat([llm_input_prompts_df, llm_outputs_df], axis=1)
        session_analysis_df["chat_html"] = session_analysis_df[
            ["prompt", "output"]
        ].apply(
            lambda row: construct_html_from_prompt_and_generation(
                row["prompt"], row["output"]
            ),
            axis=1,
        )
        return session_analysis_df

    def flush_tracker(self, langchain_asset: Any = None, finish: bool = False) -> None:
        pd = import_pandas()
        self.mlflg.table("action_records", pd.DataFrame(self.records["action_records"]))
        session_analysis_df = self._create_session_analysis_df()
        chat_html = session_analysis_df.pop("chat_html")
        chat_html = chat_html.replace("\n", "", regex=True)
        self.mlflg.table("session_analysis", pd.DataFrame(session_analysis_df))
        self.mlflg.html("".join(chat_html.tolist()), "chat_html")
        if langchain_asset:
            if "langchain.chains.llm.LLMChain" in str(type(langchain_asset)):
                self.mlflg.langchain_artifact(langchain_asset)
            else:
                langchain_asset_path = str(Path(self.temp_dir.name, "model.json"))
                try:
                    langchain_asset.save(langchain_asset_path)
                    self.mlflg.artifact(langchain_asset_path)
                except ValueError:
                    try:
                        langchain_asset.save_agent(langchain_asset_path)
                        self.mlflg.artifact(langchain_asset_path)
                    except AttributeError:
                        print("Could not save model.")
                        traceback.print_exc()
                        pass
                    except NotImplementedError:
                        print("Could not save model.")
                        traceback.print_exc()
                        pass
                except NotImplementedError:
                    print("Could not save model.")
                    traceback.print_exc()
                    pass
        if finish:
            self.mlflg.finish_run()
            self._reset()
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
6,756
Recent tags change causes AttributeError: 'str' object has no attribute 'value' on initialize_agent call
### System Info - Langchain: 0.0.215 - Platform: ubuntu - Python 3.10.12 ### Who can help? @vowelparrot https://github.com/hwchase17/langchain/blob/d84a3bcf7ab3edf8fe1d49083e066d51c9b5f621/langchain/agents/initialize.py#L54 ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [X] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction Fails if agent initialized as follows: ```python agent = initialize_agent( agent='zero-shot-react-description', tools=tools, llm=llm, verbose=True, max_iterations=30, memory=ConversationBufferMemory(), handle_parsing_errors=True) ``` With ``` ... lib/python3.10/site-packages/langchain/agents/initialize.py", line 54, in initialize_agent tags_.append(agent.value) AttributeError: 'str' object has no attribute 'value' ```` ### Expected behavior Expected to work as before where agent is specified as a string (or if this is highlighting that agent should actually be an object, it should indicate that instead of the error being shown).
https://github.com/langchain-ai/langchain/issues/6756
https://github.com/langchain-ai/langchain/pull/6765
ba622764cb7ccf4667878289f959857348ef8c19
6d30acffcbea5807835839585132d3946bb81661
"2023-06-26T11:00:29Z"
python
"2023-06-26T16:28:11Z"
langchain/agents/initialize.py
"""Load agent."""
from typing import Any, Optional, Sequence

from langchain.agents.agent import AgentExecutor
from langchain.agents.agent_types import AgentType
from langchain.agents.loading import AGENT_TO_CLASS, load_agent
from langchain.base_language import BaseLanguageModel
from langchain.callbacks.base import BaseCallbackManager
from langchain.tools.base import BaseTool


def initialize_agent(
    tools: Sequence[BaseTool],
    llm: BaseLanguageModel,
    agent: Optional[AgentType] = None,
    callback_manager: Optional[BaseCallbackManager] = None,
    agent_path: Optional[str] = None,
    agent_kwargs: Optional[dict] = None,
    *,
    tags: Optional[Sequence[str]] = None,
    **kwargs: Any,
) -> AgentExecutor:
    """Load an agent executor given tools and LLM.

    Args:
        tools: List of tools this agent has access to.
        llm: Language model to use as the agent.
        agent: Agent type to use. If None and agent_path is also None, will default
            to AgentType.ZERO_SHOT_REACT_DESCRIPTION.
        callback_manager: CallbackManager to use. Global callback manager is used if
            not provided. Defaults to None.
        agent_path: Path to serialized agent to use.
        agent_kwargs: Additional key word arguments to pass to the underlying agent
        tags: Tags to apply to the traced runs.
        **kwargs: Additional key word arguments passed to the agent executor

    Returns:
        An agent executor
    """
    tags_ = list(tags) if tags else []
    if agent is None and agent_path is None:
        agent = AgentType.ZERO_SHOT_REACT_DESCRIPTION
    if agent is not None and agent_path is not None:
        raise ValueError(
            "Both `agent` and `agent_path` are specified, "
            "but at most only one should be."
        )
    if agent is not None:
        if agent not in AGENT_TO_CLASS:
            raise ValueError(
                f"Got unknown agent type: {agent}. "
                f"Valid types are: {AGENT_TO_CLASS.keys()}."
            )
        tags_.append(agent.value)
        agent_cls = AGENT_TO_CLASS[agent]
        agent_kwargs = agent_kwargs or {}
        agent_obj = agent_cls.from_llm_and_tools(
            llm, tools, callback_manager=callback_manager, **agent_kwargs
        )
    elif agent_path is not None:
        agent_obj = load_agent(
            agent_path, llm=llm, tools=tools, callback_manager=callback_manager
        )
        try:
            tags_.append(agent_obj._agent_type)
        except NotImplementedError:
            pass
    else:
        raise ValueError(
            "Somehow both `agent` and `agent_path` are None, "
            "this should never happen."
        )
    return AgentExecutor.from_agent_and_tools(
        agent=agent_obj,
        tools=tools,
        callback_manager=callback_manager,
        tags=tags_,
        **kwargs,
    )
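The `AttributeError` in this issue comes from `tags_.append(agent.value)` when callers pass the agent type as a plain string (e.g. `'zero-shot-react-description'`) instead of an `AgentType` member. A self-contained sketch of the normalization that makes both spellings work — the `AgentType` enum below is a toy stand-in for `langchain.agents.agent_types.AgentType`, and `agent_tag` is a hypothetical helper, not the shipped fix:

```python
from enum import Enum

class AgentType(str, Enum):
    # Toy stand-in with one member; the real enum has many more.
    ZERO_SHOT_REACT_DESCRIPTION = "zero-shot-react-description"

def agent_tag(agent) -> str:
    # Accept either an enum member or its raw string value.
    if isinstance(agent, AgentType):
        return agent.value
    # Value lookup normalizes a plain string; raises ValueError on unknown input.
    return AgentType(agent).value

print(agent_tag("zero-shot-react-description"))  # → zero-shot-react-description
```

Normalizing once at the top of `initialize_agent` would let the rest of the function assume an `AgentType`, so both `agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION` and the legacy string form keep working.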
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,833
Arbitrary code execution in JiraAPIWrapper
### System Info LangChain version:0.0.171 windows 10 ### Who can help? _No response_ ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [X] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction 1. Set the environment variables for jira and openai ```python import os from langchain.utilities.jira import JiraAPIWrapper os.environ["JIRA_API_TOKEN"] = "your jira api token" os.environ["JIRA_USERNAME"] = "your username" os.environ["JIRA_INSTANCE_URL"] = "your url" os.environ["OPENAI_API_KEY"] = "your openai key" ``` 2. Run jira ```python jira = JiraAPIWrapper() output = jira.run('other',"exec(\"import os;print(os.popen('id').read())\")") ``` 3. The `id` command will be executed. Commands can be change to others and attackers can execute arbitrary code. ### Expected behavior The code can be executed without any check.
https://github.com/langchain-ai/langchain/issues/4833
https://github.com/langchain-ai/langchain/pull/6992
61938a02a1e76fa6c6e8203c98a9344a179c810d
a2f191a32229256dd41deadf97786fe41ce04cbb
"2023-05-17T04:11:40Z"
python
"2023-07-05T19:56:01Z"
langchain/tools/jira/prompt.py
JIRA_ISSUE_CREATE_PROMPT = """
    This tool is a wrapper around atlassian-python-api's Jira issue_create API, useful when you need to create a Jira issue.
    The input to this tool is a dictionary specifying the fields of the Jira issue, and will be passed into atlassian-python-api's Jira `issue_create` function.
    For example, to create a low priority task called "test issue" with description "test description", you would pass in the following dictionary:
    {{"summary": "test issue", "description": "test description", "issuetype": {{"name": "Task"}}, "priority": {{"name": "Low"}}}}
    """

JIRA_GET_ALL_PROJECTS_PROMPT = """
This tool is a wrapper around atlassian-python-api's Jira project API,
useful when you need to fetch all the projects the user has access to,
find out how many projects there are, or as an intermediary step that
involves searching by projects. There is no input to this tool.
"""

JIRA_JQL_PROMPT = """
This tool is a wrapper around atlassian-python-api's Jira jql API, useful when you need to search for Jira issues.
The input to this tool is a JQL query string, and will be passed into atlassian-python-api's Jira `jql` function.
For example, to find all the issues in project "Test" assigned to me, you would pass in the following string:
project = Test AND assignee = currentUser()
or to find issues with summaries that contain the word "test", you would pass in the following string:
summary ~ 'test'
"""

JIRA_CATCH_ALL_PROMPT = """
This tool is a wrapper around atlassian-python-api's Jira API.
There are other dedicated tools for fetching all projects, and creating and searching for issues;
use this tool if you need to perform any other actions allowed by the atlassian-python-api Jira API.
The input to this tool is a line of python code that calls a function from atlassian-python-api's Jira API.
For example, to update the summary field of an issue, you would pass in the following string:
self.jira.update_issue_field(key, {{"summary": "New summary"}})
or to find out how many projects are in the Jira instance, you would pass in the following string:
self.jira.projects()
For more information on the Jira API, refer to https://atlassian-python-api.readthedocs.io/jira.html
"""

JIRA_CONFLUENCE_PAGE_CREATE_PROMPT = """
This tool is a wrapper around atlassian-python-api's Confluence API,
useful when you need to create a Confluence page.
The input to this tool is a dictionary specifying the fields of the Confluence page, and
will be passed into atlassian-python-api's Confluence `create_page` function.
For example, to create a page in the DEMO space titled "This is the title" with body
"This is the body. You can use <strong>HTML tags</strong>!", you would pass in the following dictionary:
{{"space": "DEMO", "title": "This is the title", "body": "This is the body. You can use <strong>HTML tags</strong>!"}}
"""
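The example payloads in the prompts above are written with doubled braces (`{{ }}`), presumably template escaping that collapses to single braces before the string reaches the model. After that collapse they should be plain JSON, which the wrapper loads with `json.loads()`; a quick sanity check of both documented payloads:

```python
import json

# The issue-creation payload from JIRA_ISSUE_CREATE_PROMPT, braces collapsed:
issue = json.loads(
    '{"summary": "test issue", "description": "test description",'
    ' "issuetype": {"name": "Task"}, "priority": {"name": "Low"}}'
)
assert issue["issuetype"]["name"] == "Task"

# The page-creation payload from JIRA_CONFLUENCE_PAGE_CREATE_PROMPT:
page = json.loads(
    '{"space": "DEMO", "title": "This is the title",'
    ' "body": "This is the body. You can use <strong>HTML tags</strong>!"}'
)
assert page["space"] == "DEMO"
```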
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,833
Arbitrary code execution in JiraAPIWrapper
### System Info LangChain version:0.0.171 windows 10 ### Who can help? _No response_ ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [X] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction 1. Set the environment variables for jira and openai ```python import os from langchain.utilities.jira import JiraAPIWrapper os.environ["JIRA_API_TOKEN"] = "your jira api token" os.environ["JIRA_USERNAME"] = "your username" os.environ["JIRA_INSTANCE_URL"] = "your url" os.environ["OPENAI_API_KEY"] = "your openai key" ``` 2. Run jira ```python jira = JiraAPIWrapper() output = jira.run('other',"exec(\"import os;print(os.popen('id').read())\")") ``` 3. The `id` command will be executed. Commands can be change to others and attackers can execute arbitrary code. ### Expected behavior The code can be executed without any check.
https://github.com/langchain-ai/langchain/issues/4833
https://github.com/langchain-ai/langchain/pull/6992
61938a02a1e76fa6c6e8203c98a9344a179c810d
a2f191a32229256dd41deadf97786fe41ce04cbb
"2023-05-17T04:11:40Z"
python
"2023-07-05T19:56:01Z"
langchain/utilities/jira.py
"""Util that calls Jira.""" from typing import Any, Dict, List, Optional from pydantic import BaseModel, Extra, root_validator from langchain.tools.jira.prompt import ( JIRA_CATCH_ALL_PROMPT, JIRA_CONFLUENCE_PAGE_CREATE_PROMPT, JIRA_GET_ALL_PROJECTS_PROMPT, JIRA_ISSUE_CREATE_PROMPT, JIRA_JQL_PROMPT, ) from langchain.utils import get_from_dict_or_env class JiraAPIWrapper(BaseModel): """Wrapper for Jira API.""" jira: Any confluence: Any jira_username: Optional[str] = None
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,833
Arbitrary code execution in JiraAPIWrapper
### System Info LangChain version:0.0.171 windows 10 ### Who can help? _No response_ ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [X] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction 1. Set the environment variables for jira and openai ```python import os from langchain.utilities.jira import JiraAPIWrapper os.environ["JIRA_API_TOKEN"] = "your jira api token" os.environ["JIRA_USERNAME"] = "your username" os.environ["JIRA_INSTANCE_URL"] = "your url" os.environ["OPENAI_API_KEY"] = "your openai key" ``` 2. Run jira ```python jira = JiraAPIWrapper() output = jira.run('other',"exec(\"import os;print(os.popen('id').read())\")") ``` 3. The `id` command will be executed. Commands can be change to others and attackers can execute arbitrary code. ### Expected behavior The code can be executed without any check.
https://github.com/langchain-ai/langchain/issues/4833
https://github.com/langchain-ai/langchain/pull/6992
61938a02a1e76fa6c6e8203c98a9344a179c810d
a2f191a32229256dd41deadf97786fe41ce04cbb
"2023-05-17T04:11:40Z"
python
"2023-07-05T19:56:01Z"
langchain/utilities/jira.py
    jira_api_token: Optional[str] = None
    jira_instance_url: Optional[str] = None
    operations: List[Dict] = [
        {
            "mode": "jql",
            "name": "JQL Query",
            "description": JIRA_JQL_PROMPT,
        },
        {
            "mode": "get_projects",
            "name": "Get Projects",
            "description": JIRA_GET_ALL_PROJECTS_PROMPT,
        },
        {
            "mode": "create_issue",
            "name": "Create Issue",
            "description": JIRA_ISSUE_CREATE_PROMPT,
        },
        {
            "mode": "other",
            "name": "Catch all Jira API call",
            "description": JIRA_CATCH_ALL_PROMPT,
        },
        {
            "mode": "create_page",
            "name": "Create confluence page",
            "description": JIRA_CONFLUENCE_PAGE_CREATE_PROMPT,
        },
    ]

    class Config:
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,833
Arbitrary code execution in JiraAPIWrapper
### System Info LangChain version:0.0.171 windows 10 ### Who can help? _No response_ ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [X] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction 1. Set the environment variables for jira and openai ```python import os from langchain.utilities.jira import JiraAPIWrapper os.environ["JIRA_API_TOKEN"] = "your jira api token" os.environ["JIRA_USERNAME"] = "your username" os.environ["JIRA_INSTANCE_URL"] = "your url" os.environ["OPENAI_API_KEY"] = "your openai key" ``` 2. Run jira ```python jira = JiraAPIWrapper() output = jira.run('other',"exec(\"import os;print(os.popen('id').read())\")") ``` 3. The `id` command will be executed. Commands can be change to others and attackers can execute arbitrary code. ### Expected behavior The code can be executed without any check.
https://github.com/langchain-ai/langchain/issues/4833
https://github.com/langchain-ai/langchain/pull/6992
61938a02a1e76fa6c6e8203c98a9344a179c810d
a2f191a32229256dd41deadf97786fe41ce04cbb
"2023-05-17T04:11:40Z"
python
"2023-07-05T19:56:01Z"
langchain/utilities/jira.py
"""Configuration for this pydantic object.""" extra = Extra.forbid def list(self) -> List[Dict]: return self.operations @root_validator() def validate_environment(cls, values: Dict) -> Dict: """Validate that api key and python package exists in environment.""" jira_username = get_from_dict_or_env(values, "jira_username", "JIRA_USERNAME") values["jira_username"] = jira_username jira_api_token = get_from_dict_or_env( values, "jira_api_token", "JIRA_API_TOKEN" ) values["jira_api_token"] = jira_api_token jira_instance_url = get_from_dict_or_env( values, "jira_instance_url", "JIRA_INSTANCE_URL" ) values["jira_instance_url"] = jira_instance_url try: from atlassian import Confluence, Jira except ImportError: raise ImportError( "atlassian-python-api is not installed. " "Please install it with `pip install atlassian-python-api`" ) jira = Jira( url=jira_instance_url, username=jira_username,
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,833
Arbitrary code execution in JiraAPIWrapper
### System Info LangChain version:0.0.171 windows 10 ### Who can help? _No response_ ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [X] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction 1. Set the environment variables for jira and openai ```python import os from langchain.utilities.jira import JiraAPIWrapper os.environ["JIRA_API_TOKEN"] = "your jira api token" os.environ["JIRA_USERNAME"] = "your username" os.environ["JIRA_INSTANCE_URL"] = "your url" os.environ["OPENAI_API_KEY"] = "your openai key" ``` 2. Run jira ```python jira = JiraAPIWrapper() output = jira.run('other',"exec(\"import os;print(os.popen('id').read())\")") ``` 3. The `id` command will be executed. Commands can be change to others and attackers can execute arbitrary code. ### Expected behavior The code can be executed without any check.
https://github.com/langchain-ai/langchain/issues/4833
https://github.com/langchain-ai/langchain/pull/6992
61938a02a1e76fa6c6e8203c98a9344a179c810d
a2f191a32229256dd41deadf97786fe41ce04cbb
"2023-05-17T04:11:40Z"
python
"2023-07-05T19:56:01Z"
langchain/utilities/jira.py
            password=jira_api_token,
            cloud=True,
        )

        confluence = Confluence(
            url=jira_instance_url,
            username=jira_username,
            password=jira_api_token,
            cloud=True,
        )

        values["jira"] = jira
        values["confluence"] = confluence

        return values

    def parse_issues(self, issues: Dict) -> List[dict]:
        parsed = []
        for issue in issues["issues"]:
            key = issue["key"]
            summary = issue["fields"]["summary"]
            created = issue["fields"]["created"][0:10]
            priority = issue["fields"]["priority"]["name"]
            status = issue["fields"]["status"]["name"]
            try:
                assignee = issue["fields"]["assignee"]["displayName"]
            except Exception:
                assignee = "None"
            rel_issues = {}
            for related_issue in issue["fields"]["issuelinks"]:
                if "inwardIssue" in related_issue.keys():
                    rel_type = related_issue["type"]["inward"]
                    rel_key = related_issue["inwardIssue"]["key"]
                    rel_summary = related_issue["inwardIssue"]["fields"]["summary"]
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,833
Arbitrary code execution in JiraAPIWrapper
### System Info LangChain version:0.0.171 windows 10 ### Who can help? _No response_ ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [X] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction 1. Set the environment variables for jira and openai ```python import os from langchain.utilities.jira import JiraAPIWrapper os.environ["JIRA_API_TOKEN"] = "your jira api token" os.environ["JIRA_USERNAME"] = "your username" os.environ["JIRA_INSTANCE_URL"] = "your url" os.environ["OPENAI_API_KEY"] = "your openai key" ``` 2. Run jira ```python jira = JiraAPIWrapper() output = jira.run('other',"exec(\"import os;print(os.popen('id').read())\")") ``` 3. The `id` command will be executed. Commands can be change to others and attackers can execute arbitrary code. ### Expected behavior The code can be executed without any check.
https://github.com/langchain-ai/langchain/issues/4833
https://github.com/langchain-ai/langchain/pull/6992
61938a02a1e76fa6c6e8203c98a9344a179c810d
a2f191a32229256dd41deadf97786fe41ce04cbb
"2023-05-17T04:11:40Z"
python
"2023-07-05T19:56:01Z"
langchain/utilities/jira.py
if "outwardIssue" in related_issue.keys(): rel_type = related_issue["type"]["outward"] rel_key = related_issue["outwardIssue"]["key"] rel_summary = related_issue["outwardIssue"]["fields"]["summary"] rel_issues = {"type": rel_type, "key": rel_key, "summary": rel_summary} parsed.append( { "key": key, "summary": summary, "created": created, "assignee": assignee, "priority": priority, "status": status, "related_issues": rel_issues, } ) return parsed def parse_projects(self, projects: List[dict]) -> List[dict]: parsed = [] for project in projects: id = project["id"] key = project["key"] name = project["name"] type = project["projectTypeKey"] style = project["style"] parsed.append( {"id": id, "key": key, "name": name, "type": type, "style": style} ) return parsed def search(self, query: str) -> str:
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,833
Arbitrary code execution in JiraAPIWrapper
### System Info LangChain version:0.0.171 windows 10 ### Who can help? _No response_ ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [X] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction 1. Set the environment variables for jira and openai ```python import os from langchain.utilities.jira import JiraAPIWrapper os.environ["JIRA_API_TOKEN"] = "your jira api token" os.environ["JIRA_USERNAME"] = "your username" os.environ["JIRA_INSTANCE_URL"] = "your url" os.environ["OPENAI_API_KEY"] = "your openai key" ``` 2. Run jira ```python jira = JiraAPIWrapper() output = jira.run('other',"exec(\"import os;print(os.popen('id').read())\")") ``` 3. The `id` command will be executed. Commands can be change to others and attackers can execute arbitrary code. ### Expected behavior The code can be executed without any check.
https://github.com/langchain-ai/langchain/issues/4833
https://github.com/langchain-ai/langchain/pull/6992
61938a02a1e76fa6c6e8203c98a9344a179c810d
a2f191a32229256dd41deadf97786fe41ce04cbb
"2023-05-17T04:11:40Z"
python
"2023-07-05T19:56:01Z"
langchain/utilities/jira.py
        issues = self.jira.jql(query)
        parsed_issues = self.parse_issues(issues)
        parsed_issues_str = (
            "Found " + str(len(parsed_issues)) + " issues:\n" + str(parsed_issues)
        )
        return parsed_issues_str

    def project(self) -> str:
        projects = self.jira.projects()
        parsed_projects = self.parse_projects(projects)
        parsed_projects_str = (
            "Found "
            + str(len(parsed_projects))
            + " projects:\n"
            + str(parsed_projects)
        )
        return parsed_projects_str

    def issue_create(self, query: str) -> str:
        try:
            import json
        except ImportError:
            raise ImportError(
                "json is not installed. Please install it with `pip install json`"
            )
        params = json.loads(query)
        return self.jira.issue_create(fields=dict(params))

    def page_create(self, query: str) -> str:
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,833
Arbitrary code execution in JiraAPIWrapper
### System Info LangChain version:0.0.171 windows 10 ### Who can help? _No response_ ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [X] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction 1. Set the environment variables for jira and openai ```python import os from langchain.utilities.jira import JiraAPIWrapper os.environ["JIRA_API_TOKEN"] = "your jira api token" os.environ["JIRA_USERNAME"] = "your username" os.environ["JIRA_INSTANCE_URL"] = "your url" os.environ["OPENAI_API_KEY"] = "your openai key" ``` 2. Run jira ```python jira = JiraAPIWrapper() output = jira.run('other',"exec(\"import os;print(os.popen('id').read())\")") ``` 3. The `id` command will be executed. Commands can be change to others and attackers can execute arbitrary code. ### Expected behavior The code can be executed without any check.
https://github.com/langchain-ai/langchain/issues/4833
https://github.com/langchain-ai/langchain/pull/6992
61938a02a1e76fa6c6e8203c98a9344a179c810d
a2f191a32229256dd41deadf97786fe41ce04cbb
"2023-05-17T04:11:40Z"
python
"2023-07-05T19:56:01Z"
langchain/utilities/jira.py
        try:
            import json
        except ImportError:
            raise ImportError(
                "json is not installed. Please install it with `pip install json`"
            )
        params = json.loads(query)
        return self.confluence.create_page(**dict(params))

    def other(self, query: str) -> str:
        context = {"self": self}
        exec(f"result = {query}", context)
        result = context["result"]
        return str(result)

    def run(self, mode: str, query: str) -> str:
        if mode == "jql":
            return self.search(query)
        elif mode == "get_projects":
            return self.project()
        elif mode == "create_issue":
            return self.issue_create(query)
        elif mode == "other":
            return self.other(query)
        elif mode == "create_page":
            return self.page_create(query)
        else:
            raise ValueError(f"Got unexpected mode {mode}")
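This is the pre-fix source: the `other` mode hands the query to `exec()`. As a hedged sketch (not the change that landed in the linked PR), one way to close the hole is to dispatch only through an explicit mapping of allowed modes, so there is no code-evaluation path at all. The names here (`SafeJiraDispatch`, the stub `search`/`project` bodies) are illustrative:

```python
class SafeJiraDispatch:
    """Toy dispatcher: modes map to methods, never to eval/exec."""

    def search(self, query: str) -> str:
        return f"jql: {query}"  # stand-in for a real JQL call

    def project(self) -> str:
        return "projects"  # stand-in for a real projects call

    def run(self, mode: str, query: str) -> str:
        handlers = {
            "jql": lambda: self.search(query),
            "get_projects": lambda: self.project(),
        }
        if mode not in handlers:  # an "other" mode is simply not offered
            raise ValueError(f"Got unexpected mode {mode!r}")
        return handlers[mode]()
```

With this shape, the exploit string from the report raises `ValueError` instead of executing: `SafeJiraDispatch().run("other", "exec(...)")` never reaches an interpreter.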
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,833
Arbitrary code execution in JiraAPIWrapper
### System Info LangChain version:0.0.171 windows 10 ### Who can help? _No response_ ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [X] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction 1. Set the environment variables for jira and openai ```python import os from langchain.utilities.jira import JiraAPIWrapper os.environ["JIRA_API_TOKEN"] = "your jira api token" os.environ["JIRA_USERNAME"] = "your username" os.environ["JIRA_INSTANCE_URL"] = "your url" os.environ["OPENAI_API_KEY"] = "your openai key" ``` 2. Run jira ```python jira = JiraAPIWrapper() output = jira.run('other',"exec(\"import os;print(os.popen('id').read())\")") ``` 3. The `id` command will be executed. Commands can be change to others and attackers can execute arbitrary code. ### Expected behavior The code can be executed without any check.
https://github.com/langchain-ai/langchain/issues/4833
https://github.com/langchain-ai/langchain/pull/6992
61938a02a1e76fa6c6e8203c98a9344a179c810d
a2f191a32229256dd41deadf97786fe41ce04cbb
"2023-05-17T04:11:40Z"
python
"2023-07-05T19:56:01Z"
tests/integration_tests/utilities/test_jira_api.py
"""Integration test for JIRA API Wrapper.""" from langchain.utilities.jira import JiraAPIWrapper def test_search() -> None: """Test for Searching issues on JIRA""" jql = "project = TP" jira = JiraAPIWrapper() output = jira.run("jql", jql) assert "issues" in output def test_getprojects() -> None:
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,833
Arbitrary code execution in JiraAPIWrapper
### System Info LangChain version:0.0.171 windows 10 ### Who can help? _No response_ ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [X] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction 1. Set the environment variables for jira and openai ```python import os from langchain.utilities.jira import JiraAPIWrapper os.environ["JIRA_API_TOKEN"] = "your jira api token" os.environ["JIRA_USERNAME"] = "your username" os.environ["JIRA_INSTANCE_URL"] = "your url" os.environ["OPENAI_API_KEY"] = "your openai key" ``` 2. Run jira ```python jira = JiraAPIWrapper() output = jira.run('other',"exec(\"import os;print(os.popen('id').read())\")") ``` 3. The `id` command will be executed. Commands can be change to others and attackers can execute arbitrary code. ### Expected behavior The code can be executed without any check.
https://github.com/langchain-ai/langchain/issues/4833
https://github.com/langchain-ai/langchain/pull/6992
61938a02a1e76fa6c6e8203c98a9344a179c810d
a2f191a32229256dd41deadf97786fe41ce04cbb
"2023-05-17T04:11:40Z"
python
"2023-07-05T19:56:01Z"
tests/integration_tests/utilities/test_jira_api.py
"""Test for getting projects on JIRA""" jira = JiraAPIWrapper() output = jira.run("get_projects", "") assert "projects" in output def test_create_ticket() -> None: """Test the Create Ticket Call that Creates a Issue/Ticket on JIRA.""" issue_string = ( '{"summary": "Test Summary", "description": "Test Description",' ' "issuetype": {"name": "Bug"}, "project": {"key": "TP"}}' ) jira = JiraAPIWrapper() output = jira.run("create_issue", issue_string) assert "id" in output assert "key" in output def test_create_confluence_page() -> None: """Test for getting projects on JIRA""" jira = JiraAPIWrapper() create_page_dict = ( '{"space": "ROC", "title":"This is the title",' '"body":"This is the body. You can use ' '<strong>HTML tags</strong>!"}' ) output = jira.run("create_page", create_page_dict) assert "type" in output assert "page" in output
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
6,365
PromptLayerChatOpenAI does not support the newest function calling feature
### System Info Python Version: 3.11 Langchain Version: 0.0.209 ### Who can help? @hwchase17 @agola11 ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction Steps to reproduce: ``` llm = PromptLayerChatOpenAI(model="gpt-3.5-turbo-0613", pl_tags=tags, return_pl_id=True) predicted_message = self.llm.predict_messages(messages, functions=self.functions, callbacks=callbacks) ``` `predicted_message.additional_kwargs` attribute appears to have a empty dict, because the `functions` kwarg not even passed to the parent class. ### Expected behavior Predicted AI Message should have a `function_call` key on `additional_kwargs` attribute.
https://github.com/langchain-ai/langchain/issues/6365
https://github.com/langchain-ai/langchain/pull/6366
e0cb3ea90c1f8ec26957ffca65c6e451d444c69d
09acbb84101bc6df373ca5a1d6c8d212bd3f577f
"2023-06-18T13:00:32Z"
python
"2023-07-06T17:16:04Z"
langchain/chat_models/promptlayer_openai.py
"""PromptLayer wrapper.""" import datetime from typing import Any, List, Mapping, Optional from langchain.callbacks.manager import ( AsyncCallbackManagerForLLMRun, CallbackManagerForLLMRun, ) from langchain.chat_models import ChatOpenAI from langchain.schema import ChatResult from langchain.schema.messages import BaseMessage class PromptLayerChatOpenAI(ChatOpenAI):
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
6,365
PromptLayerChatOpenAI does not support the newest function calling feature
### System Info Python Version: 3.11 Langchain Version: 0.0.209 ### Who can help? @hwchase17 @agola11 ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction Steps to reproduce: ``` llm = PromptLayerChatOpenAI(model="gpt-3.5-turbo-0613", pl_tags=tags, return_pl_id=True) predicted_message = self.llm.predict_messages(messages, functions=self.functions, callbacks=callbacks) ``` `predicted_message.additional_kwargs` attribute appears to have a empty dict, because the `functions` kwarg not even passed to the parent class. ### Expected behavior Predicted AI Message should have a `function_call` key on `additional_kwargs` attribute.
https://github.com/langchain-ai/langchain/issues/6365
https://github.com/langchain-ai/langchain/pull/6366
e0cb3ea90c1f8ec26957ffca65c6e451d444c69d
09acbb84101bc6df373ca5a1d6c8d212bd3f577f
"2023-06-18T13:00:32Z"
python
"2023-07-06T17:16:04Z"
langchain/chat_models/promptlayer_openai.py
"""Wrapper around OpenAI Chat large language models and PromptLayer. To use, you should have the ``openai`` and ``promptlayer`` python package installed, and the environment variable ``OPENAI_API_KEY`` and ``PROMPTLAYER_API_KEY`` set with your openAI API key and promptlayer key respectively. All parameters that can be passed to the OpenAI LLM can also be passed here. The PromptLayerChatOpenAI adds to optional parameters: ``pl_tags``: List of strings to tag the request with. ``return_pl_id``: If True, the PromptLayer request ID will be returned in the ``generation_info`` field of the ``Generation`` object. Example: .. code-block:: python from langchain.chat_models import PromptLayerChatOpenAI openai = PromptLayerChatOpenAI(model_name="gpt-3.5-turbo") """ pl_tags: Optional[List[str]] return_pl_id: Optional[bool] = False def _generate(
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
6,365
PromptLayerChatOpenAI does not support the newest function calling feature
### System Info Python Version: 3.11 Langchain Version: 0.0.209 ### Who can help? @hwchase17 @agola11 ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction Steps to reproduce: ``` llm = PromptLayerChatOpenAI(model="gpt-3.5-turbo-0613", pl_tags=tags, return_pl_id=True) predicted_message = self.llm.predict_messages(messages, functions=self.functions, callbacks=callbacks) ``` `predicted_message.additional_kwargs` attribute appears to have a empty dict, because the `functions` kwarg not even passed to the parent class. ### Expected behavior Predicted AI Message should have a `function_call` key on `additional_kwargs` attribute.
https://github.com/langchain-ai/langchain/issues/6365
https://github.com/langchain-ai/langchain/pull/6366
e0cb3ea90c1f8ec26957ffca65c6e451d444c69d
09acbb84101bc6df373ca5a1d6c8d212bd3f577f
"2023-06-18T13:00:32Z"
python
"2023-07-06T17:16:04Z"
langchain/chat_models/promptlayer_openai.py
        self,
        messages: List[BaseMessage],
        stop: Optional[List[str]] = None,
        run_manager: Optional[CallbackManagerForLLMRun] = None,
        **kwargs: Any,
    ) -> ChatResult:
        """Call ChatOpenAI generate and then call PromptLayer API to log the request."""
        from promptlayer.utils import get_api_key, promptlayer_api_request

        request_start_time = datetime.datetime.now().timestamp()
        generated_responses = super()._generate(messages, stop, run_manager)
        request_end_time = datetime.datetime.now().timestamp()
        message_dicts, params = super()._create_message_dicts(messages, stop)
        for i, generation in enumerate(generated_responses.generations):
            response_dict, params = super()._create_message_dicts(
                [generation.message], stop
            )
            params = {**params, **kwargs}
            pl_request_id = promptlayer_api_request(
                "langchain.PromptLayerChatOpenAI",
                "langchain",
                message_dicts,
                params,
                self.pl_tags,
                response_dict,
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
6,365
PromptLayerChatOpenAI does not support the newest function calling feature
### System Info Python Version: 3.11 Langchain Version: 0.0.209 ### Who can help? @hwchase17 @agola11 ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction Steps to reproduce: ``` llm = PromptLayerChatOpenAI(model="gpt-3.5-turbo-0613", pl_tags=tags, return_pl_id=True) predicted_message = self.llm.predict_messages(messages, functions=self.functions, callbacks=callbacks) ``` `predicted_message.additional_kwargs` attribute appears to have a empty dict, because the `functions` kwarg not even passed to the parent class. ### Expected behavior Predicted AI Message should have a `function_call` key on `additional_kwargs` attribute.
https://github.com/langchain-ai/langchain/issues/6365
https://github.com/langchain-ai/langchain/pull/6366
e0cb3ea90c1f8ec26957ffca65c6e451d444c69d
09acbb84101bc6df373ca5a1d6c8d212bd3f577f
"2023-06-18T13:00:32Z"
python
"2023-07-06T17:16:04Z"
langchain/chat_models/promptlayer_openai.py
                request_start_time,
                request_end_time,
                get_api_key(),
                return_pl_id=self.return_pl_id,
            )
            if self.return_pl_id:
                if generation.generation_info is None or not isinstance(
                    generation.generation_info, dict
                ):
                    generation.generation_info = {}
                generation.generation_info["pl_request_id"] = pl_request_id
        return generated_responses

    async def _agenerate(
        self,
        messages: List[BaseMessage],
        stop: Optional[List[str]] = None,
        run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,
        **kwargs: Any,
    ) -> ChatResult:
        """Call ChatOpenAI agenerate and then call PromptLayer to log."""
        from promptlayer.utils import get_api_key, promptlayer_api_request_async

        request_start_time = datetime.datetime.now().timestamp()
        generated_responses = await super()._agenerate(messages, stop, run_manager)
        request_end_time = datetime.datetime.now().timestamp()
        message_dicts, params = super()._create_message_dicts(messages, stop)
        for i, generation in enumerate(generated_responses.generations):
            response_dict, params = super()._create_message_dicts(
                [generation.message], stop
            )
            params = {**params, **kwargs}
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
6,365
PromptLayerChatOpenAI does not support the newest function calling feature
### System Info Python Version: 3.11 Langchain Version: 0.0.209 ### Who can help? @hwchase17 @agola11 ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction Steps to reproduce: ``` llm = PromptLayerChatOpenAI(model="gpt-3.5-turbo-0613", pl_tags=tags, return_pl_id=True) predicted_message = self.llm.predict_messages(messages, functions=self.functions, callbacks=callbacks) ``` `predicted_message.additional_kwargs` attribute appears to have a empty dict, because the `functions` kwarg not even passed to the parent class. ### Expected behavior Predicted AI Message should have a `function_call` key on `additional_kwargs` attribute.
https://github.com/langchain-ai/langchain/issues/6365
https://github.com/langchain-ai/langchain/pull/6366
e0cb3ea90c1f8ec26957ffca65c6e451d444c69d
09acbb84101bc6df373ca5a1d6c8d212bd3f577f
"2023-06-18T13:00:32Z"
python
"2023-07-06T17:16:04Z"
langchain/chat_models/promptlayer_openai.py
pl_request_id = await promptlayer_api_request_async( "langchain.PromptLayerChatOpenAI.async", "langchain", message_dicts, params, self.pl_tags, response_dict, request_start_time, request_end_time, get_api_key(), return_pl_id=self.return_pl_id, ) if self.return_pl_id: if generation.generation_info is None or not isinstance( generation.generation_info, dict ): generation.generation_info = {} generation.generation_info["pl_request_id"] = pl_request_id return generated_responses @property def _llm_type(self) -> str: return "promptlayer-openai-chat" @property def _identifying_params(self) -> Mapping[str, Any]: return { **super()._identifying_params, "pl_tags": self.pl_tags, "return_pl_id": self.return_pl_id, }
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
7,283
anthropic_version = packaging.version.parse(version("anthropic")) AttributeError: module 'packaging' has no attribute 'version'
### System Info When I initialise ChatAnthropic(), it got the error: anthropic_version = packaging.version.parse(version("anthropic")) AttributeError: module 'packaging' has no attribute 'version' ### Who can help? @hwchase17 @agola11 ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction from langchain.chat_models import ChatOpenAI, ChatAnthropic llm = ChatAnthropic() ### Expected behavior As shown above.
https://github.com/langchain-ai/langchain/issues/7283
https://github.com/langchain-ai/langchain/pull/7306
d642609a23219b1037f84492c2bc56777e90397a
bac56618b43912acf4970d72d2497507eb14ceb1
"2023-07-06T15:35:39Z"
python
"2023-07-06T23:35:42Z"
langchain/llms/anthropic.py
"""Wrapper around Anthropic APIs.""" import re import warnings from importlib.metadata import version from typing import Any, Callable, Dict, Generator, List, Mapping, Optional import packaging from pydantic import BaseModel, root_validator from langchain.callbacks.manager import ( AsyncCallbackManagerForLLMRun, CallbackManagerForLLMRun, ) from langchain.llms.base import LLM from langchain.utils import get_from_dict_or_env class _AnthropicCommon(BaseModel):
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
7,283
anthropic_version = packaging.version.parse(version("anthropic")) AttributeError: module 'packaging' has no attribute 'version'
### System Info When I initialise ChatAnthropic(), it got the error: anthropic_version = packaging.version.parse(version("anthropic")) AttributeError: module 'packaging' has no attribute 'version' ### Who can help? @hwchase17 @agola11 ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction from langchain.chat_models import ChatOpenAI, ChatAnthropic llm = ChatAnthropic() ### Expected behavior As shown above.
https://github.com/langchain-ai/langchain/issues/7283
https://github.com/langchain-ai/langchain/pull/7306
d642609a23219b1037f84492c2bc56777e90397a
bac56618b43912acf4970d72d2497507eb14ceb1
"2023-07-06T15:35:39Z"
python
"2023-07-06T23:35:42Z"
langchain/llms/anthropic.py
client: Any = None async_client: Any = None model: str = "claude-v1" """Model name to use.""" max_tokens_to_sample: int = 256 """Denotes the number of tokens to predict per generation.""" temperature: Optional[float] = None """A non-negative float that tunes the degree of randomness in generation.""" top_k: Optional[int] = None """Number of most likely tokens to consider at each step.""" top_p: Optional[float] = None """Total probability mass of tokens to consider at each step.""" streaming: bool = False """Whether to stream the results.""" default_request_timeout: Optional[float] = None """Timeout for requests to Anthropic Completion API. Default is 600 seconds.""" anthropic_api_url: Optional[str] = None anthropic_api_key: Optional[str] = None HUMAN_PROMPT: Optional[str] = None AI_PROMPT: Optional[str] = None count_tokens: Optional[Callable[[str], int]] = None @root_validator() def validate_environment(cls, values: Dict) -> Dict:
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
7,283
anthropic_version = packaging.version.parse(version("anthropic")) AttributeError: module 'packaging' has no attribute 'version'
### System Info When I initialise ChatAnthropic(), it got the error: anthropic_version = packaging.version.parse(version("anthropic")) AttributeError: module 'packaging' has no attribute 'version' ### Who can help? @hwchase17 @agola11 ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction from langchain.chat_models import ChatOpenAI, ChatAnthropic llm = ChatAnthropic() ### Expected behavior As shown above.
https://github.com/langchain-ai/langchain/issues/7283
https://github.com/langchain-ai/langchain/pull/7306
d642609a23219b1037f84492c2bc56777e90397a
bac56618b43912acf4970d72d2497507eb14ceb1
"2023-07-06T15:35:39Z"
python
"2023-07-06T23:35:42Z"
langchain/llms/anthropic.py
"""Validate that api key and python package exists in environment.""" values["anthropic_api_key"] = get_from_dict_or_env( values, "anthropic_api_key", "ANTHROPIC_API_KEY" ) values["anthropic_api_url"] = get_from_dict_or_env( values, "anthropic_api_url", "ANTHROPIC_API_URL", default="https://api.anthropic.com", )
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
7,283
anthropic_version = packaging.version.parse(version("anthropic")) AttributeError: module 'packaging' has no attribute 'version'
### System Info When I initialise ChatAnthropic(), it got the error: anthropic_version = packaging.version.parse(version("anthropic")) AttributeError: module 'packaging' has no attribute 'version' ### Who can help? @hwchase17 @agola11 ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction from langchain.chat_models import ChatOpenAI, ChatAnthropic llm = ChatAnthropic() ### Expected behavior As shown above.
https://github.com/langchain-ai/langchain/issues/7283
https://github.com/langchain-ai/langchain/pull/7306
d642609a23219b1037f84492c2bc56777e90397a
bac56618b43912acf4970d72d2497507eb14ceb1
"2023-07-06T15:35:39Z"
python
"2023-07-06T23:35:42Z"
langchain/llms/anthropic.py
try: import anthropic anthropic_version = packaging.version.parse(version("anthropic")) if anthropic_version < packaging.version.parse("0.3"): raise ValueError( f"Anthropic client version must be > 0.3, got {anthropic_version}. " f"To update the client, please run " f"`pip install -U anthropic`" ) values["client"] = anthropic.Anthropic( base_url=values["anthropic_api_url"], api_key=values["anthropic_api_key"], timeout=values["default_request_timeout"], ) values["async_client"] = anthropic.AsyncAnthropic( base_url=values["anthropic_api_url"], api_key=values["anthropic_api_key"], timeout=values["default_request_timeout"], ) values["HUMAN_PROMPT"] = anthropic.HUMAN_PROMPT values["AI_PROMPT"] = anthropic.AI_PROMPT values["count_tokens"] = values["client"].count_tokens except ImportError: raise ImportError( "Could not import anthropic python package. " "Please install it with `pip install anthropic`." ) return values @property def _default_params(self) -> Mapping[str, Any]:
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
7,283
anthropic_version = packaging.version.parse(version("anthropic")) AttributeError: module 'packaging' has no attribute 'version'
### System Info When I initialise ChatAnthropic(), it got the error: anthropic_version = packaging.version.parse(version("anthropic")) AttributeError: module 'packaging' has no attribute 'version' ### Who can help? @hwchase17 @agola11 ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction from langchain.chat_models import ChatOpenAI, ChatAnthropic llm = ChatAnthropic() ### Expected behavior As shown above.
https://github.com/langchain-ai/langchain/issues/7283
https://github.com/langchain-ai/langchain/pull/7306
d642609a23219b1037f84492c2bc56777e90397a
bac56618b43912acf4970d72d2497507eb14ceb1
"2023-07-06T15:35:39Z"
python
"2023-07-06T23:35:42Z"
langchain/llms/anthropic.py
"""Get the default parameters for calling Anthropic API.""" d = { "max_tokens_to_sample": self.max_tokens_to_sample, "model": self.model, } if self.temperature is not None: d["temperature"] = self.temperature if self.top_k is not None: d["top_k"] = self.top_k if self.top_p is not None: d["top_p"] = self.top_p return d @property def _identifying_params(self) -> Mapping[str, Any]: """Get the identifying parameters.""" return {**{}, **self._default_params} def _get_anthropic_stop(self, stop: Optional[List[str]] = None) -> List[str]: if not self.HUMAN_PROMPT or not self.AI_PROMPT: raise NameError("Please ensure the anthropic package is loaded") if stop is None: stop = [] stop.extend([self.HUMAN_PROMPT]) return stop class Anthropic(LLM, _AnthropicCommon):
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
7,283
anthropic_version = packaging.version.parse(version("anthropic")) AttributeError: module 'packaging' has no attribute 'version'
### System Info When I initialise ChatAnthropic(), it got the error: anthropic_version = packaging.version.parse(version("anthropic")) AttributeError: module 'packaging' has no attribute 'version' ### Who can help? @hwchase17 @agola11 ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction from langchain.chat_models import ChatOpenAI, ChatAnthropic llm = ChatAnthropic() ### Expected behavior As shown above.
https://github.com/langchain-ai/langchain/issues/7283
https://github.com/langchain-ai/langchain/pull/7306
d642609a23219b1037f84492c2bc56777e90397a
bac56618b43912acf4970d72d2497507eb14ceb1
"2023-07-06T15:35:39Z"
python
"2023-07-06T23:35:42Z"
langchain/llms/anthropic.py
r"""Wrapper around Anthropic's large language models. To use, you should have the ``anthropic`` python package installed, and the environment variable ``ANTHROPIC_API_KEY`` set with your API key, or pass it as a named parameter to the constructor. Example: .. code-block:: python import anthropic from langchain.llms import Anthropic model = Anthropic(model="<model_name>", anthropic_api_key="my-api-key") # Simplest invocation, automatically wrapped with HUMAN_PROMPT # and AI_PROMPT. response = model("What are the biggest risks facing humanity?") # Or if you want to use the chat mode, build a few-shot-prompt, or # put words in the Assistant's mouth, use HUMAN_PROMPT and AI_PROMPT: raw_prompt = "What are the biggest risks facing humanity?" prompt = f"{anthropic.HUMAN_PROMPT} {raw_prompt}{anthropic.AI_PROMPT}" response = model(prompt) """ @root_validator() def raise_warning(cls, values: Dict) -> Dict: """Raise warning that this class is deprecated.""" warnings.warn( "This Anthropic LLM is deprecated. " "Please use `from langchain.chat_models import ChatAnthropic` instead" ) return values @property def _llm_type(self) -> str:
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
7,283
anthropic_version = packaging.version.parse(version("anthropic")) AttributeError: module 'packaging' has no attribute 'version'
### System Info When I initialise ChatAnthropic(), it got the error: anthropic_version = packaging.version.parse(version("anthropic")) AttributeError: module 'packaging' has no attribute 'version' ### Who can help? @hwchase17 @agola11 ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction from langchain.chat_models import ChatOpenAI, ChatAnthropic llm = ChatAnthropic() ### Expected behavior As shown above.
https://github.com/langchain-ai/langchain/issues/7283
https://github.com/langchain-ai/langchain/pull/7306
d642609a23219b1037f84492c2bc56777e90397a
bac56618b43912acf4970d72d2497507eb14ceb1
"2023-07-06T15:35:39Z"
python
"2023-07-06T23:35:42Z"
langchain/llms/anthropic.py
"""Return type of llm.""" return "anthropic-llm" def _wrap_prompt(self, prompt: str) -> str: if not self.HUMAN_PROMPT or not self.AI_PROMPT: raise NameError("Please ensure the anthropic package is loaded") if prompt.startswith(self.HUMAN_PROMPT): return prompt corrected_prompt, n_subs = re.subn(r"^\n*Human:", self.HUMAN_PROMPT, prompt) if n_subs == 1: return corrected_prompt return f"{self.HUMAN_PROMPT} {prompt}{self.AI_PROMPT} Sure, here you go:\n" def _call( self, prompt: str, stop: Optional[List[str]] = None, run_manager: Optional[CallbackManagerForLLMRun] = None, **kwargs: Any, ) -> str: r"""Call out to Anthropic's completion endpoint. Args: prompt: The prompt to pass into the model. stop: Optional list of stop words to use when generating. Returns: The string generated by the model. Example:
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
7,283
anthropic_version = packaging.version.parse(version("anthropic")) AttributeError: module 'packaging' has no attribute 'version'
### System Info When I initialise ChatAnthropic(), it got the error: anthropic_version = packaging.version.parse(version("anthropic")) AttributeError: module 'packaging' has no attribute 'version' ### Who can help? @hwchase17 @agola11 ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction from langchain.chat_models import ChatOpenAI, ChatAnthropic llm = ChatAnthropic() ### Expected behavior As shown above.
https://github.com/langchain-ai/langchain/issues/7283
https://github.com/langchain-ai/langchain/pull/7306
d642609a23219b1037f84492c2bc56777e90397a
bac56618b43912acf4970d72d2497507eb14ceb1
"2023-07-06T15:35:39Z"
python
"2023-07-06T23:35:42Z"
langchain/llms/anthropic.py
.. code-block:: python prompt = "What are the biggest risks facing humanity?" prompt = f"\n\nHuman: {prompt}\n\nAssistant:" response = model(prompt) """ stop = self._get_anthropic_stop(stop) params = {**self._default_params, **kwargs} if self.streaming: stream_resp = self.client.completions.create( prompt=self._wrap_prompt(prompt), stop_sequences=stop, stream=True, **params, ) current_completion = "" for data in stream_resp: delta = data.completion current_completion += delta if run_manager: run_manager.on_llm_new_token( delta, ) return current_completion response = self.client.completions.create( prompt=self._wrap_prompt(prompt), stop_sequences=stop, **params, ) return response.completion async def _acall(
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
7,283
anthropic_version = packaging.version.parse(version("anthropic")) AttributeError: module 'packaging' has no attribute 'version'
### System Info When I initialise ChatAnthropic(), it got the error: anthropic_version = packaging.version.parse(version("anthropic")) AttributeError: module 'packaging' has no attribute 'version' ### Who can help? @hwchase17 @agola11 ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction from langchain.chat_models import ChatOpenAI, ChatAnthropic llm = ChatAnthropic() ### Expected behavior As shown above.
https://github.com/langchain-ai/langchain/issues/7283
https://github.com/langchain-ai/langchain/pull/7306
d642609a23219b1037f84492c2bc56777e90397a
bac56618b43912acf4970d72d2497507eb14ceb1
"2023-07-06T15:35:39Z"
python
"2023-07-06T23:35:42Z"
langchain/llms/anthropic.py
self, prompt: str, stop: Optional[List[str]] = None, run_manager: Optional[AsyncCallbackManagerForLLMRun] = None, **kwargs: Any, ) -> str: """Call out to Anthropic's completion endpoint asynchronously.""" stop = self._get_anthropic_stop(stop) params = {**self._default_params, **kwargs} if self.streaming: stream_resp = await self.async_client.completions.create( prompt=self._wrap_prompt(prompt), stop_sequences=stop, stream=True, **params, ) current_completion = "" async for data in stream_resp: delta = data.completion current_completion += delta if run_manager: await run_manager.on_llm_new_token(delta) return current_completion response = await self.async_client.completions.create( prompt=self._wrap_prompt(prompt), stop_sequences=stop, **params, ) return response.completion def stream(self, prompt: str, stop: Optional[List[str]] = None) -> Generator:
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
7,283
anthropic_version = packaging.version.parse(version("anthropic")) AttributeError: module 'packaging' has no attribute 'version'
### System Info When I initialise ChatAnthropic(), it got the error: anthropic_version = packaging.version.parse(version("anthropic")) AttributeError: module 'packaging' has no attribute 'version' ### Who can help? @hwchase17 @agola11 ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction from langchain.chat_models import ChatOpenAI, ChatAnthropic llm = ChatAnthropic() ### Expected behavior As shown above.
https://github.com/langchain-ai/langchain/issues/7283
https://github.com/langchain-ai/langchain/pull/7306
d642609a23219b1037f84492c2bc56777e90397a
bac56618b43912acf4970d72d2497507eb14ceb1
"2023-07-06T15:35:39Z"
python
"2023-07-06T23:35:42Z"
langchain/llms/anthropic.py
r"""Call Anthropic completion_stream and return the resulting generator. BETA: this is a beta feature while we figure out the right abstraction. Once that happens, this interface could change. Args: prompt: The prompt to pass into the model. stop: Optional list of stop words to use when generating. Returns: A generator representing the stream of tokens from Anthropic. Example: .. code-block:: python prompt = "Write a poem about a stream." prompt = f"\n\nHuman: {prompt}\n\nAssistant:" generator = anthropic.stream(prompt) for token in generator: yield token """ stop = self._get_anthropic_stop(stop) return self.client.completions.create( prompt=self._wrap_prompt(prompt), stop_sequences=stop, stream=True, **self._default_params, ) def get_num_tokens(self, text: str) -> int: """Calculate number of tokens.""" if not self.count_tokens: raise NameError("Please ensure the anthropic package is loaded") return self.count_tokens(text)
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
7,472
Pinecone: Support starter tier
### Feature request Adapt the pinecone vectorstore to support upcoming starter tier. The changes are related to removing namespaces and `delete by metadata` feature. ### Motivation Indexes in upcoming Pinecone V4 won't support: * namespaces * `configure_index()` * delete by metadata * `describe_index()` with metadata filtering * `metadata_config` parameter to `create_index()` * `delete()` with the `deleteAll` parameter ### Your contribution I'll do it.
https://github.com/langchain-ai/langchain/issues/7472
https://github.com/langchain-ai/langchain/pull/7473
5debd5043e61d29efea661c20818b48a0f39e5a6
9d13dcd17c2dfab8f087bcc37e99f1181dfe5c63
"2023-07-10T10:19:16Z"
python
"2023-07-10T15:39:47Z"
langchain/vectorstores/pinecone.py
"""Wrapper around Pinecone vector database.""" from __future__ import annotations import logging import uuid from typing import Any, Callable, Iterable, List, Optional, Tuple import numpy as np from langchain.docstore.document import Document from langchain.embeddings.base import Embeddings from langchain.vectorstores.base import VectorStore from langchain.vectorstores.utils import maximal_marginal_relevance logger = logging.getLogger(__name__) class Pinecone(VectorStore): """Wrapper around Pinecone vector database. To use, you should have the ``pinecone-client`` python package installed. Example: .. code-block:: python from langchain.vectorstores import Pinecone from langchain.embeddings.openai import OpenAIEmbeddings import pinecone # The environment should be the one specified next to the API key # in your Pinecone console pinecone.init(api_key="***", environment="...") index = pinecone.Index("langchain-demo") embeddings = OpenAIEmbeddings() vectorstore = Pinecone(index, embeddings.embed_query, "text") """ def __init__(
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
7,472
Pinecone: Support starter tier
### Feature request Adapt the pinecone vectorstore to support upcoming starter tier. The changes are related to removing namespaces and `delete by metadata` feature. ### Motivation Indexes in upcoming Pinecone V4 won't support: * namespaces * `configure_index()` * delete by metadata * `describe_index()` with metadata filtering * `metadata_config` parameter to `create_index()` * `delete()` with the `deleteAll` parameter ### Your contribution I'll do it.
https://github.com/langchain-ai/langchain/issues/7472
https://github.com/langchain-ai/langchain/pull/7473
5debd5043e61d29efea661c20818b48a0f39e5a6
9d13dcd17c2dfab8f087bcc37e99f1181dfe5c63
"2023-07-10T10:19:16Z"
python
"2023-07-10T15:39:47Z"
langchain/vectorstores/pinecone.py
self, index: Any, embedding_function: Callable, text_key: str, namespace: Optional[str] = None, ): """Initialize with Pinecone client.""" try: import pinecone except ImportError: raise ValueError( "Could not import pinecone python package. " "Please install it with `pip install pinecone-client`." ) if not isinstance(index, pinecone.index.Index): raise ValueError( f"client should be an instance of pinecone.index.Index, " f"got {type(index)}" ) self._index = index self._embedding_function = embedding_function self._text_key = text_key self._namespace = namespace def add_texts(
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
7,472
Pinecone: Support starter tier
### Feature request Adapt the pinecone vectorstore to support upcoming starter tier. The changes are related to removing namespaces and `delete by metadata` feature. ### Motivation Indexes in upcoming Pinecone V4 won't support: * namespaces * `configure_index()` * delete by metadata * `describe_index()` with metadata filtering * `metadata_config` parameter to `create_index()` * `delete()` with the `deleteAll` parameter ### Your contribution I'll do it.
https://github.com/langchain-ai/langchain/issues/7472
https://github.com/langchain-ai/langchain/pull/7473
5debd5043e61d29efea661c20818b48a0f39e5a6
9d13dcd17c2dfab8f087bcc37e99f1181dfe5c63
"2023-07-10T10:19:16Z"
python
"2023-07-10T15:39:47Z"
langchain/vectorstores/pinecone.py
self, texts: Iterable[str], metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, namespace: Optional[str] = None, batch_size: int = 32, **kwargs: Any, ) -> List[str]: """Run more texts through the embeddings and add to the vectorstore. Args: texts: Iterable of strings to add to the vectorstore. metadatas: Optional list of metadatas associated with the texts. ids: Optional list of ids to associate with the texts. namespace: Optional pinecone namespace to add the texts to. Returns: List of ids from adding the texts into the vectorstore. """ if namespace is None: namespace = self._namespace docs = [] ids = ids or [str(uuid.uuid4()) for _ in texts] for i, text in enumerate(texts): embedding = self._embedding_function(text) metadata = metadatas[i] if metadatas else {} metadata[self._text_key] = text docs.append((ids[i], embedding, metadata)) self._index.upsert(vectors=docs, namespace=namespace, batch_size=batch_size) return ids
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
7,472
Pinecone: Support starter tier
### Feature request Adapt the pinecone vectorstore to support upcoming starter tier. The changes are related to removing namespaces and `delete by metadata` feature. ### Motivation Indexes in upcoming Pinecone V4 won't support: * namespaces * `configure_index()` * delete by metadata * `describe_index()` with metadata filtering * `metadata_config` parameter to `create_index()` * `delete()` with the `deleteAll` parameter ### Your contribution I'll do it.
https://github.com/langchain-ai/langchain/issues/7472
https://github.com/langchain-ai/langchain/pull/7473
5debd5043e61d29efea661c20818b48a0f39e5a6
9d13dcd17c2dfab8f087bcc37e99f1181dfe5c63
"2023-07-10T10:19:16Z"
python
"2023-07-10T15:39:47Z"
langchain/vectorstores/pinecone.py
def similarity_search_with_score( self, query: str, k: int = 4, filter: Optional[dict] = None, namespace: Optional[str] = None, ) -> List[Tuple[Document, float]]: """Return pinecone documents most similar to query, along with scores. Args: query: Text to look up documents similar to. k: Number of Documents to return. Defaults to 4. filter: Dictionary of argument(s) to filter on metadata namespace: Namespace to search in. Default will search in '' namespace. Returns: List of Documents most similar to the query and score for each """ if namespace is None: namespace = self._namespace query_obj = self._embedding_function(query) docs = [] results = self._index.query( [query_obj], top_k=k, include_metadata=True, namespace=namespace, filter=filter, ) for res in results["matches"]: metadata = res["metadata"] if self._text_key in metadata:
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
7,472
Pinecone: Support starter tier
### Feature request Adapt the pinecone vectorstore to support upcoming starter tier. The changes are related to removing namespaces and `delete by metadata` feature. ### Motivation Indexes in upcoming Pinecone V4 won't support: * namespaces * `configure_index()` * delete by metadata * `describe_index()` with metadata filtering * `metadata_config` parameter to `create_index()` * `delete()` with the `deleteAll` parameter ### Your contribution I'll do it.
https://github.com/langchain-ai/langchain/issues/7472
https://github.com/langchain-ai/langchain/pull/7473
5debd5043e61d29efea661c20818b48a0f39e5a6
9d13dcd17c2dfab8f087bcc37e99f1181dfe5c63
"2023-07-10T10:19:16Z"
python
"2023-07-10T15:39:47Z"
langchain/vectorstores/pinecone.py
text = metadata.pop(self._text_key) score = res["score"] docs.append((Document(page_content=text, metadata=metadata), score)) else: logger.warning( f"Found document with no `{self._text_key}` key. Skipping." ) return docs def similarity_search( self, query: str, k: int = 4, filter: Optional[dict] = None, namespace: Optional[str] = None, **kwargs: Any, ) -> List[Document]: """Return pinecone documents most similar to query. Args: query: Text to look up documents similar to. k: Number of Documents to return. Defaults to 4. filter: Dictionary of argument(s) to filter on metadata namespace: Namespace to search in. Default will search in '' namespace. Returns: List of Documents most similar to the query and score for each """ docs_and_scores = self.similarity_search_with_score( query, k=k, filter=filter, namespace=namespace, **kwargs ) return [doc for doc, _ in docs_and_scores] def _similarity_search_with_relevance_scores(
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
7,472
Pinecone: Support starter tier
### Feature request Adapt the pinecone vectorstore to support upcoming starter tier. The changes are related to removing namespaces and `delete by metadata` feature. ### Motivation Indexes in upcoming Pinecone V4 won't support: * namespaces * `configure_index()` * delete by metadata * `describe_index()` with metadata filtering * `metadata_config` parameter to `create_index()` * `delete()` with the `deleteAll` parameter ### Your contribution I'll do it.
https://github.com/langchain-ai/langchain/issues/7472
https://github.com/langchain-ai/langchain/pull/7473
5debd5043e61d29efea661c20818b48a0f39e5a6
9d13dcd17c2dfab8f087bcc37e99f1181dfe5c63
"2023-07-10T10:19:16Z"
python
"2023-07-10T15:39:47Z"
langchain/vectorstores/pinecone.py
self, query: str, k: int = 4, **kwargs: Any, ) -> List[Tuple[Document, float]]: kwargs.pop("score_threshold", None) return self.similarity_search_with_score(query, k, **kwargs) def max_marginal_relevance_search_by_vector( self, embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, filter: Optional[dict] = None, namespace: Optional[str] = None, **kwargs: Any, ) -> List[Document]: """Return docs selected using the maximal marginal relevance. Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents. Args: embedding: Embedding to look up documents similar to.
            k: Number of Documents to return. Defaults to 4.
            fetch_k: Number of Documents to fetch to pass to MMR algorithm.
            lambda_mult: Number between 0 and 1 that determines the degree
                of diversity among the results with 0 corresponding
                to maximum diversity and 1 to minimum diversity.
                Defaults to 0.5.
        Returns:
            List of Documents selected by maximal marginal relevance.
        """
        if namespace is None:
            namespace = self._namespace
        results = self._index.query(
            [embedding],
            top_k=fetch_k,
            include_values=True,
            include_metadata=True,
            namespace=namespace,
            filter=filter,
        )
        mmr_selected = maximal_marginal_relevance(
            np.array([embedding], dtype=np.float32),
            [item["values"] for item in results["matches"]],
            k=k,
            lambda_mult=lambda_mult,
        )
        selected = [results["matches"][i]["metadata"] for i in mmr_selected]
        return [
            Document(page_content=metadata.pop(self._text_key), metadata=metadata)
            for metadata in selected
        ]
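The `maximal_marginal_relevance` helper called in this chunk is imported from langchain's utilities. As a rough, self-contained illustration of what that greedy selection computes, here is a sketch under simplifying assumptions (`mmr_sketch` is an illustrative name, not the library function; it assumes dense float vectors and cosine similarity):

```python
import numpy as np
from typing import List


def mmr_sketch(
    query: np.ndarray, candidates: np.ndarray, k: int = 4, lambda_mult: float = 0.5
) -> List[int]:
    """Greedy MMR: repeatedly pick the candidate that best trades off
    similarity to the query against redundancy with what is already picked."""
    # Normalize so plain dot products become cosine similarities.
    q = query / np.linalg.norm(query)
    c = candidates / np.linalg.norm(candidates, axis=1, keepdims=True)
    sim_to_query = c @ q
    # Seed with the single most query-similar candidate.
    selected = [int(np.argmax(sim_to_query))]
    while len(selected) < min(k, len(c)):
        best_idx, best_score = -1, -np.inf
        for i in range(len(c)):
            if i in selected:
                continue
            # Redundancy: highest similarity to any already-selected candidate.
            redundancy = max(float(c[i] @ c[j]) for j in selected)
            score = lambda_mult * sim_to_query[i] - (1 - lambda_mult) * redundancy
            if score > best_score:
                best_idx, best_score = i, score
        selected.append(best_idx)
    return selected
```

A low `lambda_mult` favors diversity (picking dissimilar documents), a high one favors raw query similarity — which is why the vectorstore exposes it as a tuning knob.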
    def max_marginal_relevance_search(
        self,
        query: str,
        k: int = 4,
        fetch_k: int = 20,
        lambda_mult: float = 0.5,
        filter: Optional[dict] = None,
        namespace: Optional[str] = None,
        **kwargs: Any,
    ) -> List[Document]:
        """Return docs selected using the maximal marginal relevance.

        Maximal marginal relevance optimizes for similarity to query AND
        diversity among selected documents.

        Args:
            query: Text to look up documents similar to.
            k: Number of Documents to return. Defaults to 4.
            fetch_k: Number of Documents to fetch to pass to MMR algorithm.
            lambda_mult: Number between 0 and 1 that determines the degree
                of diversity among the results with 0 corresponding
                to maximum diversity and 1 to minimum diversity.
                Defaults to 0.5.
        Returns:
            List of Documents selected by maximal marginal relevance.
        """
        embedding = self._embedding_function(query)
        return self.max_marginal_relevance_search_by_vector(
            embedding, k, fetch_k, lambda_mult, filter, namespace
        )

    @classmethod
    def from_texts(
        cls,
        texts: List[str],
        embedding: Embeddings,
        metadatas: Optional[List[dict]] = None,
        ids: Optional[List[str]] = None,
        batch_size: int = 32,
        text_key: str = "text",
        index_name: Optional[str] = None,
        namespace: Optional[str] = None,
        **kwargs: Any,
    ) -> Pinecone:
        """Construct Pinecone wrapper from raw documents.

        This is a user friendly interface that:
            1. Embeds documents.
            2. Adds the documents to a provided Pinecone index

        This is intended to be a quick way to get started.
        Example:
            .. code-block:: python

                from langchain import Pinecone
                from langchain.embeddings import OpenAIEmbeddings
                import pinecone

                # The environment should be the one specified next to the API key
                # in your Pinecone console
                pinecone.init(api_key="***", environment="...")
                embeddings = OpenAIEmbeddings()
                pinecone = Pinecone.from_texts(
                    texts,
                    embeddings,
                    index_name="langchain-demo"
                )
        """
        try:
            import pinecone
        except ImportError:
            raise ValueError(
                "Could not import pinecone python package. "
                "Please install it with `pip install pinecone-client`."
            )

        indexes = pinecone.list_indexes()

        if index_name in indexes:
            index = pinecone.Index(index_name)
        elif len(indexes) == 0:
            raise ValueError(
                "No active indexes found in your Pinecone project, "
                "are you sure you're using the right API key and environment?"
            )
        else:
            raise ValueError(
                f"Index '{index_name}' not found in your Pinecone project. "
                f"Did you mean one of the following indexes: {', '.join(indexes)}"
            )

        for i in range(0, len(texts), batch_size):
            i_end = min(i + batch_size, len(texts))
            lines_batch = texts[i:i_end]
            if ids:
                ids_batch = ids[i:i_end]
            else:
                ids_batch = [str(uuid.uuid4()) for n in range(i, i_end)]
            embeds = embedding.embed_documents(lines_batch)
            if metadatas:
                metadata = metadatas[i:i_end]
            else:
                metadata = [{} for _ in range(i, i_end)]
            for j, line in enumerate(lines_batch):
                metadata[j][text_key] = line
            to_upsert = zip(ids_batch, embeds, metadata)
            index.upsert(vectors=list(to_upsert), namespace=namespace)
        return cls(index, embedding.embed_query, text_key, namespace)

    @classmethod
    def from_existing_index(
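The upsert loop in `from_texts` mixes batching, id generation, and metadata preparation. The bookkeeping can be sketched in isolation, without the Pinecone client, to show how each vector ends up as an `(id, embedding, metadata)` tuple with the raw text stored under `text_key` (`prepare_upsert_batches` is an illustrative name, not part of langchain; the metadata copy is an addition to avoid mutating the caller's dicts):

```python
import uuid
from typing import List, Optional, Tuple


def prepare_upsert_batches(
    texts: List[str],
    embeds: List[List[float]],
    metadatas: Optional[List[dict]] = None,
    ids: Optional[List[str]] = None,
    text_key: str = "text",
    batch_size: int = 32,
) -> List[List[Tuple[str, List[float], dict]]]:
    """Assemble (id, vector, metadata) batches the way the from_texts loop does."""
    batches = []
    for i in range(0, len(texts), batch_size):
        i_end = min(i + batch_size, len(texts))
        lines_batch = texts[i:i_end]
        # Generate uuid4 ids when the caller did not supply any.
        if ids:
            ids_batch = ids[i:i_end]
        else:
            ids_batch = [str(uuid.uuid4()) for _ in range(i, i_end)]
        # Copy metadata dicts so the caller's input is not mutated.
        if metadatas:
            metadata = [dict(m) for m in metadatas[i:i_end]]
        else:
            metadata = [{} for _ in range(i, i_end)]
        # Store the raw text alongside the vector so searches can recover it.
        for j, line in enumerate(lines_batch):
            metadata[j][text_key] = line
        batches.append(list(zip(ids_batch, embeds[i:i_end], metadata)))
    return batches
```

Each inner list corresponds to one `index.upsert(...)` call in the real method.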
        cls,
        index_name: str,
        embedding: Embeddings,
        text_key: str = "text",
        namespace: Optional[str] = None,
    ) -> Pinecone:
        """Load pinecone vectorstore from index name."""
        try:
            import pinecone
        except ImportError:
            raise ValueError(
                "Could not import pinecone python package. "
                "Please install it with `pip install pinecone-client`."
            )

        return cls(
            pinecone.Index(index_name), embedding.embed_query, text_key, namespace
        )

    def delete(
        self,
        ids: Optional[List[str]] = None,
        delete_all: Optional[bool] = None,
        namespace: Optional[str] = None,
        filter: Optional[dict] = None,
        **kwargs: Any,
    ) -> None:
        """Delete by vector IDs or filter.

        Args:
            ids: List of ids to delete.
            filter: Dictionary of conditions to filter vectors to delete.
        """
        if namespace is None:
            namespace = self._namespace
        if delete_all:
            self._index.delete(delete_all=True, namespace=namespace, **kwargs)
        elif ids is not None:
            chunk_size = 1000
            for i in range(0, len(ids), chunk_size):
                chunk = ids[i : i + chunk_size]
                self._index.delete(ids=chunk, namespace=namespace, **kwargs)
        elif filter is not None:
            self._index.delete(filter=filter, namespace=namespace, **kwargs)
        else:
            raise ValueError("Either ids, delete_all, or filter must be provided.")

        return None
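Because `delete` dispatches on three mutually exclusive modes and chunks id lists into batches of 1000, that behaviour can be exercised with a small test double instead of a live index. This is a sketch under assumptions — `RecordingIndex` and `delete_vectors` are illustrative names, not part of langchain or the pinecone client:

```python
from typing import List, Optional


class RecordingIndex:
    """Tiny stand-in for pinecone.Index that just records delete() calls."""

    def __init__(self) -> None:
        self.calls: list = []

    def delete(self, **kwargs) -> None:
        self.calls.append(kwargs)


def delete_vectors(
    index,
    ids: Optional[List[str]] = None,
    delete_all: Optional[bool] = None,
    namespace: Optional[str] = None,
    filter: Optional[dict] = None,
) -> None:
    """Same dispatch order as the method above: delete_all wins,
    then ids (in chunks of 1000), then a metadata filter."""
    if delete_all:
        index.delete(delete_all=True, namespace=namespace)
    elif ids is not None:
        chunk_size = 1000
        for i in range(0, len(ids), chunk_size):
            index.delete(ids=ids[i : i + chunk_size], namespace=namespace)
    elif filter is not None:
        index.delete(filter=filter, namespace=namespace)
    else:
        raise ValueError("Either ids, delete_all, or filter must be provided.")
```

For the starter tier discussed in this issue, the `delete_all` and `filter` branches are the ones that would need to be gated, since delete-by-metadata and `deleteAll` are unsupported there.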
tests/integration_tests/vectorstores/test_pinecone.py
import importlib
import os
import uuid
from typing import List

import pinecone
import pytest

from langchain.docstore.document import Document
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores.pinecone import Pinecone

index_name = "langchain-test-index"
namespace_name = "langchain-test-namespace"
dimension = 1536


def reset_pinecone() -> None:
    assert os.environ.get("PINECONE_API_KEY") is not None
    assert os.environ.get("PINECONE_ENVIRONMENT") is not None

    import pinecone

    importlib.reload(pinecone)

    pinecone.init(
        api_key=os.environ.get("PINECONE_API_KEY"),
        environment=os.environ.get("PINECONE_ENVIRONMENT"),
    )


class TestPinecone:
    index: pinecone.Index

    @classmethod
    def setup_class(cls) -> None:
        reset_pinecone()

        cls.index = pinecone.Index(index_name)

        if index_name in pinecone.list_indexes():
            index_stats = cls.index.describe_index_stats()
            if index_stats["dimension"] == dimension:
                index_stats = cls.index.describe_index_stats()
                for _namespace_name in index_stats["namespaces"].keys():
                    cls.index.delete(delete_all=True, namespace=_namespace_name)
            else:
                pinecone.delete_index(index_name)
                pinecone.create_index(name=index_name, dimension=dimension)
        else:
            pinecone.create_index(name=index_name, dimension=dimension)

        index_stats = cls.index.describe_index_stats()
        assert index_stats["dimension"] == dimension
        if index_stats["namespaces"].get(namespace_name) is not None:
            assert index_stats["namespaces"][namespace_name]["vector_count"] == 0

    @classmethod
    def teardown_class(cls) -> None: