Dataset schema (one row per code chunk; column name, dtype, and viewer statistics):

| Column | Dtype | Values / lengths |
| --- | --- | --- |
| status | stringclasses | 1 value |
| repo_name | stringclasses | 31 values |
| repo_url | stringclasses | 31 values |
| issue_id | int64 | 1 to 104k |
| title | stringlengths | 4 to 233 |
| body | stringlengths | 0 to 186k |
| issue_url | stringlengths | 38 to 56 |
| pull_url | stringlengths | 37 to 54 |
| before_fix_sha | stringlengths | 40 to 40 |
| after_fix_sha | stringlengths | 40 to 40 |
| report_datetime | unknown | n/a |
| language | stringclasses | 5 values |
| commit_datetime | unknown | n/a |
| updated_file | stringlengths | 7 to 188 |
| chunk_content | stringlengths | 1 to 1.03M |
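For orientation, here is a sketch of how rows with this schema could be loaded and inspected with the `datasets` library. The dataset identifier below is a hypothetical placeholder, since this extract does not name the dataset on the Hub:

```python
from datasets import load_dataset

# "org/bug-fix-chunks" is a hypothetical identifier; substitute the real
# Hub name of this dataset.
ds = load_dataset("org/bug-fix-chunks", split="train")

row = ds[0]
print(row["repo_name"], row["issue_id"], row["title"])
print(row["updated_file"])
print(row["chunk_content"][:200])  # chunk_content can run up to ~1.03M characters
```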
---

status: closed
repo_name: langchain-ai/langchain
repo_url: https://github.com/langchain-ai/langchain
issue_id: 4720
title: Add summarization task type for HuggingFace APIs
body:

> ### Feature request
>
> Add summarization task type for HuggingFace APIs. This task type is described by the [HuggingFace inference API](https://huggingface.co/docs/api-inference/detailed_parameters#summarization-task).
>
> ### Motivation
>
> My project utilizes LangChain to connect multiple LLMs, including various HuggingFace models that support the summarization task. Integrating this task type is highly convenient and beneficial.
>
> ### Your contribution
>
> I will submit a PR.

issue_url: https://github.com/langchain-ai/langchain/issues/4720
pull_url: https://github.com/langchain-ai/langchain/pull/4721
before_fix_sha: 580861e7f206395d19cdf4896a96b1e88c6a9b5f
after_fix_sha: 3f0357f94acb1e669c8f21f937e3438c6c6675a6
report_datetime: 2023-05-15T11:23:49Z
language: python
commit_datetime: 2023-05-15T23:26:17Z

All rows below share this metadata; they differ only in `updated_file` and `chunk_content`.
updated_file: langchain/llms/huggingface_hub.py

```python
        """Configuration for this pydantic object."""

        extra = Extra.forbid

    @root_validator()
    def validate_environment(cls, values: Dict) -> Dict:
        """Validate that api key and python package exists in environment."""
        huggingfacehub_api_token = get_from_dict_or_env(
            values, "huggingfacehub_api_token", "HUGGINGFACEHUB_API_TOKEN"
        )
        try:
            from huggingface_hub.inference_api import InferenceApi

            repo_id = values["repo_id"]
            client = InferenceApi(
                repo_id=repo_id,
                token=huggingfacehub_api_token,
                task=values.get("task"),
            )
            if client.task not in VALID_TASKS:
                raise ValueError(
                    f"Got invalid task {client.task}, "
                    f"currently only {VALID_TASKS} are supported"
                )
            values["client"] = client
        except ImportError:
            raise ValueError(
                "Could not import huggingface_hub python package. "
                "Please install it with `pip install huggingface_hub`."
            )
        return values

    @property
    def _identifying_params(self) -> Mapping[str, Any]:
```
updated_file: langchain/llms/huggingface_hub.py

```python
        """Get the identifying parameters."""
        _model_kwargs = self.model_kwargs or {}
        return {
            **{"repo_id": self.repo_id, "task": self.task},
            **{"model_kwargs": _model_kwargs},
        }

    @property
    def _llm_type(self) -> str:
        """Return type of llm."""
        return "huggingface_hub"

    def _call(
        self,
        prompt: str,
        stop: Optional[List[str]] = None,
        run_manager: Optional[CallbackManagerForLLMRun] = None,
    ) -> str:
```
updated_file: langchain/llms/huggingface_hub.py

```python
        """Call out to HuggingFace Hub's inference endpoint.

        Args:
            prompt: The prompt to pass into the model.
            stop: Optional list of stop words to use when generating.

        Returns:
            The string generated by the model.

        Example:
            .. code-block:: python

                response = hf("Tell me a joke.")
        """
        _model_kwargs = self.model_kwargs or {}
        response = self.client(inputs=prompt, params=_model_kwargs)
        if "error" in response:
            raise ValueError(f"Error raised by inference API: {response['error']}")
        if self.client.task == "text-generation":
            text = response[0]["generated_text"][len(prompt) :]
        elif self.client.task == "text2text-generation":
            text = response[0]["generated_text"]
        else:
            raise ValueError(
                f"Got invalid task {self.client.task}, "
                f"currently only {VALID_TASKS} are supported"
            )
        if stop is not None:
            text = enforce_stop_tokens(text, stop)
        return text
```
updated_file: langchain/llms/huggingface_pipeline.py

```python
"""Wrapper around HuggingFace Pipeline APIs."""
import importlib.util
import logging
from typing import Any, List, Mapping, Optional

from pydantic import Extra

from langchain.callbacks.manager import CallbackManagerForLLMRun
from langchain.llms.base import LLM
from langchain.llms.utils import enforce_stop_tokens

DEFAULT_MODEL_ID = "gpt2"
DEFAULT_TASK = "text-generation"
VALID_TASKS = ("text2text-generation", "text-generation")

logger = logging.getLogger(__name__)


class HuggingFacePipeline(LLM):
```
updated_file: langchain/llms/huggingface_pipeline.py

```python
    """Wrapper around HuggingFace Pipeline API.

    To use, you should have the ``transformers`` python package installed.

    Only supports `text-generation` and `text2text-generation` for now.

    Example using from_model_id:
        .. code-block:: python

            from langchain.llms import HuggingFacePipeline
            hf = HuggingFacePipeline.from_model_id(
                model_id="gpt2", task="text-generation"
            )
    Example passing pipeline in directly:
        .. code-block:: python

            from langchain.llms import HuggingFacePipeline
            from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

            model_id = "gpt2"
            tokenizer = AutoTokenizer.from_pretrained(model_id)
            model = AutoModelForCausalLM.from_pretrained(model_id)
            pipe = pipeline(
                "text-generation", model=model, tokenizer=tokenizer, max_new_tokens=10
            )
            hf = HuggingFacePipeline(pipeline=pipe)
    """

    pipeline: Any
    model_id: str = DEFAULT_MODEL_ID
    """Model name to use."""
    model_kwargs: Optional[dict] = None
    """Key word arguments to pass to the model."""

    class Config:
```
updated_file: langchain/llms/huggingface_pipeline.py

```python
        """Configuration for this pydantic object."""

        extra = Extra.forbid

    @classmethod
    def from_model_id(
        cls,
        model_id: str,
        task: str,
        device: int = -1,
        model_kwargs: Optional[dict] = None,
        **kwargs: Any,
    ) -> LLM:
        """Construct the pipeline object from model_id and task."""
        try:
            from transformers import (
                AutoModelForCausalLM,
                AutoModelForSeq2SeqLM,
                AutoTokenizer,
            )
            from transformers import pipeline as hf_pipeline
        except ImportError:
            raise ValueError(
                "Could not import transformers python package. "
                "Please install it with `pip install transformers`."
            )

        _model_kwargs = model_kwargs or {}
        tokenizer = AutoTokenizer.from_pretrained(model_id, **_model_kwargs)

        try:
```
updated_file: langchain/llms/huggingface_pipeline.py

```python
            if task == "text-generation":
                model = AutoModelForCausalLM.from_pretrained(model_id, **_model_kwargs)
            elif task == "text2text-generation":
                model = AutoModelForSeq2SeqLM.from_pretrained(model_id, **_model_kwargs)
            else:
                raise ValueError(
                    f"Got invalid task {task}, "
                    f"currently only {VALID_TASKS} are supported"
                )
        except ImportError as e:
            raise ValueError(
                f"Could not load the {task} model due to missing dependencies."
            ) from e

        if importlib.util.find_spec("torch") is not None:
            import torch

            cuda_device_count = torch.cuda.device_count()
            if device < -1 or (device >= cuda_device_count):
                raise ValueError(
                    f"Got device=={device}, "
                    f"device is required to be within [-1, {cuda_device_count})"
                )
            if device < 0 and cuda_device_count > 0:
                logger.warning(
                    "Device has %d GPUs available. "
                    "Provide device={deviceId} to `from_model_id` to use available"
                    "GPUs for execution. deviceId is -1 (default) for CPU and "
                    "can be a positive integer associated with CUDA device id.",
                    cuda_device_count,
                )
        if "trust_remote_code" in _model_kwargs:
```
updated_file: langchain/llms/huggingface_pipeline.py

```python
            _model_kwargs = {
                k: v for k, v in _model_kwargs.items() if k != "trust_remote_code"
            }
        pipeline = hf_pipeline(
            task=task,
            model=model,
            tokenizer=tokenizer,
            device=device,
            model_kwargs=_model_kwargs,
        )
        if pipeline.task not in VALID_TASKS:
            raise ValueError(
                f"Got invalid task {pipeline.task}, "
                f"currently only {VALID_TASKS} are supported"
            )
        return cls(
            pipeline=pipeline,
            model_id=model_id,
            model_kwargs=_model_kwargs,
            **kwargs,
        )

    @property
    def _identifying_params(self) -> Mapping[str, Any]:
        """Get the identifying parameters."""
        return {
            **{"model_id": self.model_id},
            **{"model_kwargs": self.model_kwargs},
        }

    @property
    def _llm_type(self) -> str:
```
updated_file: langchain/llms/huggingface_pipeline.py

```python
        return "huggingface_pipeline"

    def _call(
        self,
        prompt: str,
        stop: Optional[List[str]] = None,
        run_manager: Optional[CallbackManagerForLLMRun] = None,
    ) -> str:
        response = self.pipeline(prompt)
        if self.pipeline.task == "text-generation":
            text = response[0]["generated_text"][len(prompt) :]
        elif self.pipeline.task == "text2text-generation":
            text = response[0]["generated_text"]
        else:
            raise ValueError(
                f"Got invalid task {self.pipeline.task}, "
                f"currently only {VALID_TASKS} are supported"
            )
        if stop is not None:
            text = enforce_stop_tokens(text, stop)
        return text
```
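The same `VALID_TASKS` gate applies in `from_model_id` and `_call` of this local pipeline wrapper, so before the fix a summarization pipeline cannot be constructed through this class. A sketch of the call the pre-fix code above rejects; the model choice is illustrative:

```python
from langchain.llms import HuggingFacePipeline

# Pre-fix, this raises ValueError("Got invalid task summarization, ...")
# because "summarization" is not in VALID_TASKS. Post-fix it would plausibly
# route through the AutoModelForSeq2SeqLM branch, since summarization models
# are seq2seq (an assumption, matching the PR's intent rather than its diff).
llm = HuggingFacePipeline.from_model_id(
    model_id="facebook/bart-large-cnn",  # illustrative summarization model
    task="summarization",
)
```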
updated_file: langchain/llms/self_hosted_hugging_face.py

```python
"""Wrapper around HuggingFace Pipeline API to run on self-hosted remote hardware."""
import importlib.util
import logging
from typing import Any, Callable, List, Mapping, Optional

from pydantic import Extra

from langchain.callbacks.manager import CallbackManagerForLLMRun
from langchain.llms.self_hosted import SelfHostedPipeline
from langchain.llms.utils import enforce_stop_tokens

DEFAULT_MODEL_ID = "gpt2"
DEFAULT_TASK = "text-generation"
VALID_TASKS = ("text2text-generation", "text-generation")

logger = logging.getLogger(__name__)


def _generate_text(
```
updated_file: langchain/llms/self_hosted_hugging_face.py

```python
    pipeline: Any,
    prompt: str,
    *args: Any,
    stop: Optional[List[str]] = None,
    **kwargs: Any,
) -> str:
    """Inference function to send to the remote hardware.

    Accepts a Hugging Face pipeline (or more likely,
    a key pointing to such a pipeline on the cluster's object store)
    and returns generated text.
    """
    response = pipeline(prompt, *args, **kwargs)
    if pipeline.task == "text-generation":
        text = response[0]["generated_text"][len(prompt) :]
    elif pipeline.task == "text2text-generation":
        text = response[0]["generated_text"]
    else:
        raise ValueError(
            f"Got invalid task {pipeline.task}, "
            f"currently only {VALID_TASKS} are supported"
        )
    if stop is not None:
        text = enforce_stop_tokens(text, stop)
    return text


def _load_transformer(
```
updated_file: langchain/llms/self_hosted_hugging_face.py

```python
    model_id: str = DEFAULT_MODEL_ID,
    task: str = DEFAULT_TASK,
    device: int = 0,
    model_kwargs: Optional[dict] = None,
) -> Any:
    """Inference function to send to the remote hardware.

    Accepts a huggingface model_id and returns a pipeline for the task.
    """
    from transformers import AutoModelForCausalLM, AutoModelForSeq2SeqLM, AutoTokenizer
    from transformers import pipeline as hf_pipeline

    _model_kwargs = model_kwargs or {}
    tokenizer = AutoTokenizer.from_pretrained(model_id, **_model_kwargs)

    try:
        if task == "text-generation":
            model = AutoModelForCausalLM.from_pretrained(model_id, **_model_kwargs)
        elif task == "text2text-generation":
            model = AutoModelForSeq2SeqLM.from_pretrained(model_id, **_model_kwargs)
        else:
            raise ValueError(
                f"Got invalid task {task}, "
                f"currently only {VALID_TASKS} are supported"
            )
    except ImportError as e:
        raise ValueError(
            f"Could not load the {task} model due to missing dependencies."
        ) from e
```
updated_file: langchain/llms/self_hosted_hugging_face.py

```python
    if importlib.util.find_spec("torch") is not None:
        import torch

        cuda_device_count = torch.cuda.device_count()
        if device < -1 or (device >= cuda_device_count):
            raise ValueError(
                f"Got device=={device}, "
                f"device is required to be within [-1, {cuda_device_count})"
            )
        if device < 0 and cuda_device_count > 0:
            logger.warning(
                "Device has %d GPUs available. "
                "Provide device={deviceId} to `from_model_id` to use available"
                "GPUs for execution. deviceId is -1 for CPU and "
                "can be a positive integer associated with CUDA device id.",
                cuda_device_count,
            )

    pipeline = hf_pipeline(
        task=task,
        model=model,
        tokenizer=tokenizer,
        device=device,
        model_kwargs=_model_kwargs,
    )
    if pipeline.task not in VALID_TASKS:
        raise ValueError(
            f"Got invalid task {pipeline.task}, "
            f"currently only {VALID_TASKS} are supported"
        )
    return pipeline


class SelfHostedHuggingFaceLLM(SelfHostedPipeline):
```
updated_file: langchain/llms/self_hosted_hugging_face.py

```python
    """Wrapper around HuggingFace Pipeline API to run on self-hosted remote hardware.

    Supported hardware includes auto-launched instances on AWS, GCP, Azure, and
    Lambda, as well as servers specified by IP address and SSH credentials
    (such as on-prem, or another cloud like Paperspace, Coreweave, etc.).

    To use, you should have the ``runhouse`` python package installed.

    Only supports `text-generation` and `text2text-generation` for now.

    Example using from_model_id:
        .. code-block:: python

            from langchain.llms import SelfHostedHuggingFaceLLM
            import runhouse as rh
            gpu = rh.cluster(name="rh-a10x", instance_type="A100:1")
            hf = SelfHostedHuggingFaceLLM(
                model_id="google/flan-t5-large", task="text2text-generation",
                hardware=gpu
            )
    Example passing fn that generates a pipeline (bc the pipeline is not serializable):
        .. code-block:: python

            from langchain.llms import SelfHostedHuggingFaceLLM
            from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
            import runhouse as rh

            def get_pipeline():
```
updated_file: langchain/llms/self_hosted_hugging_face.py

```python
                model_id = "gpt2"
                tokenizer = AutoTokenizer.from_pretrained(model_id)
                model = AutoModelForCausalLM.from_pretrained(model_id)
                pipe = pipeline(
                    "text-generation", model=model, tokenizer=tokenizer
                )
                return pipe
            hf = SelfHostedHuggingFaceLLM(
                model_load_fn=get_pipeline, model_id="gpt2", hardware=gpu)
    """

    model_id: str = DEFAULT_MODEL_ID
    """Hugging Face model_id to load the model."""
    task: str = DEFAULT_TASK
    """Hugging Face task (either "text-generation" or "text2text-generation")."""
    device: int = 0
    """Device to use for inference. -1 for CPU, 0 for GPU, 1 for second GPU, etc."""
    model_kwargs: Optional[dict] = None
    """Key word arguments to pass to the model."""
    hardware: Any
    """Remote hardware to send the inference function to."""
    model_reqs: List[str] = ["./", "transformers", "torch"]
    """Requirements to install on hardware to inference the model."""
    model_load_fn: Callable = _load_transformer
    """Function to load the model remotely on the server."""
    inference_fn: Callable = _generate_text
    """Inference function to send to the remote hardware."""

    class Config:
        """Configuration for this pydantic object."""

        extra = Extra.forbid

    def __init__(self, **kwargs: Any):
```
updated_file: langchain/llms/self_hosted_hugging_face.py

```python
        """Construct the pipeline remotely using an auxiliary function.

        The load function needs to be importable to be imported
        and run on the server, i.e. in a module and not a REPL or closure.
        Then, initialize the remote inference function.
        """
        load_fn_kwargs = {
            "model_id": kwargs.get("model_id", DEFAULT_MODEL_ID),
            "task": kwargs.get("task", DEFAULT_TASK),
            "device": kwargs.get("device", 0),
            "model_kwargs": kwargs.get("model_kwargs", None),
        }
        super().__init__(load_fn_kwargs=load_fn_kwargs, **kwargs)

    @property
    def _identifying_params(self) -> Mapping[str, Any]:
        """Get the identifying parameters."""
        return {
            **{"model_id": self.model_id},
            **{"model_kwargs": self.model_kwargs},
        }

    @property
    def _llm_type(self) -> str:
        return "selfhosted_huggingface_pipeline"

    def _call(
        self,
        prompt: str,
        stop: Optional[List[str]] = None,
        run_manager: Optional[CallbackManagerForLLMRun] = None,
    ) -> str:
        return self.client(pipeline=self.pipeline_ref, prompt=prompt, stop=stop)
```
updated_file: tests/integration_tests/llms/test_huggingface_endpoint.py

```python
"""Test HuggingFace API wrapper."""
import unittest
from pathlib import Path

import pytest

from langchain.llms.huggingface_endpoint import HuggingFaceEndpoint
from langchain.llms.loading import load_llm
from tests.integration_tests.llms.utils import assert_llm_equality


@unittest.skip(
    "This test requires an inference endpoint. Tested with Hugging Face endpoints"
)
def test_huggingface_endpoint_text_generation() -> None:
```
updated_file: tests/integration_tests/llms/test_huggingface_endpoint.py

```python
    """Test valid call to HuggingFace text generation model."""
    llm = HuggingFaceEndpoint(
        endpoint_url="", task="text-generation", model_kwargs={"max_new_tokens": 10}
    )
    output = llm("Say foo:")
    print(output)
    assert isinstance(output, str)


@unittest.skip(
    "This test requires an inference endpoint. Tested with Hugging Face endpoints"
)
def test_huggingface_endpoint_text2text_generation() -> None:
    """Test valid call to HuggingFace text2text model."""
    llm = HuggingFaceEndpoint(endpoint_url="", task="text2text-generation")
    output = llm("The capital of New York is")
    assert output == "Albany"


def test_huggingface_endpoint_call_error() -> None:
    """Test valid call to HuggingFace that errors."""
    llm = HuggingFaceEndpoint(model_kwargs={"max_new_tokens": -1})
    with pytest.raises(ValueError):
        llm("Say foo:")


def test_saving_loading_endpoint_llm(tmp_path: Path) -> None:
    """Test saving/loading an HuggingFaceHub LLM."""
    llm = HuggingFaceEndpoint(
        endpoint_url="", task="text-generation", model_kwargs={"max_new_tokens": 10}
    )
    llm.save(file_path=tmp_path / "hf.yaml")
    loaded_llm = load_llm(tmp_path / "hf.yaml")
    assert_llm_equality(llm, loaded_llm)
```
updated_file: tests/integration_tests/llms/test_huggingface_hub.py

```python
"""Test HuggingFace API wrapper."""
from pathlib import Path

import pytest

from langchain.llms.huggingface_hub import HuggingFaceHub
from langchain.llms.loading import load_llm
from tests.integration_tests.llms.utils import assert_llm_equality


def test_huggingface_text_generation() -> None:
    """Test valid call to HuggingFace text generation model."""
    llm = HuggingFaceHub(repo_id="gpt2", model_kwargs={"max_new_tokens": 10})
    output = llm("Say foo:")
    assert isinstance(output, str)


def test_huggingface_text2text_generation() -> None:
    """Test valid call to HuggingFace text2text model."""
    llm = HuggingFaceHub(repo_id="google/flan-t5-xl")
    output = llm("The capital of New York is")
    assert output == "Albany"


def test_huggingface_call_error() -> None:
    """Test valid call to HuggingFace that errors."""
    llm = HuggingFaceHub(model_kwargs={"max_new_tokens": -1})
    with pytest.raises(ValueError):
        llm("Say foo:")


def test_saving_loading_llm(tmp_path: Path) -> None:
    """Test saving/loading an HuggingFaceHub LLM."""
    llm = HuggingFaceHub(repo_id="gpt2", model_kwargs={"max_new_tokens": 10})
    llm.save(file_path=tmp_path / "hf.yaml")
    loaded_llm = load_llm(tmp_path / "hf.yaml")
    assert_llm_equality(llm, loaded_llm)
```
updated_file: tests/integration_tests/llms/test_huggingface_pipeline.py

```python
"""Test HuggingFace Pipeline wrapper."""
from pathlib import Path

from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

from langchain.llms.huggingface_pipeline import HuggingFacePipeline
from langchain.llms.loading import load_llm
from tests.integration_tests.llms.utils import assert_llm_equality


def test_huggingface_pipeline_text_generation() -> None:
    """Test valid call to HuggingFace text generation model."""
    llm = HuggingFacePipeline.from_model_id(
        model_id="gpt2", task="text-generation", model_kwargs={"max_new_tokens": 10}
    )
    output = llm("Say foo:")
    assert isinstance(output, str)


def test_huggingface_pipeline_text2text_generation() -> None:
```
updated_file: tests/integration_tests/llms/test_huggingface_pipeline.py

```python
    """Test valid call to HuggingFace text2text generation model."""
    llm = HuggingFacePipeline.from_model_id(
        model_id="google/flan-t5-small", task="text2text-generation"
    )
    output = llm("Say foo:")
    assert isinstance(output, str)


def test_saving_loading_llm(tmp_path: Path) -> None:
    """Test saving/loading an HuggingFaceHub LLM."""
    llm = HuggingFacePipeline.from_model_id(
        model_id="gpt2", task="text-generation", model_kwargs={"max_new_tokens": 10}
    )
    llm.save(file_path=tmp_path / "hf.yaml")
    loaded_llm = load_llm(tmp_path / "hf.yaml")
    assert_llm_equality(llm, loaded_llm)


def test_init_with_pipeline() -> None:
    """Test initialization with a HF pipeline."""
    model_id = "gpt2"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)
    pipe = pipeline(
        "text-generation", model=model, tokenizer=tokenizer, max_new_tokens=10
    )
    llm = HuggingFacePipeline(pipeline=pipe)
    output = llm("Say foo:")
    assert isinstance(output, str)
```
updated_file: tests/integration_tests/llms/test_self_hosted_llm.py

```python
"""Test Self-hosted LLMs."""
import pickle
from typing import Any, List, Optional

from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

from langchain.llms import SelfHostedHuggingFaceLLM, SelfHostedPipeline

model_reqs = ["pip:./", "transformers", "torch"]


def get_remote_instance() -> Any:
```
updated_file: tests/integration_tests/llms/test_self_hosted_llm.py

```python
    """Get remote instance for testing."""
    import runhouse as rh

    return rh.cluster(name="rh-a10x", instance_type="A100:1", use_spot=False)


def test_self_hosted_huggingface_pipeline_text_generation() -> None:
    """Test valid call to self-hosted HuggingFace text generation model."""
    gpu = get_remote_instance()
    llm = SelfHostedHuggingFaceLLM(
        model_id="gpt2",
        task="text-generation",
        model_kwargs={"n_positions": 1024},
        hardware=gpu,
        model_reqs=model_reqs,
    )
    output = llm("Say foo:")
    assert isinstance(output, str)


def test_self_hosted_huggingface_pipeline_text2text_generation() -> None:
    """Test valid call to self-hosted HuggingFace text2text generation model."""
    gpu = get_remote_instance()
    llm = SelfHostedHuggingFaceLLM(
        model_id="google/flan-t5-small",
        task="text2text-generation",
        hardware=gpu,
        model_reqs=model_reqs,
    )
    output = llm("Say foo:")
    assert isinstance(output, str)


def load_pipeline() -> Any:
```
updated_file: tests/integration_tests/llms/test_self_hosted_llm.py

```python
    """Load pipeline for testing."""
    model_id = "gpt2"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)
    pipe = pipeline(
        "text-generation", model=model, tokenizer=tokenizer, max_new_tokens=10
    )
    return pipe


def inference_fn(pipeline: Any, prompt: str, stop: Optional[List[str]] = None) -> str:
    """Inference function for testing."""
    return pipeline(prompt)[0]["generated_text"]


def test_init_with_local_pipeline() -> None:
    """Test initialization with a self-hosted HF pipeline."""
    gpu = get_remote_instance()
    pipeline = load_pipeline()
    llm = SelfHostedPipeline.from_pipeline(
        pipeline=pipeline,
        hardware=gpu,
        model_reqs=model_reqs,
        inference_fn=inference_fn,
    )
    output = llm("Say foo:")
    assert isinstance(output, str)


def test_init_with_pipeline_path() -> None:
```
updated_file: tests/integration_tests/llms/test_self_hosted_llm.py

```python
    """Test initialization with a self-hosted HF pipeline."""
    gpu = get_remote_instance()
    pipeline = load_pipeline()
    import runhouse as rh

    rh.blob(pickle.dumps(pipeline), path="models/pipeline.pkl").save().to(
        gpu, path="models"
    )
    llm = SelfHostedPipeline.from_pipeline(
        pipeline="models/pipeline.pkl",
        hardware=gpu,
        model_reqs=model_reqs,
        inference_fn=inference_fn,
    )
    output = llm("Say foo:")
    assert isinstance(output, str)


def test_init_with_pipeline_fn() -> None:
    """Test initialization with a self-hosted HF pipeline."""
    gpu = get_remote_instance()
    llm = SelfHostedPipeline(
        model_load_fn=load_pipeline,
        hardware=gpu,
        model_reqs=model_reqs,
        inference_fn=inference_fn,
    )
    output = llm("Say foo:")
    assert isinstance(output, str)
```
---

status: closed
repo_name: langchain-ai/langchain
repo_url: https://github.com/langchain-ai/langchain
issue_id: 4682
title: Setting overwrite to False on DeepLake constructor still overwrites
body:

> ### System Info
>
> LangChain 0.0.168, Python 3.11.3
>
> ### Who can help?
>
> @anihamde
>
> ### Information
>
> - [ ] The official example notebooks/scripts
> - [X] My own modified scripts
>
> ### Related Components
>
> - [ ] LLMs/Chat Models
> - [ ] Embedding Models
> - [ ] Prompts / Prompt Templates / Prompt Selectors
> - [ ] Output Parsers
> - [ ] Document Loaders
> - [X] Vector Stores / Retrievers
> - [ ] Memory
> - [ ] Agents / Agent Executors
> - [ ] Tools / Toolkits
> - [ ] Chains
> - [ ] Callbacks/Tracing
> - [ ] Async
>
> ### Reproduction
>
> ```python
> db = DeepLake(
>     dataset_path=f"hub://{username_activeloop}/{lake_name}",
>     embedding_function=embeddings,
>     overwrite=False,
> )
> ```
>
> ### Expected behavior
>
> Would expect the overwrite not to take place; however, it does. This is easily resolvable, and I'll share a PR to address this shortly.

issue_url: https://github.com/langchain-ai/langchain/issues/4682
pull_url: https://github.com/langchain-ai/langchain/pull/4683
before_fix_sha: 8bb32d77d0703665d498e4d9bcfafa14d202d423
after_fix_sha: 03ac39368fe60201a3f071d7d360c39f59c77cbf
report_datetime: 2023-05-14T19:15:22Z
language: python
commit_datetime: 2023-05-16T00:39:16Z

All rows below share this metadata; they differ only in `updated_file` and `chunk_content`.
updated_file: langchain/vectorstores/deeplake.py

```python
"""Wrapper around Activeloop Deep Lake."""
from __future__ import annotations

import logging
import uuid
from functools import partial
from typing import Any, Callable, Dict, Iterable, List, Optional, Sequence, Tuple

import numpy as np

from langchain.docstore.document import Document
from langchain.embeddings.base import Embeddings
from langchain.vectorstores.base import VectorStore
from langchain.vectorstores.utils import maximal_marginal_relevance

logger = logging.getLogger(__name__)

distance_metric_map = {
    "l2": lambda a, b: np.linalg.norm(a - b, axis=1, ord=2),
    "l1": lambda a, b: np.linalg.norm(a - b, axis=1, ord=1),
    "max": lambda a, b: np.linalg.norm(a - b, axis=1, ord=np.inf),
    "cos": lambda a, b: np.dot(a, b.T)
    / (np.linalg.norm(a) * np.linalg.norm(b, axis=1)),
    "dot": lambda a, b: np.dot(a, b.T),
}


def vector_search(
```
updated_file: langchain/vectorstores/deeplake.py

```python
    query_embedding: np.ndarray,
    data_vectors: np.ndarray,
    distance_metric: str = "L2",
    k: Optional[int] = 4,
) -> Tuple[List, List]:
    """Naive search for nearest neighbors

    args:
        query_embedding: np.ndarray
        data_vectors: np.ndarray
        k (int): number of nearest neighbors
        distance_metric: distance function 'L2' for Euclidean, 'L1' for Nuclear,
            'Max' l-infinity distnace, 'cos' for cosine similarity, 'dot' for dot
            product
    returns:
        nearest_indices: List, indices of nearest neighbors
    """
    if data_vectors.shape[0] == 0:
        return [], []

    distances = distance_metric_map[distance_metric](query_embedding, data_vectors)
    nearest_indices = np.argsort(distances)

    nearest_indices = (
        nearest_indices[::-1][:k]
        if distance_metric in ["cos"]
        else nearest_indices[:k]
    )

    return nearest_indices.tolist(), distances[nearest_indices].tolist()


def dp_filter(x: dict, filter: Dict[str, str]) -> bool:
```
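A quick toy invocation of `vector_search`, assuming the function and `distance_metric_map` from the chunks above are in scope. Note that the map's keys are lowercase, so the metric is passed as `"l2"` here rather than the docstring's `"L2"` spelling:

```python
import numpy as np

data_vectors = np.random.rand(10, 4).astype(np.float32)
query_embedding = np.random.rand(4).astype(np.float32)

# Three nearest neighbours under Euclidean distance; for "cos" the sort
# order is reversed above because larger cosine similarity means closer.
indices, distances = vector_search(
    query_embedding, data_vectors, distance_metric="l2", k=3
)
print(indices, distances)
```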
updated_file: langchain/vectorstores/deeplake.py

```python
    """Filter helper function for Deep Lake"""
    metadata = x["metadata"].data()["value"]
    return all(k in metadata and v == metadata[k] for k, v in filter.items())


class DeepLake(VectorStore):
    """Wrapper around Deep Lake, a data lake for deep learning applications.

    We implement naive similarity search and filtering for fast prototyping,
    but it can be extended with Tensor Query Language (TQL) for production use
    cases over billion rows.

    Why Deep Lake?

    - Not only stores embeddings, but also the original data with version control.
    - Serverless, doesn't require another service and can be used with major
      cloud providers (S3, GCS, etc.)
    - More than just a multi-modal vector store. You can use the dataset
      to fine-tune your own LLM models.

    To use, you should have the ``deeplake`` python package installed.

    Example:
        .. code-block:: python

            from langchain.vectorstores import DeepLake
            from langchain.embeddings.openai import OpenAIEmbeddings

            embeddings = OpenAIEmbeddings()
            vectorstore = DeepLake("langchain_store", embeddings.embed_query)
    """

    _LANGCHAIN_DEFAULT_DEEPLAKE_PATH = "./deeplake/"

    def __init__(
```
updated_file: langchain/vectorstores/deeplake.py

```python
        self,
        dataset_path: str = _LANGCHAIN_DEFAULT_DEEPLAKE_PATH,
        token: Optional[str] = None,
        embedding_function: Optional[Embeddings] = None,
        read_only: Optional[bool] = False,
        ingestion_batch_size: int = 1024,
        num_workers: int = 0,
        verbose: bool = True,
        **kwargs: Any,
    ) -> None:
        """Initialize with Deep Lake client."""
        self.ingestion_batch_size = ingestion_batch_size
        self.num_workers = num_workers
        self.verbose = verbose

        try:
            import deeplake
            from deeplake.constants import MB
        except ImportError:
            raise ValueError(
                "Could not import deeplake python package. "
                "Please install it with `pip install deeplake`."
            )
        self._deeplake = deeplake
        self.dataset_path = dataset_path
        creds_args = {"creds": kwargs["creds"]} if "creds" in kwargs else {}

        if (
            deeplake.exists(dataset_path, token=token, **creds_args)
            and "overwrite" not in kwargs
        ):
```
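This constructor holds the bug from issue #4682: the guard only checks whether the `overwrite` key is present in `kwargs`, so an explicit `overwrite=False` still fails the `"overwrite" not in kwargs` test and execution falls through to the `deeplake.empty(..., overwrite=True)` branch in the next chunk, wiping the dataset. A minimal standalone sketch of the value-aware check the fix needs (not necessarily the exact diff in PR #4683):

```python
from typing import Any, Dict


def should_load_existing(dataset_exists: bool, kwargs: Dict[str, Any]) -> bool:
    """Value-aware replacement for `dataset_exists and "overwrite" not in kwargs`.

    Returns True when an existing dataset should be loaded rather than
    recreated: overwrite=False (or an absent key) must never trigger a wipe.
    """
    return dataset_exists and not kwargs.get("overwrite", False)


assert should_load_existing(True, {}) is True
assert should_load_existing(True, {"overwrite": False}) is True  # the reported bug case
assert should_load_existing(True, {"overwrite": True}) is False
```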
updated_file: langchain/vectorstores/deeplake.py

```python
            self.ds = deeplake.load(
                dataset_path,
                token=token,
                read_only=read_only,
                verbose=self.verbose,
                **kwargs,
            )
            logger.info(f"Loading deeplake {dataset_path} from storage.")
            if self.verbose:
                print(
                    f"Deep Lake Dataset in {dataset_path} already exists, "
                    f"loading from the storage"
                )
                self.ds.summary()
        else:
            if "overwrite" in kwargs:
                del kwargs["overwrite"]

            self.ds = deeplake.empty(
                dataset_path,
                token=token,
                overwrite=True,
                verbose=self.verbose,
                **kwargs,
            )

            with self.ds:
                self.ds.create_tensor(
                    "text",
                    htype="text",
                    create_id_tensor=False,
                    create_sample_info_tensor=False,
```
updated_file: langchain/vectorstores/deeplake.py

```python
                    create_shape_tensor=False,
                    chunk_compression="lz4",
                )
                self.ds.create_tensor(
                    "metadata",
                    htype="json",
                    create_id_tensor=False,
                    create_sample_info_tensor=False,
                    create_shape_tensor=False,
                    chunk_compression="lz4",
                )
                self.ds.create_tensor(
                    "embedding",
                    htype="generic",
                    dtype=np.float32,
                    create_id_tensor=False,
                    create_sample_info_tensor=False,
                    max_chunk_size=64 * MB,
                    create_shape_tensor=True,
                )
                self.ds.create_tensor(
                    "ids",
                    htype="text",
                    create_id_tensor=False,
                    create_sample_info_tensor=False,
                    create_shape_tensor=False,
                    chunk_compression="lz4",
                )

        self._embedding_function = embedding_function

    def add_texts(
```
updated_file: langchain/vectorstores/deeplake.py

```python
        self,
        texts: Iterable[str],
        metadatas: Optional[List[dict]] = None,
        ids: Optional[List[str]] = None,
        **kwargs: Any,
    ) -> List[str]:
        """Run more texts through the embeddings and add to the vectorstore.

        Args:
            texts (Iterable[str]): Texts to add to the vectorstore.
            metadatas (Optional[List[dict]], optional): Optional list of metadatas.
            ids (Optional[List[str]], optional): Optional list of IDs.

        Returns:
            List[str]: List of IDs of the added texts.
        """
        if ids is None:
            ids = [str(uuid.uuid1()) for _ in texts]

        text_list = list(texts)

        if metadatas is None:
            metadatas = [{}] * len(text_list)

        elements = list(zip(text_list, metadatas, ids))

        @self._deeplake.compute
        def ingest(sample_in: list, sample_out: list) -> None:
```
langchain/vectorstores/deeplake.py
text_list = [s[0] for s in sample_in]

            embeds: Sequence[Optional[np.ndarray]] = []

            if self._embedding_function is not None:
                embeddings = self._embedding_function.embed_documents(text_list)
                embeds = [np.array(e, dtype=np.float32) for e in embeddings]
            else:
                embeds = [None] * len(text_list)

            for s, e in zip(sample_in, embeds):
                sample_out.append(
                    {
                        "text": s[0],
                        "metadata": s[1],
                        "ids": s[2],
                        "embedding": e,
                    }
                )

        batch_size = min(self.ingestion_batch_size, len(elements))
        if batch_size == 0:
            return []

        batched = [
            elements[i : i + batch_size] for i in range(0, len(elements), batch_size)
        ]

        ingest().eval(
            batched,
            self.ds,
            num_workers=min(self.num_workers, len(batched) // max(self.num_workers, 1)),
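The tail of the chunk above splits `elements` into fixed-size batches with a slicing comprehension. The same idiom in isolation:

```python
# Standalone sketch of the batching idiom used above: slice a list into
# fixed-size chunks with a list comprehension over stepped start indices.
elements = list(range(10))
batch_size = 4
batched = [elements[i : i + batch_size] for i in range(0, len(elements), batch_size)]
assert batched == [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```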
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,682
Setting overwrite to False on DeepLake constructor still overwrites
### System Info Langchain 0.0.168, Python 3.11.3 ### Who can help? @anihamde ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction db = DeepLake(dataset_path=f"hub://{username_activeloop}/{lake_name}", embedding_function=embeddings, overwrite=False) ### Expected behavior Would expect overwrite to not take place; however, it does. This is easily resolvable, and I'll share a PR to address this shortly.
https://github.com/langchain-ai/langchain/issues/4682
https://github.com/langchain-ai/langchain/pull/4683
8bb32d77d0703665d498e4d9bcfafa14d202d423
03ac39368fe60201a3f071d7d360c39f59c77cbf
"2023-05-14T19:15:22Z"
python
"2023-05-16T00:39:16Z"
langchain/vectorstores/deeplake.py
**kwargs,
        )

        self.ds.commit(allow_empty=True)
        if self.verbose:
            self.ds.summary()
        return ids

    def _search_helper(
        self,
        query: Any[str, None] = None,
        embedding: Any[float, None] = None,
        k: int = 4,
        distance_metric: str = "L2",
        use_maximal_marginal_relevance: Optional[bool] = False,
        fetch_k: Optional[int] = 20,
        filter: Optional[Any[Dict[str, str], Callable, str]] = None,
        return_score: Optional[bool] = False,
        **kwargs: Any,
    ) -> Any[List[Document], List[Tuple[Document, float]]]:
        """Return docs most similar to query.

        Args:
            query: Text to look up documents similar to.
            embedding: Embedding function to use. Defaults to None.
            k: Number of Documents to return. Defaults to 4.
            distance_metric: `L2` for Euclidean, `L1` for Nuclear,
                `max` L-infinity distance, `cos` for cosine similarity,
                'dot' for dot product. Defaults to `L2`.
            filter: Attribute filter by metadata example {'key': 'value'}. It can also
                take [Deep Lake filter]
                (https://docs.deeplake.ai/en/latest/deeplake.core.dataset.html#deeplake.core.dataset.Dataset.filter)
                Defaults to None.
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,682
Setting overwrite to False on DeepLake constructor still overwrites
### System Info Langchain 0.0.168, Python 3.11.3 ### Who can help? @anihamde ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction db = DeepLake(dataset_path=f"hub://{username_activeloop}/{lake_name}", embedding_function=embeddings, overwrite=False) ### Expected behavior Would expect overwrite to not take place; however, it does. This is easily resolvable, and I'll share a PR to address this shortly.
https://github.com/langchain-ai/langchain/issues/4682
https://github.com/langchain-ai/langchain/pull/4683
8bb32d77d0703665d498e4d9bcfafa14d202d423
03ac39368fe60201a3f071d7d360c39f59c77cbf
"2023-05-14T19:15:22Z"
python
"2023-05-16T00:39:16Z"
langchain/vectorstores/deeplake.py
maximal_marginal_relevance: Whether to use maximal marginal relevance.
                Defaults to False.
            fetch_k: Number of Documents to fetch to pass to MMR algorithm.
                Defaults to 20.
            return_score: Whether to return the score. Defaults to False.

        Returns:
            List of Documents selected by the specified distance metric,
            if return_score True, return a tuple of (Document, score)
        """
        view = self.ds

        if filter is not None:
            if isinstance(filter, dict):
                filter = partial(dp_filter, filter=filter)

            view = view.filter(filter)

        if len(view) == 0:
            return []

        if self._embedding_function is None:
            view = view.filter(lambda x: query in x["text"].data()["value"])
            scores = [1.0] * len(view)

            if use_maximal_marginal_relevance:
                raise ValueError(
                    "For MMR search, you must specify an embedding function on"
                    "creation."
                )
        else:
            emb = embedding or self._embedding_function.embed_query(
                query
            )
            query_emb = np.array(emb, dtype=np.float32)
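The `partial(dp_filter, filter=filter)` line above binds the metadata filter into a one-argument predicate before handing it to `view.filter`. A standalone sketch of the same idiom, with a plain-dict `sample` and a hypothetical `metadata_filter` standing in for `dp_filter`:

```python
from functools import partial

def metadata_filter(sample: dict, filter: dict) -> bool:
    # Hypothetical stand-in for dp_filter: every filter key must match the
    # sample's metadata. `filter` shadows the builtin, mirroring the source.
    metadata = sample["metadata"]
    return all(metadata.get(k) == v for k, v in filter.items())

predicate = partial(metadata_filter, filter={"page": "1"})  # bind the filter once
samples = [{"metadata": {"page": "0"}}, {"metadata": {"page": "1"}}]
assert [predicate(s) for s in samples] == [False, True]
```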
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,682
Setting overwrite to False on DeepLake constructor still overwrites
### System Info Langchain 0.0.168, Python 3.11.3 ### Who can help? @anihamde ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction db = DeepLake(dataset_path=f"hub://{username_activeloop}/{lake_name}", embedding_function=embeddings, overwrite=False) ### Expected behavior Would expect overwrite to not take place; however, it does. This is easily resolvable, and I'll share a PR to address this shortly.
https://github.com/langchain-ai/langchain/issues/4682
https://github.com/langchain-ai/langchain/pull/4683
8bb32d77d0703665d498e4d9bcfafa14d202d423
03ac39368fe60201a3f071d7d360c39f59c77cbf
"2023-05-14T19:15:22Z"
python
"2023-05-16T00:39:16Z"
langchain/vectorstores/deeplake.py
embeddings = view.embedding.numpy(fetch_chunks=True)
            k_search = fetch_k if use_maximal_marginal_relevance else k
            indices, scores = vector_search(
                query_emb,
                embeddings,
                k=k_search,
                distance_metric=distance_metric.lower(),
            )

            view = view[indices]

            if use_maximal_marginal_relevance:
                lambda_mult = kwargs.get("lambda_mult", 0.5)
                indices = maximal_marginal_relevance(
                    query_emb,
                    embeddings[indices],
                    k=min(k, len(indices)),
                    lambda_mult=lambda_mult,
                )
                view = view[indices]
                scores = [scores[i] for i in indices]

        docs = [
            Document(
                page_content=el["text"].data()["value"],
                metadata=el["metadata"].data()["value"],
            )
            for el in view
        ]

        if return_score:
            return [(doc, score) for doc, score in zip(docs, scores)]

        return docs

    def similarity_search(
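The chunk above delegates the nearest-neighbour step to a `vector_search` helper. As an illustration only (assuming an L2 metric and a 2-D embedding matrix; this is not the library's implementation), a brute-force equivalent might look like:

```python
import numpy as np

def l2_vector_search(query_emb: np.ndarray, embeddings: np.ndarray, k: int):
    distances = np.linalg.norm(embeddings - query_emb, axis=1)  # L2 distance per row
    indices = np.argsort(distances)[:k]                          # k closest rows
    return indices.tolist(), distances[indices].tolist()

emb = np.random.rand(100, 8).astype(np.float32)
idx, scores = l2_vector_search(emb[0], emb, k=4)
assert idx[0] == 0  # the query vector is its own nearest neighbour
```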
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,682
Setting overwrite to False on DeepLake constructor still overwrites
### System Info Langchain 0.0.168, Python 3.11.3 ### Who can help? @anihamde ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction db = DeepLake(dataset_path=f"hub://{username_activeloop}/{lake_name}", embedding_function=embeddings, overwrite=False) ### Expected behavior Would expect overwrite to not take place; however, it does. This is easily resolvable, and I'll share a PR to address this shortly.
https://github.com/langchain-ai/langchain/issues/4682
https://github.com/langchain-ai/langchain/pull/4683
8bb32d77d0703665d498e4d9bcfafa14d202d423
03ac39368fe60201a3f071d7d360c39f59c77cbf
"2023-05-14T19:15:22Z"
python
"2023-05-16T00:39:16Z"
langchain/vectorstores/deeplake.py
self, query: str, k: int = 4, **kwargs: Any
    ) -> List[Document]:
        """Return docs most similar to query.

        Args:
            query: text to embed and run the query on.
            k: Number of Documents to return.
                Defaults to 4.
            query: Text to look up documents similar to.
            embedding: Embedding function to use.
                Defaults to None.
            k: Number of Documents to return.
                Defaults to 4.
            distance_metric: `L2` for Euclidean, `L1` for Nuclear, `max`
                L-infinity distance, `cos` for cosine similarity, 'dot' for dot product
                Defaults to `L2`.
            filter: Attribute filter by metadata example {'key': 'value'}.
                Defaults to None.
            maximal_marginal_relevance: Whether to use maximal marginal relevance.
                Defaults to False.
            fetch_k: Number of Documents to fetch to pass to MMR algorithm.
                Defaults to 20.
            return_score: Whether to return the score. Defaults to False.

        Returns:
            List of Documents most similar to the query vector.
        """
        return self._search_helper(query=query, k=k, **kwargs)

    def similarity_search_by_vector(
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,682
Setting overwrite to False on DeepLake constructor still overwrites
### System Info Langchain 0.0.168, Python 3.11.3 ### Who can help? @anihamde ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction db = DeepLake(dataset_path=f"hub://{username_activeloop}/{lake_name}", embedding_function=embeddings, overwrite=False) ### Expected behavior Would expect overwrite to not take place; however, it does. This is easily resolvable, and I'll share a PR to address this shortly.
https://github.com/langchain-ai/langchain/issues/4682
https://github.com/langchain-ai/langchain/pull/4683
8bb32d77d0703665d498e4d9bcfafa14d202d423
03ac39368fe60201a3f071d7d360c39f59c77cbf
"2023-05-14T19:15:22Z"
python
"2023-05-16T00:39:16Z"
langchain/vectorstores/deeplake.py
self, embedding: List[float], k: int = 4, **kwargs: Any
    ) -> List[Document]:
        """Return docs most similar to embedding vector.

        Args:
            embedding: Embedding to look up documents similar to.
            k: Number of Documents to return. Defaults to 4.

        Returns:
            List of Documents most similar to the query vector.
        """
        return self._search_helper(embedding=embedding, k=k, **kwargs)

    def similarity_search_with_score(
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,682
Setting overwrite to False on DeepLake constructor still overwrites
### System Info Langchain 0.0.168, Python 3.11.3 ### Who can help? @anihamde ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction db = DeepLake(dataset_path=f"hub://{username_activeloop}/{lake_name}", embedding_function=embeddings, overwrite=False) ### Expected behavior Would expect overwrite to not take place; however, it does. This is easily resolvable, and I'll share a PR to address this shortly.
https://github.com/langchain-ai/langchain/issues/4682
https://github.com/langchain-ai/langchain/pull/4683
8bb32d77d0703665d498e4d9bcfafa14d202d423
03ac39368fe60201a3f071d7d360c39f59c77cbf
"2023-05-14T19:15:22Z"
python
"2023-05-16T00:39:16Z"
langchain/vectorstores/deeplake.py
self,
        query: str,
        distance_metric: str = "L2",
        k: int = 4,
        filter: Optional[Dict[str, str]] = None,
    ) -> List[Tuple[Document, float]]:
        """Run similarity search with Deep Lake with distance returned.

        Args:
            query (str): Query text to search for.
            distance_metric: `L2` for Euclidean, `L1` for Nuclear, `max` L-infinity
                distance, `cos` for cosine similarity, 'dot' for dot product.
                Defaults to `L2`.
            k (int): Number of results to return. Defaults to 4.
            filter (Optional[Dict[str, str]]): Filter by metadata. Defaults to None.

        Returns:
            List[Tuple[Document, float]]: List of documents most similar to the query
                text with distance in float.
        """
        return self._search_helper(
            query=query,
            k=k,
            filter=filter,
            return_score=True,
            distance_metric=distance_metric,
        )

    def max_marginal_relevance_search_by_vector(
        self,
        embedding: List[float],
        k: int = 4,
        fetch_k: int = 20,
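A hedged usage sketch for `similarity_search_with_score`: build a small in-memory store and query with the cosine metric. `FakeEmbeddings`, the `mem://` path, and the texts are illustrative assumptions.

```python
from langchain.embeddings.fake import FakeEmbeddings  # assumption: any Embeddings impl works
from langchain.vectorstores import DeepLake

db = DeepLake.from_texts(
    dataset_path="mem://score_demo",   # in-memory, nothing persisted
    texts=["foo", "bar", "baz"],
    embedding=FakeEmbeddings(size=16),
)
# Each result is a (Document, score) pair under the chosen metric.
for doc, score in db.similarity_search_with_score("foo", k=2, distance_metric="cos"):
    print(doc.page_content, score)
```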
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,682
Setting overwrite to False on DeepLake constructor still overwrites
### System Info Langchain 0.0.168, Python 3.11.3 ### Who can help? @anihamde ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction db = DeepLake(dataset_path=f"hub://{username_activeloop}/{lake_name}", embedding_function=embeddings, overwrite=False) ### Expected behavior Would expect overwrite to not take place; however, it does. This is easily resolvable, and I'll share a PR to address this shortly.
https://github.com/langchain-ai/langchain/issues/4682
https://github.com/langchain-ai/langchain/pull/4683
8bb32d77d0703665d498e4d9bcfafa14d202d423
03ac39368fe60201a3f071d7d360c39f59c77cbf
"2023-05-14T19:15:22Z"
python
"2023-05-16T00:39:16Z"
langchain/vectorstores/deeplake.py
lambda_mult: float = 0.5,
        **kwargs: Any,
    ) -> List[Document]:
        """Return docs selected using the maximal marginal relevance.

        Maximal marginal relevance optimizes for similarity to query AND diversity
        among selected documents.

        Args:
            embedding: Embedding to look up documents similar to.
            k: Number of Documents to return. Defaults to 4.
            fetch_k: Number of Documents to fetch to pass to MMR algorithm.
            lambda_mult: Number between 0 and 1 that determines the degree
                of diversity among the results with 0 corresponding
                to maximum diversity and 1 to minimum diversity.
                Defaults to 0.5.

        Returns:
            List of Documents selected by maximal marginal relevance.
        """
        return self._search_helper(
            embedding=embedding,
            k=k,
            fetch_k=fetch_k,
            use_maximal_marginal_relevance=True,
            lambda_mult=lambda_mult,
            **kwargs,
        )

    def max_marginal_relevance_search(
        self,
        query: str,
        k: int = 4,
        fetch_k: int = 20,
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,682
Setting overwrite to False on DeepLake constructor still overwrites
### System Info Langchain 0.0.168, Python 3.11.3 ### Who can help? @anihamde ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction db = DeepLake(dataset_path=f"hub://{username_activeloop}/{lake_name}", embedding_function=embeddings, overwrite=False) ### Expected behavior Would expect overwrite to not take place; however, it does. This is easily resolvable, and I'll share a PR to address this shortly.
https://github.com/langchain-ai/langchain/issues/4682
https://github.com/langchain-ai/langchain/pull/4683
8bb32d77d0703665d498e4d9bcfafa14d202d423
03ac39368fe60201a3f071d7d360c39f59c77cbf
"2023-05-14T19:15:22Z"
python
"2023-05-16T00:39:16Z"
langchain/vectorstores/deeplake.py
lambda_mult: float = 0.5,
        **kwargs: Any,
    ) -> List[Document]:
        """Return docs selected using the maximal marginal relevance.

        Maximal marginal relevance optimizes for similarity to query AND diversity
        among selected documents.

        Args:
            query: Text to look up documents similar to.
            k: Number of Documents to return. Defaults to 4.
            fetch_k: Number of Documents to fetch to pass to MMR algorithm.
            lambda_mult: Number between 0 and 1 that determines the degree
                of diversity among the results with 0 corresponding
                to maximum diversity and 1 to minimum diversity.
                Defaults to 0.5.

        Returns:
            List of Documents selected by maximal marginal relevance.
        """
        if self._embedding_function is None:
            raise ValueError(
                "For MMR search, you must specify an embedding function on"
                "creation."
            )
        return self._search_helper(
            query=query,
            k=k,
            fetch_k=fetch_k,
            use_maximal_marginal_relevance=True,
            lambda_mult=lambda_mult,
            **kwargs,
        )

    @classmethod
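A hedged usage sketch for the MMR entry point above. The store construction, texts, and `lambda_mult` value are illustrative; `lambda_mult` trades relevance (1.0) against diversity (0.0).

```python
from langchain.embeddings.fake import FakeEmbeddings  # assumption: any Embeddings impl works
from langchain.vectorstores import DeepLake

# In-memory store; texts chosen to include near-duplicates on purpose.
db = DeepLake.from_texts(
    dataset_path="mem://mmr_demo",
    texts=["foo", "foo copy", "bar"],
    embedding=FakeEmbeddings(size=16),
)
# fetch_k candidates are retrieved first, then re-ranked for diversity.
docs = db.max_marginal_relevance_search("foo", k=2, fetch_k=3, lambda_mult=0.5)
print([d.page_content for d in docs])
```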
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,682
Setting overwrite to False on DeepLake constructor still overwrites
### System Info Langchain 0.0.168, Python 3.11.3 ### Who can help? @anihamde ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction db = DeepLake(dataset_path=f"hub://{username_activeloop}/{lake_name}", embedding_function=embeddings, overwrite=False) ### Expected behavior Would expect overwrite to not take place; however, it does. This is easily resolvable, and I'll share a PR to address this shortly.
https://github.com/langchain-ai/langchain/issues/4682
https://github.com/langchain-ai/langchain/pull/4683
8bb32d77d0703665d498e4d9bcfafa14d202d423
03ac39368fe60201a3f071d7d360c39f59c77cbf
"2023-05-14T19:15:22Z"
python
"2023-05-16T00:39:16Z"
langchain/vectorstores/deeplake.py
def from_texts(
        cls,
        texts: List[str],
        embedding: Optional[Embeddings] = None,
        metadatas: Optional[List[dict]] = None,
        ids: Optional[List[str]] = None,
        dataset_path: str = _LANGCHAIN_DEFAULT_DEEPLAKE_PATH,
        **kwargs: Any,
    ) -> DeepLake:
        """Create a Deep Lake dataset from a raw documents.

        If a dataset_path is specified, the dataset will be persisted in that
        location, otherwise by default at `./deeplake`

        Args:
            path (str, pathlib.Path): - The full path to the dataset. Can be:
                - Deep Lake cloud path of the form ``hub://username/dataset_name``.
                    To write to Deep Lake cloud datasets,
                    ensure that you are logged in to Deep Lake
                    (use 'activeloop login' from command line)
                - AWS S3 path of the form ``s3://bucketname/path/to/dataset``.
                    Credentials are required in either the environment
                - Google Cloud Storage path of the form
                    ``gcs://bucketname/path/to/dataset``
                    Credentials are required in either the environment
                - Local file system path of the form ``./path/to/dataset`` or
                    ``~/path/to/dataset`` or ``path/to/dataset``.
                - In-memory path of the form ``mem://path/to/dataset`` which doesn't
                    save the dataset, but keeps it in memory instead.
                    Should be used only for testing as it does not persist.
            documents (List[Document]): List of documents to add.
            embedding (Optional[Embeddings]): Embedding function. Defaults to None.
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,682
Setting overwrite to False on DeepLake constructor still overwrites
### System Info Langchain 0.0.168, Python 3.11.3 ### Who can help? @anihamde ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction db = DeepLake(dataset_path=f"hub://{username_activeloop}/{lake_name}", embedding_function=embeddings, overwrite=False) ### Expected behavior Would expect overwrite to not take place; however, it does. This is easily resolvable, and I'll share a PR to address this shortly.
https://github.com/langchain-ai/langchain/issues/4682
https://github.com/langchain-ai/langchain/pull/4683
8bb32d77d0703665d498e4d9bcfafa14d202d423
03ac39368fe60201a3f071d7d360c39f59c77cbf
"2023-05-14T19:15:22Z"
python
"2023-05-16T00:39:16Z"
langchain/vectorstores/deeplake.py
metadatas (Optional[List[dict]]): List of metadatas. Defaults to None.
            ids (Optional[List[str]]): List of document IDs. Defaults to None.

        Returns:
            DeepLake: Deep Lake dataset.
        """
        deeplake_dataset = cls(
            dataset_path=dataset_path, embedding_function=embedding, **kwargs
        )
        deeplake_dataset.add_texts(texts=texts, metadatas=metadatas, ids=ids)
        return deeplake_dataset

    def delete(
        self,
        ids: Any[List[str], None] = None,
        filter: Any[Dict[str, str], None] = None,
        delete_all: Any[bool, None] = None,
    ) -> bool:
        """Delete the entities in the dataset

        Args:
            ids (Optional[List[str]], optional): The document_ids to delete.
                Defaults to None.
            filter (Optional[Dict[str, str]], optional): The filter to delete by.
                Defaults to None.
            delete_all (Optional[bool], optional): Whether to drop the dataset.
                Defaults to None.
        """
        if delete_all:
            self.ds.delete(large_ok=True)
            return True

        view = None
        if ids:
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,682
Setting overwrite to False on DeepLake constructor still overwrites
### System Info Langchain 0.0.168, Python 3.11.3 ### Who can help? @anihamde ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction db = DeepLake(dataset_path=f"hub://{username_activeloop}/{lake_name}", embedding_function=embeddings, overwrite=False) ### Expected behavior Would expect overwrite to not take place; however, it does. This is easily resolvable, and I'll share a PR to address this shortly.
https://github.com/langchain-ai/langchain/issues/4682
https://github.com/langchain-ai/langchain/pull/4683
8bb32d77d0703665d498e4d9bcfafa14d202d423
03ac39368fe60201a3f071d7d360c39f59c77cbf
"2023-05-14T19:15:22Z"
python
"2023-05-16T00:39:16Z"
langchain/vectorstores/deeplake.py
view = self.ds.filter(lambda x: x["ids"].data()["value"] in ids)
            ids = list(view.sample_indices)

        if filter:
            if view is None:
                view = self.ds
            view = view.filter(partial(dp_filter, filter=filter))
            ids = list(view.sample_indices)

        with self.ds:
            for id in sorted(ids)[::-1]:
                self.ds.pop(id)

            self.ds.commit(f"deleted {len(ids)} samples", allow_empty=True)

        return True

    @classmethod
    def force_delete_by_path(cls, path: str) -> None:
        """Force delete dataset by path"""
        try:
            import deeplake
        except ImportError:
            raise ValueError(
                "Could not import deeplake python package. "
                "Please install it with `pip install deeplake`."
            )
        deeplake.delete(path, large_ok=True, force=True)

    def delete_dataset(self) -> None:
        """Delete the collection."""
        self.delete(delete_all=True)

    def persist(self) -> None:
        """Persist the collection."""
        self.ds.flush()
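A hedged sketch of the deletion paths above: delete by metadata filter, then drop the dataset entirely. The `mem://` path, texts, and metadata values are illustrative assumptions.

```python
from langchain.embeddings.fake import FakeEmbeddings  # assumption: any Embeddings impl works
from langchain.vectorstores import DeepLake

db = DeepLake.from_texts(
    dataset_path="mem://delete_demo",
    texts=["foo", "bar"],
    metadatas=[{"page": "0"}, {"page": "1"}],
    embedding=FakeEmbeddings(size=16),
)
db.delete(filter={"page": "1"})   # remove only samples whose metadata matches
db.delete(delete_all=True)        # drop everything, same as delete_dataset()
```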
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,682
Setting overwrite to False on DeepLake constructor still overwrites
### System Info Langchain 0.0.168, Python 3.11.3 ### Who can help? @anihamde ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction db = DeepLake(dataset_path=f"hub://{username_activeloop}/{lake_name}", embedding_function=embeddings, overwrite=False) ### Expected behavior Would expect overwrite to not take place; however, it does. This is easily resolvable, and I'll share a PR to address this shortly.
https://github.com/langchain-ai/langchain/issues/4682
https://github.com/langchain-ai/langchain/pull/4683
8bb32d77d0703665d498e4d9bcfafa14d202d423
03ac39368fe60201a3f071d7d360c39f59c77cbf
"2023-05-14T19:15:22Z"
python
"2023-05-16T00:39:16Z"
tests/integration_tests/vectorstores/test_deeplake.py
"""Test Deep Lake functionality.""" import deeplake import pytest from pytest import FixtureRequest from langchain.docstore.document import Document from langchain.vectorstores import DeepLake from tests.integration_tests.vectorstores.fake_embeddings import FakeEmbeddings @pytest.fixture def deeplake_datastore() -> DeepLake: texts = ["foo", "bar", "baz"] metadatas = [{"page": str(i)} for i in range(len(texts))] docsearch = DeepLake.from_texts( dataset_path="mem://test_path", texts=texts, metadatas=metadatas, embedding=FakeEmbeddings(), ) return docsearch @pytest.fixture(params=["L1", "L2", "max", "cos"]) def distance_metric(request: FixtureRequest) -> str:
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,682
Setting overwrite to False on DeepLake constructor still overwrites
### System Info Langchain 0.0.168, Python 3.11.3 ### Who can help? @anihamde ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction db = DeepLake(dataset_path=f"hub://{username_activeloop}/{lake_name}", embedding_function=embeddings, overwrite=False) ### Expected behavior Would expect overwrite to not take place; however, it does. This is easily resolvable, and I'll share a PR to address this shortly.
https://github.com/langchain-ai/langchain/issues/4682
https://github.com/langchain-ai/langchain/pull/4683
8bb32d77d0703665d498e4d9bcfafa14d202d423
03ac39368fe60201a3f071d7d360c39f59c77cbf
"2023-05-14T19:15:22Z"
python
"2023-05-16T00:39:16Z"
tests/integration_tests/vectorstores/test_deeplake.py
return request.param


def test_deeplake() -> None:
    """Test end to end construction and search."""
    texts = ["foo", "bar", "baz"]
    docsearch = DeepLake.from_texts(
        dataset_path="mem://test_path", texts=texts, embedding=FakeEmbeddings()
    )
    output = docsearch.similarity_search("foo", k=1)
    assert output == [Document(page_content="foo")]


def test_deeplake_with_metadatas() -> None:
    """Test end to end construction and search."""
    texts = ["foo", "bar", "baz"]
    metadatas = [{"page": str(i)} for i in range(len(texts))]
    docsearch = DeepLake.from_texts(
        dataset_path="mem://test_path",
        texts=texts,
        embedding=FakeEmbeddings(),
        metadatas=metadatas,
    )
    output = docsearch.similarity_search("foo", k=1)
    assert output == [Document(page_content="foo", metadata={"page": "0"})]


def test_deeplakewith_persistence() -> None:
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,682
Setting overwrite to False on DeepLake constructor still overwrites
### System Info Langchain 0.0.168, Python 3.11.3 ### Who can help? @anihamde ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction db = DeepLake(dataset_path=f"hub://{username_activeloop}/{lake_name}", embedding_function=embeddings, overwrite=False) ### Expected behavior Would expect overwrite to not take place; however, it does. This is easily resolvable, and I'll share a PR to address this shortly.
https://github.com/langchain-ai/langchain/issues/4682
https://github.com/langchain-ai/langchain/pull/4683
8bb32d77d0703665d498e4d9bcfafa14d202d423
03ac39368fe60201a3f071d7d360c39f59c77cbf
"2023-05-14T19:15:22Z"
python
"2023-05-16T00:39:16Z"
tests/integration_tests/vectorstores/test_deeplake.py
"""Test end to end construction and search, with persistence.""" dataset_path = "./tests/persist_dir" if deeplake.exists(dataset_path): deeplake.delete(dataset_path) texts = ["foo", "bar", "baz"] docsearch = DeepLake.from_texts( dataset_path=dataset_path, texts=texts, embedding=FakeEmbeddings(), ) output = docsearch.similarity_search("foo", k=1) assert output == [Document(page_content="foo")] docsearch.persist() docsearch = DeepLake( dataset_path=dataset_path, embedding_function=FakeEmbeddings(), ) output = docsearch.similarity_search("foo", k=1) docsearch.delete_dataset() def test_similarity_search(deeplake_datastore: DeepLake, distance_metric: str) -> None:
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,682
Setting overwrite to False on DeepLake constructor still overwrites
### System Info Langchain 0.0.168, Python 3.11.3 ### Who can help? @anihamde ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction db = DeepLake(dataset_path=f"hub://{username_activeloop}/{lake_name}", embedding_function=embeddings, overwrite=False) ### Expected behavior Would expect overwrite to not take place; however, it does. This is easily resolvable, and I'll share a PR to address this shortly.
https://github.com/langchain-ai/langchain/issues/4682
https://github.com/langchain-ai/langchain/pull/4683
8bb32d77d0703665d498e4d9bcfafa14d202d423
03ac39368fe60201a3f071d7d360c39f59c77cbf
"2023-05-14T19:15:22Z"
python
"2023-05-16T00:39:16Z"
tests/integration_tests/vectorstores/test_deeplake.py
"""Test similarity search.""" output = deeplake_datastore.similarity_search( "foo", k=1, distance_metric=distance_metric ) assert output == [Document(page_content="foo", metadata={"page": "0"})] deeplake_datastore.delete_dataset() def test_similarity_search_by_vector( deeplake_datastore: DeepLake, distance_metric: str ) -> None: """Test similarity search by vector.""" embeddings = FakeEmbeddings().embed_documents(["foo", "bar", "baz"]) output = deeplake_datastore.similarity_search_by_vector( embeddings[1], k=1, distance_metric=distance_metric ) assert output == [Document(page_content="bar", metadata={"page": "1"})] deeplake_datastore.delete_dataset() def test_similarity_search_with_score(
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,682
Setting overwrite to False on DeepLake constructor still overwrites
### System Info Langchain 0.0.168, Python 3.11.3 ### Who can help? @anihamde ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction db = DeepLake(dataset_path=f"hub://{username_activeloop}/{lake_name}", embedding_function=embeddings, overwrite=False) ### Expected behavior Would expect overwrite to not take place; however, it does. This is easily resolvable, and I'll share a PR to address this shortly.
https://github.com/langchain-ai/langchain/issues/4682
https://github.com/langchain-ai/langchain/pull/4683
8bb32d77d0703665d498e4d9bcfafa14d202d423
03ac39368fe60201a3f071d7d360c39f59c77cbf
"2023-05-14T19:15:22Z"
python
"2023-05-16T00:39:16Z"
tests/integration_tests/vectorstores/test_deeplake.py
deeplake_datastore: DeepLake, distance_metric: str
) -> None:
    """Test similarity search with score."""
    output, score = deeplake_datastore.similarity_search_with_score(
        "foo", k=1, distance_metric=distance_metric
    )[0]
    assert output == Document(page_content="foo", metadata={"page": "0"})
    if distance_metric == "cos":
        assert score == 1.0
    else:
        assert score == 0.0
    deeplake_datastore.delete_dataset()


def test_similarity_search_with_filter(
    deeplake_datastore: DeepLake, distance_metric: str
) -> None:
    """Test similarity search."""
    output = deeplake_datastore.similarity_search(
        "foo", k=1, distance_metric=distance_metric, filter={"page": "1"}
    )
    assert output == [Document(page_content="bar", metadata={"page": "1"})]
    deeplake_datastore.delete_dataset()


def test_max_marginal_relevance_search(deeplake_datastore: DeepLake) -> None:
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,682
Setting overwrite to False on DeepLake constructor still overwrites
### System Info Langchain 0.0.168, Python 3.11.3 ### Who can help? @anihamde ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction db = DeepLake(dataset_path=f"hub://{username_activeloop}/{lake_name}", embedding_function=embeddings, overwrite=False) ### Expected behavior Would expect overwrite to not take place; however, it does. This is easily resolvable, and I'll share a PR to address this shortly.
https://github.com/langchain-ai/langchain/issues/4682
https://github.com/langchain-ai/langchain/pull/4683
8bb32d77d0703665d498e4d9bcfafa14d202d423
03ac39368fe60201a3f071d7d360c39f59c77cbf
"2023-05-14T19:15:22Z"
python
"2023-05-16T00:39:16Z"
tests/integration_tests/vectorstores/test_deeplake.py
"""Test max marginal relevance search by vector.""" output = deeplake_datastore.max_marginal_relevance_search("foo", k=1, fetch_k=2) assert output == [Document(page_content="foo", metadata={"page": "0"})] embeddings = FakeEmbeddings().embed_documents(["foo", "bar", "baz"]) output = deeplake_datastore.max_marginal_relevance_search_by_vector( embeddings[0], k=1, fetch_k=2 ) assert output == [Document(page_content="foo", metadata={"page": "0"})] deeplake_datastore.delete_dataset() def test_delete_dataset_by_ids(deeplake_datastore: DeepLake) -> None: """Test delete dataset.""" id = deeplake_datastore.ds.ids.data()["value"][0] deeplake_datastore.delete(ids=[id]) assert deeplake_datastore.similarity_search("foo", k=1, filter={"page": "0"}) == [] assert len(deeplake_datastore.ds) == 2 deeplake_datastore.delete_dataset() def test_delete_dataset_by_filter(deeplake_datastore: DeepLake) -> None: """Test delete dataset.""" deeplake_datastore.delete(filter={"page": "1"}) assert deeplake_datastore.similarity_search("bar", k=1, filter={"page": "1"}) == [] assert len(deeplake_datastore.ds) == 2 deeplake_datastore.delete_dataset() def test_delete_by_path(deeplake_datastore: DeepLake) -> None: """Test delete dataset.""" path = deeplake_datastore.dataset_path DeepLake.force_delete_by_path(path) assert not deeplake.exists(path)
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,714
Error in on_llm callback: 'AsyncIteratorCallbackHandler' object has no attribute 'on_llm'
### System Info langchain version:0.0.168 python version 3.10 ### Who can help? @agola11 ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [X] Callbacks/Tracing - [ ] Async ### Reproduction When using the RetrievalQA chain, an error message "Error in on_llm callback: 'AsyncIteratorCallbackHandler' object has no attribute 'on_llm'" This code can run at version 0.0.164 ```python class Chain: def __init__(self): self.cb_mngr_stdout = AsyncCallbackManager([StreamingStdOutCallbackHandler()]) self.cb_mngr_aiter = AsyncCallbackManager([AsyncIteratorCallbackHandler()]) self.qa_stream = None self.qa = None self.make_chain() def make_chain(self): chain_type_kwargs = {"prompt": MyPrompt.get_retrieval_prompt()} qa = RetrievalQA.from_chain_type(llm=ChatOpenAI(model_name="gpt-3.5-turbo", max_tokens=1500, temperature=.1), chain_type="stuff", retriever=Retrieval.vectordb.as_retriever(search_kwargs={"k": Retrieval.context_num}), chain_type_kwargs=chain_type_kwargs, return_source_documents=True) qa_stream = RetrievalQA.from_chain_type(llm=ChatOpenAI(model_name="gpt-3.5-turbo", max_tokens=1500, temperature=.1, streaming=True, callback_manager=self.cb_mngr_aiter), chain_type="stuff", retriever=Retrieval.vectordb.as_retriever(search_kwargs={"k": Retrieval.context_num}), chain_type_kwargs=chain_type_kwargs, return_source_documents=True) self.qa = qa self.qa_stream = qa_stream ``` call function ```python resp = await chains.qa.acall({"query": "xxxxxxx"}) # no problem resp = await chains.qa_stream.acall({"query": "xxxxxxxx"}) # error ``` ### Expected behavior self.qa_stream return result like self.qa,or like langchain version 0.0.164
https://github.com/langchain-ai/langchain/issues/4714
https://github.com/langchain-ai/langchain/pull/4717
bf0904b676f458386096a008155ffeb805bc52c5
2e43954bc31dc5e23c7878149c0e061c444416a7
"2023-05-15T06:30:00Z"
python
"2023-05-16T01:36:21Z"
langchain/callbacks/manager.py
from __future__ import annotations

import asyncio
import functools
import logging
import os
import warnings
from contextlib import contextmanager
from contextvars import ContextVar
from typing import Any, Dict, Generator, List, Optional, Type, TypeVar, Union, cast
from uuid import UUID, uuid4

from langchain.callbacks.base import (
    BaseCallbackHandler,
    BaseCallbackManager,
    ChainManagerMixin,
    LLMManagerMixin,
    RunManagerMixin,
    ToolManagerMixin,
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,714
Error in on_llm callback: 'AsyncIteratorCallbackHandler' object has no attribute 'on_llm'
### System Info langchain version:0.0.168 python version 3.10 ### Who can help? @agola11 ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [X] Callbacks/Tracing - [ ] Async ### Reproduction When using the RetrievalQA chain, an error message "Error in on_llm callback: 'AsyncIteratorCallbackHandler' object has no attribute 'on_llm'" This code can run at version 0.0.164 ```python class Chain: def __init__(self): self.cb_mngr_stdout = AsyncCallbackManager([StreamingStdOutCallbackHandler()]) self.cb_mngr_aiter = AsyncCallbackManager([AsyncIteratorCallbackHandler()]) self.qa_stream = None self.qa = None self.make_chain() def make_chain(self): chain_type_kwargs = {"prompt": MyPrompt.get_retrieval_prompt()} qa = RetrievalQA.from_chain_type(llm=ChatOpenAI(model_name="gpt-3.5-turbo", max_tokens=1500, temperature=.1), chain_type="stuff", retriever=Retrieval.vectordb.as_retriever(search_kwargs={"k": Retrieval.context_num}), chain_type_kwargs=chain_type_kwargs, return_source_documents=True) qa_stream = RetrievalQA.from_chain_type(llm=ChatOpenAI(model_name="gpt-3.5-turbo", max_tokens=1500, temperature=.1, streaming=True, callback_manager=self.cb_mngr_aiter), chain_type="stuff", retriever=Retrieval.vectordb.as_retriever(search_kwargs={"k": Retrieval.context_num}), chain_type_kwargs=chain_type_kwargs, return_source_documents=True) self.qa = qa self.qa_stream = qa_stream ``` call function ```python resp = await chains.qa.acall({"query": "xxxxxxx"}) # no problem resp = await chains.qa_stream.acall({"query": "xxxxxxxx"}) # error ``` ### Expected behavior self.qa_stream return result like self.qa,or like langchain version 0.0.164
https://github.com/langchain-ai/langchain/issues/4714
https://github.com/langchain-ai/langchain/pull/4717
bf0904b676f458386096a008155ffeb805bc52c5
2e43954bc31dc5e23c7878149c0e061c444416a7
"2023-05-15T06:30:00Z"
python
"2023-05-16T01:36:21Z"
langchain/callbacks/manager.py
)
from langchain.callbacks.openai_info import OpenAICallbackHandler
from langchain.callbacks.stdout import StdOutCallbackHandler
from langchain.callbacks.tracers.langchain import LangChainTracer
from langchain.callbacks.tracers.langchain_v1 import LangChainTracerV1, TracerSessionV1
from langchain.callbacks.tracers.schemas import TracerSession
from langchain.schema import (
    AgentAction,
    AgentFinish,
    BaseMessage,
    LLMResult,
    get_buffer_string,
)

logger = logging.getLogger(__name__)
Callbacks = Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]

openai_callback_var: ContextVar[Optional[OpenAICallbackHandler]] = ContextVar(
    "openai_callback", default=None
)
tracing_callback_var: ContextVar[
    Optional[LangChainTracerV1]
] = ContextVar(
    "tracing_callback", default=None
)
tracing_v2_callback_var: ContextVar[
    Optional[LangChainTracer]
] = ContextVar(
    "tracing_callback_v2", default=None
)


@contextmanager
def get_openai_callback() -> Generator[OpenAICallbackHandler, None, None]:
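The module-level `ContextVar`s above let the context managers that follow publish a handler to the current execution context without passing it explicitly. A standalone sketch of the pattern:

```python
# Standalone sketch of the ContextVar pattern above: a module-level variable
# whose value is scoped to the current execution context.
from contextvars import ContextVar
from typing import Optional

current_handler: ContextVar[Optional[str]] = ContextVar("current_handler", default=None)

token = current_handler.set("openai_callback")
assert current_handler.get() == "openai_callback"
current_handler.reset(token)  # restore the previous value
assert current_handler.get() is None
```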
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,714
Error in on_llm callback: 'AsyncIteratorCallbackHandler' object has no attribute 'on_llm'
### System Info langchain version:0.0.168 python version 3.10 ### Who can help? @agola11 ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [X] Callbacks/Tracing - [ ] Async ### Reproduction When using the RetrievalQA chain, an error message "Error in on_llm callback: 'AsyncIteratorCallbackHandler' object has no attribute 'on_llm'" This code can run at version 0.0.164 ```python class Chain: def __init__(self): self.cb_mngr_stdout = AsyncCallbackManager([StreamingStdOutCallbackHandler()]) self.cb_mngr_aiter = AsyncCallbackManager([AsyncIteratorCallbackHandler()]) self.qa_stream = None self.qa = None self.make_chain() def make_chain(self): chain_type_kwargs = {"prompt": MyPrompt.get_retrieval_prompt()} qa = RetrievalQA.from_chain_type(llm=ChatOpenAI(model_name="gpt-3.5-turbo", max_tokens=1500, temperature=.1), chain_type="stuff", retriever=Retrieval.vectordb.as_retriever(search_kwargs={"k": Retrieval.context_num}), chain_type_kwargs=chain_type_kwargs, return_source_documents=True) qa_stream = RetrievalQA.from_chain_type(llm=ChatOpenAI(model_name="gpt-3.5-turbo", max_tokens=1500, temperature=.1, streaming=True, callback_manager=self.cb_mngr_aiter), chain_type="stuff", retriever=Retrieval.vectordb.as_retriever(search_kwargs={"k": Retrieval.context_num}), chain_type_kwargs=chain_type_kwargs, return_source_documents=True) self.qa = qa self.qa_stream = qa_stream ``` call function ```python resp = await chains.qa.acall({"query": "xxxxxxx"}) # no problem resp = await chains.qa_stream.acall({"query": "xxxxxxxx"}) # error ``` ### Expected behavior self.qa_stream return result like self.qa,or like langchain version 0.0.164
https://github.com/langchain-ai/langchain/issues/4714
https://github.com/langchain-ai/langchain/pull/4717
bf0904b676f458386096a008155ffeb805bc52c5
2e43954bc31dc5e23c7878149c0e061c444416a7
"2023-05-15T06:30:00Z"
python
"2023-05-16T01:36:21Z"
langchain/callbacks/manager.py
"""Get OpenAI callback handler in a context manager.""" cb = OpenAICallbackHandler() openai_callback_var.set(cb) yield cb openai_callback_var.set(None) @contextmanager def tracing_enabled( session_name: str = "default", ) -> Generator[TracerSessionV1, None, None]: """Get Tracer in a context manager.""" cb = LangChainTracerV1() session = cast(TracerSessionV1, cb.load_session(session_name)) tracing_callback_var.set(cb) yield session tracing_callback_var.set(None) @contextmanager def tracing_v2_enabled(
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,714
Error in on_llm callback: 'AsyncIteratorCallbackHandler' object has no attribute 'on_llm'
### System Info langchain version:0.0.168 python version 3.10 ### Who can help? @agola11 ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [X] Callbacks/Tracing - [ ] Async ### Reproduction When using the RetrievalQA chain, an error message "Error in on_llm callback: 'AsyncIteratorCallbackHandler' object has no attribute 'on_llm'" This code can run at version 0.0.164 ```python class Chain: def __init__(self): self.cb_mngr_stdout = AsyncCallbackManager([StreamingStdOutCallbackHandler()]) self.cb_mngr_aiter = AsyncCallbackManager([AsyncIteratorCallbackHandler()]) self.qa_stream = None self.qa = None self.make_chain() def make_chain(self): chain_type_kwargs = {"prompt": MyPrompt.get_retrieval_prompt()} qa = RetrievalQA.from_chain_type(llm=ChatOpenAI(model_name="gpt-3.5-turbo", max_tokens=1500, temperature=.1), chain_type="stuff", retriever=Retrieval.vectordb.as_retriever(search_kwargs={"k": Retrieval.context_num}), chain_type_kwargs=chain_type_kwargs, return_source_documents=True) qa_stream = RetrievalQA.from_chain_type(llm=ChatOpenAI(model_name="gpt-3.5-turbo", max_tokens=1500, temperature=.1, streaming=True, callback_manager=self.cb_mngr_aiter), chain_type="stuff", retriever=Retrieval.vectordb.as_retriever(search_kwargs={"k": Retrieval.context_num}), chain_type_kwargs=chain_type_kwargs, return_source_documents=True) self.qa = qa self.qa_stream = qa_stream ``` call function ```python resp = await chains.qa.acall({"query": "xxxxxxx"}) # no problem resp = await chains.qa_stream.acall({"query": "xxxxxxxx"}) # error ``` ### Expected behavior self.qa_stream return result like self.qa,or like langchain version 0.0.164
https://github.com/langchain-ai/langchain/issues/4714
https://github.com/langchain-ai/langchain/pull/4717
bf0904b676f458386096a008155ffeb805bc52c5
2e43954bc31dc5e23c7878149c0e061c444416a7
"2023-05-15T06:30:00Z"
python
"2023-05-16T01:36:21Z"
langchain/callbacks/manager.py
session_name: Optional[str] = None,
    *,
    example_id: Optional[Union[str, UUID]] = None,
    tenant_id: Optional[str] = None,
    session_extra: Optional[Dict[str, Any]] = None,
) -> Generator[TracerSession, None, None]:
    """Get the experimental tracer handler in a context manager."""
    warnings.warn(
        "The experimental tracing v2 is in development. "
        "This is not yet stable and may change in the future."
    )
    if isinstance(example_id, str):
        example_id = UUID(example_id)
    cb = LangChainTracer(
        tenant_id=tenant_id,
        session_name=session_name,
        example_id=example_id,
        session_extra=session_extra,
    )
    session = cb.ensure_session()
    tracing_v2_callback_var.set(cb)
    yield session
    tracing_v2_callback_var.set(None)


def _handle_event(
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,714
Error in on_llm callback: 'AsyncIteratorCallbackHandler' object has no attribute 'on_llm'
### System Info langchain version:0.0.168 python version 3.10 ### Who can help? @agola11 ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [X] Callbacks/Tracing - [ ] Async ### Reproduction When using the RetrievalQA chain, an error message "Error in on_llm callback: 'AsyncIteratorCallbackHandler' object has no attribute 'on_llm'" This code can run at version 0.0.164 ```python class Chain: def __init__(self): self.cb_mngr_stdout = AsyncCallbackManager([StreamingStdOutCallbackHandler()]) self.cb_mngr_aiter = AsyncCallbackManager([AsyncIteratorCallbackHandler()]) self.qa_stream = None self.qa = None self.make_chain() def make_chain(self): chain_type_kwargs = {"prompt": MyPrompt.get_retrieval_prompt()} qa = RetrievalQA.from_chain_type(llm=ChatOpenAI(model_name="gpt-3.5-turbo", max_tokens=1500, temperature=.1), chain_type="stuff", retriever=Retrieval.vectordb.as_retriever(search_kwargs={"k": Retrieval.context_num}), chain_type_kwargs=chain_type_kwargs, return_source_documents=True) qa_stream = RetrievalQA.from_chain_type(llm=ChatOpenAI(model_name="gpt-3.5-turbo", max_tokens=1500, temperature=.1, streaming=True, callback_manager=self.cb_mngr_aiter), chain_type="stuff", retriever=Retrieval.vectordb.as_retriever(search_kwargs={"k": Retrieval.context_num}), chain_type_kwargs=chain_type_kwargs, return_source_documents=True) self.qa = qa self.qa_stream = qa_stream ``` call function ```python resp = await chains.qa.acall({"query": "xxxxxxx"}) # no problem resp = await chains.qa_stream.acall({"query": "xxxxxxxx"}) # error ``` ### Expected behavior self.qa_stream return result like self.qa,or like langchain version 0.0.164
https://github.com/langchain-ai/langchain/issues/4714
https://github.com/langchain-ai/langchain/pull/4717
bf0904b676f458386096a008155ffeb805bc52c5
2e43954bc31dc5e23c7878149c0e061c444416a7
"2023-05-15T06:30:00Z"
python
"2023-05-16T01:36:21Z"
langchain/callbacks/manager.py
handlers: List[BaseCallbackHandler],
    event_name: str,
    ignore_condition_name: Optional[str],
    *args: Any,
    **kwargs: Any,
) -> None:
    """Generic event handler for CallbackManager."""
    message_strings: Optional[List[str]] = None
    for handler in handlers:
        try:
            if ignore_condition_name is None or not getattr(
                handler, ignore_condition_name
            ):
                getattr(handler, event_name)(*args, **kwargs)
        except NotImplementedError as e:
            if event_name == "on_chat_model_start":
                if message_strings is None:
                    message_strings = [get_buffer_string(m) for m in args[1]]
                _handle_event(
                    [handler],
                    "on_llm_start",
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,714
Error in on_llm callback: 'AsyncIteratorCallbackHandler' object has no attribute 'on_llm'
### System Info langchain version:0.0.168 python version 3.10 ### Who can help? @agola11 ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [X] Callbacks/Tracing - [ ] Async ### Reproduction When using the RetrievalQA chain, an error message "Error in on_llm callback: 'AsyncIteratorCallbackHandler' object has no attribute 'on_llm'" This code can run at version 0.0.164 ```python class Chain: def __init__(self): self.cb_mngr_stdout = AsyncCallbackManager([StreamingStdOutCallbackHandler()]) self.cb_mngr_aiter = AsyncCallbackManager([AsyncIteratorCallbackHandler()]) self.qa_stream = None self.qa = None self.make_chain() def make_chain(self): chain_type_kwargs = {"prompt": MyPrompt.get_retrieval_prompt()} qa = RetrievalQA.from_chain_type(llm=ChatOpenAI(model_name="gpt-3.5-turbo", max_tokens=1500, temperature=.1), chain_type="stuff", retriever=Retrieval.vectordb.as_retriever(search_kwargs={"k": Retrieval.context_num}), chain_type_kwargs=chain_type_kwargs, return_source_documents=True) qa_stream = RetrievalQA.from_chain_type(llm=ChatOpenAI(model_name="gpt-3.5-turbo", max_tokens=1500, temperature=.1, streaming=True, callback_manager=self.cb_mngr_aiter), chain_type="stuff", retriever=Retrieval.vectordb.as_retriever(search_kwargs={"k": Retrieval.context_num}), chain_type_kwargs=chain_type_kwargs, return_source_documents=True) self.qa = qa self.qa_stream = qa_stream ``` call function ```python resp = await chains.qa.acall({"query": "xxxxxxx"}) # no problem resp = await chains.qa_stream.acall({"query": "xxxxxxxx"}) # error ``` ### Expected behavior self.qa_stream return result like self.qa,or like langchain version 0.0.164
https://github.com/langchain-ai/langchain/issues/4714
https://github.com/langchain-ai/langchain/pull/4717
bf0904b676f458386096a008155ffeb805bc52c5
2e43954bc31dc5e23c7878149c0e061c444416a7
"2023-05-15T06:30:00Z"
python
"2023-05-16T01:36:21Z"
langchain/callbacks/manager.py
"ignore_llm", args[0], message_strings, *args[2:], **kwargs, ) else: logger.warning(f"Error in {event_name} callback: {e}") except Exception as e: logging.warning(f"Error in {event_name} callback: {e}") async def _ahandle_event_for_handler( handler: BaseCallbackHandler, event_name: str, ignore_condition_name: Optional[str], *args: Any, **kwargs: Any, ) -> None: try: if ignore_condition_name is None or not getattr(handler, ignore_condition_name): event = getattr(handler, event_name) if asyncio.iscoroutinefunction(event): await event(*args, **kwargs) else: await asyncio.get_event_loop().run_in_executor( None, functools.partial(event, *args, **kwargs) ) except NotImplementedError as e: if event_name == "on_chat_model_start": message_strings = [get_buffer_string(m) for m in args[1]] await _ahandle_event_for_handler(
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,714
Error in on_llm callback: 'AsyncIteratorCallbackHandler' object has no attribute 'on_llm'
### System Info langchain version:0.0.168 python version 3.10 ### Who can help? @agola11 ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [X] Callbacks/Tracing - [ ] Async ### Reproduction When using the RetrievalQA chain, an error message "Error in on_llm callback: 'AsyncIteratorCallbackHandler' object has no attribute 'on_llm'" This code can run at version 0.0.164 ```python class Chain: def __init__(self): self.cb_mngr_stdout = AsyncCallbackManager([StreamingStdOutCallbackHandler()]) self.cb_mngr_aiter = AsyncCallbackManager([AsyncIteratorCallbackHandler()]) self.qa_stream = None self.qa = None self.make_chain() def make_chain(self): chain_type_kwargs = {"prompt": MyPrompt.get_retrieval_prompt()} qa = RetrievalQA.from_chain_type(llm=ChatOpenAI(model_name="gpt-3.5-turbo", max_tokens=1500, temperature=.1), chain_type="stuff", retriever=Retrieval.vectordb.as_retriever(search_kwargs={"k": Retrieval.context_num}), chain_type_kwargs=chain_type_kwargs, return_source_documents=True) qa_stream = RetrievalQA.from_chain_type(llm=ChatOpenAI(model_name="gpt-3.5-turbo", max_tokens=1500, temperature=.1, streaming=True, callback_manager=self.cb_mngr_aiter), chain_type="stuff", retriever=Retrieval.vectordb.as_retriever(search_kwargs={"k": Retrieval.context_num}), chain_type_kwargs=chain_type_kwargs, return_source_documents=True) self.qa = qa self.qa_stream = qa_stream ``` call function ```python resp = await chains.qa.acall({"query": "xxxxxxx"}) # no problem resp = await chains.qa_stream.acall({"query": "xxxxxxxx"}) # error ``` ### Expected behavior self.qa_stream return result like self.qa,or like langchain version 0.0.164
https://github.com/langchain-ai/langchain/issues/4714
https://github.com/langchain-ai/langchain/pull/4717
bf0904b676f458386096a008155ffeb805bc52c5
2e43954bc31dc5e23c7878149c0e061c444416a7
"2023-05-15T06:30:00Z"
python
"2023-05-16T01:36:21Z"
langchain/callbacks/manager.py
                handler,
                "on_llm",
                "ignore_llm",
                args[0],
                message_strings,
                *args[2:],
                **kwargs,
            )
        else:
            logger.warning(f"Error in {event_name} callback: {e}")
    except Exception as e:
        logger.warning(f"Error in {event_name} callback: {e}")


async def _ahandle_event(
    handlers: List[BaseCallbackHandler],
    event_name: str,
    ignore_condition_name: Optional[str],
    *args: Any,
    **kwargs: Any,
) -> None:
    """Generic event handler for AsyncCallbackManager."""
    await asyncio.gather(
        *(
            _ahandle_event_for_handler(
                handler, event_name, ignore_condition_name, *args, **kwargs
            )
            for handler in handlers
        )
    )


BRM = TypeVar("BRM", bound="BaseRunManager")


class BaseRunManager(RunManagerMixin):
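Editorial note: the fallback in the chunk above re-dispatches an unimplemented `on_chat_model_start` to an event literally named `"on_llm"`, a method `BaseCallbackHandler` never defines — which is exactly the `AttributeError` this issue reports. A minimal sketch of that failure mode, using only standard Python (the handler class is illustrative, not langchain's):

```python
# Sketch of the failure: a handler that implements on_llm_start but not
# on_chat_model_start hits the NotImplementedError branch above, which then
# looks up a nonexistent "on_llm" attribute; the AttributeError lands in the
# generic except clause and is logged as "Error in on_llm callback: ...".
class IllustrativeHandler:
    async def on_chat_model_start(self, serialized, messages, **kwargs):
        raise NotImplementedError

    async def on_llm_start(self, serialized, prompts, **kwargs):
        print("llm started")


handler = IllustrativeHandler()
try:
    getattr(handler, "on_llm")  # what the re-dispatch effectively looks up
except AttributeError as e:
    print(f"Error in on_llm callback: {e}")
```

Running this prints `Error in on_llm callback: 'IllustrativeHandler' object has no attribute 'on_llm'`, mirroring the message in the issue title.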
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,714
Error in on_llm callback: 'AsyncIteratorCallbackHandler' object has no attribute 'on_llm'
### System Info langchain version:0.0.168 python version 3.10 ### Who can help? @agola11 ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [X] Callbacks/Tracing - [ ] Async ### Reproduction When using the RetrievalQA chain, an error message "Error in on_llm callback: 'AsyncIteratorCallbackHandler' object has no attribute 'on_llm'" This code can run at version 0.0.164 ```python class Chain: def __init__(self): self.cb_mngr_stdout = AsyncCallbackManager([StreamingStdOutCallbackHandler()]) self.cb_mngr_aiter = AsyncCallbackManager([AsyncIteratorCallbackHandler()]) self.qa_stream = None self.qa = None self.make_chain() def make_chain(self): chain_type_kwargs = {"prompt": MyPrompt.get_retrieval_prompt()} qa = RetrievalQA.from_chain_type(llm=ChatOpenAI(model_name="gpt-3.5-turbo", max_tokens=1500, temperature=.1), chain_type="stuff", retriever=Retrieval.vectordb.as_retriever(search_kwargs={"k": Retrieval.context_num}), chain_type_kwargs=chain_type_kwargs, return_source_documents=True) qa_stream = RetrievalQA.from_chain_type(llm=ChatOpenAI(model_name="gpt-3.5-turbo", max_tokens=1500, temperature=.1, streaming=True, callback_manager=self.cb_mngr_aiter), chain_type="stuff", retriever=Retrieval.vectordb.as_retriever(search_kwargs={"k": Retrieval.context_num}), chain_type_kwargs=chain_type_kwargs, return_source_documents=True) self.qa = qa self.qa_stream = qa_stream ``` call function ```python resp = await chains.qa.acall({"query": "xxxxxxx"}) # no problem resp = await chains.qa_stream.acall({"query": "xxxxxxxx"}) # error ``` ### Expected behavior self.qa_stream return result like self.qa,or like langchain version 0.0.164
https://github.com/langchain-ai/langchain/issues/4714
https://github.com/langchain-ai/langchain/pull/4717
bf0904b676f458386096a008155ffeb805bc52c5
2e43954bc31dc5e23c7878149c0e061c444416a7
"2023-05-15T06:30:00Z"
python
"2023-05-16T01:36:21Z"
langchain/callbacks/manager.py
    """Base class for run manager (a bound callback manager)."""

    def __init__(
        self,
        run_id: UUID,
        handlers: List[BaseCallbackHandler],
        inheritable_handlers: List[BaseCallbackHandler],
        parent_run_id: Optional[UUID] = None,
    ) -> None:
        """Initialize run manager."""
        self.run_id = run_id
        self.handlers = handlers
        self.inheritable_handlers = inheritable_handlers
        self.parent_run_id = parent_run_id

    @classmethod
    def get_noop_manager(cls: Type[BRM]) -> BRM:
        """Return a manager that doesn't perform any operations."""
        return cls(uuid4(), [], [])


class RunManager(BaseRunManager):
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,714
Error in on_llm callback: 'AsyncIteratorCallbackHandler' object has no attribute 'on_llm'
### System Info langchain version:0.0.168 python version 3.10 ### Who can help? @agola11 ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [X] Callbacks/Tracing - [ ] Async ### Reproduction When using the RetrievalQA chain, an error message "Error in on_llm callback: 'AsyncIteratorCallbackHandler' object has no attribute 'on_llm'" This code can run at version 0.0.164 ```python class Chain: def __init__(self): self.cb_mngr_stdout = AsyncCallbackManager([StreamingStdOutCallbackHandler()]) self.cb_mngr_aiter = AsyncCallbackManager([AsyncIteratorCallbackHandler()]) self.qa_stream = None self.qa = None self.make_chain() def make_chain(self): chain_type_kwargs = {"prompt": MyPrompt.get_retrieval_prompt()} qa = RetrievalQA.from_chain_type(llm=ChatOpenAI(model_name="gpt-3.5-turbo", max_tokens=1500, temperature=.1), chain_type="stuff", retriever=Retrieval.vectordb.as_retriever(search_kwargs={"k": Retrieval.context_num}), chain_type_kwargs=chain_type_kwargs, return_source_documents=True) qa_stream = RetrievalQA.from_chain_type(llm=ChatOpenAI(model_name="gpt-3.5-turbo", max_tokens=1500, temperature=.1, streaming=True, callback_manager=self.cb_mngr_aiter), chain_type="stuff", retriever=Retrieval.vectordb.as_retriever(search_kwargs={"k": Retrieval.context_num}), chain_type_kwargs=chain_type_kwargs, return_source_documents=True) self.qa = qa self.qa_stream = qa_stream ``` call function ```python resp = await chains.qa.acall({"query": "xxxxxxx"}) # no problem resp = await chains.qa_stream.acall({"query": "xxxxxxxx"}) # error ``` ### Expected behavior self.qa_stream return result like self.qa,or like langchain version 0.0.164
https://github.com/langchain-ai/langchain/issues/4714
https://github.com/langchain-ai/langchain/pull/4717
bf0904b676f458386096a008155ffeb805bc52c5
2e43954bc31dc5e23c7878149c0e061c444416a7
"2023-05-15T06:30:00Z"
python
"2023-05-16T01:36:21Z"
langchain/callbacks/manager.py
    """Sync Run Manager."""

    def on_text(
        self,
        text: str,
        **kwargs: Any,
    ) -> Any:
        """Run when text is received."""
        _handle_event(
            self.handlers,
            "on_text",
            None,
            text,
            run_id=self.run_id,
            parent_run_id=self.parent_run_id,
            **kwargs,
        )


class AsyncRunManager(BaseRunManager):
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,714
Error in on_llm callback: 'AsyncIteratorCallbackHandler' object has no attribute 'on_llm'
### System Info langchain version:0.0.168 python version 3.10 ### Who can help? @agola11 ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [X] Callbacks/Tracing - [ ] Async ### Reproduction When using the RetrievalQA chain, an error message "Error in on_llm callback: 'AsyncIteratorCallbackHandler' object has no attribute 'on_llm'" This code can run at version 0.0.164 ```python class Chain: def __init__(self): self.cb_mngr_stdout = AsyncCallbackManager([StreamingStdOutCallbackHandler()]) self.cb_mngr_aiter = AsyncCallbackManager([AsyncIteratorCallbackHandler()]) self.qa_stream = None self.qa = None self.make_chain() def make_chain(self): chain_type_kwargs = {"prompt": MyPrompt.get_retrieval_prompt()} qa = RetrievalQA.from_chain_type(llm=ChatOpenAI(model_name="gpt-3.5-turbo", max_tokens=1500, temperature=.1), chain_type="stuff", retriever=Retrieval.vectordb.as_retriever(search_kwargs={"k": Retrieval.context_num}), chain_type_kwargs=chain_type_kwargs, return_source_documents=True) qa_stream = RetrievalQA.from_chain_type(llm=ChatOpenAI(model_name="gpt-3.5-turbo", max_tokens=1500, temperature=.1, streaming=True, callback_manager=self.cb_mngr_aiter), chain_type="stuff", retriever=Retrieval.vectordb.as_retriever(search_kwargs={"k": Retrieval.context_num}), chain_type_kwargs=chain_type_kwargs, return_source_documents=True) self.qa = qa self.qa_stream = qa_stream ``` call function ```python resp = await chains.qa.acall({"query": "xxxxxxx"}) # no problem resp = await chains.qa_stream.acall({"query": "xxxxxxxx"}) # error ``` ### Expected behavior self.qa_stream return result like self.qa,or like langchain version 0.0.164
https://github.com/langchain-ai/langchain/issues/4714
https://github.com/langchain-ai/langchain/pull/4717
bf0904b676f458386096a008155ffeb805bc52c5
2e43954bc31dc5e23c7878149c0e061c444416a7
"2023-05-15T06:30:00Z"
python
"2023-05-16T01:36:21Z"
langchain/callbacks/manager.py
    """Async Run Manager."""

    async def on_text(
        self,
        text: str,
        **kwargs: Any,
    ) -> Any:
        """Run when text is received."""
        await _ahandle_event(
            self.handlers,
            "on_text",
            None,
            text,
            run_id=self.run_id,
            parent_run_id=self.parent_run_id,
            **kwargs,
        )


class CallbackManagerForLLMRun(RunManager, LLMManagerMixin):
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,714
Error in on_llm callback: 'AsyncIteratorCallbackHandler' object has no attribute 'on_llm'
### System Info langchain version:0.0.168 python version 3.10 ### Who can help? @agola11 ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [X] Callbacks/Tracing - [ ] Async ### Reproduction When using the RetrievalQA chain, an error message "Error in on_llm callback: 'AsyncIteratorCallbackHandler' object has no attribute 'on_llm'" This code can run at version 0.0.164 ```python class Chain: def __init__(self): self.cb_mngr_stdout = AsyncCallbackManager([StreamingStdOutCallbackHandler()]) self.cb_mngr_aiter = AsyncCallbackManager([AsyncIteratorCallbackHandler()]) self.qa_stream = None self.qa = None self.make_chain() def make_chain(self): chain_type_kwargs = {"prompt": MyPrompt.get_retrieval_prompt()} qa = RetrievalQA.from_chain_type(llm=ChatOpenAI(model_name="gpt-3.5-turbo", max_tokens=1500, temperature=.1), chain_type="stuff", retriever=Retrieval.vectordb.as_retriever(search_kwargs={"k": Retrieval.context_num}), chain_type_kwargs=chain_type_kwargs, return_source_documents=True) qa_stream = RetrievalQA.from_chain_type(llm=ChatOpenAI(model_name="gpt-3.5-turbo", max_tokens=1500, temperature=.1, streaming=True, callback_manager=self.cb_mngr_aiter), chain_type="stuff", retriever=Retrieval.vectordb.as_retriever(search_kwargs={"k": Retrieval.context_num}), chain_type_kwargs=chain_type_kwargs, return_source_documents=True) self.qa = qa self.qa_stream = qa_stream ``` call function ```python resp = await chains.qa.acall({"query": "xxxxxxx"}) # no problem resp = await chains.qa_stream.acall({"query": "xxxxxxxx"}) # error ``` ### Expected behavior self.qa_stream return result like self.qa,or like langchain version 0.0.164
https://github.com/langchain-ai/langchain/issues/4714
https://github.com/langchain-ai/langchain/pull/4717
bf0904b676f458386096a008155ffeb805bc52c5
2e43954bc31dc5e23c7878149c0e061c444416a7
"2023-05-15T06:30:00Z"
python
"2023-05-16T01:36:21Z"
langchain/callbacks/manager.py
    """Callback manager for LLM run."""

    def on_llm_new_token(
        self,
        token: str,
        **kwargs: Any,
    ) -> None:
        """Run when LLM generates a new token."""
        _handle_event(
            self.handlers,
            "on_llm_new_token",
            "ignore_llm",
            token=token,
            run_id=self.run_id,
            parent_run_id=self.parent_run_id,
            **kwargs,
        )

    def on_llm_end(self, response: LLMResult, **kwargs: Any) -> None:
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,714
Error in on_llm callback: 'AsyncIteratorCallbackHandler' object has no attribute 'on_llm'
### System Info langchain version:0.0.168 python version 3.10 ### Who can help? @agola11 ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [X] Callbacks/Tracing - [ ] Async ### Reproduction When using the RetrievalQA chain, an error message "Error in on_llm callback: 'AsyncIteratorCallbackHandler' object has no attribute 'on_llm'" This code can run at version 0.0.164 ```python class Chain: def __init__(self): self.cb_mngr_stdout = AsyncCallbackManager([StreamingStdOutCallbackHandler()]) self.cb_mngr_aiter = AsyncCallbackManager([AsyncIteratorCallbackHandler()]) self.qa_stream = None self.qa = None self.make_chain() def make_chain(self): chain_type_kwargs = {"prompt": MyPrompt.get_retrieval_prompt()} qa = RetrievalQA.from_chain_type(llm=ChatOpenAI(model_name="gpt-3.5-turbo", max_tokens=1500, temperature=.1), chain_type="stuff", retriever=Retrieval.vectordb.as_retriever(search_kwargs={"k": Retrieval.context_num}), chain_type_kwargs=chain_type_kwargs, return_source_documents=True) qa_stream = RetrievalQA.from_chain_type(llm=ChatOpenAI(model_name="gpt-3.5-turbo", max_tokens=1500, temperature=.1, streaming=True, callback_manager=self.cb_mngr_aiter), chain_type="stuff", retriever=Retrieval.vectordb.as_retriever(search_kwargs={"k": Retrieval.context_num}), chain_type_kwargs=chain_type_kwargs, return_source_documents=True) self.qa = qa self.qa_stream = qa_stream ``` call function ```python resp = await chains.qa.acall({"query": "xxxxxxx"}) # no problem resp = await chains.qa_stream.acall({"query": "xxxxxxxx"}) # error ``` ### Expected behavior self.qa_stream return result like self.qa,or like langchain version 0.0.164
https://github.com/langchain-ai/langchain/issues/4714
https://github.com/langchain-ai/langchain/pull/4717
bf0904b676f458386096a008155ffeb805bc52c5
2e43954bc31dc5e23c7878149c0e061c444416a7
"2023-05-15T06:30:00Z"
python
"2023-05-16T01:36:21Z"
langchain/callbacks/manager.py
        """Run when LLM ends running."""
        _handle_event(
            self.handlers,
            "on_llm_end",
            "ignore_llm",
            response,
            run_id=self.run_id,
            parent_run_id=self.parent_run_id,
            **kwargs,
        )

    def on_llm_error(
        self,
        error: Union[Exception, KeyboardInterrupt],
        **kwargs: Any,
    ) -> None:
        """Run when LLM errors."""
        _handle_event(
            self.handlers,
            "on_llm_error",
            "ignore_llm",
            error,
            run_id=self.run_id,
            parent_run_id=self.parent_run_id,
            **kwargs,
        )


class AsyncCallbackManagerForLLMRun(AsyncRunManager, LLMManagerMixin):
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,714
Error in on_llm callback: 'AsyncIteratorCallbackHandler' object has no attribute 'on_llm'
### System Info langchain version:0.0.168 python version 3.10 ### Who can help? @agola11 ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [X] Callbacks/Tracing - [ ] Async ### Reproduction When using the RetrievalQA chain, an error message "Error in on_llm callback: 'AsyncIteratorCallbackHandler' object has no attribute 'on_llm'" This code can run at version 0.0.164 ```python class Chain: def __init__(self): self.cb_mngr_stdout = AsyncCallbackManager([StreamingStdOutCallbackHandler()]) self.cb_mngr_aiter = AsyncCallbackManager([AsyncIteratorCallbackHandler()]) self.qa_stream = None self.qa = None self.make_chain() def make_chain(self): chain_type_kwargs = {"prompt": MyPrompt.get_retrieval_prompt()} qa = RetrievalQA.from_chain_type(llm=ChatOpenAI(model_name="gpt-3.5-turbo", max_tokens=1500, temperature=.1), chain_type="stuff", retriever=Retrieval.vectordb.as_retriever(search_kwargs={"k": Retrieval.context_num}), chain_type_kwargs=chain_type_kwargs, return_source_documents=True) qa_stream = RetrievalQA.from_chain_type(llm=ChatOpenAI(model_name="gpt-3.5-turbo", max_tokens=1500, temperature=.1, streaming=True, callback_manager=self.cb_mngr_aiter), chain_type="stuff", retriever=Retrieval.vectordb.as_retriever(search_kwargs={"k": Retrieval.context_num}), chain_type_kwargs=chain_type_kwargs, return_source_documents=True) self.qa = qa self.qa_stream = qa_stream ``` call function ```python resp = await chains.qa.acall({"query": "xxxxxxx"}) # no problem resp = await chains.qa_stream.acall({"query": "xxxxxxxx"}) # error ``` ### Expected behavior self.qa_stream return result like self.qa,or like langchain version 0.0.164
https://github.com/langchain-ai/langchain/issues/4714
https://github.com/langchain-ai/langchain/pull/4717
bf0904b676f458386096a008155ffeb805bc52c5
2e43954bc31dc5e23c7878149c0e061c444416a7
"2023-05-15T06:30:00Z"
python
"2023-05-16T01:36:21Z"
langchain/callbacks/manager.py
    """Async callback manager for LLM run."""

    async def on_llm_new_token(
        self,
        token: str,
        **kwargs: Any,
    ) -> None:
        """Run when LLM generates a new token."""
        await _ahandle_event(
            self.handlers,
            "on_llm_new_token",
            "ignore_llm",
            token,
            run_id=self.run_id,
            parent_run_id=self.parent_run_id,
            **kwargs,
        )

    async def on_llm_end(self, response: LLMResult, **kwargs: Any) -> None:
        """Run when LLM ends running."""
        await _ahandle_event(
            self.handlers,
            "on_llm_end",
            "ignore_llm",
            response,
            run_id=self.run_id,
            parent_run_id=self.parent_run_id,
            **kwargs,
        )

    async def on_llm_error(
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,714
Error in on_llm callback: 'AsyncIteratorCallbackHandler' object has no attribute 'on_llm'
### System Info langchain version:0.0.168 python version 3.10 ### Who can help? @agola11 ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [X] Callbacks/Tracing - [ ] Async ### Reproduction When using the RetrievalQA chain, an error message "Error in on_llm callback: 'AsyncIteratorCallbackHandler' object has no attribute 'on_llm'" This code can run at version 0.0.164 ```python class Chain: def __init__(self): self.cb_mngr_stdout = AsyncCallbackManager([StreamingStdOutCallbackHandler()]) self.cb_mngr_aiter = AsyncCallbackManager([AsyncIteratorCallbackHandler()]) self.qa_stream = None self.qa = None self.make_chain() def make_chain(self): chain_type_kwargs = {"prompt": MyPrompt.get_retrieval_prompt()} qa = RetrievalQA.from_chain_type(llm=ChatOpenAI(model_name="gpt-3.5-turbo", max_tokens=1500, temperature=.1), chain_type="stuff", retriever=Retrieval.vectordb.as_retriever(search_kwargs={"k": Retrieval.context_num}), chain_type_kwargs=chain_type_kwargs, return_source_documents=True) qa_stream = RetrievalQA.from_chain_type(llm=ChatOpenAI(model_name="gpt-3.5-turbo", max_tokens=1500, temperature=.1, streaming=True, callback_manager=self.cb_mngr_aiter), chain_type="stuff", retriever=Retrieval.vectordb.as_retriever(search_kwargs={"k": Retrieval.context_num}), chain_type_kwargs=chain_type_kwargs, return_source_documents=True) self.qa = qa self.qa_stream = qa_stream ``` call function ```python resp = await chains.qa.acall({"query": "xxxxxxx"}) # no problem resp = await chains.qa_stream.acall({"query": "xxxxxxxx"}) # error ``` ### Expected behavior self.qa_stream return result like self.qa,or like langchain version 0.0.164
https://github.com/langchain-ai/langchain/issues/4714
https://github.com/langchain-ai/langchain/pull/4717
bf0904b676f458386096a008155ffeb805bc52c5
2e43954bc31dc5e23c7878149c0e061c444416a7
"2023-05-15T06:30:00Z"
python
"2023-05-16T01:36:21Z"
langchain/callbacks/manager.py
        self,
        error: Union[Exception, KeyboardInterrupt],
        **kwargs: Any,
    ) -> None:
        """Run when LLM errors."""
        await _ahandle_event(
            self.handlers,
            "on_llm_error",
            "ignore_llm",
            error,
            run_id=self.run_id,
            parent_run_id=self.parent_run_id,
            **kwargs,
        )


class CallbackManagerForChainRun(RunManager, ChainManagerMixin):
    """Callback manager for chain run."""

    def get_child(self) -> CallbackManager:
        """Get a child callback manager."""
        manager = CallbackManager([], parent_run_id=self.run_id)
        manager.set_handlers(self.inheritable_handlers)
        return manager

    def on_chain_end(self, outputs: Dict[str, Any], **kwargs: Any) -> None:
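Editorial note: a short sketch of what `get_child` buys callers. The manager and handler imports below are this file and langchain's stdout handler; the chain name is made up.

```python
# Sketch: inheritable handlers flow into child managers, so a nested LLM or
# tool run inside a chain reports to the same handlers, linked by
# parent_run_id back to the enclosing chain run.
from langchain.callbacks.manager import CallbackManager
from langchain.callbacks.stdout import StdOutCallbackHandler

parent = CallbackManager.configure(inheritable_callbacks=[StdOutCallbackHandler()])
chain_run = parent.on_chain_start({"name": "illustrative_chain"}, {"query": "hi"})
child = chain_run.get_child()                   # carries the stdout handler
assert child.parent_run_id == chain_run.run_id  # child runs link back here
```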
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,714
Error in on_llm callback: 'AsyncIteratorCallbackHandler' object has no attribute 'on_llm'
### System Info langchain version:0.0.168 python version 3.10 ### Who can help? @agola11 ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [X] Callbacks/Tracing - [ ] Async ### Reproduction When using the RetrievalQA chain, an error message "Error in on_llm callback: 'AsyncIteratorCallbackHandler' object has no attribute 'on_llm'" This code can run at version 0.0.164 ```python class Chain: def __init__(self): self.cb_mngr_stdout = AsyncCallbackManager([StreamingStdOutCallbackHandler()]) self.cb_mngr_aiter = AsyncCallbackManager([AsyncIteratorCallbackHandler()]) self.qa_stream = None self.qa = None self.make_chain() def make_chain(self): chain_type_kwargs = {"prompt": MyPrompt.get_retrieval_prompt()} qa = RetrievalQA.from_chain_type(llm=ChatOpenAI(model_name="gpt-3.5-turbo", max_tokens=1500, temperature=.1), chain_type="stuff", retriever=Retrieval.vectordb.as_retriever(search_kwargs={"k": Retrieval.context_num}), chain_type_kwargs=chain_type_kwargs, return_source_documents=True) qa_stream = RetrievalQA.from_chain_type(llm=ChatOpenAI(model_name="gpt-3.5-turbo", max_tokens=1500, temperature=.1, streaming=True, callback_manager=self.cb_mngr_aiter), chain_type="stuff", retriever=Retrieval.vectordb.as_retriever(search_kwargs={"k": Retrieval.context_num}), chain_type_kwargs=chain_type_kwargs, return_source_documents=True) self.qa = qa self.qa_stream = qa_stream ``` call function ```python resp = await chains.qa.acall({"query": "xxxxxxx"}) # no problem resp = await chains.qa_stream.acall({"query": "xxxxxxxx"}) # error ``` ### Expected behavior self.qa_stream return result like self.qa,or like langchain version 0.0.164
https://github.com/langchain-ai/langchain/issues/4714
https://github.com/langchain-ai/langchain/pull/4717
bf0904b676f458386096a008155ffeb805bc52c5
2e43954bc31dc5e23c7878149c0e061c444416a7
"2023-05-15T06:30:00Z"
python
"2023-05-16T01:36:21Z"
langchain/callbacks/manager.py
        """Run when chain ends running."""
        _handle_event(
            self.handlers,
            "on_chain_end",
            "ignore_chain",
            outputs,
            run_id=self.run_id,
            parent_run_id=self.parent_run_id,
            **kwargs,
        )

    def on_chain_error(
        self,
        error: Union[Exception, KeyboardInterrupt],
        **kwargs: Any,
    ) -> None:
        """Run when chain errors."""
        _handle_event(
            self.handlers,
            "on_chain_error",
            "ignore_chain",
            error,
            run_id=self.run_id,
            parent_run_id=self.parent_run_id,
            **kwargs,
        )

    def on_agent_action(self, action: AgentAction, **kwargs: Any) -> Any:
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,714
Error in on_llm callback: 'AsyncIteratorCallbackHandler' object has no attribute 'on_llm'
### System Info langchain version:0.0.168 python version 3.10 ### Who can help? @agola11 ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [X] Callbacks/Tracing - [ ] Async ### Reproduction When using the RetrievalQA chain, an error message "Error in on_llm callback: 'AsyncIteratorCallbackHandler' object has no attribute 'on_llm'" This code can run at version 0.0.164 ```python class Chain: def __init__(self): self.cb_mngr_stdout = AsyncCallbackManager([StreamingStdOutCallbackHandler()]) self.cb_mngr_aiter = AsyncCallbackManager([AsyncIteratorCallbackHandler()]) self.qa_stream = None self.qa = None self.make_chain() def make_chain(self): chain_type_kwargs = {"prompt": MyPrompt.get_retrieval_prompt()} qa = RetrievalQA.from_chain_type(llm=ChatOpenAI(model_name="gpt-3.5-turbo", max_tokens=1500, temperature=.1), chain_type="stuff", retriever=Retrieval.vectordb.as_retriever(search_kwargs={"k": Retrieval.context_num}), chain_type_kwargs=chain_type_kwargs, return_source_documents=True) qa_stream = RetrievalQA.from_chain_type(llm=ChatOpenAI(model_name="gpt-3.5-turbo", max_tokens=1500, temperature=.1, streaming=True, callback_manager=self.cb_mngr_aiter), chain_type="stuff", retriever=Retrieval.vectordb.as_retriever(search_kwargs={"k": Retrieval.context_num}), chain_type_kwargs=chain_type_kwargs, return_source_documents=True) self.qa = qa self.qa_stream = qa_stream ``` call function ```python resp = await chains.qa.acall({"query": "xxxxxxx"}) # no problem resp = await chains.qa_stream.acall({"query": "xxxxxxxx"}) # error ``` ### Expected behavior self.qa_stream return result like self.qa,or like langchain version 0.0.164
https://github.com/langchain-ai/langchain/issues/4714
https://github.com/langchain-ai/langchain/pull/4717
bf0904b676f458386096a008155ffeb805bc52c5
2e43954bc31dc5e23c7878149c0e061c444416a7
"2023-05-15T06:30:00Z"
python
"2023-05-16T01:36:21Z"
langchain/callbacks/manager.py
        """Run when agent action is received."""
        _handle_event(
            self.handlers,
            "on_agent_action",
            "ignore_agent",
            action,
            run_id=self.run_id,
            parent_run_id=self.parent_run_id,
            **kwargs,
        )

    def on_agent_finish(self, finish: AgentFinish, **kwargs: Any) -> Any:
        """Run when agent finish is received."""
        _handle_event(
            self.handlers,
            "on_agent_finish",
            "ignore_agent",
            finish,
            run_id=self.run_id,
            parent_run_id=self.parent_run_id,
            **kwargs,
        )


class AsyncCallbackManagerForChainRun(AsyncRunManager, ChainManagerMixin):
    """Async callback manager for chain run."""

    def get_child(self) -> AsyncCallbackManager:
        """Get a child callback manager."""
        manager = AsyncCallbackManager([], parent_run_id=self.run_id)
        manager.set_handlers(self.inheritable_handlers)
        return manager

    async def on_chain_end(self, outputs: Dict[str, Any], **kwargs: Any) -> None:
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,714
Error in on_llm callback: 'AsyncIteratorCallbackHandler' object has no attribute 'on_llm'
### System Info langchain version:0.0.168 python version 3.10 ### Who can help? @agola11 ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [X] Callbacks/Tracing - [ ] Async ### Reproduction When using the RetrievalQA chain, an error message "Error in on_llm callback: 'AsyncIteratorCallbackHandler' object has no attribute 'on_llm'" This code can run at version 0.0.164 ```python class Chain: def __init__(self): self.cb_mngr_stdout = AsyncCallbackManager([StreamingStdOutCallbackHandler()]) self.cb_mngr_aiter = AsyncCallbackManager([AsyncIteratorCallbackHandler()]) self.qa_stream = None self.qa = None self.make_chain() def make_chain(self): chain_type_kwargs = {"prompt": MyPrompt.get_retrieval_prompt()} qa = RetrievalQA.from_chain_type(llm=ChatOpenAI(model_name="gpt-3.5-turbo", max_tokens=1500, temperature=.1), chain_type="stuff", retriever=Retrieval.vectordb.as_retriever(search_kwargs={"k": Retrieval.context_num}), chain_type_kwargs=chain_type_kwargs, return_source_documents=True) qa_stream = RetrievalQA.from_chain_type(llm=ChatOpenAI(model_name="gpt-3.5-turbo", max_tokens=1500, temperature=.1, streaming=True, callback_manager=self.cb_mngr_aiter), chain_type="stuff", retriever=Retrieval.vectordb.as_retriever(search_kwargs={"k": Retrieval.context_num}), chain_type_kwargs=chain_type_kwargs, return_source_documents=True) self.qa = qa self.qa_stream = qa_stream ``` call function ```python resp = await chains.qa.acall({"query": "xxxxxxx"}) # no problem resp = await chains.qa_stream.acall({"query": "xxxxxxxx"}) # error ``` ### Expected behavior self.qa_stream return result like self.qa,or like langchain version 0.0.164
https://github.com/langchain-ai/langchain/issues/4714
https://github.com/langchain-ai/langchain/pull/4717
bf0904b676f458386096a008155ffeb805bc52c5
2e43954bc31dc5e23c7878149c0e061c444416a7
"2023-05-15T06:30:00Z"
python
"2023-05-16T01:36:21Z"
langchain/callbacks/manager.py
        """Run when chain ends running."""
        await _ahandle_event(
            self.handlers,
            "on_chain_end",
            "ignore_chain",
            outputs,
            run_id=self.run_id,
            parent_run_id=self.parent_run_id,
            **kwargs,
        )

    async def on_chain_error(
        self,
        error: Union[Exception, KeyboardInterrupt],
        **kwargs: Any,
    ) -> None:
        """Run when chain errors."""
        await _ahandle_event(
            self.handlers,
            "on_chain_error",
            "ignore_chain",
            error,
            run_id=self.run_id,
            parent_run_id=self.parent_run_id,
            **kwargs,
        )

    async def on_agent_action(self, action: AgentAction, **kwargs: Any) -> Any:
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,714
Error in on_llm callback: 'AsyncIteratorCallbackHandler' object has no attribute 'on_llm'
### System Info langchain version:0.0.168 python version 3.10 ### Who can help? @agola11 ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [X] Callbacks/Tracing - [ ] Async ### Reproduction When using the RetrievalQA chain, an error message "Error in on_llm callback: 'AsyncIteratorCallbackHandler' object has no attribute 'on_llm'" This code can run at version 0.0.164 ```python class Chain: def __init__(self): self.cb_mngr_stdout = AsyncCallbackManager([StreamingStdOutCallbackHandler()]) self.cb_mngr_aiter = AsyncCallbackManager([AsyncIteratorCallbackHandler()]) self.qa_stream = None self.qa = None self.make_chain() def make_chain(self): chain_type_kwargs = {"prompt": MyPrompt.get_retrieval_prompt()} qa = RetrievalQA.from_chain_type(llm=ChatOpenAI(model_name="gpt-3.5-turbo", max_tokens=1500, temperature=.1), chain_type="stuff", retriever=Retrieval.vectordb.as_retriever(search_kwargs={"k": Retrieval.context_num}), chain_type_kwargs=chain_type_kwargs, return_source_documents=True) qa_stream = RetrievalQA.from_chain_type(llm=ChatOpenAI(model_name="gpt-3.5-turbo", max_tokens=1500, temperature=.1, streaming=True, callback_manager=self.cb_mngr_aiter), chain_type="stuff", retriever=Retrieval.vectordb.as_retriever(search_kwargs={"k": Retrieval.context_num}), chain_type_kwargs=chain_type_kwargs, return_source_documents=True) self.qa = qa self.qa_stream = qa_stream ``` call function ```python resp = await chains.qa.acall({"query": "xxxxxxx"}) # no problem resp = await chains.qa_stream.acall({"query": "xxxxxxxx"}) # error ``` ### Expected behavior self.qa_stream return result like self.qa,or like langchain version 0.0.164
https://github.com/langchain-ai/langchain/issues/4714
https://github.com/langchain-ai/langchain/pull/4717
bf0904b676f458386096a008155ffeb805bc52c5
2e43954bc31dc5e23c7878149c0e061c444416a7
"2023-05-15T06:30:00Z"
python
"2023-05-16T01:36:21Z"
langchain/callbacks/manager.py
        """Run when agent action is received."""
        await _ahandle_event(
            self.handlers,
            "on_agent_action",
            "ignore_agent",
            action,
            run_id=self.run_id,
            parent_run_id=self.parent_run_id,
            **kwargs,
        )

    async def on_agent_finish(self, finish: AgentFinish, **kwargs: Any) -> Any:
        """Run when agent finish is received."""
        await _ahandle_event(
            self.handlers,
            "on_agent_finish",
            "ignore_agent",
            finish,
            run_id=self.run_id,
            parent_run_id=self.parent_run_id,
            **kwargs,
        )


class CallbackManagerForToolRun(RunManager, ToolManagerMixin):
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,714
Error in on_llm callback: 'AsyncIteratorCallbackHandler' object has no attribute 'on_llm'
### System Info langchain version:0.0.168 python version 3.10 ### Who can help? @agola11 ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [X] Callbacks/Tracing - [ ] Async ### Reproduction When using the RetrievalQA chain, an error message "Error in on_llm callback: 'AsyncIteratorCallbackHandler' object has no attribute 'on_llm'" This code can run at version 0.0.164 ```python class Chain: def __init__(self): self.cb_mngr_stdout = AsyncCallbackManager([StreamingStdOutCallbackHandler()]) self.cb_mngr_aiter = AsyncCallbackManager([AsyncIteratorCallbackHandler()]) self.qa_stream = None self.qa = None self.make_chain() def make_chain(self): chain_type_kwargs = {"prompt": MyPrompt.get_retrieval_prompt()} qa = RetrievalQA.from_chain_type(llm=ChatOpenAI(model_name="gpt-3.5-turbo", max_tokens=1500, temperature=.1), chain_type="stuff", retriever=Retrieval.vectordb.as_retriever(search_kwargs={"k": Retrieval.context_num}), chain_type_kwargs=chain_type_kwargs, return_source_documents=True) qa_stream = RetrievalQA.from_chain_type(llm=ChatOpenAI(model_name="gpt-3.5-turbo", max_tokens=1500, temperature=.1, streaming=True, callback_manager=self.cb_mngr_aiter), chain_type="stuff", retriever=Retrieval.vectordb.as_retriever(search_kwargs={"k": Retrieval.context_num}), chain_type_kwargs=chain_type_kwargs, return_source_documents=True) self.qa = qa self.qa_stream = qa_stream ``` call function ```python resp = await chains.qa.acall({"query": "xxxxxxx"}) # no problem resp = await chains.qa_stream.acall({"query": "xxxxxxxx"}) # error ``` ### Expected behavior self.qa_stream return result like self.qa,or like langchain version 0.0.164
https://github.com/langchain-ai/langchain/issues/4714
https://github.com/langchain-ai/langchain/pull/4717
bf0904b676f458386096a008155ffeb805bc52c5
2e43954bc31dc5e23c7878149c0e061c444416a7
"2023-05-15T06:30:00Z"
python
"2023-05-16T01:36:21Z"
langchain/callbacks/manager.py
    """Callback manager for tool run."""

    def get_child(self) -> CallbackManager:
        """Get a child callback manager."""
        manager = CallbackManager([], parent_run_id=self.run_id)
        manager.set_handlers(self.inheritable_handlers)
        return manager

    def on_tool_end(
        self,
        output: str,
        **kwargs: Any,
    ) -> None:
        """Run when tool ends running."""
        _handle_event(
            self.handlers,
            "on_tool_end",
            "ignore_agent",
            output,
            run_id=self.run_id,
            parent_run_id=self.parent_run_id,
            **kwargs,
        )

    def on_tool_error(
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,714
Error in on_llm callback: 'AsyncIteratorCallbackHandler' object has no attribute 'on_llm'
### System Info langchain version:0.0.168 python version 3.10 ### Who can help? @agola11 ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [X] Callbacks/Tracing - [ ] Async ### Reproduction When using the RetrievalQA chain, an error message "Error in on_llm callback: 'AsyncIteratorCallbackHandler' object has no attribute 'on_llm'" This code can run at version 0.0.164 ```python class Chain: def __init__(self): self.cb_mngr_stdout = AsyncCallbackManager([StreamingStdOutCallbackHandler()]) self.cb_mngr_aiter = AsyncCallbackManager([AsyncIteratorCallbackHandler()]) self.qa_stream = None self.qa = None self.make_chain() def make_chain(self): chain_type_kwargs = {"prompt": MyPrompt.get_retrieval_prompt()} qa = RetrievalQA.from_chain_type(llm=ChatOpenAI(model_name="gpt-3.5-turbo", max_tokens=1500, temperature=.1), chain_type="stuff", retriever=Retrieval.vectordb.as_retriever(search_kwargs={"k": Retrieval.context_num}), chain_type_kwargs=chain_type_kwargs, return_source_documents=True) qa_stream = RetrievalQA.from_chain_type(llm=ChatOpenAI(model_name="gpt-3.5-turbo", max_tokens=1500, temperature=.1, streaming=True, callback_manager=self.cb_mngr_aiter), chain_type="stuff", retriever=Retrieval.vectordb.as_retriever(search_kwargs={"k": Retrieval.context_num}), chain_type_kwargs=chain_type_kwargs, return_source_documents=True) self.qa = qa self.qa_stream = qa_stream ``` call function ```python resp = await chains.qa.acall({"query": "xxxxxxx"}) # no problem resp = await chains.qa_stream.acall({"query": "xxxxxxxx"}) # error ``` ### Expected behavior self.qa_stream return result like self.qa,or like langchain version 0.0.164
https://github.com/langchain-ai/langchain/issues/4714
https://github.com/langchain-ai/langchain/pull/4717
bf0904b676f458386096a008155ffeb805bc52c5
2e43954bc31dc5e23c7878149c0e061c444416a7
"2023-05-15T06:30:00Z"
python
"2023-05-16T01:36:21Z"
langchain/callbacks/manager.py
        self,
        error: Union[Exception, KeyboardInterrupt],
        **kwargs: Any,
    ) -> None:
        """Run when tool errors."""
        _handle_event(
            self.handlers,
            "on_tool_error",
            "ignore_agent",
            error,
            run_id=self.run_id,
            parent_run_id=self.parent_run_id,
            **kwargs,
        )


class AsyncCallbackManagerForToolRun(AsyncRunManager, ToolManagerMixin):
    """Async callback manager for tool run."""

    def get_child(self) -> AsyncCallbackManager:
        """Get a child callback manager."""
        manager = AsyncCallbackManager([], parent_run_id=self.run_id)
        manager.set_handlers(self.inheritable_handlers)
        return manager

    async def on_tool_end(self, output: str, **kwargs: Any) -> None:
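Editorial note: a sketch of the tool-run lifecycle the two classes above manage; the tool name and output are placeholders.

```python
# Sketch: a CallbackManager opens the tool run; the returned run manager
# closes it with on_tool_end, or reports the exception with on_tool_error.
from langchain.callbacks.manager import CallbackManager

manager = CallbackManager(handlers=[])
tool_run = manager.on_tool_start({"name": "illustrative_tool"}, "raw input")
try:
    output = "tool output"  # stand-in for the real tool call
    tool_run.on_tool_end(output)
except Exception as err:
    tool_run.on_tool_error(err)
    raise
```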
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,714
Error in on_llm callback: 'AsyncIteratorCallbackHandler' object has no attribute 'on_llm'
### System Info langchain version:0.0.168 python version 3.10 ### Who can help? @agola11 ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [X] Callbacks/Tracing - [ ] Async ### Reproduction When using the RetrievalQA chain, an error message "Error in on_llm callback: 'AsyncIteratorCallbackHandler' object has no attribute 'on_llm'" This code can run at version 0.0.164 ```python class Chain: def __init__(self): self.cb_mngr_stdout = AsyncCallbackManager([StreamingStdOutCallbackHandler()]) self.cb_mngr_aiter = AsyncCallbackManager([AsyncIteratorCallbackHandler()]) self.qa_stream = None self.qa = None self.make_chain() def make_chain(self): chain_type_kwargs = {"prompt": MyPrompt.get_retrieval_prompt()} qa = RetrievalQA.from_chain_type(llm=ChatOpenAI(model_name="gpt-3.5-turbo", max_tokens=1500, temperature=.1), chain_type="stuff", retriever=Retrieval.vectordb.as_retriever(search_kwargs={"k": Retrieval.context_num}), chain_type_kwargs=chain_type_kwargs, return_source_documents=True) qa_stream = RetrievalQA.from_chain_type(llm=ChatOpenAI(model_name="gpt-3.5-turbo", max_tokens=1500, temperature=.1, streaming=True, callback_manager=self.cb_mngr_aiter), chain_type="stuff", retriever=Retrieval.vectordb.as_retriever(search_kwargs={"k": Retrieval.context_num}), chain_type_kwargs=chain_type_kwargs, return_source_documents=True) self.qa = qa self.qa_stream = qa_stream ``` call function ```python resp = await chains.qa.acall({"query": "xxxxxxx"}) # no problem resp = await chains.qa_stream.acall({"query": "xxxxxxxx"}) # error ``` ### Expected behavior self.qa_stream return result like self.qa,or like langchain version 0.0.164
https://github.com/langchain-ai/langchain/issues/4714
https://github.com/langchain-ai/langchain/pull/4717
bf0904b676f458386096a008155ffeb805bc52c5
2e43954bc31dc5e23c7878149c0e061c444416a7
"2023-05-15T06:30:00Z"
python
"2023-05-16T01:36:21Z"
langchain/callbacks/manager.py
        """Run when tool ends running."""
        await _ahandle_event(
            self.handlers,
            "on_tool_end",
            "ignore_agent",
            output,
            run_id=self.run_id,
            parent_run_id=self.parent_run_id,
            **kwargs,
        )

    async def on_tool_error(
        self,
        error: Union[Exception, KeyboardInterrupt],
        **kwargs: Any,
    ) -> None:
        """Run when tool errors."""
        await _ahandle_event(
            self.handlers,
            "on_tool_error",
            "ignore_agent",
            error,
            run_id=self.run_id,
            parent_run_id=self.parent_run_id,
            **kwargs,
        )


class CallbackManager(BaseCallbackManager):
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,714
Error in on_llm callback: 'AsyncIteratorCallbackHandler' object has no attribute 'on_llm'
### System Info langchain version:0.0.168 python version 3.10 ### Who can help? @agola11 ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [X] Callbacks/Tracing - [ ] Async ### Reproduction When using the RetrievalQA chain, an error message "Error in on_llm callback: 'AsyncIteratorCallbackHandler' object has no attribute 'on_llm'" This code can run at version 0.0.164 ```python class Chain: def __init__(self): self.cb_mngr_stdout = AsyncCallbackManager([StreamingStdOutCallbackHandler()]) self.cb_mngr_aiter = AsyncCallbackManager([AsyncIteratorCallbackHandler()]) self.qa_stream = None self.qa = None self.make_chain() def make_chain(self): chain_type_kwargs = {"prompt": MyPrompt.get_retrieval_prompt()} qa = RetrievalQA.from_chain_type(llm=ChatOpenAI(model_name="gpt-3.5-turbo", max_tokens=1500, temperature=.1), chain_type="stuff", retriever=Retrieval.vectordb.as_retriever(search_kwargs={"k": Retrieval.context_num}), chain_type_kwargs=chain_type_kwargs, return_source_documents=True) qa_stream = RetrievalQA.from_chain_type(llm=ChatOpenAI(model_name="gpt-3.5-turbo", max_tokens=1500, temperature=.1, streaming=True, callback_manager=self.cb_mngr_aiter), chain_type="stuff", retriever=Retrieval.vectordb.as_retriever(search_kwargs={"k": Retrieval.context_num}), chain_type_kwargs=chain_type_kwargs, return_source_documents=True) self.qa = qa self.qa_stream = qa_stream ``` call function ```python resp = await chains.qa.acall({"query": "xxxxxxx"}) # no problem resp = await chains.qa_stream.acall({"query": "xxxxxxxx"}) # error ``` ### Expected behavior self.qa_stream return result like self.qa,or like langchain version 0.0.164
https://github.com/langchain-ai/langchain/issues/4714
https://github.com/langchain-ai/langchain/pull/4717
bf0904b676f458386096a008155ffeb805bc52c5
2e43954bc31dc5e23c7878149c0e061c444416a7
"2023-05-15T06:30:00Z"
python
"2023-05-16T01:36:21Z"
langchain/callbacks/manager.py
    """Callback manager that can be used to handle callbacks from langchain."""

    def on_llm_start(
        self,
        serialized: Dict[str, Any],
        prompts: List[str],
        run_id: Optional[UUID] = None,
        **kwargs: Any,
    ) -> CallbackManagerForLLMRun:
        """Run when LLM starts running."""
        if run_id is None:
            run_id = uuid4()
        _handle_event(
            self.handlers,
            "on_llm_start",
            "ignore_llm",
            serialized,
            prompts,
            run_id=run_id,
            parent_run_id=self.parent_run_id,
            **kwargs,
        )
        return CallbackManagerForLLMRun(
            run_id, self.handlers, self.inheritable_handlers, self.parent_run_id
        )

    def on_chat_model_start(
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,714
Error in on_llm callback: 'AsyncIteratorCallbackHandler' object has no attribute 'on_llm'
### System Info langchain version:0.0.168 python version 3.10 ### Who can help? @agola11 ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [X] Callbacks/Tracing - [ ] Async ### Reproduction When using the RetrievalQA chain, an error message "Error in on_llm callback: 'AsyncIteratorCallbackHandler' object has no attribute 'on_llm'" This code can run at version 0.0.164 ```python class Chain: def __init__(self): self.cb_mngr_stdout = AsyncCallbackManager([StreamingStdOutCallbackHandler()]) self.cb_mngr_aiter = AsyncCallbackManager([AsyncIteratorCallbackHandler()]) self.qa_stream = None self.qa = None self.make_chain() def make_chain(self): chain_type_kwargs = {"prompt": MyPrompt.get_retrieval_prompt()} qa = RetrievalQA.from_chain_type(llm=ChatOpenAI(model_name="gpt-3.5-turbo", max_tokens=1500, temperature=.1), chain_type="stuff", retriever=Retrieval.vectordb.as_retriever(search_kwargs={"k": Retrieval.context_num}), chain_type_kwargs=chain_type_kwargs, return_source_documents=True) qa_stream = RetrievalQA.from_chain_type(llm=ChatOpenAI(model_name="gpt-3.5-turbo", max_tokens=1500, temperature=.1, streaming=True, callback_manager=self.cb_mngr_aiter), chain_type="stuff", retriever=Retrieval.vectordb.as_retriever(search_kwargs={"k": Retrieval.context_num}), chain_type_kwargs=chain_type_kwargs, return_source_documents=True) self.qa = qa self.qa_stream = qa_stream ``` call function ```python resp = await chains.qa.acall({"query": "xxxxxxx"}) # no problem resp = await chains.qa_stream.acall({"query": "xxxxxxxx"}) # error ``` ### Expected behavior self.qa_stream return result like self.qa,or like langchain version 0.0.164
https://github.com/langchain-ai/langchain/issues/4714
https://github.com/langchain-ai/langchain/pull/4717
bf0904b676f458386096a008155ffeb805bc52c5
2e43954bc31dc5e23c7878149c0e061c444416a7
"2023-05-15T06:30:00Z"
python
"2023-05-16T01:36:21Z"
langchain/callbacks/manager.py
        self,
        serialized: Dict[str, Any],
        messages: List[List[BaseMessage]],
        run_id: Optional[UUID] = None,
        **kwargs: Any,
    ) -> CallbackManagerForLLMRun:
        """Run when LLM starts running."""
        if run_id is None:
            run_id = uuid4()
        _handle_event(
            self.handlers,
            "on_chat_model_start",
            "ignore_chat_model",
            serialized,
            messages,
            run_id=run_id,
            parent_run_id=self.parent_run_id,
            **kwargs,
        )
        return CallbackManagerForLLMRun(
            run_id, self.handlers, self.inheritable_handlers, self.parent_run_id
        )

    def on_chain_start(
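Editorial note: a sketch of the calling convention the two start methods above imply for model implementations; the model name and token stream are invented.

```python
# Sketch: an LLM implementation opens a run, streams tokens through the
# returned CallbackManagerForLLMRun, then reports the final LLMResult.
from langchain.callbacks.manager import CallbackManager
from langchain.schema import Generation, LLMResult

manager = CallbackManager(handlers=[])
run = manager.on_llm_start({"name": "illustrative_llm"}, ["Hello?"])
for token in ["Hi", " there", "!"]:
    run.on_llm_new_token(token)  # fans out to every handler's on_llm_new_token
run.on_llm_end(LLMResult(generations=[[Generation(text="Hi there!")]]))
```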
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,714
Error in on_llm callback: 'AsyncIteratorCallbackHandler' object has no attribute 'on_llm'
### System Info langchain version:0.0.168 python version 3.10 ### Who can help? @agola11 ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [X] Callbacks/Tracing - [ ] Async ### Reproduction When using the RetrievalQA chain, an error message "Error in on_llm callback: 'AsyncIteratorCallbackHandler' object has no attribute 'on_llm'" This code can run at version 0.0.164 ```python class Chain: def __init__(self): self.cb_mngr_stdout = AsyncCallbackManager([StreamingStdOutCallbackHandler()]) self.cb_mngr_aiter = AsyncCallbackManager([AsyncIteratorCallbackHandler()]) self.qa_stream = None self.qa = None self.make_chain() def make_chain(self): chain_type_kwargs = {"prompt": MyPrompt.get_retrieval_prompt()} qa = RetrievalQA.from_chain_type(llm=ChatOpenAI(model_name="gpt-3.5-turbo", max_tokens=1500, temperature=.1), chain_type="stuff", retriever=Retrieval.vectordb.as_retriever(search_kwargs={"k": Retrieval.context_num}), chain_type_kwargs=chain_type_kwargs, return_source_documents=True) qa_stream = RetrievalQA.from_chain_type(llm=ChatOpenAI(model_name="gpt-3.5-turbo", max_tokens=1500, temperature=.1, streaming=True, callback_manager=self.cb_mngr_aiter), chain_type="stuff", retriever=Retrieval.vectordb.as_retriever(search_kwargs={"k": Retrieval.context_num}), chain_type_kwargs=chain_type_kwargs, return_source_documents=True) self.qa = qa self.qa_stream = qa_stream ``` call function ```python resp = await chains.qa.acall({"query": "xxxxxxx"}) # no problem resp = await chains.qa_stream.acall({"query": "xxxxxxxx"}) # error ``` ### Expected behavior self.qa_stream return result like self.qa,or like langchain version 0.0.164
https://github.com/langchain-ai/langchain/issues/4714
https://github.com/langchain-ai/langchain/pull/4717
bf0904b676f458386096a008155ffeb805bc52c5
2e43954bc31dc5e23c7878149c0e061c444416a7
"2023-05-15T06:30:00Z"
python
"2023-05-16T01:36:21Z"
langchain/callbacks/manager.py
        self,
        serialized: Dict[str, Any],
        inputs: Dict[str, Any],
        run_id: Optional[UUID] = None,
        **kwargs: Any,
    ) -> CallbackManagerForChainRun:
        """Run when chain starts running."""
        if run_id is None:
            run_id = uuid4()
        _handle_event(
            self.handlers,
            "on_chain_start",
            "ignore_chain",
            serialized,
            inputs,
            run_id=run_id,
            parent_run_id=self.parent_run_id,
            **kwargs,
        )
        return CallbackManagerForChainRun(
            run_id, self.handlers, self.inheritable_handlers, self.parent_run_id
        )

    def on_tool_start(
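Editorial note: as the chunk above shows, every start method accepts an optional `run_id` and falls back to `uuid4()`. A sketch of supplying one explicitly, which can help correlate callback events with external tracing; names are illustrative.

```python
# Sketch: a caller-supplied run_id is used verbatim for the chain run.
from uuid import uuid4
from langchain.callbacks.manager import CallbackManager

manager = CallbackManager(handlers=[])
my_run_id = uuid4()
chain_run = manager.on_chain_start(
    {"name": "illustrative_chain"}, {"query": "hi"}, run_id=my_run_id
)
assert chain_run.run_id == my_run_id
chain_run.on_chain_end({"result": "done"})
```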
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,714
Error in on_llm callback: 'AsyncIteratorCallbackHandler' object has no attribute 'on_llm'
### System Info langchain version:0.0.168 python version 3.10 ### Who can help? @agola11 ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [X] Callbacks/Tracing - [ ] Async ### Reproduction When using the RetrievalQA chain, an error message "Error in on_llm callback: 'AsyncIteratorCallbackHandler' object has no attribute 'on_llm'" This code can run at version 0.0.164 ```python class Chain: def __init__(self): self.cb_mngr_stdout = AsyncCallbackManager([StreamingStdOutCallbackHandler()]) self.cb_mngr_aiter = AsyncCallbackManager([AsyncIteratorCallbackHandler()]) self.qa_stream = None self.qa = None self.make_chain() def make_chain(self): chain_type_kwargs = {"prompt": MyPrompt.get_retrieval_prompt()} qa = RetrievalQA.from_chain_type(llm=ChatOpenAI(model_name="gpt-3.5-turbo", max_tokens=1500, temperature=.1), chain_type="stuff", retriever=Retrieval.vectordb.as_retriever(search_kwargs={"k": Retrieval.context_num}), chain_type_kwargs=chain_type_kwargs, return_source_documents=True) qa_stream = RetrievalQA.from_chain_type(llm=ChatOpenAI(model_name="gpt-3.5-turbo", max_tokens=1500, temperature=.1, streaming=True, callback_manager=self.cb_mngr_aiter), chain_type="stuff", retriever=Retrieval.vectordb.as_retriever(search_kwargs={"k": Retrieval.context_num}), chain_type_kwargs=chain_type_kwargs, return_source_documents=True) self.qa = qa self.qa_stream = qa_stream ``` call function ```python resp = await chains.qa.acall({"query": "xxxxxxx"}) # no problem resp = await chains.qa_stream.acall({"query": "xxxxxxxx"}) # error ``` ### Expected behavior self.qa_stream return result like self.qa,or like langchain version 0.0.164
https://github.com/langchain-ai/langchain/issues/4714
https://github.com/langchain-ai/langchain/pull/4717
bf0904b676f458386096a008155ffeb805bc52c5
2e43954bc31dc5e23c7878149c0e061c444416a7
"2023-05-15T06:30:00Z"
python
"2023-05-16T01:36:21Z"
langchain/callbacks/manager.py
        self,
        serialized: Dict[str, Any],
        input_str: str,
        run_id: Optional[UUID] = None,
        parent_run_id: Optional[UUID] = None,
        **kwargs: Any,
    ) -> CallbackManagerForToolRun:
        """Run when tool starts running."""
        if run_id is None:
            run_id = uuid4()
        _handle_event(
            self.handlers,
            "on_tool_start",
            "ignore_agent",
            serialized,
            input_str,
            run_id=run_id,
            parent_run_id=self.parent_run_id,
            **kwargs,
        )
        return CallbackManagerForToolRun(
            run_id, self.handlers, self.inheritable_handlers, self.parent_run_id
        )

    @classmethod
    def configure(
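Editorial note: one quirk worth flagging in the chunk above — `on_tool_start` accepts a `parent_run_id` argument but then forwards `self.parent_run_id`, so the caller-supplied value is silently ignored in this version. A sketch:

```python
# Sketch: the parent_run_id argument to on_tool_start has no effect here;
# the manager's own parent_run_id (None for a root manager) is used instead.
from uuid import uuid4
from langchain.callbacks.manager import CallbackManager

manager = CallbackManager(handlers=[])
tool_run = manager.on_tool_start(
    {"name": "illustrative_tool"}, "input", parent_run_id=uuid4()  # ignored
)
assert tool_run.parent_run_id is None  # took self.parent_run_id, not ours
tool_run.on_tool_end("output")
```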
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,714
Error in on_llm callback: 'AsyncIteratorCallbackHandler' object has no attribute 'on_llm'
### System Info langchain version:0.0.168 python version 3.10 ### Who can help? @agola11 ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [X] Callbacks/Tracing - [ ] Async ### Reproduction When using the RetrievalQA chain, an error message "Error in on_llm callback: 'AsyncIteratorCallbackHandler' object has no attribute 'on_llm'" This code can run at version 0.0.164 ```python class Chain: def __init__(self): self.cb_mngr_stdout = AsyncCallbackManager([StreamingStdOutCallbackHandler()]) self.cb_mngr_aiter = AsyncCallbackManager([AsyncIteratorCallbackHandler()]) self.qa_stream = None self.qa = None self.make_chain() def make_chain(self): chain_type_kwargs = {"prompt": MyPrompt.get_retrieval_prompt()} qa = RetrievalQA.from_chain_type(llm=ChatOpenAI(model_name="gpt-3.5-turbo", max_tokens=1500, temperature=.1), chain_type="stuff", retriever=Retrieval.vectordb.as_retriever(search_kwargs={"k": Retrieval.context_num}), chain_type_kwargs=chain_type_kwargs, return_source_documents=True) qa_stream = RetrievalQA.from_chain_type(llm=ChatOpenAI(model_name="gpt-3.5-turbo", max_tokens=1500, temperature=.1, streaming=True, callback_manager=self.cb_mngr_aiter), chain_type="stuff", retriever=Retrieval.vectordb.as_retriever(search_kwargs={"k": Retrieval.context_num}), chain_type_kwargs=chain_type_kwargs, return_source_documents=True) self.qa = qa self.qa_stream = qa_stream ``` call function ```python resp = await chains.qa.acall({"query": "xxxxxxx"}) # no problem resp = await chains.qa_stream.acall({"query": "xxxxxxxx"}) # error ``` ### Expected behavior self.qa_stream return result like self.qa,or like langchain version 0.0.164
https://github.com/langchain-ai/langchain/issues/4714
https://github.com/langchain-ai/langchain/pull/4717
bf0904b676f458386096a008155ffeb805bc52c5
2e43954bc31dc5e23c7878149c0e061c444416a7
"2023-05-15T06:30:00Z"
python
"2023-05-16T01:36:21Z"
langchain/callbacks/manager.py
        cls,
        inheritable_callbacks: Callbacks = None,
        local_callbacks: Callbacks = None,
        verbose: bool = False,
    ) -> CallbackManager:
        """Configure the callback manager."""
        return _configure(cls, inheritable_callbacks, local_callbacks, verbose)


class AsyncCallbackManager(BaseCallbackManager):
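Editorial note: a sketch of the two callback channels `configure` accepts. The `_configure` helper it delegates to is not shown in these chunks, so the exact merge behavior is an assumption; the handler choice is illustrative.

```python
# Sketch: inheritable callbacks are expected to propagate to child runs via
# get_child, while local callbacks attach only to this manager; both end up
# registered on the returned manager.
from langchain.callbacks.manager import CallbackManager
from langchain.callbacks.stdout import StdOutCallbackHandler

manager = CallbackManager.configure(
    inheritable_callbacks=[StdOutCallbackHandler()],  # flows to child runs
    local_callbacks=None,
    verbose=False,
)
print(len(manager.handlers))  # expected: 1
```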
"""Async callback manager that can be used to handle callbacks from LangChain.""" @property def is_async(self) -> bool: """Return whether the handler is async.""" return True async def on_llm_start( self, serialized: Dict[str, Any], prompts: List[str], run_id: Optional[UUID] = None, **kwargs: Any, ) -> AsyncCallbackManagerForLLMRun: """Run when LLM starts running.""" if run_id is None: run_id = uuid4() await _ahandle_event( self.handlers, "on_llm_start", "ignore_llm", serialized, prompts, run_id=run_id, parent_run_id=self.parent_run_id, **kwargs, ) return AsyncCallbackManagerForLLMRun( run_id, self.handlers, self.inheritable_handlers, self.parent_run_id ) async def on_chat_model_start(
        self,
        serialized: Dict[str, Any],
        messages: List[List[BaseMessage]],
        run_id: Optional[UUID] = None,
        **kwargs: Any,
    ) -> Any:
        if run_id is None:
            run_id = uuid4()

        await _ahandle_event(
            self.handlers,
            "on_chat_model_start",
            "ignore_chat_model",
            serialized,
            messages,
            run_id=run_id,
            parent_run_id=self.parent_run_id,
            **kwargs,
        )

        return AsyncCallbackManagerForLLMRun(
            run_id, self.handlers, self.inheritable_handlers, self.parent_run_id
        )

    async def on_chain_start(
        self,
        serialized: Dict[str, Any],
        inputs: Dict[str, Any],
        run_id: Optional[UUID] = None,
        **kwargs: Any,
    ) -> AsyncCallbackManagerForChainRun:
        """Run when chain starts running."""
        if run_id is None:
            run_id = uuid4()

        await _ahandle_event(
            self.handlers,
            "on_chain_start",
            "ignore_chain",
            serialized,
            inputs,
            run_id=run_id,
            parent_run_id=self.parent_run_id,
            **kwargs,
        )

        return AsyncCallbackManagerForChainRun(
            run_id, self.handlers, self.inheritable_handlers, self.parent_run_id
        )

    async def on_tool_start(
        self,
        serialized: Dict[str, Any],
        input_str: str,
        run_id: Optional[UUID] = None,
        parent_run_id: Optional[UUID] = None,
        **kwargs: Any,
    ) -> AsyncCallbackManagerForToolRun:
        """Run when tool starts running."""
        if run_id is None:
            run_id = uuid4()

        await _ahandle_event(
            self.handlers,
            "on_tool_start",
            "ignore_agent",
            serialized,
            input_str,
            run_id=run_id,
            parent_run_id=self.parent_run_id,
            **kwargs,
        )

        return AsyncCallbackManagerForToolRun(
            run_id, self.handlers, self.inheritable_handlers, self.parent_run_id
        )

    @classmethod
    def configure(
        cls,
        inheritable_callbacks: Callbacks = None,
        local_callbacks: Callbacks = None,
        verbose: bool = False,
    ) -> AsyncCallbackManager:
        """Configure the callback manager."""
        return _configure(cls, inheritable_callbacks, local_callbacks, verbose)


T = TypeVar("T", CallbackManager, AsyncCallbackManager)


def _configure(
    callback_manager_cls: Type[T],
    inheritable_callbacks: Callbacks = None,
    local_callbacks: Callbacks = None,
    verbose: bool = False,
) -> T:
    """Configure the callback manager."""
    callback_manager = callback_manager_cls([])
    if inheritable_callbacks or local_callbacks:
        if isinstance(inheritable_callbacks, list) or inheritable_callbacks is None:
            inheritable_callbacks_ = inheritable_callbacks or []
            callback_manager = callback_manager_cls(
                handlers=inheritable_callbacks_.copy(),
                inheritable_handlers=inheritable_callbacks_.copy(),
            )
        else:
            callback_manager = callback_manager_cls(
                handlers=inheritable_callbacks.handlers,
                inheritable_handlers=inheritable_callbacks.inheritable_handlers,
                parent_run_id=inheritable_callbacks.parent_run_id,
            )
    local_handlers_ = (
        local_callbacks
        if isinstance(local_callbacks, list)
        else (local_callbacks.handlers if local_callbacks else [])
    )
    for handler in local_handlers_:
        callback_manager.add_handler(handler, False)
    tracer = tracing_callback_var.get()
    open_ai = openai_callback_var.get()
    tracing_enabled_ = (
        os.environ.get("LANGCHAIN_TRACING") is not None
        or tracer is not None
        or os.environ.get("LANGCHAIN_HANDLER") is not None
    )
    tracer_v2 = tracing_v2_callback_var.get()
    tracing_v2_enabled_ = (
        os.environ.get("LANGCHAIN_TRACING_V2") is not None or tracer_v2 is not None
    )
    tracer_session = os.environ.get("LANGCHAIN_SESSION")
    if tracer_session is None:
        tracer_session = "default"
    if verbose or tracing_enabled_ or tracing_v2_enabled_ or open_ai is not None:
        if verbose and not any(
            isinstance(handler, StdOutCallbackHandler)
            for handler in callback_manager.handlers
        ):
            callback_manager.add_handler(StdOutCallbackHandler(), False)
        if tracing_enabled_ and not any(
            isinstance(handler, LangChainTracerV1)
            for handler in callback_manager.handlers
        ):
            if tracer:
                callback_manager.add_handler(tracer, True)
            else:
                handler = LangChainTracerV1()
                handler.load_session(tracer_session)
                callback_manager.add_handler(handler, True)
        if tracing_v2_enabled_ and not any(
            isinstance(handler, LangChainTracer)
            for handler in callback_manager.handlers
        ):
            if tracer_v2:
                callback_manager.add_handler(tracer_v2, True)
            else:
                try:
                    handler = LangChainTracer(session_name=tracer_session)
                    handler.ensure_session()
                    callback_manager.add_handler(handler, True)
                except Exception as e:
                    logger.debug("Unable to load requested LangChainTracer", e)
        if open_ai is not None and not any(
            isinstance(handler, OpenAICallbackHandler)
            for handler in callback_manager.handlers
        ):
            callback_manager.add_handler(open_ai, True)
    return callback_manager
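To make the handler-merging behavior of `_configure` above concrete, here is a minimal usage sketch (assuming the imports below resolve in this version of langchain): inheritable handlers are copied into both lists and therefore propagate to child run managers, while local handlers stay at the current level.

```python
from langchain.callbacks.manager import AsyncCallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler

# Inheritable handlers are copied into handlers AND inheritable_handlers,
# so run managers created by on_llm_start()/on_chain_start() see them too.
manager = AsyncCallbackManager.configure(
    inheritable_callbacks=[StreamingStdOutCallbackHandler()],
    local_callbacks=None,
    verbose=False,
)
print(len(manager.handlers), len(manager.inheritable_handlers))  # 1 1
```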
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,791
Accept UUID list as an argument to add texts and documents into Weaviate vectorstore
### Feature request

When you call the `add_texts` and `add_documents` methods on a Weaviate instance, they always generate UUIDs for you, which is a neat feature:

https://github.com/hwchase17/langchain/blob/bee136efa4393219302208a1a458d32129f5d539/langchain/vectorstores/weaviate.py#L137

However, there are specific use cases where you want to generate UUIDs yourself and pass them via `add_texts` and `add_documents`. Therefore, I'd like to support a `uuids` field in the `kwargs` argument to these methods, and use those values instead of generating new ones inside the methods.

### Motivation

Both `add_texts` and `add_documents` internally call the [batch.add_data_object](https://weaviate-python-client.readthedocs.io/en/stable/weaviate.batch.html#weaviate.batch.Batch.add_data_object) method of a Weaviate client. The documentation states:

> Add one object to this batch. NOTE: If the UUID of one of the objects already exists then the existing object will be replaced by the new object.

This behavior is extremely useful when you need to update or delete a document based on a known field of that document.

First of all, Weaviate expects UUIDv3 and UUIDv5 as UUID formats; see:

https://weaviate.io/developers/weaviate/more-resources/faq#q-are-there-restrictions-on-uuid-formatting-do-i-have-to-adhere-to-any-standards

And UUIDv5 always generates the same value for a given input string, much like a hash function:

https://docs.python.org/2/library/uuid.html

Say you have a unique identifier for a document and use it to generate your own UUID. This way you can directly update, delete, or replace documents without first searching for them by metadata. This saves time, code, network bandwidth, and compute resources.

### Your contribution

I'm attempting to make a PR.
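A quick standard-library sketch of the deterministic-UUID idea; the `uuids` keyword in the final comment is the *proposed* addition, not an existing parameter:

```python
import uuid

doc_key = "articles/2023/my-unique-doc-id"  # hypothetical stable identifier
stable_id = uuid.uuid5(uuid.NAMESPACE_URL, doc_key)

# UUIDv5 is deterministic: the same namespace + name always hashes to the
# same UUID, so re-ingesting a document reuses its Weaviate object ID.
assert stable_id == uuid.uuid5(uuid.NAMESPACE_URL, doc_key)

# With the proposed kwarg, an upsert would look roughly like:
# vectorstore.add_texts([text], metadatas=[meta], uuids=[str(stable_id)])
```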
https://github.com/langchain-ai/langchain/issues/4791
https://github.com/langchain-ai/langchain/pull/4800
e78c9be312e5c59ec96f22d6e531c28329ca6312
6561efebb7c1cbd3716f5e7f03f18ad9b3b1afa5
"2023-05-16T15:31:48Z"
python
"2023-05-16T22:26:46Z"
langchain/retrievers/weaviate_hybrid_search.py
"""Wrapper around weaviate vector database.""" from __future__ import annotations from typing import Any, Dict, List, Optional from uuid import uuid4 from pydantic import Extra from langchain.docstore.document import Document from langchain.schema import BaseRetriever class WeaviateHybridSearchRetriever(BaseRetriever):
    def __init__(
        self,
        client: Any,
        index_name: str,
        text_key: str,
        alpha: float = 0.5,
        k: int = 4,
        attributes: Optional[List[str]] = None,
    ):
        try:
            import weaviate
        except ImportError:
            raise ValueError(
                "Could not import weaviate python package. "
                "Please install it with `pip install weaviate-client`."
            )
        if not isinstance(client, weaviate.Client):
            raise ValueError(
                f"client should be an instance of weaviate.Client, got {type(client)}"
            )
        self._client = client
        self.k = k
        self.alpha = alpha
        self._index_name = index_name
        self._text_key = text_key
        self._query_attrs = [self._text_key]
        if attributes is not None:
            self._query_attrs.extend(attributes)

    class Config:
"""Configuration for this pydantic object.""" extra = Extra.forbid arbitrary_types_allowed = True def add_documents(self, docs: List[Document]) -> List[str]: """Upload documents to Weaviate.""" from weaviate.util import get_valid_uuid with self._client.batch as batch: ids = [] for i, doc in enumerate(docs): metadata = doc.metadata or {} data_properties = {self._text_key: doc.page_content, **metadata} _id = get_valid_uuid(uuid4()) batch.add_data_object(data_properties, self._index_name, _id) ids.append(_id) return ids def get_relevant_documents(
        self, query: str, where_filter: Optional[Dict[str, object]] = None
    ) -> List[Document]:
        """Look up similar documents in Weaviate."""
        query_obj = self._client.query.get(self._index_name, self._query_attrs)
        if where_filter:
            query_obj = query_obj.with_where(where_filter)
        result = query_obj.with_hybrid(query, alpha=self.alpha).with_limit(self.k).do()
        if "errors" in result:
            raise ValueError(f"Error during query: {result['errors']}")

        docs = []
        for res in result["data"]["Get"][self._index_name]:
            text = res.pop(self._text_key)
            docs.append(Document(page_content=text, metadata=res))
        return docs

    async def aget_relevant_documents(
        self, query: str, where_filter: Optional[Dict[str, object]] = None
    ) -> List[Document]:
        raise NotImplementedError
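Against the retriever code above, the requested change could look roughly like the sketch below. This is an illustration under the issue's assumptions, not the merged PR; the `uuids` kwarg is hypothetical.

```python
from typing import Any, List, Optional
from uuid import uuid4

from langchain.docstore.document import Document

def add_documents(self, docs: List[Document], **kwargs: Any) -> List[str]:
    """Upload documents to Weaviate, honoring caller-supplied UUIDs."""
    from weaviate.util import get_valid_uuid

    uuids: Optional[List[str]] = kwargs.get("uuids")  # proposed field
    with self._client.batch as batch:
        ids = []
        for i, doc in enumerate(docs):
            data_properties = {self._text_key: doc.page_content, **(doc.metadata or {})}
            # Reuse the caller's ID when given; Weaviate then replaces the
            # existing object instead of creating a duplicate.
            _id = uuids[i] if uuids else get_valid_uuid(uuid4())
            batch.add_data_object(data_properties, self._index_name, _id)
            ids.append(_id)
    return ids
```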
langchain/vectorstores/redis.py
"""Wrapper around Redis vector database.""" from __future__ import annotations import json import logging import uuid from typing import ( TYPE_CHECKING, Any, Callable, Dict, Iterable, List, Literal, Mapping, Optional, Tuple, Type, ) import numpy as np from pydantic import BaseModel, root_validator from langchain.docstore.document import Document from langchain.embeddings.base import Embeddings
from langchain.utils import get_from_dict_or_env
from langchain.vectorstores.base import VectorStore, VectorStoreRetriever

logger = logging.getLogger(__name__)

if TYPE_CHECKING:
    from redis.client import Redis as RedisType
    from redis.commands.search.query import Query

REDIS_REQUIRED_MODULES = [
    {"name": "search", "ver": 20400},
    {"name": "searchlight", "ver": 20400},
]

REDIS_DISTANCE_METRICS = Literal["COSINE", "IP", "L2"]


def _check_redis_module_exist(client: RedisType, required_modules: List[dict]) -> None:
    """Check if the correct Redis modules are installed."""
    installed_modules = client.module_list()
    installed_modules = {
        module[b"name"].decode("utf-8"): module for module in installed_modules
    }
    for module in required_modules:
        if module["name"] in installed_modules and int(
            installed_modules[module["name"]][b"ver"]
        ) >= int(module["ver"]):
            return
    error_message = (
        "You must add the RediSearch (>= 2.4) module from Redis Stack. "
        "Please refer to Redis Stack docs: https://redis.io/docs/stack/"
    )
    logging.error(error_message)
    raise ValueError(error_message)


def _check_index_exists(client: RedisType, index_name: str) -> bool:
"""Check if Redis index exists.""" try: client.ft(index_name).info() except: logger.info("Index does not exist") return False logger.info("Index already exists") return True def _redis_key(prefix: str) -> str: """Redis key schema for a given prefix.""" return f"{prefix}:{uuid.uuid4().hex}" def _redis_prefix(index_name: str) -> str: """Redis key prefix for a given index.""" return f"doc:{index_name}" def _default_relevance_score(val: float) -> float: return 1 - val class Redis(VectorStore):
"""Wrapper around Redis vector database. To use, you should have the ``redis`` python package installed. Example: .. code-block:: python from langchain.vectorstores import Redis from langchain.embeddings import OpenAIEmbeddings embeddings = OpenAIEmbeddings() vectorstore = Redis( redis_url="redis://username:password@localhost:6379" index_name="my-index", embedding_function=embeddings.embed_query, ) """ def __init__( self, redis_url: str, index_name: str, embedding_function: Callable, content_key: str = "content", metadata_key: str = "metadata", vector_key: str = "content_vector", relevance_score_fn: Optional[
            Callable[[float], float]
        ] = _default_relevance_score,
        **kwargs: Any,
    ):
        """Initialize with necessary components."""
        try:
            import redis
        except ImportError:
            raise ValueError(
                "Could not import redis python package. "
                "Please install it with `pip install redis`."
            )

        self.embedding_function = embedding_function
        self.index_name = index_name
        try:
            redis_client = redis.from_url(redis_url, **kwargs)
            _check_redis_module_exist(redis_client, REDIS_REQUIRED_MODULES)
        except ValueError as e:
            raise ValueError(f"Redis failed to connect: {e}")

        self.client = redis_client
        self.content_key = content_key
        self.metadata_key = metadata_key
        self.vector_key = vector_key
        self.relevance_score_fn = relevance_score_fn

    def _create_index(
        self, dim: int = 1536, distance_metric: REDIS_DISTANCE_METRICS = "COSINE"
    ) -> None:
        try:
            from redis.commands.search.field import TextField, VectorField
            from redis.commands.search.indexDefinition import IndexDefinition, IndexType
        except ImportError:
            raise ValueError(
                "Could not import redis python package. "
                "Please install it with `pip install redis`."
            )

        if not _check_index_exists(self.client, self.index_name):
            schema = (
                TextField(name=self.content_key),
                TextField(name=self.metadata_key),
                VectorField(
                    self.vector_key,
                    "FLAT",
                    {
                        "TYPE": "FLOAT32",
                        "DIM": dim,
                        "DISTANCE_METRIC": distance_metric,
                    },
                ),
            )
            prefix = _redis_prefix(self.index_name)
            self.client.ft(self.index_name).create_index(
                fields=schema,
                definition=IndexDefinition(prefix=[prefix], index_type=IndexType.HASH),
            )

    def add_texts(
        self,
        texts: Iterable[str],
        metadatas: Optional[List[dict]] = None,
        embeddings: Optional[List[List[float]]] = None,
        keys: Optional[List[str]] = None,
        batch_size: int = 1000,
        **kwargs: Any,
    ) -> List[str]:
        """Add more texts to the vectorstore.

        Args:
            texts (Iterable[str]): Iterable of strings/text to add to the vectorstore.
            metadatas (Optional[List[dict]], optional): Optional list of metadatas.
                Defaults to None.
            embeddings (Optional[List[List[float]]], optional): Optional pre-generated
                embeddings. Defaults to None.
            keys (Optional[List[str]], optional): Optional key values to use as ids.
                Defaults to None.
            batch_size (int, optional): Batch size to use for writes. Defaults to 1000.

        Returns:
            List[str]: List of ids added to the vectorstore
        """
        ids = []
        prefix = _redis_prefix(self.index_name)

        pipeline = self.client.pipeline(transaction=False)
        for i, text in enumerate(texts):
            key = keys[i] if keys else _redis_key(prefix)
            metadata = metadatas[i] if metadatas else {}
            embedding = embeddings[i] if embeddings else self.embedding_function(text)
            pipeline.hset(
                key,
                mapping={
                    self.content_key: text,
                    self.vector_key: np.array(embedding, dtype=np.float32).tobytes(),
                    self.metadata_key: json.dumps(metadata),
                },
            )
            ids.append(key)
            if i % batch_size == 0:
                pipeline.execute()
        pipeline.execute()
        return ids

    def similarity_search(
        self, query: str, k: int = 4, **kwargs: Any
    ) -> List[Document]:
        """
        Returns the most similar indexed documents to the query text.

        Args:
            query (str): The query text for which to find similar documents.
            k (int): The number of documents to return. Default is 4.

        Returns:
            List[Document]: A list of documents that are most similar to the query
                text.
        """
        docs_and_scores = self.similarity_search_with_score(query, k=k)
        return [doc for doc, _ in docs_and_scores]

    def similarity_search_limit_score(
        self, query: str, k: int = 4, score_threshold: float = 0.2, **kwargs: Any
    ) -> List[Document]:
        """
        Returns the most similar indexed documents to the query text within the
        score_threshold range.

        Args:
            query (str): The query text for which to find similar documents.
            k (int): The number of documents to return. Default is 4.
            score_threshold (float): The minimum matching score required for a
                document to be considered a match. Defaults to 0.2.
                Because the similarity calculation algorithm is based on cosine
                similarity, the smaller the angle, the higher the similarity.

        Returns:
            List[Document]: A list of documents that are most similar to the
                query text, including the match score for each document.

        Note:
            If there are no documents that satisfy the score_threshold value,
            an empty list is returned.
        """
        docs_and_scores = self.similarity_search_with_score(query, k=k)
        return [doc for doc, score in docs_and_scores if score < score_threshold]

    def _prepare_query(self, k: int) -> Query:
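A usage sketch for the method above; `rds` stands in for an already-configured Redis vectorstore instance, and the query string is illustrative:

docs = rds.similarity_search_limit_score(
    "vector databases", k=10, score_threshold=0.1
)
# Scores are cosine distances, so lower means more similar: only hits with
# score < 0.1 survive, and the result may hold fewer than k documents,
# or none at all.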
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,791
Accept UUID list as an argument to add texts and documents into Weaviate vectorstore
### Feature request When you call the `add_texts` and `add_documents` methods on a Weaviate instance, it always generates UUIDs for you, which is a neat feature: https://github.com/hwchase17/langchain/blob/bee136efa4393219302208a1a458d32129f5d539/langchain/vectorstores/weaviate.py#L137 However, there are specific use cases where you want to generate UUIDs yourself and pass them via `add_texts` and `add_documents`. Therefore, I'd like to support a `uuids` field in the `kwargs` argument to these methods, and use those values instead of generating new ones inside the methods. ### Motivation Both `add_texts` and `add_documents` internally call the [batch.add_data_object](https://weaviate-python-client.readthedocs.io/en/stable/weaviate.batch.html#weaviate.batch.Batch.add_data_object) method of a Weaviate client. The documentation states: > Add one object to this batch. NOTE: If the UUID of one of the objects already exists then the existing object will be replaced by the new object. This behavior is extremely useful when you need to update or delete documents based on a known field. First, Weaviate accepts UUIDv3 and UUIDv5 as UUID formats: https://weaviate.io/developers/weaviate/more-resources/faq#q-are-there-restrictions-on-uuid-formatting-do-i-have-to-adhere-to-any-standards And UUIDv5 always generates the same value for a given input string, much like a hash function: https://docs.python.org/2/library/uuid.html Say you have a unique identifier for a document and use it to generate your own UUID. This way you can directly update, delete, or replace documents without first searching for them by metadata, which saves time, code, network bandwidth, and compute resources. ### Your contribution I'm attempting to make a PR,
https://github.com/langchain-ai/langchain/issues/4791
https://github.com/langchain-ai/langchain/pull/4800
e78c9be312e5c59ec96f22d6e531c28329ca6312
6561efebb7c1cbd3716f5e7f03f18ad9b3b1afa5
"2023-05-16T15:31:48Z"
python
"2023-05-16T22:26:46Z"
langchain/vectorstores/redis.py
        try:
            from redis.commands.search.query import Query
        except ImportError:
            raise ValueError(
                "Could not import redis python package. "
                "Please install it with `pip install redis`."
            )
        # Build the KNN query against the vector field
        hybrid_fields = "*"
        base_query = (
            f"{hybrid_fields}=>[KNN {k} @{self.vector_key} $vector AS vector_score]"
        )
        return_fields = [self.metadata_key, self.content_key, "vector_score"]
        return (
            Query(base_query)
            .return_fields(*return_fields)
            .sort_by("vector_score")
            .paging(0, k)
            .dialect(2)
        )

    def similarity_search_with_score(
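For reference, with the default `vector_key` of `"content_vector"` (see the `from_texts_return_keys` signature further down) and `k=4`, the f-string above renders to this RediSearch expression:

*=>[KNN 4 @content_vector $vector AS vector_score]

That is: match every document (`*`), rank the top 4 by distance to the `$vector` query parameter, and expose the distance as the `vector_score` field.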
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,791
Accept UUID list as an argument to add texts and documents into Weaviate vectorstore
### Feature request When you call the `add_texts` and `add_documents` methods on a Weaviate instance, it always generates UUIDs for you, which is a neat feature: https://github.com/hwchase17/langchain/blob/bee136efa4393219302208a1a458d32129f5d539/langchain/vectorstores/weaviate.py#L137 However, there are specific use cases where you want to generate UUIDs yourself and pass them via `add_texts` and `add_documents`. Therefore, I'd like to support a `uuids` field in the `kwargs` argument to these methods, and use those values instead of generating new ones inside the methods. ### Motivation Both `add_texts` and `add_documents` internally call the [batch.add_data_object](https://weaviate-python-client.readthedocs.io/en/stable/weaviate.batch.html#weaviate.batch.Batch.add_data_object) method of a Weaviate client. The documentation states: > Add one object to this batch. NOTE: If the UUID of one of the objects already exists then the existing object will be replaced by the new object. This behavior is extremely useful when you need to update or delete documents based on a known field. First, Weaviate accepts UUIDv3 and UUIDv5 as UUID formats: https://weaviate.io/developers/weaviate/more-resources/faq#q-are-there-restrictions-on-uuid-formatting-do-i-have-to-adhere-to-any-standards And UUIDv5 always generates the same value for a given input string, much like a hash function: https://docs.python.org/2/library/uuid.html Say you have a unique identifier for a document and use it to generate your own UUID. This way you can directly update, delete, or replace documents without first searching for them by metadata, which saves time, code, network bandwidth, and compute resources. ### Your contribution I'm attempting to make a PR,
https://github.com/langchain-ai/langchain/issues/4791
https://github.com/langchain-ai/langchain/pull/4800
e78c9be312e5c59ec96f22d6e531c28329ca6312
6561efebb7c1cbd3716f5e7f03f18ad9b3b1afa5
"2023-05-16T15:31:48Z"
python
"2023-05-16T22:26:46Z"
langchain/vectorstores/redis.py
        self, query: str, k: int = 4
    ) -> List[Tuple[Document, float]]:
        """Return docs most similar to query.

        Args:
            query: Text to look up documents similar to.
            k: Number of Documents to return. Defaults to 4.

        Returns:
            List of Documents most similar to the query and score for each.
        """
        # Create the embedding vector from the user query
        embedding = self.embedding_function(query)

        # Build the Redis KNN query
        redis_query = self._prepare_query(k)

        params_dict: Mapping[str, str] = {
            "vector": np.array(embedding)
            .astype(dtype=np.float32)
            .tobytes()
        }
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,791
Accept UUID list as an argument to add texts and documents into Weaviate vectorstore
### Feature request When you call the `add_texts` and `add_documents` methods on a Weaviate instance, it always generates UUIDs for you, which is a neat feature: https://github.com/hwchase17/langchain/blob/bee136efa4393219302208a1a458d32129f5d539/langchain/vectorstores/weaviate.py#L137 However, there are specific use cases where you want to generate UUIDs yourself and pass them via `add_texts` and `add_documents`. Therefore, I'd like to support a `uuids` field in the `kwargs` argument to these methods, and use those values instead of generating new ones inside the methods. ### Motivation Both `add_texts` and `add_documents` internally call the [batch.add_data_object](https://weaviate-python-client.readthedocs.io/en/stable/weaviate.batch.html#weaviate.batch.Batch.add_data_object) method of a Weaviate client. The documentation states: > Add one object to this batch. NOTE: If the UUID of one of the objects already exists then the existing object will be replaced by the new object. This behavior is extremely useful when you need to update or delete documents based on a known field. First, Weaviate accepts UUIDv3 and UUIDv5 as UUID formats: https://weaviate.io/developers/weaviate/more-resources/faq#q-are-there-restrictions-on-uuid-formatting-do-i-have-to-adhere-to-any-standards And UUIDv5 always generates the same value for a given input string, much like a hash function: https://docs.python.org/2/library/uuid.html Say you have a unique identifier for a document and use it to generate your own UUID. This way you can directly update, delete, or replace documents without first searching for them by metadata, which saves time, code, network bandwidth, and compute resources. ### Your contribution I'm attempting to make a PR,
https://github.com/langchain-ai/langchain/issues/4791
https://github.com/langchain-ai/langchain/pull/4800
e78c9be312e5c59ec96f22d6e531c28329ca6312
6561efebb7c1cbd3716f5e7f03f18ad9b3b1afa5
"2023-05-16T15:31:48Z"
python
"2023-05-16T22:26:46Z"
langchain/vectorstores/redis.py
        # Perform the vector search
        results = self.client.ft(self.index_name).search(redis_query, params_dict)

        docs = [
            (
                Document(
                    page_content=result.content, metadata=json.loads(result.metadata)
                ),
                float(result.vector_score),
            )
            for result in results.docs
        ]

        return docs

    def _similarity_search_with_relevance_scores(
        self,
        query: str,
        k: int = 4,
        **kwargs: Any,
    ) -> List[Tuple[Document, float]]:
        """Return docs and relevance scores, normalized on a scale from 0 to 1.

        0 is dissimilar, 1 is most similar.
        """
        if self.relevance_score_fn is None:
            raise ValueError(
                "relevance_score_fn must be provided to"
                " Redis constructor to normalize scores"
            )
        docs_and_scores = self.similarity_search_with_score(query, k=k)
        return [(doc, self.relevance_score_fn(score)) for doc, score in docs_and_scores]

    @classmethod
    def from_texts_return_keys(
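`relevance_score_fn` is user-supplied; one reasonable normalizer for COSINE distance, which ranges over [0, 2], is sketched below. The constructor wiring in the final comment is assumed, not shown in this chunk:

def cosine_relevance(score: float) -> float:
    # Map distance 0 (identical) -> 1.0 and distance 2 (opposite) -> 0.0.
    return 1.0 - score / 2.0

# Hypothetical wiring, mirroring the attribute checked above:
# rds = Redis(..., relevance_score_fn=cosine_relevance)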
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,791
Accept UUID list as an argument to add texts and documents into Weaviate vectorstore
### Feature request When you call the `add_texts` and `add_documents` methods on a Weaviate instance, it always generates UUIDs for you, which is a neat feature: https://github.com/hwchase17/langchain/blob/bee136efa4393219302208a1a458d32129f5d539/langchain/vectorstores/weaviate.py#L137 However, there are specific use cases where you want to generate UUIDs yourself and pass them via `add_texts` and `add_documents`. Therefore, I'd like to support a `uuids` field in the `kwargs` argument to these methods, and use those values instead of generating new ones inside the methods. ### Motivation Both `add_texts` and `add_documents` internally call the [batch.add_data_object](https://weaviate-python-client.readthedocs.io/en/stable/weaviate.batch.html#weaviate.batch.Batch.add_data_object) method of a Weaviate client. The documentation states: > Add one object to this batch. NOTE: If the UUID of one of the objects already exists then the existing object will be replaced by the new object. This behavior is extremely useful when you need to update or delete documents based on a known field. First, Weaviate accepts UUIDv3 and UUIDv5 as UUID formats: https://weaviate.io/developers/weaviate/more-resources/faq#q-are-there-restrictions-on-uuid-formatting-do-i-have-to-adhere-to-any-standards And UUIDv5 always generates the same value for a given input string, much like a hash function: https://docs.python.org/2/library/uuid.html Say you have a unique identifier for a document and use it to generate your own UUID. This way you can directly update, delete, or replace documents without first searching for them by metadata, which saves time, code, network bandwidth, and compute resources. ### Your contribution I'm attempting to make a PR,
https://github.com/langchain-ai/langchain/issues/4791
https://github.com/langchain-ai/langchain/pull/4800
e78c9be312e5c59ec96f22d6e531c28329ca6312
6561efebb7c1cbd3716f5e7f03f18ad9b3b1afa5
"2023-05-16T15:31:48Z"
python
"2023-05-16T22:26:46Z"
langchain/vectorstores/redis.py
        cls: Type[Redis],
        texts: List[str],
        embedding: Embeddings,
        metadatas: Optional[List[dict]] = None,
        index_name: Optional[str] = None,
        content_key: str = "content",
        metadata_key: str = "metadata",
        vector_key: str = "content_vector",
        distance_metric: REDIS_DISTANCE_METRICS = "COSINE",
        **kwargs: Any,
    ) -> Tuple[Redis, List[str]]:
        """Create a Redis vectorstore from raw documents.

        This is a user-friendly interface that:
            1. Embeds documents.
            2. Creates a new index for the embeddings in Redis.
            3. Adds the documents to the newly created Redis index.

        This is intended to be a quick way to get started.

        Example:
            .. code-block:: python

                from langchain.vectorstores import Redis
                from langchain.embeddings import OpenAIEmbeddings

                embeddings = OpenAIEmbeddings()
                redisearch = RediSearch.from_texts(
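The docstring example is cut off at the chunk boundary. A plausible end-to-end sketch of the method itself, assuming `redis_url` is accepted via `**kwargs` and a local Redis Stack instance is running (the URL and sample text are illustrative):

from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Redis

embeddings = OpenAIEmbeddings()
redisearch, keys = Redis.from_texts_return_keys(
    texts=["hello world"],
    embedding=embeddings,
    redis_url="redis://localhost:6379",  # assumed connection kwarg
)
# `keys` holds the Redis key for each text, useful for later updates.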