status: stringclasses (1 value)
repo_name: stringclasses (31 values)
repo_url: stringclasses (31 values)
issue_id: int64 (1 to 104k)
title: stringlengths (4 to 233)
body: stringlengths (0 to 186k)
issue_url: stringlengths (38 to 56)
pull_url: stringlengths (37 to 54)
before_fix_sha: stringlengths (40 to 40)
after_fix_sha: stringlengths (40 to 40)
report_datetime: unknown
language: stringclasses (5 values)
commit_datetime: unknown
updated_file: stringlengths (7 to 188)
chunk_content: stringlengths (1 to 1.03M)
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,791
Accept UUID list as an argument to add texts and documents into Weaviate vectorstore
### Feature request

When you call the `add_texts` and `add_documents` methods on a Weaviate instance, they always generate UUIDs for you, which is a neat feature: https://github.com/hwchase17/langchain/blob/bee136efa4393219302208a1a458d32129f5d539/langchain/vectorstores/weaviate.py#L137

However, there are specific use cases where you want to generate UUIDs yourself and pass them in via `add_texts` and `add_documents`. Therefore, I'd like to support a `uuids` field in the `kwargs` argument to these methods, and use those values instead of generating new ones inside the methods.

### Motivation

Both `add_texts` and `add_documents` internally call the [batch.add_data_object](https://weaviate-python-client.readthedocs.io/en/stable/weaviate.batch.html#weaviate.batch.Batch.add_data_object) method of a Weaviate client, whose documentation states:

> Add one object to this batch. NOTE: If the UUID of one of the objects already exists then the existing object will be replaced by the new object.

This behavior is extremely useful when you need to update or delete documents based on a known field of the document.

First, Weaviate accepts UUIDv3 and UUIDv5 as UUID formats: https://weaviate.io/developers/weaviate/more-resources/faq#q-are-there-restrictions-on-uuid-formatting-do-i-have-to-adhere-to-any-standards

And UUIDv5 always produces the same value for a given input string, much like a hash function: https://docs.python.org/2/library/uuid.html

Say you have a unique identifier for each document and use it to generate your own UUIDs. You can then directly update, delete, or replace documents without first searching for them by metadata, saving time, code, network bandwidth, and compute resources.

### Your contribution

I'm attempting to make a PR,
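As a quick illustration of the UUIDv5 behavior the issue relies on, a minimal standard-library sketch; the namespace constant and document ID are arbitrary placeholders:

    import uuid

    # uuid5 hashes (namespace, name), so the same document ID always
    # maps to the same UUID; re-adding an object under that UUID makes
    # Weaviate replace the stored object instead of duplicating it.
    def deterministic_uuid(doc_id: str) -> str:
        return str(uuid.uuid5(uuid.NAMESPACE_DNS, doc_id))

    assert deterministic_uuid("doc-42") == deterministic_uuid("doc-42")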
https://github.com/langchain-ai/langchain/issues/4791
https://github.com/langchain-ai/langchain/pull/4800
e78c9be312e5c59ec96f22d6e531c28329ca6312
6561efebb7c1cbd3716f5e7f03f18ad9b3b1afa5
"2023-05-16T15:31:48Z"
python
"2023-05-16T22:26:46Z"
langchain/vectorstores/redis.py
                texts,
                embeddings,
                redis_url="redis://username:password@localhost:6379"
            )
        """
        redis_url = get_from_dict_or_env(kwargs, "redis_url", "REDIS_URL")
        if "redis_url" in kwargs:
            kwargs.pop("redis_url")
        if not index_name:
            index_name = uuid.uuid4().hex
        instance = cls(
            redis_url=redis_url,
            index_name=index_name,
            embedding_function=embedding.embed_query,
            content_key=content_key,
            metadata_key=metadata_key,
            vector_key=vector_key,
            **kwargs,
        )
        embeddings = embedding.embed_documents(texts)
        instance._create_index(dim=len(embeddings[0]), distance_metric=distance_metric)
        keys = instance.add_texts(texts, metadatas, embeddings)
        return instance, keys

    @classmethod
    def from_texts(
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,791
Accept UUID list as an argument to add texts and documents into Weaviate vectorstore
### Feature request

When you call the `add_texts` and `add_documents` methods on a Weaviate instance, they always generate UUIDs for you, which is a neat feature: https://github.com/hwchase17/langchain/blob/bee136efa4393219302208a1a458d32129f5d539/langchain/vectorstores/weaviate.py#L137

However, there are specific use cases where you want to generate UUIDs yourself and pass them in via `add_texts` and `add_documents`. Therefore, I'd like to support a `uuids` field in the `kwargs` argument to these methods, and use those values instead of generating new ones inside the methods.

### Motivation

Both `add_texts` and `add_documents` internally call the [batch.add_data_object](https://weaviate-python-client.readthedocs.io/en/stable/weaviate.batch.html#weaviate.batch.Batch.add_data_object) method of a Weaviate client, whose documentation states:

> Add one object to this batch. NOTE: If the UUID of one of the objects already exists then the existing object will be replaced by the new object.

This behavior is extremely useful when you need to update or delete documents based on a known field of the document.

First, Weaviate accepts UUIDv3 and UUIDv5 as UUID formats: https://weaviate.io/developers/weaviate/more-resources/faq#q-are-there-restrictions-on-uuid-formatting-do-i-have-to-adhere-to-any-standards

And UUIDv5 always produces the same value for a given input string, much like a hash function: https://docs.python.org/2/library/uuid.html

Say you have a unique identifier for each document and use it to generate your own UUIDs. You can then directly update, delete, or replace documents without first searching for them by metadata, saving time, code, network bandwidth, and compute resources.

### Your contribution

I'm attempting to make a PR,
https://github.com/langchain-ai/langchain/issues/4791
https://github.com/langchain-ai/langchain/pull/4800
e78c9be312e5c59ec96f22d6e531c28329ca6312
6561efebb7c1cbd3716f5e7f03f18ad9b3b1afa5
"2023-05-16T15:31:48Z"
python
"2023-05-16T22:26:46Z"
langchain/vectorstores/redis.py
        cls: Type[Redis],
        texts: List[str],
        embedding: Embeddings,
        metadatas: Optional[List[dict]] = None,
        index_name: Optional[str] = None,
        content_key: str = "content",
        metadata_key: str = "metadata",
        vector_key: str = "content_vector",
        **kwargs: Any,
    ) -> Redis:
        """Create a Redis vectorstore from raw documents.
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,791
Accept UUID list as an argument to add texts and documents into Weaviate vectorstore
### Feature request

When you call the `add_texts` and `add_documents` methods on a Weaviate instance, they always generate UUIDs for you, which is a neat feature: https://github.com/hwchase17/langchain/blob/bee136efa4393219302208a1a458d32129f5d539/langchain/vectorstores/weaviate.py#L137

However, there are specific use cases where you want to generate UUIDs yourself and pass them in via `add_texts` and `add_documents`. Therefore, I'd like to support a `uuids` field in the `kwargs` argument to these methods, and use those values instead of generating new ones inside the methods.

### Motivation

Both `add_texts` and `add_documents` internally call the [batch.add_data_object](https://weaviate-python-client.readthedocs.io/en/stable/weaviate.batch.html#weaviate.batch.Batch.add_data_object) method of a Weaviate client, whose documentation states:

> Add one object to this batch. NOTE: If the UUID of one of the objects already exists then the existing object will be replaced by the new object.

This behavior is extremely useful when you need to update or delete documents based on a known field of the document.

First, Weaviate accepts UUIDv3 and UUIDv5 as UUID formats: https://weaviate.io/developers/weaviate/more-resources/faq#q-are-there-restrictions-on-uuid-formatting-do-i-have-to-adhere-to-any-standards

And UUIDv5 always produces the same value for a given input string, much like a hash function: https://docs.python.org/2/library/uuid.html

Say you have a unique identifier for each document and use it to generate your own UUIDs. You can then directly update, delete, or replace documents without first searching for them by metadata, saving time, code, network bandwidth, and compute resources.

### Your contribution

I'm attempting to make a PR,
https://github.com/langchain-ai/langchain/issues/4791
https://github.com/langchain-ai/langchain/pull/4800
e78c9be312e5c59ec96f22d6e531c28329ca6312
6561efebb7c1cbd3716f5e7f03f18ad9b3b1afa5
"2023-05-16T15:31:48Z"
python
"2023-05-16T22:26:46Z"
langchain/vectorstores/redis.py
        This is a user-friendly interface that:
            1. Embeds documents.
            2. Creates a new index for the embeddings in Redis.
            3. Adds the documents to the newly created Redis index.

        This is intended to be a quick way to get started.

        Example:
            .. code-block:: python

                from langchain.vectorstores import Redis
                from langchain.embeddings import OpenAIEmbeddings

                embeddings = OpenAIEmbeddings()
                redisearch = RediSearch.from_texts(
                    texts,
                    embeddings,
                    redis_url="redis://username:password@localhost:6379"
                )
        """
        instance, _ = cls.from_texts_return_keys(
            cls=cls,
            texts=texts,
            embedding=embedding,
            metadatas=metadatas,
            index_name=index_name,
            content_key=content_key,
            metadata_key=metadata_key,
            vector_key=vector_key,
            kwargs=kwargs,
        )
        return instance

    @staticmethod
    def drop_index(
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,791
Accept UUID list as an argument to add texts and documents into Weaviate vectorstore
### Feature request

When you call the `add_texts` and `add_documents` methods on a Weaviate instance, they always generate UUIDs for you, which is a neat feature: https://github.com/hwchase17/langchain/blob/bee136efa4393219302208a1a458d32129f5d539/langchain/vectorstores/weaviate.py#L137

However, there are specific use cases where you want to generate UUIDs yourself and pass them in via `add_texts` and `add_documents`. Therefore, I'd like to support a `uuids` field in the `kwargs` argument to these methods, and use those values instead of generating new ones inside the methods.

### Motivation

Both `add_texts` and `add_documents` internally call the [batch.add_data_object](https://weaviate-python-client.readthedocs.io/en/stable/weaviate.batch.html#weaviate.batch.Batch.add_data_object) method of a Weaviate client, whose documentation states:

> Add one object to this batch. NOTE: If the UUID of one of the objects already exists then the existing object will be replaced by the new object.

This behavior is extremely useful when you need to update or delete documents based on a known field of the document.

First, Weaviate accepts UUIDv3 and UUIDv5 as UUID formats: https://weaviate.io/developers/weaviate/more-resources/faq#q-are-there-restrictions-on-uuid-formatting-do-i-have-to-adhere-to-any-standards

And UUIDv5 always produces the same value for a given input string, much like a hash function: https://docs.python.org/2/library/uuid.html

Say you have a unique identifier for each document and use it to generate your own UUIDs. You can then directly update, delete, or replace documents without first searching for them by metadata, saving time, code, network bandwidth, and compute resources.

### Your contribution

I'm attempting to make a PR,
https://github.com/langchain-ai/langchain/issues/4791
https://github.com/langchain-ai/langchain/pull/4800
e78c9be312e5c59ec96f22d6e531c28329ca6312
6561efebb7c1cbd3716f5e7f03f18ad9b3b1afa5
"2023-05-16T15:31:48Z"
python
"2023-05-16T22:26:46Z"
langchain/vectorstores/redis.py
        index_name: str,
        delete_documents: bool,
        **kwargs: Any,
    ) -> bool:
        """
        Drop a Redis search index.

        Args:
            index_name (str): Name of the index to drop.
            delete_documents (bool): Whether to drop the associated documents.

        Returns:
            bool: Whether or not the drop was successful.
        """
        redis_url = get_from_dict_or_env(kwargs, "redis_url", "REDIS_URL")
        try:
            import redis
        except ImportError:
            raise ValueError(
                "Could not import redis python package. "
                "Please install it with `pip install redis`."
            )
        try:
            if "redis_url" in kwargs:
                kwargs.pop("redis_url")
            client = redis.from_url(url=redis_url, **kwargs)
        except ValueError as e:
            raise ValueError(f"Your redis connected error: {e}")
        try:
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,791
Accept UUID list as an argument to add texts and documents into Weaviate vectorstore
### Feature request

When you call the `add_texts` and `add_documents` methods on a Weaviate instance, they always generate UUIDs for you, which is a neat feature: https://github.com/hwchase17/langchain/blob/bee136efa4393219302208a1a458d32129f5d539/langchain/vectorstores/weaviate.py#L137

However, there are specific use cases where you want to generate UUIDs yourself and pass them in via `add_texts` and `add_documents`. Therefore, I'd like to support a `uuids` field in the `kwargs` argument to these methods, and use those values instead of generating new ones inside the methods.

### Motivation

Both `add_texts` and `add_documents` internally call the [batch.add_data_object](https://weaviate-python-client.readthedocs.io/en/stable/weaviate.batch.html#weaviate.batch.Batch.add_data_object) method of a Weaviate client, whose documentation states:

> Add one object to this batch. NOTE: If the UUID of one of the objects already exists then the existing object will be replaced by the new object.

This behavior is extremely useful when you need to update or delete documents based on a known field of the document.

First, Weaviate accepts UUIDv3 and UUIDv5 as UUID formats: https://weaviate.io/developers/weaviate/more-resources/faq#q-are-there-restrictions-on-uuid-formatting-do-i-have-to-adhere-to-any-standards

And UUIDv5 always produces the same value for a given input string, much like a hash function: https://docs.python.org/2/library/uuid.html

Say you have a unique identifier for each document and use it to generate your own UUIDs. You can then directly update, delete, or replace documents without first searching for them by metadata, saving time, code, network bandwidth, and compute resources.

### Your contribution

I'm attempting to make a PR,
https://github.com/langchain-ai/langchain/issues/4791
https://github.com/langchain-ai/langchain/pull/4800
e78c9be312e5c59ec96f22d6e531c28329ca6312
6561efebb7c1cbd3716f5e7f03f18ad9b3b1afa5
"2023-05-16T15:31:48Z"
python
"2023-05-16T22:26:46Z"
langchain/vectorstores/redis.py
            client.ft(index_name).dropindex(delete_documents)
            logger.info("Drop index")
            return True
        except:
            return False

    @classmethod
    def from_existing_index(
        cls,
        embedding: Embeddings,
        index_name: str,
        content_key: str = "content",
        metadata_key: str = "metadata",
        vector_key: str = "content_vector",
        **kwargs: Any,
    ) -> Redis:
        """Connect to an existing Redis index."""
        redis_url = get_from_dict_or_env(kwargs, "redis_url", "REDIS_URL")
        try:
            import redis
        except ImportError:
            raise ValueError(
                "Could not import redis python package. "
                "Please install it with `pip install redis`."
            )
        try:
            if "redis_url" in kwargs:
                kwargs.pop("redis_url")
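For reference, a hypothetical call matching the `drop_index` signature above; the index name and connection URL are placeholders:

    from langchain.vectorstores import Redis

    # Hypothetical invocation; redis_url may also be supplied via the
    # REDIS_URL environment variable, per get_from_dict_or_env above.
    dropped = Redis.drop_index(
        index_name="my_index",
        delete_documents=True,
        redis_url="redis://localhost:6379",
    )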
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,791
Accept UUID list as an argument to add texts and documents into Weaviate vectorstore
### Feature request

When you call the `add_texts` and `add_documents` methods on a Weaviate instance, they always generate UUIDs for you, which is a neat feature: https://github.com/hwchase17/langchain/blob/bee136efa4393219302208a1a458d32129f5d539/langchain/vectorstores/weaviate.py#L137

However, there are specific use cases where you want to generate UUIDs yourself and pass them in via `add_texts` and `add_documents`. Therefore, I'd like to support a `uuids` field in the `kwargs` argument to these methods, and use those values instead of generating new ones inside the methods.

### Motivation

Both `add_texts` and `add_documents` internally call the [batch.add_data_object](https://weaviate-python-client.readthedocs.io/en/stable/weaviate.batch.html#weaviate.batch.Batch.add_data_object) method of a Weaviate client, whose documentation states:

> Add one object to this batch. NOTE: If the UUID of one of the objects already exists then the existing object will be replaced by the new object.

This behavior is extremely useful when you need to update or delete documents based on a known field of the document.

First, Weaviate accepts UUIDv3 and UUIDv5 as UUID formats: https://weaviate.io/developers/weaviate/more-resources/faq#q-are-there-restrictions-on-uuid-formatting-do-i-have-to-adhere-to-any-standards

And UUIDv5 always produces the same value for a given input string, much like a hash function: https://docs.python.org/2/library/uuid.html

Say you have a unique identifier for each document and use it to generate your own UUIDs. You can then directly update, delete, or replace documents without first searching for them by metadata, saving time, code, network bandwidth, and compute resources.

### Your contribution

I'm attempting to make a PR,
https://github.com/langchain-ai/langchain/issues/4791
https://github.com/langchain-ai/langchain/pull/4800
e78c9be312e5c59ec96f22d6e531c28329ca6312
6561efebb7c1cbd3716f5e7f03f18ad9b3b1afa5
"2023-05-16T15:31:48Z"
python
"2023-05-16T22:26:46Z"
langchain/vectorstores/redis.py
            client = redis.from_url(url=redis_url, **kwargs)
            _check_redis_module_exist(client, REDIS_REQUIRED_MODULES)
            assert _check_index_exists(
                client, index_name
            ), f"Index {index_name} does not exist"
        except Exception as e:
            raise ValueError(f"Redis failed to connect: {e}")
        return cls(
            redis_url,
            index_name,
            embedding.embed_query,
            content_key=content_key,
            metadata_key=metadata_key,
            vector_key=vector_key,
            **kwargs,
        )

    def as_retriever(self, **kwargs: Any) -> RedisVectorStoreRetriever:
        return RedisVectorStoreRetriever(vectorstore=self, **kwargs)


class RedisVectorStoreRetriever(VectorStoreRetriever, BaseModel):
    vectorstore: Redis
    search_type: str = "similarity"
    k: int = 4
    score_threshold: float = 0.4

    class Config:
        """Configuration for this pydantic object."""

        arbitrary_types_allowed = True

    @root_validator()
    def validate_search_type(cls, values: Dict) -> Dict:
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,791
Accept UUID list as an argument to add texts and documents into Weaviate vectorstore
### Feature request

When you call the `add_texts` and `add_documents` methods on a Weaviate instance, they always generate UUIDs for you, which is a neat feature: https://github.com/hwchase17/langchain/blob/bee136efa4393219302208a1a458d32129f5d539/langchain/vectorstores/weaviate.py#L137

However, there are specific use cases where you want to generate UUIDs yourself and pass them in via `add_texts` and `add_documents`. Therefore, I'd like to support a `uuids` field in the `kwargs` argument to these methods, and use those values instead of generating new ones inside the methods.

### Motivation

Both `add_texts` and `add_documents` internally call the [batch.add_data_object](https://weaviate-python-client.readthedocs.io/en/stable/weaviate.batch.html#weaviate.batch.Batch.add_data_object) method of a Weaviate client, whose documentation states:

> Add one object to this batch. NOTE: If the UUID of one of the objects already exists then the existing object will be replaced by the new object.

This behavior is extremely useful when you need to update or delete documents based on a known field of the document.

First, Weaviate accepts UUIDv3 and UUIDv5 as UUID formats: https://weaviate.io/developers/weaviate/more-resources/faq#q-are-there-restrictions-on-uuid-formatting-do-i-have-to-adhere-to-any-standards

And UUIDv5 always produces the same value for a given input string, much like a hash function: https://docs.python.org/2/library/uuid.html

Say you have a unique identifier for each document and use it to generate your own UUIDs. You can then directly update, delete, or replace documents without first searching for them by metadata, saving time, code, network bandwidth, and compute resources.

### Your contribution

I'm attempting to make a PR,
https://github.com/langchain-ai/langchain/issues/4791
https://github.com/langchain-ai/langchain/pull/4800
e78c9be312e5c59ec96f22d6e531c28329ca6312
6561efebb7c1cbd3716f5e7f03f18ad9b3b1afa5
"2023-05-16T15:31:48Z"
python
"2023-05-16T22:26:46Z"
langchain/vectorstores/redis.py
"""Validate search type.""" if "search_type" in values: search_type = values["search_type"] if search_type not in ("similarity", "similarity_limit"): raise ValueError(f"search_type of {search_type} not allowed.") return values def get_relevant_documents(self, query: str) -> List[Document]: if self.search_type == "similarity": docs = self.vectorstore.similarity_search(query, k=self.k) elif self.search_type == "similarity_limit": docs = self.vectorstore.similarity_search_limit_score( query, k=self.k, score_threshold=self.score_threshold ) else: raise ValueError(f"search_type of {self.search_type} not allowed.") return docs async def aget_relevant_documents(self, query: str) -> List[Document]: raise NotImplementedError("RedisVectorStoreRetriever does not support async") def add_documents(self, documents: List[Document], **kwargs: Any) -> List[str]: """Add documents to vectorstore.""" return self.vectorstore.add_documents(documents, **kwargs) async def aadd_documents( self, documents: List[Document], **kwargs: Any ) -> List[str]: """Add documents to vectorstore.""" return await self.vectorstore.aadd_documents(documents, **kwargs)
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,791
Accept UUID list as an argument to add texts and documents into Weaviate vectorstore
### Feature request

When you call the `add_texts` and `add_documents` methods on a Weaviate instance, they always generate UUIDs for you, which is a neat feature: https://github.com/hwchase17/langchain/blob/bee136efa4393219302208a1a458d32129f5d539/langchain/vectorstores/weaviate.py#L137

However, there are specific use cases where you want to generate UUIDs yourself and pass them in via `add_texts` and `add_documents`. Therefore, I'd like to support a `uuids` field in the `kwargs` argument to these methods, and use those values instead of generating new ones inside the methods.

### Motivation

Both `add_texts` and `add_documents` internally call the [batch.add_data_object](https://weaviate-python-client.readthedocs.io/en/stable/weaviate.batch.html#weaviate.batch.Batch.add_data_object) method of a Weaviate client, whose documentation states:

> Add one object to this batch. NOTE: If the UUID of one of the objects already exists then the existing object will be replaced by the new object.

This behavior is extremely useful when you need to update or delete documents based on a known field of the document.

First, Weaviate accepts UUIDv3 and UUIDv5 as UUID formats: https://weaviate.io/developers/weaviate/more-resources/faq#q-are-there-restrictions-on-uuid-formatting-do-i-have-to-adhere-to-any-standards

And UUIDv5 always produces the same value for a given input string, much like a hash function: https://docs.python.org/2/library/uuid.html

Say you have a unique identifier for each document and use it to generate your own UUIDs. You can then directly update, delete, or replace documents without first searching for them by metadata, saving time, code, network bandwidth, and compute resources.

### Your contribution

I'm attempting to make a PR,
https://github.com/langchain-ai/langchain/issues/4791
https://github.com/langchain-ai/langchain/pull/4800
e78c9be312e5c59ec96f22d6e531c28329ca6312
6561efebb7c1cbd3716f5e7f03f18ad9b3b1afa5
"2023-05-16T15:31:48Z"
python
"2023-05-16T22:26:46Z"
langchain/vectorstores/weaviate.py
"""Wrapper around weaviate vector database.""" from __future__ import annotations import datetime from typing import Any, Callable, Dict, Iterable, List, Optional, Tuple, Type from uuid import uuid4 import numpy as np from langchain.docstore.document import Document from langchain.embeddings.base import Embeddings from langchain.utils import get_from_dict_or_env from langchain.vectorstores.base import VectorStore from langchain.vectorstores.utils import maximal_marginal_relevance def _default_schema(index_name: str) -> Dict: return { "class": index_name, "properties": [ { "name": "text", "dataType": ["text"], } ], } def _create_weaviate_client(**kwargs: Any) -> Any:
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,791
Accept UUID list as an argument to add texts and documents into Weaviate vectorstore
### Feature request

When you call the `add_texts` and `add_documents` methods on a Weaviate instance, they always generate UUIDs for you, which is a neat feature: https://github.com/hwchase17/langchain/blob/bee136efa4393219302208a1a458d32129f5d539/langchain/vectorstores/weaviate.py#L137

However, there are specific use cases where you want to generate UUIDs yourself and pass them in via `add_texts` and `add_documents`. Therefore, I'd like to support a `uuids` field in the `kwargs` argument to these methods, and use those values instead of generating new ones inside the methods.

### Motivation

Both `add_texts` and `add_documents` internally call the [batch.add_data_object](https://weaviate-python-client.readthedocs.io/en/stable/weaviate.batch.html#weaviate.batch.Batch.add_data_object) method of a Weaviate client, whose documentation states:

> Add one object to this batch. NOTE: If the UUID of one of the objects already exists then the existing object will be replaced by the new object.

This behavior is extremely useful when you need to update or delete documents based on a known field of the document.

First, Weaviate accepts UUIDv3 and UUIDv5 as UUID formats: https://weaviate.io/developers/weaviate/more-resources/faq#q-are-there-restrictions-on-uuid-formatting-do-i-have-to-adhere-to-any-standards

And UUIDv5 always produces the same value for a given input string, much like a hash function: https://docs.python.org/2/library/uuid.html

Say you have a unique identifier for each document and use it to generate your own UUIDs. You can then directly update, delete, or replace documents without first searching for them by metadata, saving time, code, network bandwidth, and compute resources.

### Your contribution

I'm attempting to make a PR,
https://github.com/langchain-ai/langchain/issues/4791
https://github.com/langchain-ai/langchain/pull/4800
e78c9be312e5c59ec96f22d6e531c28329ca6312
6561efebb7c1cbd3716f5e7f03f18ad9b3b1afa5
"2023-05-16T15:31:48Z"
python
"2023-05-16T22:26:46Z"
langchain/vectorstores/weaviate.py
    client = kwargs.get("client")
    if client is not None:
        return client
    weaviate_url = get_from_dict_or_env(kwargs, "weaviate_url", "WEAVIATE_URL")
    try:
        weaviate_api_key = get_from_dict_or_env(
            kwargs, "weaviate_api_key", "WEAVIATE_API_KEY", None
        )
    except ValueError:
        weaviate_api_key = None
    try:
        import weaviate
    except ImportError:
        raise ValueError(
            "Could not import weaviate python package. "
            "Please install it with `pip install weaviate-client`"
        )
    auth = (
        weaviate.auth.AuthApiKey(api_key=weaviate_api_key)
        if weaviate_api_key is not None
        else None
    )
    client = weaviate.Client(weaviate_url, auth_client_secret=auth)
    return client


def _default_score_normalizer(val: float) -> float:
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,791
Accept UUID list as an argument to add texts and documents into Weaviate vectorstore
### Feature request

When you call the `add_texts` and `add_documents` methods on a Weaviate instance, they always generate UUIDs for you, which is a neat feature: https://github.com/hwchase17/langchain/blob/bee136efa4393219302208a1a458d32129f5d539/langchain/vectorstores/weaviate.py#L137

However, there are specific use cases where you want to generate UUIDs yourself and pass them in via `add_texts` and `add_documents`. Therefore, I'd like to support a `uuids` field in the `kwargs` argument to these methods, and use those values instead of generating new ones inside the methods.

### Motivation

Both `add_texts` and `add_documents` internally call the [batch.add_data_object](https://weaviate-python-client.readthedocs.io/en/stable/weaviate.batch.html#weaviate.batch.Batch.add_data_object) method of a Weaviate client, whose documentation states:

> Add one object to this batch. NOTE: If the UUID of one of the objects already exists then the existing object will be replaced by the new object.

This behavior is extremely useful when you need to update or delete documents based on a known field of the document.

First, Weaviate accepts UUIDv3 and UUIDv5 as UUID formats: https://weaviate.io/developers/weaviate/more-resources/faq#q-are-there-restrictions-on-uuid-formatting-do-i-have-to-adhere-to-any-standards

And UUIDv5 always produces the same value for a given input string, much like a hash function: https://docs.python.org/2/library/uuid.html

Say you have a unique identifier for each document and use it to generate your own UUIDs. You can then directly update, delete, or replace documents without first searching for them by metadata, saving time, code, network bandwidth, and compute resources.

### Your contribution

I'm attempting to make a PR,
https://github.com/langchain-ai/langchain/issues/4791
https://github.com/langchain-ai/langchain/pull/4800
e78c9be312e5c59ec96f22d6e531c28329ca6312
6561efebb7c1cbd3716f5e7f03f18ad9b3b1afa5
"2023-05-16T15:31:48Z"
python
"2023-05-16T22:26:46Z"
langchain/vectorstores/weaviate.py
    return 1 - 1 / (1 + np.exp(val))


class Weaviate(VectorStore):
    """Wrapper around Weaviate vector database.

    To use, you should have the ``weaviate-client`` python package installed.

    Example:
        .. code-block:: python

            import weaviate
            from langchain.vectorstores import Weaviate

            client = weaviate.Client(url=os.environ["WEAVIATE_URL"], ...)
            weaviate = Weaviate(client, index_name, text_key)

    """

    def __init__(
        self,
        client: Any,
        index_name: str,
        text_key: str,
        embedding: Optional[Embeddings] = None,
        attributes: Optional[List[str]] = None,
        relevance_score_fn: Optional[
            Callable[[float], float]
        ] = _default_score_normalizer,
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,791
Accept UUID list as an argument to add texts and documents into Weaviate vectorstore
### Feature request

When you call the `add_texts` and `add_documents` methods on a Weaviate instance, they always generate UUIDs for you, which is a neat feature: https://github.com/hwchase17/langchain/blob/bee136efa4393219302208a1a458d32129f5d539/langchain/vectorstores/weaviate.py#L137

However, there are specific use cases where you want to generate UUIDs yourself and pass them in via `add_texts` and `add_documents`. Therefore, I'd like to support a `uuids` field in the `kwargs` argument to these methods, and use those values instead of generating new ones inside the methods.

### Motivation

Both `add_texts` and `add_documents` internally call the [batch.add_data_object](https://weaviate-python-client.readthedocs.io/en/stable/weaviate.batch.html#weaviate.batch.Batch.add_data_object) method of a Weaviate client, whose documentation states:

> Add one object to this batch. NOTE: If the UUID of one of the objects already exists then the existing object will be replaced by the new object.

This behavior is extremely useful when you need to update or delete documents based on a known field of the document.

First, Weaviate accepts UUIDv3 and UUIDv5 as UUID formats: https://weaviate.io/developers/weaviate/more-resources/faq#q-are-there-restrictions-on-uuid-formatting-do-i-have-to-adhere-to-any-standards

And UUIDv5 always produces the same value for a given input string, much like a hash function: https://docs.python.org/2/library/uuid.html

Say you have a unique identifier for each document and use it to generate your own UUIDs. You can then directly update, delete, or replace documents without first searching for them by metadata, saving time, code, network bandwidth, and compute resources.

### Your contribution

I'm attempting to make a PR,
https://github.com/langchain-ai/langchain/issues/4791
https://github.com/langchain-ai/langchain/pull/4800
e78c9be312e5c59ec96f22d6e531c28329ca6312
6561efebb7c1cbd3716f5e7f03f18ad9b3b1afa5
"2023-05-16T15:31:48Z"
python
"2023-05-16T22:26:46Z"
langchain/vectorstores/weaviate.py
): """Initialize with Weaviate client.""" try: import weaviate except ImportError: raise ValueError( "Could not import weaviate python package. " "Please install it with `pip install weaviate-client`." ) if not isinstance(client, weaviate.Client): raise ValueError( f"client should be an instance of weaviate.Client, got {type(client)}" ) self._client = client self._index_name = index_name self._embedding = embedding self._text_key = text_key self._query_attrs = [self._text_key] self._relevance_score_fn = relevance_score_fn if attributes is not None: self._query_attrs.extend(attributes) def add_texts( self, texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any, ) -> List[str]: """Upload texts with metadata (properties) to Weaviate.""" from weaviate.util import get_valid_uuid def json_serializable(value: Any) -> Any:
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,791
Accept UUID list as an argument to add texts and documents into Weaviate vectorstore
### Feature request

When you call the `add_texts` and `add_documents` methods on a Weaviate instance, they always generate UUIDs for you, which is a neat feature: https://github.com/hwchase17/langchain/blob/bee136efa4393219302208a1a458d32129f5d539/langchain/vectorstores/weaviate.py#L137

However, there are specific use cases where you want to generate UUIDs yourself and pass them in via `add_texts` and `add_documents`. Therefore, I'd like to support a `uuids` field in the `kwargs` argument to these methods, and use those values instead of generating new ones inside the methods.

### Motivation

Both `add_texts` and `add_documents` internally call the [batch.add_data_object](https://weaviate-python-client.readthedocs.io/en/stable/weaviate.batch.html#weaviate.batch.Batch.add_data_object) method of a Weaviate client, whose documentation states:

> Add one object to this batch. NOTE: If the UUID of one of the objects already exists then the existing object will be replaced by the new object.

This behavior is extremely useful when you need to update or delete documents based on a known field of the document.

First, Weaviate accepts UUIDv3 and UUIDv5 as UUID formats: https://weaviate.io/developers/weaviate/more-resources/faq#q-are-there-restrictions-on-uuid-formatting-do-i-have-to-adhere-to-any-standards

And UUIDv5 always produces the same value for a given input string, much like a hash function: https://docs.python.org/2/library/uuid.html

Say you have a unique identifier for each document and use it to generate your own UUIDs. You can then directly update, delete, or replace documents without first searching for them by metadata, saving time, code, network bandwidth, and compute resources.

### Your contribution

I'm attempting to make a PR,
https://github.com/langchain-ai/langchain/issues/4791
https://github.com/langchain-ai/langchain/pull/4800
e78c9be312e5c59ec96f22d6e531c28329ca6312
6561efebb7c1cbd3716f5e7f03f18ad9b3b1afa5
"2023-05-16T15:31:48Z"
python
"2023-05-16T22:26:46Z"
langchain/vectorstores/weaviate.py
            if isinstance(value, datetime.datetime):
                return value.isoformat()
            return value

        with self._client.batch as batch:
            ids = []
            for i, doc in enumerate(texts):
                data_properties = {
                    self._text_key: doc,
                }
                if metadatas is not None:
                    for key in metadatas[i].keys():
                        data_properties[key] = json_serializable(metadatas[i][key])
                _id = get_valid_uuid(uuid4())
                if self._embedding is not None:
                    embeddings = self._embedding.embed_documents(list(doc))
                    batch.add_data_object(
                        data_object=data_properties,
                        class_name=self._index_name,
                        uuid=_id,
                        vector=embeddings[0],
                    )
                else:
                    batch.add_data_object(
                        data_object=data_properties,
                        class_name=self._index_name,
                        uuid=_id,
                    )
                ids.append(_id)
        return ids

    def similarity_search(
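The chunk above is the exact spot the issue targets: `_id = get_valid_uuid(uuid4())` runs unconditionally. A sketch of how the loop could honor a caller-supplied `uuids` kwarg instead; the kwarg name comes from the issue, while `_choose_id` is a hypothetical helper, not necessarily what the linked PR merged:

    from uuid import uuid4
    from weaviate.util import get_valid_uuid

    def _choose_id(i: int, **kwargs: object) -> str:
        # Reuse the i-th caller-supplied UUID when uuids=[...] is passed,
        # so re-ingesting a document replaces the existing object;
        # otherwise fall back to a fresh random UUID, as above.
        uuids = kwargs.get("uuids")
        return get_valid_uuid(uuid4()) if uuids is None else uuids[i]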
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,791
Accept UUID list as an argument to add texts and documents into Weaviate vectorstore
### Feature request

When you call the `add_texts` and `add_documents` methods on a Weaviate instance, they always generate UUIDs for you, which is a neat feature: https://github.com/hwchase17/langchain/blob/bee136efa4393219302208a1a458d32129f5d539/langchain/vectorstores/weaviate.py#L137

However, there are specific use cases where you want to generate UUIDs yourself and pass them in via `add_texts` and `add_documents`. Therefore, I'd like to support a `uuids` field in the `kwargs` argument to these methods, and use those values instead of generating new ones inside the methods.

### Motivation

Both `add_texts` and `add_documents` internally call the [batch.add_data_object](https://weaviate-python-client.readthedocs.io/en/stable/weaviate.batch.html#weaviate.batch.Batch.add_data_object) method of a Weaviate client, whose documentation states:

> Add one object to this batch. NOTE: If the UUID of one of the objects already exists then the existing object will be replaced by the new object.

This behavior is extremely useful when you need to update or delete documents based on a known field of the document.

First, Weaviate accepts UUIDv3 and UUIDv5 as UUID formats: https://weaviate.io/developers/weaviate/more-resources/faq#q-are-there-restrictions-on-uuid-formatting-do-i-have-to-adhere-to-any-standards

And UUIDv5 always produces the same value for a given input string, much like a hash function: https://docs.python.org/2/library/uuid.html

Say you have a unique identifier for each document and use it to generate your own UUIDs. You can then directly update, delete, or replace documents without first searching for them by metadata, saving time, code, network bandwidth, and compute resources.

### Your contribution

I'm attempting to make a PR,
https://github.com/langchain-ai/langchain/issues/4791
https://github.com/langchain-ai/langchain/pull/4800
e78c9be312e5c59ec96f22d6e531c28329ca6312
6561efebb7c1cbd3716f5e7f03f18ad9b3b1afa5
"2023-05-16T15:31:48Z"
python
"2023-05-16T22:26:46Z"
langchain/vectorstores/weaviate.py
        self, query: str, k: int = 4, **kwargs: Any
    ) -> List[Document]:
        """Return docs most similar to query.

        Args:
            query: Text to look up documents similar to.
            k: Number of Documents to return. Defaults to 4.

        Returns:
            List of Documents most similar to the query.
        """
        content: Dict[str, Any] = {"concepts": [query]}
        if kwargs.get("search_distance"):
            content["certainty"] = kwargs.get("search_distance")
        query_obj = self._client.query.get(self._index_name, self._query_attrs)
        if kwargs.get("where_filter"):
            query_obj = query_obj.with_where(kwargs.get("where_filter"))
        result = query_obj.with_near_text(content).with_limit(k).do()
        if "errors" in result:
            raise ValueError(f"Error during query: {result['errors']}")
        docs = []
        for res in result["data"]["Get"][self._index_name]:
            text = res.pop(self._text_key)
            docs.append(Document(page_content=text, metadata=res))
        return docs

    def similarity_search_by_vector(
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,791
Accept UUID list as an argument to add texts and documents into Weaviate vectorstore
### Feature request

When you call the `add_texts` and `add_documents` methods on a Weaviate instance, they always generate UUIDs for you, which is a neat feature: https://github.com/hwchase17/langchain/blob/bee136efa4393219302208a1a458d32129f5d539/langchain/vectorstores/weaviate.py#L137

However, there are specific use cases where you want to generate UUIDs yourself and pass them in via `add_texts` and `add_documents`. Therefore, I'd like to support a `uuids` field in the `kwargs` argument to these methods, and use those values instead of generating new ones inside the methods.

### Motivation

Both `add_texts` and `add_documents` internally call the [batch.add_data_object](https://weaviate-python-client.readthedocs.io/en/stable/weaviate.batch.html#weaviate.batch.Batch.add_data_object) method of a Weaviate client, whose documentation states:

> Add one object to this batch. NOTE: If the UUID of one of the objects already exists then the existing object will be replaced by the new object.

This behavior is extremely useful when you need to update or delete documents based on a known field of the document.

First, Weaviate accepts UUIDv3 and UUIDv5 as UUID formats: https://weaviate.io/developers/weaviate/more-resources/faq#q-are-there-restrictions-on-uuid-formatting-do-i-have-to-adhere-to-any-standards

And UUIDv5 always produces the same value for a given input string, much like a hash function: https://docs.python.org/2/library/uuid.html

Say you have a unique identifier for each document and use it to generate your own UUIDs. You can then directly update, delete, or replace documents without first searching for them by metadata, saving time, code, network bandwidth, and compute resources.

### Your contribution

I'm attempting to make a PR,
https://github.com/langchain-ai/langchain/issues/4791
https://github.com/langchain-ai/langchain/pull/4800
e78c9be312e5c59ec96f22d6e531c28329ca6312
6561efebb7c1cbd3716f5e7f03f18ad9b3b1afa5
"2023-05-16T15:31:48Z"
python
"2023-05-16T22:26:46Z"
langchain/vectorstores/weaviate.py
        self, embedding: List[float], k: int = 4, **kwargs: Any
    ) -> List[Document]:
        """Look up similar documents by embedding vector in Weaviate."""
        vector = {"vector": embedding}
        query_obj = self._client.query.get(self._index_name, self._query_attrs)
        if kwargs.get("where_filter"):
            query_obj = query_obj.with_where(kwargs.get("where_filter"))
        result = query_obj.with_near_vector(vector).with_limit(k).do()
        if "errors" in result:
            raise ValueError(f"Error during query: {result['errors']}")
        docs = []
        for res in result["data"]["Get"][self._index_name]:
            text = res.pop(self._text_key)
            docs.append(Document(page_content=text, metadata=res))
        return docs

    def max_marginal_relevance_search(
        self,
        query: str,
        k: int = 4,
        fetch_k: int = 20,
        lambda_mult: float = 0.5,
        **kwargs: Any,
    ) -> List[Document]:
        """Return docs selected using the maximal marginal relevance.

        Maximal marginal relevance optimizes for similarity to query AND diversity
        among selected documents.

        Args:
            query: Text to look up documents similar to.
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,791
Accept UUID list as an argument to add texts and documents into Weaviate vectorstore
### Feature request

When you call the `add_texts` and `add_documents` methods on a Weaviate instance, they always generate UUIDs for you, which is a neat feature: https://github.com/hwchase17/langchain/blob/bee136efa4393219302208a1a458d32129f5d539/langchain/vectorstores/weaviate.py#L137

However, there are specific use cases where you want to generate UUIDs yourself and pass them in via `add_texts` and `add_documents`. Therefore, I'd like to support a `uuids` field in the `kwargs` argument to these methods, and use those values instead of generating new ones inside the methods.

### Motivation

Both `add_texts` and `add_documents` internally call the [batch.add_data_object](https://weaviate-python-client.readthedocs.io/en/stable/weaviate.batch.html#weaviate.batch.Batch.add_data_object) method of a Weaviate client, whose documentation states:

> Add one object to this batch. NOTE: If the UUID of one of the objects already exists then the existing object will be replaced by the new object.

This behavior is extremely useful when you need to update or delete documents based on a known field of the document.

First, Weaviate accepts UUIDv3 and UUIDv5 as UUID formats: https://weaviate.io/developers/weaviate/more-resources/faq#q-are-there-restrictions-on-uuid-formatting-do-i-have-to-adhere-to-any-standards

And UUIDv5 always produces the same value for a given input string, much like a hash function: https://docs.python.org/2/library/uuid.html

Say you have a unique identifier for each document and use it to generate your own UUIDs. You can then directly update, delete, or replace documents without first searching for them by metadata, saving time, code, network bandwidth, and compute resources.

### Your contribution

I'm attempting to make a PR,
https://github.com/langchain-ai/langchain/issues/4791
https://github.com/langchain-ai/langchain/pull/4800
e78c9be312e5c59ec96f22d6e531c28329ca6312
6561efebb7c1cbd3716f5e7f03f18ad9b3b1afa5
"2023-05-16T15:31:48Z"
python
"2023-05-16T22:26:46Z"
langchain/vectorstores/weaviate.py
            k: Number of Documents to return. Defaults to 4.
            fetch_k: Number of Documents to fetch to pass to MMR algorithm.
            lambda_mult: Number between 0 and 1 that determines the degree
                of diversity among the results with 0 corresponding
                to maximum diversity and 1 to minimum diversity.
                Defaults to 0.5.

        Returns:
            List of Documents selected by maximal marginal relevance.
        """
        if self._embedding is not None:
            embedding = self._embedding.embed_query(query)
        else:
            raise ValueError(
                "max_marginal_relevance_search requires a suitable Embeddings object"
            )
        return self.max_marginal_relevance_search_by_vector(
            embedding, k=k, fetch_k=fetch_k, lambda_mult=lambda_mult, **kwargs
        )

    def max_marginal_relevance_search_by_vector(
        self,
        embedding: List[float],
        k: int = 4,
        fetch_k: int = 20,
        lambda_mult: float = 0.5,
        **kwargs: Any,
    ) -> List[Document]:
        """Return docs selected using the maximal marginal relevance.

        Maximal marginal relevance optimizes for similarity to query AND diversity
        among selected documents.

        Args:
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,791
Accept UUID list as an argument to add texts and documents into Weaviate vectorstore
### Feature request

When you call the `add_texts` and `add_documents` methods on a Weaviate instance, they always generate UUIDs for you, which is a neat feature: https://github.com/hwchase17/langchain/blob/bee136efa4393219302208a1a458d32129f5d539/langchain/vectorstores/weaviate.py#L137

However, there are specific use cases where you want to generate UUIDs yourself and pass them in via `add_texts` and `add_documents`. Therefore, I'd like to support a `uuids` field in the `kwargs` argument to these methods, and use those values instead of generating new ones inside the methods.

### Motivation

Both `add_texts` and `add_documents` internally call the [batch.add_data_object](https://weaviate-python-client.readthedocs.io/en/stable/weaviate.batch.html#weaviate.batch.Batch.add_data_object) method of a Weaviate client, whose documentation states:

> Add one object to this batch. NOTE: If the UUID of one of the objects already exists then the existing object will be replaced by the new object.

This behavior is extremely useful when you need to update or delete documents based on a known field of the document.

First, Weaviate accepts UUIDv3 and UUIDv5 as UUID formats: https://weaviate.io/developers/weaviate/more-resources/faq#q-are-there-restrictions-on-uuid-formatting-do-i-have-to-adhere-to-any-standards

And UUIDv5 always produces the same value for a given input string, much like a hash function: https://docs.python.org/2/library/uuid.html

Say you have a unique identifier for each document and use it to generate your own UUIDs. You can then directly update, delete, or replace documents without first searching for them by metadata, saving time, code, network bandwidth, and compute resources.

### Your contribution

I'm attempting to make a PR,
https://github.com/langchain-ai/langchain/issues/4791
https://github.com/langchain-ai/langchain/pull/4800
e78c9be312e5c59ec96f22d6e531c28329ca6312
6561efebb7c1cbd3716f5e7f03f18ad9b3b1afa5
"2023-05-16T15:31:48Z"
python
"2023-05-16T22:26:46Z"
langchain/vectorstores/weaviate.py
            embedding: Embedding to look up documents similar to.
            k: Number of Documents to return. Defaults to 4.
            fetch_k: Number of Documents to fetch to pass to MMR algorithm.
            lambda_mult: Number between 0 and 1 that determines the degree
                of diversity among the results with 0 corresponding
                to maximum diversity and 1 to minimum diversity.
                Defaults to 0.5.

        Returns:
            List of Documents selected by maximal marginal relevance.
        """
        vector = {"vector": embedding}
        query_obj = self._client.query.get(self._index_name, self._query_attrs)
        if kwargs.get("where_filter"):
            query_obj = query_obj.with_where(kwargs.get("where_filter"))
        results = (
            query_obj.with_additional("vector")
            .with_near_vector(vector)
            .with_limit(fetch_k)
            .do()
        )
        payload = results["data"]["Get"][self._index_name]
        embeddings = [result["_additional"]["vector"] for result in payload]
        mmr_selected = maximal_marginal_relevance(
            np.array(embedding), embeddings, k=k, lambda_mult=lambda_mult
        )
        docs = []
        for idx in mmr_selected:
            text = payload[idx].pop(self._text_key)
            payload[idx].pop("_additional")
            meta = payload[idx]
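For readers unfamiliar with the `maximal_marginal_relevance` call above, a self-contained sketch of the greedy selection it performs; this illustrates the algorithm, not langchain's exact implementation, and similarities are plain dot products, so vectors are assumed normalized:

    import numpy as np

    def mmr_select(query_vec, doc_vecs, k=4, lambda_mult=0.5):
        # Greedy MMR: balance similarity to the query against similarity
        # to documents already selected. lambda_mult=1 is pure relevance,
        # lambda_mult=0 is maximum diversity.
        doc_vecs = np.asarray(doc_vecs, dtype=float)
        sim_to_query = doc_vecs @ np.asarray(query_vec, dtype=float)
        selected = []
        while len(selected) < min(k, len(doc_vecs)):
            best, best_score = -1, -np.inf
            for i in range(len(doc_vecs)):
                if i in selected:
                    continue
                redundancy = max(
                    (float(doc_vecs[i] @ doc_vecs[j]) for j in selected), default=0.0
                )
                score = lambda_mult * sim_to_query[i] - (1 - lambda_mult) * redundancy
                if score > best_score:
                    best, best_score = i, score
            selected.append(best)
        return selected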
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,791
Accept UUID list as an argument to add texts and documents into Weaviate vectorstore
### Feature request

When you call the `add_texts` and `add_documents` methods on a Weaviate instance, they always generate UUIDs for you, which is a neat feature: https://github.com/hwchase17/langchain/blob/bee136efa4393219302208a1a458d32129f5d539/langchain/vectorstores/weaviate.py#L137

However, there are specific use cases where you want to generate UUIDs yourself and pass them in via `add_texts` and `add_documents`. Therefore, I'd like to support a `uuids` field in the `kwargs` argument to these methods, and use those values instead of generating new ones inside the methods.

### Motivation

Both `add_texts` and `add_documents` internally call the [batch.add_data_object](https://weaviate-python-client.readthedocs.io/en/stable/weaviate.batch.html#weaviate.batch.Batch.add_data_object) method of a Weaviate client, whose documentation states:

> Add one object to this batch. NOTE: If the UUID of one of the objects already exists then the existing object will be replaced by the new object.

This behavior is extremely useful when you need to update or delete documents based on a known field of the document.

First, Weaviate accepts UUIDv3 and UUIDv5 as UUID formats: https://weaviate.io/developers/weaviate/more-resources/faq#q-are-there-restrictions-on-uuid-formatting-do-i-have-to-adhere-to-any-standards

And UUIDv5 always produces the same value for a given input string, much like a hash function: https://docs.python.org/2/library/uuid.html

Say you have a unique identifier for each document and use it to generate your own UUIDs. You can then directly update, delete, or replace documents without first searching for them by metadata, saving time, code, network bandwidth, and compute resources.

### Your contribution

I'm attempting to make a PR,
https://github.com/langchain-ai/langchain/issues/4791
https://github.com/langchain-ai/langchain/pull/4800
e78c9be312e5c59ec96f22d6e531c28329ca6312
6561efebb7c1cbd3716f5e7f03f18ad9b3b1afa5
"2023-05-16T15:31:48Z"
python
"2023-05-16T22:26:46Z"
langchain/vectorstores/weaviate.py
            docs.append(Document(page_content=text, metadata=meta))
        return docs

    def similarity_search_with_score(
        self, query: str, k: int = 4, **kwargs: Any
    ) -> List[Tuple[Document, float]]:
        content: Dict[str, Any] = {"concepts": [query]}
        if kwargs.get("search_distance"):
            content["certainty"] = kwargs.get("search_distance")
        query_obj = self._client.query.get(self._index_name, self._query_attrs)
        result = (
            query_obj.with_near_text(content)
            .with_limit(k)
            .with_additional("vector")
            .do()
        )
        if "errors" in result:
            raise ValueError(f"Error during query: {result['errors']}")
        docs_and_scores = []
        if self._embedding is None:
            raise ValueError(
                "_embedding cannot be None for similarity_search_with_score"
            )
        for res in result["data"]["Get"][self._index_name]:
            text = res.pop(self._text_key)
            score = np.dot(
                res["_additional"]["vector"], self._embedding.embed_query(query)
            )
            docs_and_scores.append((Document(page_content=text, metadata=res), score))
        return docs_and_scores

    def _similarity_search_with_relevance_scores(
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,791
Accept UUID list as an argument to add texts and documents into Weaviate vectorstore
### Feature request

When you call the `add_texts` and `add_documents` methods on a Weaviate instance, they always generate UUIDs for you, which is a neat feature: https://github.com/hwchase17/langchain/blob/bee136efa4393219302208a1a458d32129f5d539/langchain/vectorstores/weaviate.py#L137

However, there are specific use cases where you want to generate UUIDs yourself and pass them in via `add_texts` and `add_documents`. Therefore, I'd like to support a `uuids` field in the `kwargs` argument to these methods, and use those values instead of generating new ones inside the methods.

### Motivation

Both `add_texts` and `add_documents` internally call the [batch.add_data_object](https://weaviate-python-client.readthedocs.io/en/stable/weaviate.batch.html#weaviate.batch.Batch.add_data_object) method of a Weaviate client, whose documentation states:

> Add one object to this batch. NOTE: If the UUID of one of the objects already exists then the existing object will be replaced by the new object.

This behavior is extremely useful when you need to update or delete documents based on a known field of the document.

First, Weaviate accepts UUIDv3 and UUIDv5 as UUID formats: https://weaviate.io/developers/weaviate/more-resources/faq#q-are-there-restrictions-on-uuid-formatting-do-i-have-to-adhere-to-any-standards

And UUIDv5 always produces the same value for a given input string, much like a hash function: https://docs.python.org/2/library/uuid.html

Say you have a unique identifier for each document and use it to generate your own UUIDs. You can then directly update, delete, or replace documents without first searching for them by metadata, saving time, code, network bandwidth, and compute resources.

### Your contribution

I'm attempting to make a PR,
https://github.com/langchain-ai/langchain/issues/4791
https://github.com/langchain-ai/langchain/pull/4800
e78c9be312e5c59ec96f22d6e531c28329ca6312
6561efebb7c1cbd3716f5e7f03f18ad9b3b1afa5
"2023-05-16T15:31:48Z"
python
"2023-05-16T22:26:46Z"
langchain/vectorstores/weaviate.py
        self,
        query: str,
        k: int = 4,
        **kwargs: Any,
    ) -> List[Tuple[Document, float]]:
        """Return docs and relevance scores, normalized on a scale from 0 to 1.

        0 is dissimilar, 1 is most similar.
        """
        if self._relevance_score_fn is None:
            raise ValueError(
                "relevance_score_fn must be provided to"
                " Weaviate constructor to normalize scores"
            )
        docs_and_scores = self.similarity_search_with_score(query, k=k)
        return [
            (doc, self._relevance_score_fn(score)) for doc, score in docs_and_scores
        ]

    @classmethod
    def from_texts(
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,791
Accept UUID list as an argument to add texts and documents into Weaviate vectorstore
### Feature request

When you call the `add_texts` and `add_documents` methods on a Weaviate instance, they always generate UUIDs for you, which is a neat feature: https://github.com/hwchase17/langchain/blob/bee136efa4393219302208a1a458d32129f5d539/langchain/vectorstores/weaviate.py#L137

However, there are specific use cases where you want to generate UUIDs yourself and pass them in via `add_texts` and `add_documents`. Therefore, I'd like to support a `uuids` field in the `kwargs` argument to these methods, and use those values instead of generating new ones inside the methods.

### Motivation

Both `add_texts` and `add_documents` internally call the [batch.add_data_object](https://weaviate-python-client.readthedocs.io/en/stable/weaviate.batch.html#weaviate.batch.Batch.add_data_object) method of a Weaviate client, whose documentation states:

> Add one object to this batch. NOTE: If the UUID of one of the objects already exists then the existing object will be replaced by the new object.

This behavior is extremely useful when you need to update or delete documents based on a known field of the document.

First, Weaviate accepts UUIDv3 and UUIDv5 as UUID formats: https://weaviate.io/developers/weaviate/more-resources/faq#q-are-there-restrictions-on-uuid-formatting-do-i-have-to-adhere-to-any-standards

And UUIDv5 always produces the same value for a given input string, much like a hash function: https://docs.python.org/2/library/uuid.html

Say you have a unique identifier for each document and use it to generate your own UUIDs. You can then directly update, delete, or replace documents without first searching for them by metadata, saving time, code, network bandwidth, and compute resources.

### Your contribution

I'm attempting to make a PR,
https://github.com/langchain-ai/langchain/issues/4791
https://github.com/langchain-ai/langchain/pull/4800
e78c9be312e5c59ec96f22d6e531c28329ca6312
6561efebb7c1cbd3716f5e7f03f18ad9b3b1afa5
"2023-05-16T15:31:48Z"
python
"2023-05-16T22:26:46Z"
langchain/vectorstores/weaviate.py
        cls: Type[Weaviate],
        texts: List[str],
        embedding: Embeddings,
        metadatas: Optional[List[dict]] = None,
        **kwargs: Any,
    ) -> Weaviate:
        """Construct Weaviate wrapper from raw documents.

        This is a user-friendly interface that:
            1. Embeds documents.
            2. Creates a new index for the embeddings in the Weaviate instance.
            3. Adds the documents to the newly created Weaviate index.

        This is intended to be a quick way to get started.

        Example:
            .. code-block:: python

                from langchain.vectorstores.weaviate import Weaviate
                from langchain.embeddings import OpenAIEmbeddings

                embeddings = OpenAIEmbeddings()
                weaviate = Weaviate.from_texts(
                    texts,
                    embeddings,
                    weaviate_url="http://localhost:8080"
                )
        """
        client = _create_weaviate_client(**kwargs)

        from weaviate.util import get_valid_uuid

        index_name = kwargs.get("index_name", f"LangChain_{uuid4().hex}")
        embeddings = embedding.embed_documents(texts) if embedding else None
text_key = "text" schema = _default_schema(index_name) attributes = list(metadatas[0].keys()) if metadatas else None if not client.schema.contains(schema): client.schema.create_class(schema) with client.batch as batch: for i, text in enumerate(texts): data_properties = { text_key: text, } if metadatas is not None: for key in metadatas[i].keys(): data_properties[key] = metadatas[i][key] _id = get_valid_uuid(uuid4()) params = { "uuid": _id, "data_object": data_properties, "class_name": index_name, } if embeddings is not None: params["vector"] = embeddings[i] batch.add_data_object(**params) batch.flush() return cls(client, index_name, text_key, embedding, attributes)
tests/integration_tests/retrievers/test_weaviate_hybrid_search.py
"""Test Weaviate functionality.""" import logging import os from typing import Generator, Union from uuid import uuid4 import pytest from weaviate import Client from langchain.docstore.document import Document from langchain.retrievers.weaviate_hybrid_search import WeaviateHybridSearchRetriever logging.basicConfig(level=logging.DEBUG) """ cd tests/integration_tests/vectorstores/docker-compose docker compose -f weaviate.yml up """ class TestWeaviateHybridSearchRetriever: @classmethod def setup_class(cls) -> None: if not os.getenv("OPENAI_API_KEY"): raise ValueError("OPENAI_API_KEY environment variable is not set") @pytest.fixture(scope="class", autouse=True) def weaviate_url(self) -> Union[str, Generator[str, None, None]]:
"""Return the weaviate url.""" url = "http://localhost:8080" yield url client = Client(url) client.schema.delete_all() @pytest.mark.vcr(ignore_localhost=True) def test_get_relevant_documents(self, weaviate_url: str) -> None: """Test end to end construction and MRR search.""" texts = ["foo", "bar", "baz"] metadatas = [{"page": i} for i in range(len(texts))] client = Client(weaviate_url) retriever = WeaviateHybridSearchRetriever( client=client, index_name=f"LangChain_{uuid4().hex}", text_key="text", attributes=["page"], ) for i, text in enumerate(texts): retriever.add_documents( [Document(page_content=text, metadata=metadatas[i])] ) output = retriever.get_relevant_documents("foo") assert output == [ Document(page_content="foo", metadata={"page": 0}), Document(page_content="baz", metadata={"page": 2}), Document(page_content="bar", metadata={"page": 1}), ] @pytest.mark.vcr(ignore_localhost=True) def test_get_relevant_documents_with_filter(self, weaviate_url: str) -> None:
"""Test end to end construction and MRR search.""" texts = ["foo", "bar", "baz"] metadatas = [{"page": i} for i in range(len(texts))] client = Client(weaviate_url) retriever = WeaviateHybridSearchRetriever( client=client, index_name=f"LangChain_{uuid4().hex}", text_key="text", attributes=["page"], ) for i, text in enumerate(texts): retriever.add_documents( [Document(page_content=text, metadata=metadatas[i])] ) where_filter = {"path": ["page"], "operator": "Equal", "valueNumber": 0} output = retriever.get_relevant_documents("foo", where_filter=where_filter) assert output == [ Document(page_content="foo", metadata={"page": 0}), ]
tests/integration_tests/vectorstores/test_weaviate.py
"""Test Weaviate functionality.""" import logging import os from typing import Generator, Union import pytest from weaviate import Client from langchain.docstore.document import Document from langchain.embeddings.openai import OpenAIEmbeddings from langchain.vectorstores.weaviate import Weaviate from tests.integration_tests.vectorstores.fake_embeddings import FakeEmbeddings logging.basicConfig(level=logging.DEBUG) """ cd tests/integration_tests/vectorstores/docker-compose docker compose -f weaviate.yml up """ class TestWeaviate: @classmethod def setup_class(cls) -> None: if not os.getenv("OPENAI_API_KEY"): raise ValueError("OPENAI_API_KEY environment variable is not set") @pytest.fixture(scope="class", autouse=True) def weaviate_url(self) -> Union[str, Generator[str, None, None]]: """Return the weaviate url.""" url = "http://localhost:8080" yield url client = Client(url) client.schema.delete_all() @pytest.mark.vcr(ignore_localhost=True) def test_similarity_search_without_metadata(
        self, weaviate_url: str, embedding_openai: OpenAIEmbeddings
    ) -> None:
        """Test end to end construction and search without metadata."""
        texts = ["foo", "bar", "baz"]
        docsearch = Weaviate.from_texts(
            texts,
            embedding_openai,
            weaviate_url=weaviate_url,
        )

        output = docsearch.similarity_search("foo", k=1)
        assert output == [Document(page_content="foo")]

    @pytest.mark.vcr(ignore_localhost=True)
    def test_similarity_search_with_metadata(
        self, weaviate_url: str, embedding_openai: OpenAIEmbeddings
    ) -> None:
        """Test end to end construction and search with metadata."""
        texts = ["foo", "bar", "baz"]
        metadatas = [{"page": i} for i in range(len(texts))]
        docsearch = Weaviate.from_texts(
            texts, embedding_openai, metadatas=metadatas, weaviate_url=weaviate_url
        )
        output = docsearch.similarity_search("foo", k=1)
        assert output == [Document(page_content="foo", metadata={"page": 0})]

    @pytest.mark.vcr(ignore_localhost=True)
    def test_similarity_search_with_metadata_and_filter(
        self, weaviate_url: str, embedding_openai: OpenAIEmbeddings
    ) -> None:
        """Test end to end construction and search with metadata."""
        texts = ["foo", "bar", "baz"]
        metadatas = [{"page": i} for i in range(len(texts))]
        docsearch = Weaviate.from_texts(
            texts, embedding_openai, metadatas=metadatas, weaviate_url=weaviate_url
        )
        output = docsearch.similarity_search(
            "foo",
            k=2,
            where_filter={"path": ["page"], "operator": "Equal", "valueNumber": 0},
        )
        assert output == [Document(page_content="foo", metadata={"page": 0})]

    @pytest.mark.vcr(ignore_localhost=True)
    def test_max_marginal_relevance_search(
        self, weaviate_url: str, embedding_openai: OpenAIEmbeddings
    ) -> None:
        """Test end to end construction and MRR search."""
        texts = ["foo", "bar", "baz"]
        metadatas = [{"page": i} for i in range(len(texts))]
        docsearch = Weaviate.from_texts(
            texts, embedding_openai, metadatas=metadatas, weaviate_url=weaviate_url
        )

        standard_ranking = docsearch.similarity_search("foo", k=2)
        output = docsearch.max_marginal_relevance_search(
            "foo", k=2, fetch_k=3, lambda_mult=1.0
        )
        assert output == standard_ranking
        output = docsearch.max_marginal_relevance_search(
            "foo", k=2, fetch_k=3, lambda_mult=0.0
        )
        assert output == [
            Document(page_content="foo", metadata={"page": 0}),
            Document(page_content="bar", metadata={"page": 1}),
        ]

    @pytest.mark.vcr(ignore_localhost=True)
    def test_max_marginal_relevance_search_by_vector(
        self, weaviate_url: str, embedding_openai: OpenAIEmbeddings
    ) -> None:
        """Test end to end construction and MRR search by vector."""
        texts = ["foo", "bar", "baz"]
        metadatas = [{"page": i} for i in range(len(texts))]
        docsearch = Weaviate.from_texts(
            texts, embedding_openai, metadatas=metadatas, weaviate_url=weaviate_url
        )
        foo_embedding = embedding_openai.embed_query("foo")

        standard_ranking = docsearch.similarity_search("foo", k=2)
        output = docsearch.max_marginal_relevance_search_by_vector(
            foo_embedding, k=2, fetch_k=3, lambda_mult=1.0
        )
        assert output == standard_ranking

        output = docsearch.max_marginal_relevance_search_by_vector(
            foo_embedding, k=2, fetch_k=3, lambda_mult=0.0
        )
        assert output == [
Document(page_content="foo", metadata={"page": 0}), Document(page_content="bar", metadata={"page": 1}), ] @pytest.mark.vcr(ignore_localhost=True) def test_max_marginal_relevance_search_with_filter( self, weaviate_url: str, embedding_openai: OpenAIEmbeddings ) -> None: """Test end to end construction and MRR search.""" texts = ["foo", "bar", "baz"] metadatas = [{"page": i} for i in range(len(texts))] docsearch = Weaviate.from_texts( texts, embedding_openai, metadatas=metadatas, weaviate_url=weaviate_url ) where_filter = {"path": ["page"], "operator": "Equal", "valueNumber": 0} standard_ranking = docsearch.similarity_search( "foo", k=2, where_filter=where_filter ) output = docsearch.max_marginal_relevance_search( "foo", k=2, fetch_k=3, lambda_mult=1.0, where_filter=where_filter ) assert output == standard_ranking output = docsearch.max_marginal_relevance_search( "foo", k=2, fetch_k=3, lambda_mult=0.0, where_filter=where_filter ) assert output == [ Document(page_content="foo", metadata={"page": 0}), ] def test_add_texts_with_given_embedding(self, weaviate_url: str) -> None:
texts = ["foo", "bar", "baz"] embedding = FakeEmbeddings() docsearch = Weaviate.from_texts( texts, embedding=embedding, weaviate_url=weaviate_url ) docsearch.add_texts(["foo"]) output = docsearch.similarity_search_by_vector( embedding.embed_query("foo"), k=2 ) assert output == [ Document(page_content="foo"), Document(page_content="foo"), ]
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,498
Cannot subclass OpenAIEmbeddings
### System Info

- langchain: 0.0.163
- python: 3.9.16
- OS: Ubuntu 22.04

### Who can help?

@shibanovp @hwchase17

### Information

- [ ] The official example notebooks/scripts
- [X] My own modified scripts

### Related Components

- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction

Steps to reproduce: the following code snippet raises an error:

```python
from langchain.embeddings import OpenAIEmbeddings

class AzureOpenAIEmbeddings(OpenAIEmbeddings):
    pass
```

Error:

```
Traceback (most recent call last):
  File "test.py", line 3, in <module>
    class AzureOpenAIEmbeddings(OpenAIEmbeddings):
  File "pydantic/main.py", line 139, in pydantic.main.ModelMetaclass.__new__
  File "pydantic/utils.py", line 693, in pydantic.utils.smart_deepcopy
  File "../lib/python3.9/copy.py", line 146, in deepcopy
    y = copier(x, memo)
  File "../lib/python3.9/copy.py", line 230, in _deepcopy_dict
    y[deepcopy(key, memo)] = deepcopy(value, memo)
  File "../lib/python3.9/copy.py", line 172, in deepcopy
    y = _reconstruct(x, memo, *rv)
  File "../lib/python3.9/copy.py", line 270, in _reconstruct
    state = deepcopy(state, memo)
  File "../lib/python3.9/copy.py", line 146, in deepcopy
    y = copier(x, memo)
  File "../lib/python3.9/copy.py", line 210, in _deepcopy_tuple
    y = [deepcopy(a, memo) for a in x]
  File "../lib/python3.9/copy.py", line 210, in <listcomp>
    y = [deepcopy(a, memo) for a in x]
  File "../lib/python3.9/copy.py", line 146, in deepcopy
    y = copier(x, memo)
  File "../lib/python3.9/copy.py", line 230, in _deepcopy_dict
    y[deepcopy(key, memo)] = deepcopy(value, memo)
  File "../lib/python3.9/copy.py", line 172, in deepcopy
    y = _reconstruct(x, memo, *rv)
  File "../lib/python3.9/copy.py", line 264, in _reconstruct
    y = func(*args)
  File "../lib/python3.9/copy.py", line 263, in <genexpr>
    args = (deepcopy(arg, memo) for arg in args)
  File "../lib/python3.9/copy.py", line 146, in deepcopy
    y = copier(x, memo)
  File "../lib/python3.9/copy.py", line 210, in _deepcopy_tuple
    y = [deepcopy(a, memo) for a in x]
  File "../lib/python3.9/copy.py", line 210, in <listcomp>
    y = [deepcopy(a, memo) for a in x]
  File "../lib/python3.9/copy.py", line 172, in deepcopy
    y = _reconstruct(x, memo, *rv)
  File "../lib/python3.9/copy.py", line 264, in _reconstruct
    y = func(*args)
  File "../lib/python3.9/typing.py", line 277, in inner
    return func(*args, **kwds)
  File "../lib/python3.9/typing.py", line 920, in __getitem__
    params = tuple(_type_check(p, msg) for p in params)
  File "../lib/python3.9/typing.py", line 920, in <genexpr>
    params = tuple(_type_check(p, msg) for p in params)
  File "../lib/python3.9/typing.py", line 166, in _type_check
    raise TypeError(f"{msg} Got {arg!r:.100}.")
TypeError: Tuple[t0, t1, ...]: each t must be a type. Got ().
```

### Expected behavior

Subclassing should work as normal.
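The traceback bottoms out in `typing`'s handling of the empty-tuple type: pydantic deep-copies inherited field definitions when building a subclass, and on Python 3.9 deep-copying `Tuple[()]` fails. An illustrative sketch that reproduces the same error without langchain (assuming pydantic v1 on Python 3.9; this is a reconstruction, not code from the issue):

```python
from typing import Literal, Set, Tuple, Union

from pydantic import BaseModel


class Parent(BaseModel):
    # Same annotation as OpenAIEmbeddings.disallowed_special.
    disallowed_special: Union[Literal["all"], Set[str], Tuple[()]] = "all"


# Defining Parent works; subclassing it triggers pydantic's deepcopy of the
# inherited fields and raises on Python 3.9:
# TypeError: Tuple[t0, t1, ...]: each t must be a type. Got ().
class Child(Parent):
    pass
```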
https://github.com/langchain-ai/langchain/issues/4498
https://github.com/langchain-ai/langchain/pull/4500
08df80bed6e36150ea7c17fa15094a38d3ec546f
49e4aaf67326b3185405bdefb36efe79e4705a59
"2023-05-11T04:42:23Z"
python
"2023-05-17T01:35:19Z"
langchain/embeddings/openai.py
"""Wrapper around OpenAI embedding models.""" from __future__ import annotations import logging from typing import ( Any, Callable, Dict, List, Literal, Optional, Set, Tuple, Union, ) import numpy as np from pydantic import BaseModel, Extra, root_validator from tenacity import ( before_sleep_log, retry, retry_if_exception_type, stop_after_attempt, wait_exponential, ) from langchain.embeddings.base import Embeddings from langchain.utils import get_from_dict_or_env logger = logging.getLogger(__name__) def _create_retry_decorator(embeddings: OpenAIEmbeddings) -> Callable[[Any], Any]:
    import openai

    min_seconds = 4
    max_seconds = 10
    # Exponential backoff: waits grow as powers of two, clamped to the
    # [min_seconds, max_seconds] range, until max_retries attempts are spent.
    return retry(
        reraise=True,
        stop=stop_after_attempt(embeddings.max_retries),
        wait=wait_exponential(multiplier=1, min=min_seconds, max=max_seconds),
        retry=(
            retry_if_exception_type(openai.error.Timeout)
            | retry_if_exception_type(openai.error.APIError)
            | retry_if_exception_type(openai.error.APIConnectionError)
            | retry_if_exception_type(openai.error.RateLimitError)
            | retry_if_exception_type(openai.error.ServiceUnavailableError)
        ),
        before_sleep=before_sleep_log(logger, logging.WARNING),
    )


def embed_with_retry(embeddings: OpenAIEmbeddings, **kwargs: Any) -> Any:
    """Use tenacity to retry the embedding call."""
    retry_decorator = _create_retry_decorator(embeddings)

    @retry_decorator
    def _embed_with_retry(**kwargs: Any) -> Any:
        return embeddings.client.create(**kwargs)

    return _embed_with_retry(**kwargs)


class OpenAIEmbeddings(BaseModel, Embeddings):
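To make the backoff schedule concrete, a standalone tenacity sketch with the same parameters, independent of OpenAI (the failing function is a stand-in, not the library's code):

```python
import logging

from tenacity import before_sleep_log, retry, stop_after_attempt, wait_exponential

logger = logging.getLogger(__name__)


@retry(
    reraise=True,
    stop=stop_after_attempt(6),  # mirrors the default max_retries above
    wait=wait_exponential(multiplier=1, min=4, max=10),  # ~4s up to a 10s cap
    before_sleep=before_sleep_log(logger, logging.WARNING),
)
def flaky_call() -> str:
    # Stand-in for the embedding request; raising triggers a retry.
    raise RuntimeError("transient failure")
```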
"""Wrapper around OpenAI embedding models. To use, you should have the ``openai`` python package installed, and the environment variable ``OPENAI_API_KEY`` set with your API key or pass it as a named parameter to the constructor. Example: .. code-block:: python from langchain.embeddings import OpenAIEmbeddings openai = OpenAIEmbeddings(openai_api_key="my-api-key") In order to use the library with Microsoft Azure endpoints, you need to set the OPENAI_API_TYPE, OPENAI_API_BASE, OPENAI_API_KEY and OPENAI_API_VERSION. The OPENAI_API_TYPE must be set to 'azure' and the others correspond to the properties of your endpoint. In addition, the deployment name must be passed as the model parameter. Example: .. code-block:: python import os os.environ["OPENAI_API_TYPE"] = "azure" os.environ["OPENAI_API_BASE"] = "https://<your-endpoint.openai.azure.com/" os.environ["OPENAI_API_KEY"] = "your AzureOpenAI key" os.environ["OPENAI_API_VERSION"] = "2023-03-15-preview" from langchain.embeddings.openai import OpenAIEmbeddings
            embeddings = OpenAIEmbeddings(
                deployment="your-embeddings-deployment-name",
                model="your-embeddings-model-name",
                api_base="https://your-endpoint.openai.azure.com/",
                api_type="azure",
            )
            text = "This is a test query."
            query_result = embeddings.embed_query(text)
    """

    client: Any
    model: str = "text-embedding-ada-002"
    deployment: str = model
    openai_api_version: Optional[str] = None
    openai_api_base: Optional[str] = None
    openai_api_type: Optional[str] = None
    embedding_ctx_length: int = 8191
    openai_api_key: Optional[str] = None
    openai_organization: Optional[str] = None
    allowed_special: Union[Literal["all"], Set[str]] = set()
    disallowed_special: Union[Literal["all"], Set[str], Tuple[()]] = "all"
    chunk_size: int = 1000
    """Maximum number of texts to embed in each batch"""
    max_retries: int = 6
    """Maximum number of retries to make when generating."""
    request_timeout: Optional[Union[float, Tuple[float, float]]] = None
    """Timeout in seconds for the OpenAPI request."""
    headers: Any = None

    class Config:
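The `disallowed_special` annotation above contains `Tuple[()]`, the empty-tuple type implicated in issue #4498. A hedged sketch of one way to sidestep the deepcopy failure; this illustrates the idea only and is not necessarily the change merged in PR #4500:

```python
from typing import Literal, Sequence, Set, Union

# A possibly-empty sequence of strings (which includes the empty tuple)
# expressed without Tuple[()], the annotation Python 3.9 cannot deep-copy.
disallowed_special: Union[Literal["all"], Set[str], Sequence[str]] = "all"
```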
"""Configuration for this pydantic object.""" extra = Extra.forbid @root_validator() def validate_environment(cls, values: Dict) -> Dict: """Validate that api key and python package exists in environment.""" openai_api_key = get_from_dict_or_env( values, "openai_api_key", "OPENAI_API_KEY" ) openai_api_base = get_from_dict_or_env( values, "openai_api_base", "OPENAI_API_BASE", default="", ) openai_api_type = get_from_dict_or_env( values, "openai_api_type", "OPENAI_API_TYPE", default="", ) if openai_api_type in ("azure", "azure_ad", "azuread"): default_api_version = "2022-12-01" else: default_api_version = "" openai_api_version = get_from_dict_or_env( values, "openai_api_version",
"OPENAI_API_VERSION", default=default_api_version, ) openai_organization = get_from_dict_or_env( values, "openai_organization", "OPENAI_ORGANIZATION", default="", ) try: import openai openai.api_key = openai_api_key if openai_organization: openai.organization = openai_organization if openai_api_base: openai.api_base = openai_api_base if openai_api_type: openai.api_version = openai_api_version if openai_api_type: openai.api_type = openai_api_type values["client"] = openai.Embedding except ImportError: raise ValueError( "Could not import openai python package. " "Please install it with `pip install openai`." ) return values def _get_len_safe_embeddings(
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,498
Cannot subclass OpenAIEmbeddings
### System Info - langchain: 0.0.163 - python: 3.9.16 - OS: Ubuntu 22.04 ### Who can help? @shibanovp @hwchase17 ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [X] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction Steps to reproduce: Getting error when running this code snippet: ```python from langchain.embeddings import OpenAIEmbeddings class AzureOpenAIEmbeddings(OpenAIEmbeddings): pass ``` Error: ``` Traceback (most recent call last): File "test.py", line 3, in <module> class AzureOpenAIEmbeddings(OpenAIEmbeddings): File "pydantic/main.py", line 139, in pydantic.main.ModelMetaclass.__new__ File "pydantic/utils.py", line 693, in pydantic.utils.smart_deepcopy File "../lib/python3.9/copy.py", line 146, in deepcopy y = copier(x, memo) File "../lib/python3.9/copy.py", line 230, in _deepcopy_dict y[deepcopy(key, memo)] = deepcopy(value, memo) File "../lib/python3.9/copy.py", line 172, in deepcopy y = _reconstruct(x, memo, *rv) File "../lib/python3.9/copy.py", line 270, in _reconstruct state = deepcopy(state, memo) File "../lib/python3.9/copy.py", line 146, in deepcopy y = copier(x, memo) File "../lib/python3.9/copy.py", line 210, in _deepcopy_tuple y = [deepcopy(a, memo) for a in x] File "../lib/python3.9/copy.py", line 210, in <listcomp> y = [deepcopy(a, memo) for a in x] File "../lib/python3.9/copy.py", line 146, in deepcopy y = copier(x, memo) File "../lib/python3.9/copy.py", line 230, in _deepcopy_dict y[deepcopy(key, memo)] = deepcopy(value, memo) File "../lib/python3.9/copy.py", line 172, in deepcopy y = _reconstruct(x, memo, *rv) File "../lib/python3.9/copy.py", line 264, in _reconstruct y = func(*args) File "../lib/python3.9/copy.py", line 263, in <genexpr> args = (deepcopy(arg, memo) for arg in args) File "../lib/python3.9/copy.py", line 146, in deepcopy y = copier(x, memo) File "../lib/python3.9/copy.py", line 210, in _deepcopy_tuple y = [deepcopy(a, memo) for a in x] File "../lib/python3.9/copy.py", line 210, in <listcomp> y = [deepcopy(a, memo) for a in x] File "../lib/python3.9/copy.py", line 172, in deepcopy y = _reconstruct(x, memo, *rv) File "../lib/python3.9/copy.py", line 264, in _reconstruct y = func(*args) File "../lib/python3.9/typing.py", line 277, in inner return func(*args, **kwds) File "../lib/python3.9/typing.py", line 920, in __getitem__ params = tuple(_type_check(p, msg) for p in params) File "../lib/python3.9/typing.py", line 920, in <genexpr> params = tuple(_type_check(p, msg) for p in params) File "../lib/python3.9/typing.py", line 166, in _type_check raise TypeError(f"{msg} Got {arg!r:.100}.") TypeError: Tuple[t0, t1, ...]: each t must be a type. Got (). ``` ### Expected behavior Expect to allow subclass as normal.
https://github.com/langchain-ai/langchain/issues/4498
https://github.com/langchain-ai/langchain/pull/4500
08df80bed6e36150ea7c17fa15094a38d3ec546f
49e4aaf67326b3185405bdefb36efe79e4705a59
"2023-05-11T04:42:23Z"
python
"2023-05-17T01:35:19Z"
langchain/embeddings/openai.py
        self, texts: List[str], *, engine: str, chunk_size: Optional[int] = None
    ) -> List[List[float]]:
        embeddings: List[List[float]] = [[] for _ in range(len(texts))]
        try:
            import tiktoken

            tokens = []
            indices = []
            encoding = tiktoken.model.encoding_for_model(self.model)
            for i, text in enumerate(texts):
                if self.model.endswith("001"):
                    text = text.replace("\n", " ")
                token = encoding.encode(
                    text,
                    allowed_special=self.allowed_special,
                    disallowed_special=self.disallowed_special,
                )
                for j in range(0, len(token), self.embedding_ctx_length):
                    tokens += [token[j : j + self.embedding_ctx_length]]
                    indices += [i]

            batched_embeddings = []
            _chunk_size = chunk_size or self.chunk_size
            for i in range(0, len(tokens), _chunk_size):
                response = embed_with_retry(
                    self,
                    input=tokens[i : i + _chunk_size],
                    engine=self.deployment,
                    request_timeout=self.request_timeout,
                    headers=self.headers,
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,498
Cannot subclass OpenAIEmbeddings
### System Info - langchain: 0.0.163 - python: 3.9.16 - OS: Ubuntu 22.04 ### Who can help? @shibanovp @hwchase17 ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [X] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction Steps to reproduce: Getting error when running this code snippet: ```python from langchain.embeddings import OpenAIEmbeddings class AzureOpenAIEmbeddings(OpenAIEmbeddings): pass ``` Error: ``` Traceback (most recent call last): File "test.py", line 3, in <module> class AzureOpenAIEmbeddings(OpenAIEmbeddings): File "pydantic/main.py", line 139, in pydantic.main.ModelMetaclass.__new__ File "pydantic/utils.py", line 693, in pydantic.utils.smart_deepcopy File "../lib/python3.9/copy.py", line 146, in deepcopy y = copier(x, memo) File "../lib/python3.9/copy.py", line 230, in _deepcopy_dict y[deepcopy(key, memo)] = deepcopy(value, memo) File "../lib/python3.9/copy.py", line 172, in deepcopy y = _reconstruct(x, memo, *rv) File "../lib/python3.9/copy.py", line 270, in _reconstruct state = deepcopy(state, memo) File "../lib/python3.9/copy.py", line 146, in deepcopy y = copier(x, memo) File "../lib/python3.9/copy.py", line 210, in _deepcopy_tuple y = [deepcopy(a, memo) for a in x] File "../lib/python3.9/copy.py", line 210, in <listcomp> y = [deepcopy(a, memo) for a in x] File "../lib/python3.9/copy.py", line 146, in deepcopy y = copier(x, memo) File "../lib/python3.9/copy.py", line 230, in _deepcopy_dict y[deepcopy(key, memo)] = deepcopy(value, memo) File "../lib/python3.9/copy.py", line 172, in deepcopy y = _reconstruct(x, memo, *rv) File "../lib/python3.9/copy.py", line 264, in _reconstruct y = func(*args) File "../lib/python3.9/copy.py", line 263, in <genexpr> args = (deepcopy(arg, memo) for arg in args) File "../lib/python3.9/copy.py", line 146, in deepcopy y = copier(x, memo) File "../lib/python3.9/copy.py", line 210, in _deepcopy_tuple y = [deepcopy(a, memo) for a in x] File "../lib/python3.9/copy.py", line 210, in <listcomp> y = [deepcopy(a, memo) for a in x] File "../lib/python3.9/copy.py", line 172, in deepcopy y = _reconstruct(x, memo, *rv) File "../lib/python3.9/copy.py", line 264, in _reconstruct y = func(*args) File "../lib/python3.9/typing.py", line 277, in inner return func(*args, **kwds) File "../lib/python3.9/typing.py", line 920, in __getitem__ params = tuple(_type_check(p, msg) for p in params) File "../lib/python3.9/typing.py", line 920, in <genexpr> params = tuple(_type_check(p, msg) for p in params) File "../lib/python3.9/typing.py", line 166, in _type_check raise TypeError(f"{msg} Got {arg!r:.100}.") TypeError: Tuple[t0, t1, ...]: each t must be a type. Got (). ``` ### Expected behavior Expect to allow subclass as normal.
https://github.com/langchain-ai/langchain/issues/4498
https://github.com/langchain-ai/langchain/pull/4500
08df80bed6e36150ea7c17fa15094a38d3ec546f
49e4aaf67326b3185405bdefb36efe79e4705a59
"2023-05-11T04:42:23Z"
python
"2023-05-17T01:35:19Z"
langchain/embeddings/openai.py
                )
                batched_embeddings += [r["embedding"] for r in response["data"]]

            results: List[List[List[float]]] = [[] for _ in range(len(texts))]
            num_tokens_in_batch: List[List[int]] = [[] for _ in range(len(texts))]
            for i in range(len(indices)):
                results[indices[i]].append(batched_embeddings[i])
                num_tokens_in_batch[indices[i]].append(len(tokens[i]))

            for i in range(len(texts)):
                _result = results[i]
                if len(_result) == 0:
                    average = embed_with_retry(
                        self,
                        input="",
                        engine=self.deployment,
                        request_timeout=self.request_timeout,
                        headers=self.headers,
                    )["data"][0]["embedding"]
                else:
                    average = np.average(
                        _result, axis=0, weights=num_tokens_in_batch[i]
                    )
                embeddings[i] = (average / np.linalg.norm(average)).tolist()

            return embeddings
        except ImportError:
            raise ValueError(
                "Could not import tiktoken python package. "
                "This is needed in order to use OpenAIEmbeddings. "
                "Please install it with `pip install tiktoken`."
            )

    def _embedding_func(self, text: str, *, engine: str) -> List[float]:
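As a quick numeric illustration (not part of the source) of the weighted-average step above: chunk embeddings of one long text are combined in proportion to their token counts and then L2-normalized.

import numpy as np

chunk_embeddings = [[1.0, 0.0], [0.0, 1.0]]  # embeddings of two chunks of one text
token_counts = [3, 1]  # tokens per chunk
average = np.average(chunk_embeddings, axis=0, weights=token_counts)  # [0.75, 0.25]
final = (average / np.linalg.norm(average)).tolist()  # unit-length vector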
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,498
Cannot subclass OpenAIEmbeddings
### System Info - langchain: 0.0.163 - python: 3.9.16 - OS: Ubuntu 22.04 ### Who can help? @shibanovp @hwchase17 ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [X] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction Steps to reproduce: Getting error when running this code snippet: ```python from langchain.embeddings import OpenAIEmbeddings class AzureOpenAIEmbeddings(OpenAIEmbeddings): pass ``` Error: ``` Traceback (most recent call last): File "test.py", line 3, in <module> class AzureOpenAIEmbeddings(OpenAIEmbeddings): File "pydantic/main.py", line 139, in pydantic.main.ModelMetaclass.__new__ File "pydantic/utils.py", line 693, in pydantic.utils.smart_deepcopy File "../lib/python3.9/copy.py", line 146, in deepcopy y = copier(x, memo) File "../lib/python3.9/copy.py", line 230, in _deepcopy_dict y[deepcopy(key, memo)] = deepcopy(value, memo) File "../lib/python3.9/copy.py", line 172, in deepcopy y = _reconstruct(x, memo, *rv) File "../lib/python3.9/copy.py", line 270, in _reconstruct state = deepcopy(state, memo) File "../lib/python3.9/copy.py", line 146, in deepcopy y = copier(x, memo) File "../lib/python3.9/copy.py", line 210, in _deepcopy_tuple y = [deepcopy(a, memo) for a in x] File "../lib/python3.9/copy.py", line 210, in <listcomp> y = [deepcopy(a, memo) for a in x] File "../lib/python3.9/copy.py", line 146, in deepcopy y = copier(x, memo) File "../lib/python3.9/copy.py", line 230, in _deepcopy_dict y[deepcopy(key, memo)] = deepcopy(value, memo) File "../lib/python3.9/copy.py", line 172, in deepcopy y = _reconstruct(x, memo, *rv) File "../lib/python3.9/copy.py", line 264, in _reconstruct y = func(*args) File "../lib/python3.9/copy.py", line 263, in <genexpr> args = (deepcopy(arg, memo) for arg in args) File "../lib/python3.9/copy.py", line 146, in deepcopy y = copier(x, memo) File "../lib/python3.9/copy.py", line 210, in _deepcopy_tuple y = [deepcopy(a, memo) for a in x] File "../lib/python3.9/copy.py", line 210, in <listcomp> y = [deepcopy(a, memo) for a in x] File "../lib/python3.9/copy.py", line 172, in deepcopy y = _reconstruct(x, memo, *rv) File "../lib/python3.9/copy.py", line 264, in _reconstruct y = func(*args) File "../lib/python3.9/typing.py", line 277, in inner return func(*args, **kwds) File "../lib/python3.9/typing.py", line 920, in __getitem__ params = tuple(_type_check(p, msg) for p in params) File "../lib/python3.9/typing.py", line 920, in <genexpr> params = tuple(_type_check(p, msg) for p in params) File "../lib/python3.9/typing.py", line 166, in _type_check raise TypeError(f"{msg} Got {arg!r:.100}.") TypeError: Tuple[t0, t1, ...]: each t must be a type. Got (). ``` ### Expected behavior Expect to allow subclass as normal.
https://github.com/langchain-ai/langchain/issues/4498
https://github.com/langchain-ai/langchain/pull/4500
08df80bed6e36150ea7c17fa15094a38d3ec546f
49e4aaf67326b3185405bdefb36efe79e4705a59
"2023-05-11T04:42:23Z"
python
"2023-05-17T01:35:19Z"
langchain/embeddings/openai.py
"""Call out to OpenAI's embedding endpoint.""" if len(text) > self.embedding_ctx_length: return self._get_len_safe_embeddings([text], engine=engine)[0] else: if self.model.endswith("001"): text = text.replace("\n", " ") return embed_with_retry( self, input=[text], engine=engine, request_timeout=self.request_timeout, headers=self.headers, )["data"][0]["embedding"] def embed_documents(
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,498
Cannot subclass OpenAIEmbeddings
### System Info - langchain: 0.0.163 - python: 3.9.16 - OS: Ubuntu 22.04 ### Who can help? @shibanovp @hwchase17 ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [X] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction Steps to reproduce: Getting error when running this code snippet: ```python from langchain.embeddings import OpenAIEmbeddings class AzureOpenAIEmbeddings(OpenAIEmbeddings): pass ``` Error: ``` Traceback (most recent call last): File "test.py", line 3, in <module> class AzureOpenAIEmbeddings(OpenAIEmbeddings): File "pydantic/main.py", line 139, in pydantic.main.ModelMetaclass.__new__ File "pydantic/utils.py", line 693, in pydantic.utils.smart_deepcopy File "../lib/python3.9/copy.py", line 146, in deepcopy y = copier(x, memo) File "../lib/python3.9/copy.py", line 230, in _deepcopy_dict y[deepcopy(key, memo)] = deepcopy(value, memo) File "../lib/python3.9/copy.py", line 172, in deepcopy y = _reconstruct(x, memo, *rv) File "../lib/python3.9/copy.py", line 270, in _reconstruct state = deepcopy(state, memo) File "../lib/python3.9/copy.py", line 146, in deepcopy y = copier(x, memo) File "../lib/python3.9/copy.py", line 210, in _deepcopy_tuple y = [deepcopy(a, memo) for a in x] File "../lib/python3.9/copy.py", line 210, in <listcomp> y = [deepcopy(a, memo) for a in x] File "../lib/python3.9/copy.py", line 146, in deepcopy y = copier(x, memo) File "../lib/python3.9/copy.py", line 230, in _deepcopy_dict y[deepcopy(key, memo)] = deepcopy(value, memo) File "../lib/python3.9/copy.py", line 172, in deepcopy y = _reconstruct(x, memo, *rv) File "../lib/python3.9/copy.py", line 264, in _reconstruct y = func(*args) File "../lib/python3.9/copy.py", line 263, in <genexpr> args = (deepcopy(arg, memo) for arg in args) File "../lib/python3.9/copy.py", line 146, in deepcopy y = copier(x, memo) File "../lib/python3.9/copy.py", line 210, in _deepcopy_tuple y = [deepcopy(a, memo) for a in x] File "../lib/python3.9/copy.py", line 210, in <listcomp> y = [deepcopy(a, memo) for a in x] File "../lib/python3.9/copy.py", line 172, in deepcopy y = _reconstruct(x, memo, *rv) File "../lib/python3.9/copy.py", line 264, in _reconstruct y = func(*args) File "../lib/python3.9/typing.py", line 277, in inner return func(*args, **kwds) File "../lib/python3.9/typing.py", line 920, in __getitem__ params = tuple(_type_check(p, msg) for p in params) File "../lib/python3.9/typing.py", line 920, in <genexpr> params = tuple(_type_check(p, msg) for p in params) File "../lib/python3.9/typing.py", line 166, in _type_check raise TypeError(f"{msg} Got {arg!r:.100}.") TypeError: Tuple[t0, t1, ...]: each t must be a type. Got (). ``` ### Expected behavior Expect to allow subclass as normal.
https://github.com/langchain-ai/langchain/issues/4498
https://github.com/langchain-ai/langchain/pull/4500
08df80bed6e36150ea7c17fa15094a38d3ec546f
49e4aaf67326b3185405bdefb36efe79e4705a59
"2023-05-11T04:42:23Z"
python
"2023-05-17T01:35:19Z"
langchain/embeddings/openai.py
        self, texts: List[str], chunk_size: Optional[int] = 0
    ) -> List[List[float]]:
        """Call out to OpenAI's embedding endpoint for embedding search docs.

        Args:
            texts: The list of texts to embed.
            chunk_size: The chunk size of embeddings. If None, will use the chunk
                size specified by the class.

        Returns:
            List of embeddings, one for each text.
        """
        return self._get_len_safe_embeddings(texts, engine=self.deployment)

    def embed_query(self, text: str) -> List[float]:
        """Call out to OpenAI's embedding endpoint for embedding query text.

        Args:
            text: The text to embed.

        Returns:
            Embedding for the text.
        """
        embedding = self._embedding_func(text, engine=self.deployment)
        return embedding
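For context, here is a sketch of the usage the issue expects to work once subclassing is fixed; the deployment name is a hypothetical placeholder, and a valid OPENAI_API_KEY is assumed to be set.

from langchain.embeddings import OpenAIEmbeddings


class AzureOpenAIEmbeddings(OpenAIEmbeddings):
    deployment: str = "my-embedding-deployment"  # hypothetical Azure deployment name


embeddings = AzureOpenAIEmbeddings()
vector = embeddings.embed_query("hello world")  # returns List[float]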
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,742
Issue: Weaviate: why similarity_search uses with_near_text?
### Issue you'd like to raise. [similarity_search](https://github.com/hwchase17/langchain/blob/09587a32014bb3fd9233d7a567c8935c17fe901e/langchain/vectorstores/weaviate.py#L174-L175) in weaviate uses near_text. This requires weaviate to be set up with a [text2vec module](https://weaviate.io/developers/weaviate/modules/retriever-vectorizer-modules). At the same time, the Weaviate wrapper also takes an [embedding model](https://github.com/hwchase17/langchain/blob/09587a32014bb3fd9233d7a567c8935c17fe901e/langchain/vectorstores/weaviate.py#L86) as one of its init parameters. Why don't we use the embedding model to vectorize the search query and then use weaviate's near_vector operator to do the search? ### Suggestion: If a user is using langchain with weaviate, we can assume that they want to use langchain's features to generate the embeddings and, as such, will not have any text2vec module enabled.
https://github.com/langchain-ai/langchain/issues/4742
https://github.com/langchain-ai/langchain/pull/4824
d1b6839d97ea1b0c60f226633da34d97a130c183
0a591da6db5c76722e349e03692d674e45ba626a
"2023-05-15T18:37:07Z"
python
"2023-05-17T02:43:15Z"
langchain/vectorstores/weaviate.py
"""Wrapper around weaviate vector database.""" from __future__ import annotations import datetime from typing import Any, Callable, Dict, Iterable, List, Optional, Tuple, Type from uuid import uuid4 import numpy as np from langchain.docstore.document import Document from langchain.embeddings.base import Embeddings from langchain.utils import get_from_dict_or_env from langchain.vectorstores.base import VectorStore from langchain.vectorstores.utils import maximal_marginal_relevance def _default_schema(index_name: str) -> Dict: return { "class": index_name, "properties": [ { "name": "text", "dataType": ["text"], } ], } def _create_weaviate_client(**kwargs: Any) -> Any:
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,742
Issue: Weaviate: why similarity_search uses with_near_text?
### Issue you'd like to raise. [similarity_search](https://github.com/hwchase17/langchain/blob/09587a32014bb3fd9233d7a567c8935c17fe901e/langchain/vectorstores/weaviate.py#L174-L175) in weaviate uses near_text. This requires weaviate to be set up with a [text2vec module](https://weaviate.io/developers/weaviate/modules/retriever-vectorizer-modules). At the same time, the Weaviate wrapper also takes an [embedding model](https://github.com/hwchase17/langchain/blob/09587a32014bb3fd9233d7a567c8935c17fe901e/langchain/vectorstores/weaviate.py#L86) as one of its init parameters. Why don't we use the embedding model to vectorize the search query and then use weaviate's near_vector operator to do the search? ### Suggestion: If a user is using langchain with weaviate, we can assume that they want to use langchain's features to generate the embeddings and, as such, will not have any text2vec module enabled.
https://github.com/langchain-ai/langchain/issues/4742
https://github.com/langchain-ai/langchain/pull/4824
d1b6839d97ea1b0c60f226633da34d97a130c183
0a591da6db5c76722e349e03692d674e45ba626a
"2023-05-15T18:37:07Z"
python
"2023-05-17T02:43:15Z"
langchain/vectorstores/weaviate.py
    client = kwargs.get("client")
    if client is not None:
        return client

    weaviate_url = get_from_dict_or_env(kwargs, "weaviate_url", "WEAVIATE_URL")

    try:
        weaviate_api_key = get_from_dict_or_env(
            kwargs, "weaviate_api_key", "WEAVIATE_API_KEY", None
        )
    except ValueError:
        weaviate_api_key = None

    try:
        import weaviate
    except ImportError:
        raise ValueError(
            "Could not import weaviate python package. "
            "Please install it with `pip install weaviate-client`."
        )

    auth = (
        weaviate.auth.AuthApiKey(api_key=weaviate_api_key)
        if weaviate_api_key is not None
        else None
    )
    client = weaviate.Client(weaviate_url, auth_client_secret=auth)

    return client


def _default_score_normalizer(val: float) -> float:
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,742
Issue: Weaviate: why similarity_search uses with_near_text?
### Issue you'd like to raise. [similarity_search](https://github.com/hwchase17/langchain/blob/09587a32014bb3fd9233d7a567c8935c17fe901e/langchain/vectorstores/weaviate.py#L174-L175) in weaviate uses near_text. This requires weaviate to be set up with a [text2vec module](https://weaviate.io/developers/weaviate/modules/retriever-vectorizer-modules). At the same time, the Weaviate wrapper also takes an [embedding model](https://github.com/hwchase17/langchain/blob/09587a32014bb3fd9233d7a567c8935c17fe901e/langchain/vectorstores/weaviate.py#L86) as one of its init parameters. Why don't we use the embedding model to vectorize the search query and then use weaviate's near_vector operator to do the search? ### Suggestion: If a user is using langchain with weaviate, we can assume that they want to use langchain's features to generate the embeddings and, as such, will not have any text2vec module enabled.
https://github.com/langchain-ai/langchain/issues/4742
https://github.com/langchain-ai/langchain/pull/4824
d1b6839d97ea1b0c60f226633da34d97a130c183
0a591da6db5c76722e349e03692d674e45ba626a
"2023-05-15T18:37:07Z"
python
"2023-05-17T02:43:15Z"
langchain/vectorstores/weaviate.py
    return 1 - 1 / (1 + np.exp(val))


class Weaviate(VectorStore):
    """Wrapper around Weaviate vector database.

    To use, you should have the ``weaviate-client`` python package installed.

    Example:
        .. code-block:: python

            import weaviate
            from langchain.vectorstores import Weaviate

            client = weaviate.Client(url=os.environ["WEAVIATE_URL"], ...)
            weaviate = Weaviate(client, index_name, text_key)

    """

    def __init__(
        self,
        client: Any,
        index_name: str,
        text_key: str,
        embedding: Optional[Embeddings] = None,
        attributes: Optional[List[str]] = None,
        relevance_score_fn: Optional[
            Callable[[float], float]
        ] = _default_score_normalizer,
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,742
Issue: Weaviate: why similarity_search uses with_near_text?
### Issue you'd like to raise. [similarity_search](https://github.com/hwchase17/langchain/blob/09587a32014bb3fd9233d7a567c8935c17fe901e/langchain/vectorstores/weaviate.py#L174-L175) in weaviate uses near_text. This requires weaviate to be set up with a [text2vec module](https://weaviate.io/developers/weaviate/modules/retriever-vectorizer-modules). At the same time, the Weaviate wrapper also takes an [embedding model](https://github.com/hwchase17/langchain/blob/09587a32014bb3fd9233d7a567c8935c17fe901e/langchain/vectorstores/weaviate.py#L86) as one of its init parameters. Why don't we use the embedding model to vectorize the search query and then use weaviate's near_vector operator to do the search? ### Suggestion: If a user is using langchain with weaviate, we can assume that they want to use langchain's features to generate the embeddings and, as such, will not have any text2vec module enabled.
https://github.com/langchain-ai/langchain/issues/4742
https://github.com/langchain-ai/langchain/pull/4824
d1b6839d97ea1b0c60f226633da34d97a130c183
0a591da6db5c76722e349e03692d674e45ba626a
"2023-05-15T18:37:07Z"
python
"2023-05-17T02:43:15Z"
langchain/vectorstores/weaviate.py
    ):
        """Initialize with Weaviate client."""
        try:
            import weaviate
        except ImportError:
            raise ValueError(
                "Could not import weaviate python package. "
                "Please install it with `pip install weaviate-client`."
            )
        if not isinstance(client, weaviate.Client):
            raise ValueError(
                f"client should be an instance of weaviate.Client, got {type(client)}"
            )
        self._client = client
        self._index_name = index_name
        self._embedding = embedding
        self._text_key = text_key
        self._query_attrs = [self._text_key]
        self._relevance_score_fn = relevance_score_fn
        if attributes is not None:
            self._query_attrs.extend(attributes)

    def add_texts(
        self,
        texts: Iterable[str],
        metadatas: Optional[List[dict]] = None,
        **kwargs: Any,
    ) -> List[str]:
        """Upload texts with metadata (properties) to Weaviate."""
        from weaviate.util import get_valid_uuid

        def json_serializable(value: Any) -> Any:
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,742
Issue: Weaviate: why similarity_search uses with_near_text?
### Issue you'd like to raise. [similarity_search](https://github.com/hwchase17/langchain/blob/09587a32014bb3fd9233d7a567c8935c17fe901e/langchain/vectorstores/weaviate.py#L174-L175) in weaviate uses near_text. This requires weaviate to be set up with a [text2vec module](https://weaviate.io/developers/weaviate/modules/retriever-vectorizer-modules). At the same time, the Weaviate wrapper also takes an [embedding model](https://github.com/hwchase17/langchain/blob/09587a32014bb3fd9233d7a567c8935c17fe901e/langchain/vectorstores/weaviate.py#L86) as one of its init parameters. Why don't we use the embedding model to vectorize the search query and then use weaviate's near_vector operator to do the search? ### Suggestion: If a user is using langchain with weaviate, we can assume that they want to use langchain's features to generate the embeddings and, as such, will not have any text2vec module enabled.
https://github.com/langchain-ai/langchain/issues/4742
https://github.com/langchain-ai/langchain/pull/4824
d1b6839d97ea1b0c60f226633da34d97a130c183
0a591da6db5c76722e349e03692d674e45ba626a
"2023-05-15T18:37:07Z"
python
"2023-05-17T02:43:15Z"
langchain/vectorstores/weaviate.py
            if isinstance(value, datetime.datetime):
                return value.isoformat()
            return value

        with self._client.batch as batch:
            ids = []
            for i, doc in enumerate(texts):
                data_properties = {
                    self._text_key: doc,
                }
                if metadatas is not None:
                    for key in metadatas[i].keys():
                        data_properties[key] = json_serializable(metadatas[i][key])

                if "uuids" in kwargs:
                    _id = kwargs["uuids"][i]
                else:
                    _id = get_valid_uuid(uuid4())

                if self._embedding is not None:
                    # embed the single text; embed_documents expects a list of texts
                    embeddings = self._embedding.embed_documents([doc])
                    batch.add_data_object(
                        data_object=data_properties,
                        class_name=self._index_name,
                        uuid=_id,
                        vector=embeddings[0],
                    )
                else:
                    batch.add_data_object(
                        data_object=data_properties,
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,742
Issue: Weaviate: why similarity_search uses with_near_text?
### Issue you'd like to raise. [similarity_search](https://github.com/hwchase17/langchain/blob/09587a32014bb3fd9233d7a567c8935c17fe901e/langchain/vectorstores/weaviate.py#L174-L175) in weaviate uses near_text. This requires weaviate to be set up with a [text2vec module](https://weaviate.io/developers/weaviate/modules/retriever-vectorizer-modules). At the same time, the Weaviate wrapper also takes an [embedding model](https://github.com/hwchase17/langchain/blob/09587a32014bb3fd9233d7a567c8935c17fe901e/langchain/vectorstores/weaviate.py#L86) as one of its init parameters. Why don't we use the embedding model to vectorize the search query and then use weaviate's near_vector operator to do the search? ### Suggestion: If a user is using langchain with weaviate, we can assume that they want to use langchain's features to generate the embeddings and, as such, will not have any text2vec module enabled.
https://github.com/langchain-ai/langchain/issues/4742
https://github.com/langchain-ai/langchain/pull/4824
d1b6839d97ea1b0c60f226633da34d97a130c183
0a591da6db5c76722e349e03692d674e45ba626a
"2023-05-15T18:37:07Z"
python
"2023-05-17T02:43:15Z"
langchain/vectorstores/weaviate.py
                        class_name=self._index_name,
                        uuid=_id,
                    )
                ids.append(_id)
        return ids

    def similarity_search(
        self, query: str, k: int = 4, **kwargs: Any
    ) -> List[Document]:
        """Return docs most similar to query.

        Args:
            query: Text to look up documents similar to.
            k: Number of Documents to return. Defaults to 4.

        Returns:
            List of Documents most similar to the query.
        """
        content: Dict[str, Any] = {"concepts": [query]}
        if kwargs.get("search_distance"):
            content["certainty"] = kwargs.get("search_distance")
        query_obj = self._client.query.get(self._index_name, self._query_attrs)
        if kwargs.get("where_filter"):
            query_obj = query_obj.with_where(kwargs.get("where_filter"))
        result = query_obj.with_near_text(content).with_limit(k).do()
        if "errors" in result:
            raise ValueError(f"Error during query: {result['errors']}")
        docs = []
        for res in result["data"]["Get"][self._index_name]:
            text = res.pop(self._text_key)
            docs.append(Document(page_content=text, metadata=res))
        return docs

    def similarity_search_by_vector(
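A short sketch of the `uuids` branch above: instead of the random UUID4 default, a caller can pass stable identifiers (for example UUIDv5 values derived from the text), so the same input maps to the same Weaviate object on every run. The texts and the `vectorstore` instance are illustrative assumptions.

import uuid

texts = ["LangChain wraps Weaviate.", "Weaviate stores vectors."]
ids = [str(uuid.uuid5(uuid.NAMESPACE_DNS, text)) for text in texts]  # stable per text
vectorstore.add_texts(texts, uuids=ids)  # assumes an existing Weaviate wrapper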
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,742
Issue: Weaviate: why similarity_search uses with_near_text?
### Issue you'd like to raise. [similarity_search](https://github.com/hwchase17/langchain/blob/09587a32014bb3fd9233d7a567c8935c17fe901e/langchain/vectorstores/weaviate.py#L174-L175) in weaviate uses near_text. This requires weaviate to be set up with a [text2vec module](https://weaviate.io/developers/weaviate/modules/retriever-vectorizer-modules). At the same time, the Weaviate wrapper also takes an [embedding model](https://github.com/hwchase17/langchain/blob/09587a32014bb3fd9233d7a567c8935c17fe901e/langchain/vectorstores/weaviate.py#L86) as one of its init parameters. Why don't we use the embedding model to vectorize the search query and then use weaviate's near_vector operator to do the search? ### Suggestion: If a user is using langchain with weaviate, we can assume that they want to use langchain's features to generate the embeddings and, as such, will not have any text2vec module enabled.
https://github.com/langchain-ai/langchain/issues/4742
https://github.com/langchain-ai/langchain/pull/4824
d1b6839d97ea1b0c60f226633da34d97a130c183
0a591da6db5c76722e349e03692d674e45ba626a
"2023-05-15T18:37:07Z"
python
"2023-05-17T02:43:15Z"
langchain/vectorstores/weaviate.py
        self, embedding: List[float], k: int = 4, **kwargs: Any
    ) -> List[Document]:
        """Look up similar documents by embedding vector in Weaviate."""
        vector = {"vector": embedding}
        query_obj = self._client.query.get(self._index_name, self._query_attrs)
        if kwargs.get("where_filter"):
            query_obj = query_obj.with_where(kwargs.get("where_filter"))
        result = query_obj.with_near_vector(vector).with_limit(k).do()
        if "errors" in result:
            raise ValueError(f"Error during query: {result['errors']}")
        docs = []
        for res in result["data"]["Get"][self._index_name]:
            text = res.pop(self._text_key)
            docs.append(Document(page_content=text, metadata=res))
        return docs

    def max_marginal_relevance_search(
        self,
        query: str,
        k: int = 4,
        fetch_k: int = 20,
        lambda_mult: float = 0.5,
        **kwargs: Any,
    ) -> List[Document]:
        """Return docs selected using the maximal marginal relevance.

        Maximal marginal relevance optimizes for similarity to query AND diversity
        among selected documents.

        Args:
            query: Text to look up documents similar to.
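Tying this back to the issue: a caller can sidestep near_text, and the text2vec module it requires, by embedding the query locally and using the vector-based search above. A hedged sketch; `embeddings` and `vectorstore` are assumed to already exist.

query = "how do I filter by metadata?"
query_vector = embeddings.embed_query(query)  # any langchain Embeddings implementation
docs = vectorstore.similarity_search_by_vector(query_vector, k=4)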
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,742
Issue: Weaviate: why similarity_search uses with_near_text?
### Issue you'd like to raise. [similarity_search](https://github.com/hwchase17/langchain/blob/09587a32014bb3fd9233d7a567c8935c17fe901e/langchain/vectorstores/weaviate.py#L174-L175) in weaviate uses near_text. This requires weaviate to be set up with a [text2vec module](https://weaviate.io/developers/weaviate/modules/retriever-vectorizer-modules). At the same time, the Weaviate wrapper also takes an [embedding model](https://github.com/hwchase17/langchain/blob/09587a32014bb3fd9233d7a567c8935c17fe901e/langchain/vectorstores/weaviate.py#L86) as one of its init parameters. Why don't we use the embedding model to vectorize the search query and then use weaviate's near_vector operator to do the search? ### Suggestion: If a user is using langchain with weaviate, we can assume that they want to use langchain's features to generate the embeddings and, as such, will not have any text2vec module enabled.
https://github.com/langchain-ai/langchain/issues/4742
https://github.com/langchain-ai/langchain/pull/4824
d1b6839d97ea1b0c60f226633da34d97a130c183
0a591da6db5c76722e349e03692d674e45ba626a
"2023-05-15T18:37:07Z"
python
"2023-05-17T02:43:15Z"
langchain/vectorstores/weaviate.py
            k: Number of Documents to return. Defaults to 4.
            fetch_k: Number of Documents to fetch to pass to MMR algorithm.
            lambda_mult: Number between 0 and 1 that determines the degree
                of diversity among the results with 0 corresponding
                to maximum diversity and 1 to minimum diversity.
                Defaults to 0.5.

        Returns:
            List of Documents selected by maximal marginal relevance.
        """
        if self._embedding is not None:
            embedding = self._embedding.embed_query(query)
        else:
            raise ValueError(
                "max_marginal_relevance_search requires a suitable Embeddings object"
            )
        return self.max_marginal_relevance_search_by_vector(
            embedding, k=k, fetch_k=fetch_k, lambda_mult=lambda_mult, **kwargs
        )

    def max_marginal_relevance_search_by_vector(
        self,
        embedding: List[float],
        k: int = 4,
        fetch_k: int = 20,
        lambda_mult: float = 0.5,
        **kwargs: Any,
    ) -> List[Document]:
        """Return docs selected using the maximal marginal relevance.

        Maximal marginal relevance optimizes for similarity to query AND diversity
        among selected documents.

        Args:
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,742
Issue: Weaviate: why similarity_search uses with_near_text?
### Issue you'd like to raise. [similarity_search](https://github.com/hwchase17/langchain/blob/09587a32014bb3fd9233d7a567c8935c17fe901e/langchain/vectorstores/weaviate.py#L174-L175) in weaviate uses near_text. This requires weaviate to be set up with a [text2vec module](https://weaviate.io/developers/weaviate/modules/retriever-vectorizer-modules). At the same time, the Weaviate wrapper also takes an [embedding model](https://github.com/hwchase17/langchain/blob/09587a32014bb3fd9233d7a567c8935c17fe901e/langchain/vectorstores/weaviate.py#L86) as one of its init parameters. Why don't we use the embedding model to vectorize the search query and then use weaviate's near_vector operator to do the search? ### Suggestion: If a user is using langchain with weaviate, we can assume that they want to use langchain's features to generate the embeddings and, as such, will not have any text2vec module enabled.
https://github.com/langchain-ai/langchain/issues/4742
https://github.com/langchain-ai/langchain/pull/4824
d1b6839d97ea1b0c60f226633da34d97a130c183
0a591da6db5c76722e349e03692d674e45ba626a
"2023-05-15T18:37:07Z"
python
"2023-05-17T02:43:15Z"
langchain/vectorstores/weaviate.py
            embedding: Embedding to look up documents similar to.
            k: Number of Documents to return. Defaults to 4.
            fetch_k: Number of Documents to fetch to pass to MMR algorithm.
            lambda_mult: Number between 0 and 1 that determines the degree
                of diversity among the results with 0 corresponding
                to maximum diversity and 1 to minimum diversity.
                Defaults to 0.5.

        Returns:
            List of Documents selected by maximal marginal relevance.
        """
        vector = {"vector": embedding}
        query_obj = self._client.query.get(self._index_name, self._query_attrs)
        if kwargs.get("where_filter"):
            query_obj = query_obj.with_where(kwargs.get("where_filter"))
        results = (
            query_obj.with_additional("vector")
            .with_near_vector(vector)
            .with_limit(fetch_k)
            .do()
        )

        payload = results["data"]["Get"][self._index_name]
        embeddings = [result["_additional"]["vector"] for result in payload]
        mmr_selected = maximal_marginal_relevance(
            np.array(embedding), embeddings, k=k, lambda_mult=lambda_mult
        )

        docs = []
        for idx in mmr_selected:
            text = payload[idx].pop(self._text_key)
            payload[idx].pop("_additional")
            meta = payload[idx]
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,742
Issue: Weaviate: why similarity_search uses with_near_text?
### Issue you'd like to raise. [similarity_search](https://github.com/hwchase17/langchain/blob/09587a32014bb3fd9233d7a567c8935c17fe901e/langchain/vectorstores/weaviate.py#L174-L175) in weaviate uses near_text. This requires weaviate to be set up with a [text2vec module](https://weaviate.io/developers/weaviate/modules/retriever-vectorizer-modules). At the same time, the Weaviate wrapper also takes an [embedding model](https://github.com/hwchase17/langchain/blob/09587a32014bb3fd9233d7a567c8935c17fe901e/langchain/vectorstores/weaviate.py#L86) as one of its init parameters. Why don't we use the embedding model to vectorize the search query and then use weaviate's near_vector operator to do the search? ### Suggestion: If a user is using langchain with weaviate, we can assume that they want to use langchain's features to generate the embeddings and, as such, will not have any text2vec module enabled.
https://github.com/langchain-ai/langchain/issues/4742
https://github.com/langchain-ai/langchain/pull/4824
d1b6839d97ea1b0c60f226633da34d97a130c183
0a591da6db5c76722e349e03692d674e45ba626a
"2023-05-15T18:37:07Z"
python
"2023-05-17T02:43:15Z"
langchain/vectorstores/weaviate.py
            docs.append(Document(page_content=text, metadata=meta))
        return docs

    def similarity_search_with_score(
        self, query: str, k: int = 4, **kwargs: Any
    ) -> List[Tuple[Document, float]]:
        content: Dict[str, Any] = {"concepts": [query]}
        if kwargs.get("search_distance"):
            content["certainty"] = kwargs.get("search_distance")
        query_obj = self._client.query.get(self._index_name, self._query_attrs)
        result = (
            query_obj.with_near_text(content)
            .with_limit(k)
            .with_additional("vector")
            .do()
        )
        if "errors" in result:
            raise ValueError(f"Error during query: {result['errors']}")

        docs_and_scores = []
        if self._embedding is None:
            raise ValueError(
                "_embedding cannot be None for similarity_search_with_score"
            )
        for res in result["data"]["Get"][self._index_name]:
            text = res.pop(self._text_key)
            score = np.dot(
                res["_additional"]["vector"], self._embedding.embed_query(query)
            )
            docs_and_scores.append((Document(page_content=text, metadata=res), score))
        return docs_and_scores

    def _similarity_search_with_relevance_scores(
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,742
Issue: Weaviate: why similarity_search uses with_near_text?
### Issue you'd like to raise. [similarity_search](https://github.com/hwchase17/langchain/blob/09587a32014bb3fd9233d7a567c8935c17fe901e/langchain/vectorstores/weaviate.py#L174-L175) in weaviate uses near_text. This requires weaviate to be set up with a [text2vec module](https://weaviate.io/developers/weaviate/modules/retriever-vectorizer-modules). At the same time, the Weaviate wrapper also takes an [embedding model](https://github.com/hwchase17/langchain/blob/09587a32014bb3fd9233d7a567c8935c17fe901e/langchain/vectorstores/weaviate.py#L86) as one of its init parameters. Why don't we use the embedding model to vectorize the search query and then use weaviate's near_vector operator to do the search? ### Suggestion: If a user is using langchain with weaviate, we can assume that they want to use langchain's features to generate the embeddings and, as such, will not have any text2vec module enabled.
https://github.com/langchain-ai/langchain/issues/4742
https://github.com/langchain-ai/langchain/pull/4824
d1b6839d97ea1b0c60f226633da34d97a130c183
0a591da6db5c76722e349e03692d674e45ba626a
"2023-05-15T18:37:07Z"
python
"2023-05-17T02:43:15Z"
langchain/vectorstores/weaviate.py
        self,
        query: str,
        k: int = 4,
        **kwargs: Any,
    ) -> List[Tuple[Document, float]]:
        """Return docs and relevance scores, normalized on a scale from 0 to 1.

        0 is dissimilar, 1 is most similar.
        """
        if self._relevance_score_fn is None:
            raise ValueError(
                "relevance_score_fn must be provided to"
                " Weaviate constructor to normalize scores"
            )
        docs_and_scores = self.similarity_search_with_score(query, k=k)
        return [
            (doc, self._relevance_score_fn(score)) for doc, score in docs_and_scores
        ]

    @classmethod
    def from_texts(
        cls: Type[Weaviate],
        texts: List[str],
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,742
Issue: Weaviate: why similarity_search uses with_near_text?
### Issue you'd like to raise. [similarity_search](https://github.com/hwchase17/langchain/blob/09587a32014bb3fd9233d7a567c8935c17fe901e/langchain/vectorstores/weaviate.py#L174-L175) in weaviate uses near_text. This requires weaviate to be set up with a [text2vec module](https://weaviate.io/developers/weaviate/modules/retriever-vectorizer-modules). At the same time, the Weaviate wrapper also takes an [embedding model](https://github.com/hwchase17/langchain/blob/09587a32014bb3fd9233d7a567c8935c17fe901e/langchain/vectorstores/weaviate.py#L86) as one of its init parameters. Why don't we use the embedding model to vectorize the search query and then use weaviate's near_vector operator to do the search? ### Suggestion: If a user is using langchain with weaviate, we can assume that they want to use langchain's features to generate the embeddings and, as such, will not have any text2vec module enabled.
https://github.com/langchain-ai/langchain/issues/4742
https://github.com/langchain-ai/langchain/pull/4824
d1b6839d97ea1b0c60f226633da34d97a130c183
0a591da6db5c76722e349e03692d674e45ba626a
"2023-05-15T18:37:07Z"
python
"2023-05-17T02:43:15Z"
langchain/vectorstores/weaviate.py
        embedding: Embeddings,
        metadatas: Optional[List[dict]] = None,
        **kwargs: Any,
    ) -> Weaviate:
        """Construct Weaviate wrapper from raw documents.

        This is a user-friendly interface that:
            1. Embeds documents.
            2. Creates a new index for the embeddings in the Weaviate instance.
            3. Adds the documents to the newly created Weaviate index.

        This is intended to be a quick way to get started.

        Example:
            .. code-block:: python

                from langchain.vectorstores.weaviate import Weaviate
                from langchain.embeddings import OpenAIEmbeddings

                embeddings = OpenAIEmbeddings()
                weaviate = Weaviate.from_texts(
                    texts,
                    embeddings,
                    weaviate_url="http://localhost:8080"
                )
        """
        client = _create_weaviate_client(**kwargs)

        from weaviate.util import get_valid_uuid

        index_name = kwargs.get("index_name", f"LangChain_{uuid4().hex}")
        embeddings = embedding.embed_documents(texts) if embedding else None
        text_key = "text"
        schema = _default_schema(index_name)
        attributes = list(metadatas[0].keys()) if metadatas else None

        if not client.schema.contains(schema):
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,742
Issue: Weaviate: why similarity_search uses with_near_text?
### Issue you'd like to raise. [similarity_search](https://github.com/hwchase17/langchain/blob/09587a32014bb3fd9233d7a567c8935c17fe901e/langchain/vectorstores/weaviate.py#L174-L175) in weaviate uses near_text. This requires weaviate to be set up with a [text2vec module](https://weaviate.io/developers/weaviate/modules/retriever-vectorizer-modules). At the same time, the Weaviate wrapper also takes an [embedding model](https://github.com/hwchase17/langchain/blob/09587a32014bb3fd9233d7a567c8935c17fe901e/langchain/vectorstores/weaviate.py#L86) as one of its init parameters. Why don't we use the embedding model to vectorize the search query and then use weaviate's near_vector operator to do the search? ### Suggestion: If a user is using langchain with weaviate, we can assume that they want to use langchain's features to generate the embeddings and, as such, will not have any text2vec module enabled.
https://github.com/langchain-ai/langchain/issues/4742
https://github.com/langchain-ai/langchain/pull/4824
d1b6839d97ea1b0c60f226633da34d97a130c183
0a591da6db5c76722e349e03692d674e45ba626a
"2023-05-15T18:37:07Z"
python
"2023-05-17T02:43:15Z"
langchain/vectorstores/weaviate.py
            client.schema.create_class(schema)

        with client.batch as batch:
            for i, text in enumerate(texts):
                data_properties = {
                    text_key: text,
                }
                if metadatas is not None:
                    for key in metadatas[i].keys():
                        data_properties[key] = metadatas[i][key]

                if "uuids" in kwargs:
                    _id = kwargs["uuids"][i]
                else:
                    _id = get_valid_uuid(uuid4())

                params = {
                    "uuid": _id,
                    "data_object": data_properties,
                    "class_name": index_name,
                }
                if embeddings is not None:
                    params["vector"] = embeddings[i]

                batch.add_data_object(**params)

            batch.flush()

        return cls(client, index_name, text_key, embedding, attributes)
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,878
Add the possibility to define what file types you want to load from a Google Drive
### Feature request It would be helpful if we could define what file types we want to load via the [Google Drive loader](https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/google_drive.html#), e.g. only docs, sheets, or PDFs. ### Motivation The current loader will load 3 file types: doc, sheet, and pdf, but in my project I only want to load "application/vnd.google-apps.document". ### Your contribution I'm happy to contribute with a PR.
https://github.com/langchain-ai/langchain/issues/4878
https://github.com/langchain-ai/langchain/pull/4926
dfbf45f028bd282057c5d645c0ebb587fa91dda8
c06a47a691c96fd5065be691df6837143df8ef8f
"2023-05-17T19:46:54Z"
python
"2023-05-18T13:27:53Z"
langchain/document_loaders/googledrive.py
"""Loader that loads data from Google Drive.""" from pathlib import Path from typing import Any, Dict, List, Optional, Union from pydantic import BaseModel, root_validator, validator from langchain.docstore.document import Document from langchain.document_loaders.base import BaseLoader SCOPES = ["https://www.googleapis.com/auth/drive.readonly"] class GoogleDriveLoader(BaseLoader, BaseModel):
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,878
Add the possibility to define what file types you want to load from a Google Drive
### Feature request It would be helpful if we could define what file types we want to load via the [Google Drive loader](https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/google_drive.html#), e.g. only docs, sheets, or PDFs. ### Motivation The current loader will load 3 file types: doc, sheet, and pdf, but in my project I only want to load "application/vnd.google-apps.document". ### Your contribution I'm happy to contribute with a PR.
https://github.com/langchain-ai/langchain/issues/4878
https://github.com/langchain-ai/langchain/pull/4926
dfbf45f028bd282057c5d645c0ebb587fa91dda8
c06a47a691c96fd5065be691df6837143df8ef8f
"2023-05-17T19:46:54Z"
python
"2023-05-18T13:27:53Z"
langchain/document_loaders/googledrive.py
"""Loader that loads Google Docs from Google Drive.""" service_account_key: Path = Path.home() / ".credentials" / "keys.json" credentials_path: Path = Path.home() / ".credentials" / "credentials.json" token_path: Path = Path.home() / ".credentials" / "token.json" folder_id: Optional[str] = None document_ids: Optional[List[str]] = None file_ids: Optional[List[str]] = None recursive: bool = False @root_validator def validate_folder_id_or_document_ids( cls, values: Dict[str, Any] ) -> Dict[str, Any]: """Validate that either folder_id or document_ids is set, but not both.""" if values.get("folder_id") and ( values.get("document_ids") or values.get("file_ids") ): raise ValueError( "Cannot specify both folder_id and document_ids nor " "folder_id and file_ids" ) if ( not values.get("folder_id") and not values.get("document_ids") and not values.get("file_ids") ): raise ValueError("Must specify either folder_id, document_ids, or file_ids") return values @validator("credentials_path") def validate_credentials_path(cls, v: Any, **kwargs: Any) -> Any:
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,878
Add the possibility to define what file types you want to load from a Google Drive
### Feature request It would be helpful if we could define what file types we want to load via the [Google Drive loader](https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/google_drive.html#), e.g. only docs, sheets, or PDFs. ### Motivation The current loader will load 3 file types: doc, sheet, and pdf, but in my project I only want to load "application/vnd.google-apps.document". ### Your contribution I'm happy to contribute with a PR.
https://github.com/langchain-ai/langchain/issues/4878
https://github.com/langchain-ai/langchain/pull/4926
dfbf45f028bd282057c5d645c0ebb587fa91dda8
c06a47a691c96fd5065be691df6837143df8ef8f
"2023-05-17T19:46:54Z"
python
"2023-05-18T13:27:53Z"
langchain/document_loaders/googledrive.py
"""Validate that credentials_path exists.""" if not v.exists(): raise ValueError(f"credentials_path {v} does not exist") return v def _load_credentials(self) -> Any: """Load credentials.""" try: from google.auth.transport.requests import Request from google.oauth2 import service_account from google.oauth2.credentials import Credentials from google_auth_oauthlib.flow import InstalledAppFlow except ImportError:
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,878
Add the possibility to define what file types you want to load from a Google Drive
### Feature request It would be helpful if we could define what file types we want to load via the [Google Drive loader](https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/google_drive.html#), e.g. only docs, sheets, or PDFs. ### Motivation The current loader will load 3 file types: doc, sheet, and pdf, but in my project I only want to load "application/vnd.google-apps.document". ### Your contribution I'm happy to contribute with a PR.
https://github.com/langchain-ai/langchain/issues/4878
https://github.com/langchain-ai/langchain/pull/4926
dfbf45f028bd282057c5d645c0ebb587fa91dda8
c06a47a691c96fd5065be691df6837143df8ef8f
"2023-05-17T19:46:54Z"
python
"2023-05-18T13:27:53Z"
langchain/document_loaders/googledrive.py
            raise ImportError(
                "You must run "
                "`pip install --upgrade "
                "google-api-python-client google-auth-httplib2 "
                "google-auth-oauthlib` "
                "to use the Google Drive loader."
            )

        creds = None
        if self.service_account_key.exists():
            return service_account.Credentials.from_service_account_file(
                str(self.service_account_key), scopes=SCOPES
            )

        if self.token_path.exists():
            creds = Credentials.from_authorized_user_file(str(self.token_path), SCOPES)

        if not creds or not creds.valid:
            if creds and creds.expired and creds.refresh_token:
                creds.refresh(Request())
            else:
                flow = InstalledAppFlow.from_client_secrets_file(
                    str(self.credentials_path), SCOPES
                )
                creds = flow.run_local_server(port=0)
            with open(self.token_path, "w") as token:
                token.write(creds.to_json())

        return creds

    def _load_sheet_from_id(self, id: str) -> List[Document]:
        """Load a sheet and all tabs from an ID."""
        from googleapiclient.discovery import build

        creds = self._load_credentials()
        sheets_service = build("sheets", "v4", credentials=creds)
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,878
Add the possibility to define what file types you want to load from a Google Drive
### Feature request It would be helpful if we could define what file types we want to load via the [Google Drive loader](https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/google_drive.html#), e.g. only docs, sheets, or PDFs. ### Motivation The current loader will load 3 file types: doc, sheet, and pdf, but in my project I only want to load "application/vnd.google-apps.document". ### Your contribution I'm happy to contribute with a PR.
https://github.com/langchain-ai/langchain/issues/4878
https://github.com/langchain-ai/langchain/pull/4926
dfbf45f028bd282057c5d645c0ebb587fa91dda8
c06a47a691c96fd5065be691df6837143df8ef8f
"2023-05-17T19:46:54Z"
python
"2023-05-18T13:27:53Z"
langchain/document_loaders/googledrive.py
        spreadsheet = sheets_service.spreadsheets().get(spreadsheetId=id).execute()
        sheets = spreadsheet.get("sheets", [])

        documents = []
        for sheet in sheets:
            sheet_name = sheet["properties"]["title"]
            result = (
                sheets_service.spreadsheets()
                .values()
                .get(spreadsheetId=id, range=sheet_name)
                .execute()
            )
            values = result.get("values", [])

            header = values[0]
            for i, row in enumerate(values[1:], start=1):
                metadata = {
                    "source": (
                        f"https://docs.google.com/spreadsheets/d/{id}/"
                        f"edit?gid={sheet['properties']['sheetId']}"
                    ),
                    "title": f"{spreadsheet['properties']['title']} - {sheet_name}",
                    "row": i,
                }
                content = []
                for j, v in enumerate(row):
                    title = header[j].strip() if len(header) > j else ""
                    content.append(f"{title}: {v.strip()}")

                page_content = "\n".join(content)
                documents.append(Document(page_content=page_content, metadata=metadata))

        return documents

    def _load_document_from_id(self, id: str) -> Document:
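A tiny illustration (not from the source) of the row-to-text mapping above: each data row becomes one Document whose content pairs header titles with cell values.

header = ["Name", "Role"]
row = ["Ada ", "Engineer"]
content = [
    f"{header[j].strip() if len(header) > j else ''}: {v.strip()}"
    for j, v in enumerate(row)
]
print("\n".join(content))
# Name: Ada
# Role: Engineer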
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,878
Add the possibility to define what file types you want to load from a Google Drive
### Feature request It would be helpful if we could define what file types we want to load via the [Google Drive loader](https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/google_drive.html#), e.g. only docs, sheets, or PDFs. ### Motivation The current loader will load 3 file types: doc, sheet, and pdf, but in my project I only want to load "application/vnd.google-apps.document". ### Your contribution I'm happy to contribute with a PR.
https://github.com/langchain-ai/langchain/issues/4878
https://github.com/langchain-ai/langchain/pull/4926
dfbf45f028bd282057c5d645c0ebb587fa91dda8
c06a47a691c96fd5065be691df6837143df8ef8f
"2023-05-17T19:46:54Z"
python
"2023-05-18T13:27:53Z"
langchain/document_loaders/googledrive.py
"""Load a document from an ID.""" from io import BytesIO from googleapiclient.discovery import build from googleapiclient.errors import HttpError from googleapiclient.http import MediaIoBaseDownload creds = self._load_credentials() service = build("drive", "v3", credentials=creds) file = service.files().get(fileId=id, supportsAllDrives=True).execute() request = service.files().export_media(fileId=id, mimeType="text/plain") fh = BytesIO() downloader = MediaIoBaseDownload(fh, request) done = False try: while done is False: status, done = downloader.next_chunk() except HttpError as e: if e.resp.status == 404: print("File not found: {}".format(id)) else: print("An error occurred: {}".format(e)) text = fh.getvalue().decode("utf-8") metadata = { "source": f"https://docs.google.com/document/d/{id}/edit", "title": f"{file.get('name')}", } return Document(page_content=text, metadata=metadata) def _load_documents_from_folder(self, folder_id: str) -> List[Document]:
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,878
Add the possibility to define what file types you want to load from a Google Drive
### Feature request It would be helpful if we could define what file types we want to load via the [Google Drive loader](https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/google_drive.html#), e.g. only docs, sheets, or PDFs. ### Motivation The current loader will load 3 file types: doc, sheet, and pdf, but in my project I only want to load "application/vnd.google-apps.document". ### Your contribution I'm happy to contribute with a PR.
https://github.com/langchain-ai/langchain/issues/4878
https://github.com/langchain-ai/langchain/pull/4926
dfbf45f028bd282057c5d645c0ebb587fa91dda8
c06a47a691c96fd5065be691df6837143df8ef8f
"2023-05-17T19:46:54Z"
python
"2023-05-18T13:27:53Z"
langchain/document_loaders/googledrive.py
"""Load documents from a folder.""" from googleapiclient.discovery import build creds = self._load_credentials() service = build("drive", "v3", credentials=creds) files = self._fetch_files_recursive(service, folder_id) returns = [] for file in files: if file["mimeType"] == "application/vnd.google-apps.document": returns.append(self._load_document_from_id(file["id"])) elif file["mimeType"] == "application/vnd.google-apps.spreadsheet": returns.extend(self._load_sheet_from_id(file["id"])) elif file["mimeType"] == "application/pdf": returns.extend(self._load_file_from_id(file["id"])) else: pass return returns def _fetch_files_recursive(
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,878
Add the possibility to define what file types you want to load from a Google Drive
### Feature request It would be helpful if we could define what file types we want to load via the [Google Drive loader](https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/google_drive.html#), e.g. only docs, sheets, or PDFs. ### Motivation The current loader will load 3 file types: doc, sheet, and pdf, but in my project I only want to load "application/vnd.google-apps.document". ### Your contribution I'm happy to contribute with a PR.
https://github.com/langchain-ai/langchain/issues/4878
https://github.com/langchain-ai/langchain/pull/4926
dfbf45f028bd282057c5d645c0ebb587fa91dda8
c06a47a691c96fd5065be691df6837143df8ef8f
"2023-05-17T19:46:54Z"
python
"2023-05-18T13:27:53Z"
langchain/document_loaders/googledrive.py
self, service: Any, folder_id: str ) -> List[Dict[str, Union[str, List[str]]]]: """Fetch all files and subfolders recursively.""" results = ( service.files() .list( q=f"'{folder_id}' in parents", pageSize=1000, includeItemsFromAllDrives=True, supportsAllDrives=True, fields="nextPageToken, files(id, name, mimeType, parents)", ) .execute() ) files = results.get("files", []) returns = [] for file in files: if file["mimeType"] == "application/vnd.google-apps.folder": if self.recursive: returns.extend(self._fetch_files_recursive(service, file["id"])) else: returns.append(file) return returns def _load_documents_from_ids(self) -> List[Document]: """Load documents from a list of IDs.""" if not self.document_ids: raise ValueError("document_ids must be set") return [self._load_document_from_id(doc_id) for doc_id in self.document_ids] def _load_file_from_id(self, id: str) -> List[Document]:
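One thing worth noting about the chunk above: the files().list call requests nextPageToken in its fields but never consumes it, so a folder with more than 1000 entries would be silently truncated. A sketch of the pagination loop the Drive v3 API supports (pageToken/nextPageToken are documented list parameters; everything else mirrors the chunk):

```python
# Sketch: page through files().list until nextPageToken is exhausted.
# `service` and `folder_id` are as in the chunk above.
files, page_token = [], None
while True:
    results = (
        service.files()
        .list(
            q=f"'{folder_id}' in parents",
            pageSize=1000,
            pageToken=page_token,
            includeItemsFromAllDrives=True,
            supportsAllDrives=True,
            fields="nextPageToken, files(id, name, mimeType, parents)",
        )
        .execute()
    )
    files.extend(results.get("files", []))
    page_token = results.get("nextPageToken")
    if not page_token:
        break
```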
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,878
Add the possibility to define what file types you want to load from a Google Drive
### Feature request It would be helpful if we could define what file types we want to load via the [Google Drive loader](https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/google_drive.html#), e.g. only docs, sheets, or PDFs. ### Motivation The current loader will load 3 file types: doc, sheet, and pdf, but in my project I only want to load "application/vnd.google-apps.document". ### Your contribution I'm happy to contribute with a PR.
https://github.com/langchain-ai/langchain/issues/4878
https://github.com/langchain-ai/langchain/pull/4926
dfbf45f028bd282057c5d645c0ebb587fa91dda8
c06a47a691c96fd5065be691df6837143df8ef8f
"2023-05-17T19:46:54Z"
python
"2023-05-18T13:27:53Z"
langchain/document_loaders/googledrive.py
"""Load a file from an ID.""" from io import BytesIO from googleapiclient.discovery import build from googleapiclient.http import MediaIoBaseDownload creds = self._load_credentials() service = build("drive", "v3", credentials=creds) file = service.files().get(fileId=id, supportsAllDrives=True).execute() request = service.files().get_media(fileId=id) fh = BytesIO() downloader = MediaIoBaseDownload(fh, request) done = False while done is False: status, done = downloader.next_chunk() content = fh.getvalue() from PyPDF2 import PdfReader pdf_reader = PdfReader(BytesIO(content)) return [ Document( page_content=page.extract_text(), metadata={ "source": f"https://drive.google.com/file/d/{id}/view", "title": f"{file.get('name')}", "page": i, }, ) for i, page in enumerate(pdf_reader.pages) ] def _load_file_from_ids(self) -> List[Document]:
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,878
Add the possibility to define what file types you want to load from a Google Drive
### Feature request It would be helpful if we could define what file types we want to load via the [Google Drive loader](https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/google_drive.html#), e.g. only docs, sheets, or PDFs. ### Motivation The current loader will load 3 file types: doc, sheet, and pdf, but in my project I only want to load "application/vnd.google-apps.document". ### Your contribution I'm happy to contribute with a PR.
https://github.com/langchain-ai/langchain/issues/4878
https://github.com/langchain-ai/langchain/pull/4926
dfbf45f028bd282057c5d645c0ebb587fa91dda8
c06a47a691c96fd5065be691df6837143df8ef8f
"2023-05-17T19:46:54Z"
python
"2023-05-18T13:27:53Z"
langchain/document_loaders/googledrive.py
"""Load files from a list of IDs.""" if not self.file_ids: raise ValueError("file_ids must be set") docs = [] for file_id in self.file_ids: docs.extend(self._load_file_from_id(file_id)) return docs def load(self) -> List[Document]: """Load documents.""" if self.folder_id: return self._load_documents_from_folder(self.folder_id) elif self.document_ids: return self._load_documents_from_ids() else: return self._load_file_from_ids()
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,479
TextLoader: auto detect file encodings
### Feature request Allow the `TextLoader` to optionally auto-detect the loaded file's encoding. If the option is enabled, the loader will try all detected encodings in order of detection confidence, or raise an error. This also enhances the default raised exception to indicate which read path raised it. ### Motivation Permits loading large datasets of text files with unknown/arbitrary encodings. ### Your contribution Will submit a PR for this
https://github.com/langchain-ai/langchain/issues/4479
https://github.com/langchain-ai/langchain/pull/4927
8c28ad6daca3420d4428a464cd35f00df8b84f01
e46202829f30cf03ff25254adccef06184ffdcba
"2023-05-10T20:46:24Z"
python
"2023-05-18T13:55:14Z"
langchain/document_loaders/text.py
from typing import List, Optional from langchain.docstore.document import Document from langchain.document_loaders.base import BaseLoader class TextLoader(BaseLoader): """Load text files.""" def __init__(self, file_path: str, encoding: Optional[str] = None): """Initialize with file path.""" self.file_path = file_path self.encoding = encoding def load(self) -> List[Document]: """Load from file path.""" with open(self.file_path, encoding=self.encoding) as f: text = f.read() metadata = {"source": self.file_path} return [Document(page_content=text, metadata=metadata)]
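The TextLoader above hard-fails on any encoding mismatch, which is what issue 4,479 wants to relax. A minimal sketch of one way to auto-detect, using the chardet package's documented detect() call; the helper name and the utf-8 fallback are assumptions, not the merged implementation:

```python
# Hedged sketch: decode a file using chardet's highest-confidence guess.
from typing import List

import chardet  # pip install chardet

from langchain.docstore.document import Document


def load_text_with_detected_encoding(file_path: str) -> List[Document]:
    with open(file_path, "rb") as f:
        raw = f.read()
    guess = chardet.detect(raw)  # e.g. {"encoding": "utf-8", "confidence": 0.99}
    encoding = guess.get("encoding") or "utf-8"  # assumed fallback
    return [Document(page_content=raw.decode(encoding), metadata={"source": file_path})]
```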
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,628
GPT4All Python Bindings out of date [move to new multiplatform bindings]
### Feature request The official gpt4all python bindings now exist in the `gpt4all` pip package. [Langchain](https://python.langchain.com/en/latest/modules/models/llms/integrations/gpt4all.html) currently relies on the no-longer maintained pygpt4all package. Langchain should use the `gpt4all` python package with source found here: https://github.com/nomic-ai/gpt4all/tree/main/gpt4all-bindings/python ### Motivation The source at https://github.com/nomic-ai/gpt4all/tree/main/gpt4all-bindings/python supports multiple OS's and platforms (other bindings do not). Nomic AI will be officially maintaining these bindings. ### Your contribution I will be happy to review a pull request and ensure that future changes are PR'd upstream to langchains :)
https://github.com/langchain-ai/langchain/issues/4628
https://github.com/langchain-ai/langchain/pull/4567
e2d7677526bd649461db38396c0c3b21f663f10e
c9e2a0187549f6fa2661b943c13af9d63d44eee1
"2023-05-13T15:15:06Z"
python
"2023-05-18T16:38:54Z"
langchain/llms/gpt4all.py
"""Wrapper for the GPT4All model.""" from functools import partial from typing import Any, Dict, List, Mapping, Optional, Set from pydantic import Extra, Field, root_validator from langchain.callbacks.manager import CallbackManagerForLLMRun from langchain.llms.base import LLM from langchain.llms.utils import enforce_stop_tokens class GPT4All(LLM):
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,628
GPT4All Python Bindings out of date [move to new multiplatform bindings]
### Feature request The official gpt4all python bindings now exist in the `gpt4all` pip package. [Langchain](https://python.langchain.com/en/latest/modules/models/llms/integrations/gpt4all.html) currently relies on the no-longer maintained pygpt4all package. Langchain should use the `gpt4all` python package with source found here: https://github.com/nomic-ai/gpt4all/tree/main/gpt4all-bindings/python ### Motivation The source at https://github.com/nomic-ai/gpt4all/tree/main/gpt4all-bindings/python supports multiple OS's and platforms (other bindings do not). Nomic AI will be officially maintaining these bindings. ### Your contribution I will be happy to review a pull request and ensure that future changes are PR'd upstream to langchains :)
https://github.com/langchain-ai/langchain/issues/4628
https://github.com/langchain-ai/langchain/pull/4567
e2d7677526bd649461db38396c0c3b21f663f10e
c9e2a0187549f6fa2661b943c13af9d63d44eee1
"2023-05-13T15:15:06Z"
python
"2023-05-18T16:38:54Z"
langchain/llms/gpt4all.py
r"""Wrapper around GPT4All language models. To use, you should have the ``pygpt4all`` python package installed, the pre-trained model file, and the model's config information. Example: .. code-block:: python from langchain.llms import GPT4All model = GPT4All(model="./models/gpt4all-model.bin", n_ctx=512, n_threads=8) # Simplest invocation response = model("Once upon a time, ") """ model: str """Path to the pre-trained GPT4All model file.""" backend: str = Field("llama", alias="backend") n_ctx: int = Field(512, alias="n_ctx") """Token context window.""" n_parts: int = Field(-1, alias="n_parts") """Number of parts to split the model into. If -1, the number of parts is automatically determined.""" seed: int = Field(0, alias="seed") """Seed. If -1, a random seed is used.""" f16_kv: bool = Field(False, alias="f16_kv") """Use half-precision for key/value cache.""" logits_all: bool = Field(False, alias="logits_all") """Return logits for all tokens, not just the last token."""
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,628
GPT4All Python Bindings out of date [move to new multiplatform bindings]
### Feature request The official gpt4all python bindings now exist in the `gpt4all` pip package. [Langchain](https://python.langchain.com/en/latest/modules/models/llms/integrations/gpt4all.html) currently relies on the no-longer maintained pygpt4all package. Langchain should use the `gpt4all` python package with source found here: https://github.com/nomic-ai/gpt4all/tree/main/gpt4all-bindings/python ### Motivation The source at https://github.com/nomic-ai/gpt4all/tree/main/gpt4all-bindings/python supports multiple OS's and platforms (other bindings do not). Nomic AI will be officially maintaining these bindings. ### Your contribution I will be happy to review a pull request and ensure that future changes are PR'd upstream to langchains :)
https://github.com/langchain-ai/langchain/issues/4628
https://github.com/langchain-ai/langchain/pull/4567
e2d7677526bd649461db38396c0c3b21f663f10e
c9e2a0187549f6fa2661b943c13af9d63d44eee1
"2023-05-13T15:15:06Z"
python
"2023-05-18T16:38:54Z"
langchain/llms/gpt4all.py
vocab_only: bool = Field(False, alias="vocab_only") """Only load the vocabulary, no weights.""" use_mlock: bool = Field(False, alias="use_mlock") """Force system to keep model in RAM.""" embedding: bool = Field(False, alias="embedding") """Use embedding mode only.""" n_threads: Optional[int] = Field(4, alias="n_threads") """Number of threads to use.""" n_predict: Optional[int] = 256 """The maximum number of tokens to generate.""" temp: Optional[float] = 0.8 """The temperature to use for sampling.""" top_p: Optional[float] = 0.95 """The top-p value to use for sampling.""" top_k: Optional[int] = 40 """The top-k value to use for sampling.""" echo: Optional[bool] = False """Whether to echo the prompt.""" stop: Optional[List[str]] = [] """A list of strings to stop generation when encountered.""" repeat_last_n: Optional[int] = 64 "Last n tokens to penalize" repeat_penalty: Optional[float] = 1.3 """The penalty to apply to repeated tokens.""" n_batch: int = Field(1, alias="n_batch") """Batch size for prompt processing.""" streaming: bool = False """Whether to stream the results or not.""" client: Any = None class Config:
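The fields above become plain constructor keywords on the wrapper. A short instantiation sketch consistent with the docstring example earlier in this file (the model path is a placeholder):

```python
# Sketch: the pydantic fields map one-to-one onto constructor kwargs.
from langchain.llms import GPT4All

llm = GPT4All(
    model="./models/gpt4all-model.bin",  # placeholder path
    n_ctx=512,
    n_threads=8,
    temp=0.7,
)
```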
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,628
GPT4All Python Bindings out of date [move to new multiplatform bindings]
### Feature request The official gpt4all python bindings now exist in the `gpt4all` pip package. [Langchain](https://python.langchain.com/en/latest/modules/models/llms/integrations/gpt4all.html) currently relies on the no-longer maintained pygpt4all package. Langchain should use the `gpt4all` python package with source found here: https://github.com/nomic-ai/gpt4all/tree/main/gpt4all-bindings/python ### Motivation The source at https://github.com/nomic-ai/gpt4all/tree/main/gpt4all-bindings/python supports multiple OS's and platforms (other bindings do not). Nomic AI will be officially maintaining these bindings. ### Your contribution I will be happy to review a pull request and ensure that future changes are PR'd upstream to langchains :)
https://github.com/langchain-ai/langchain/issues/4628
https://github.com/langchain-ai/langchain/pull/4567
e2d7677526bd649461db38396c0c3b21f663f10e
c9e2a0187549f6fa2661b943c13af9d63d44eee1
"2023-05-13T15:15:06Z"
python
"2023-05-18T16:38:54Z"
langchain/llms/gpt4all.py
"""Configuration for this pydantic object.""" extra = Extra.forbid def _llama_default_params(self) -> Dict[str, Any]: """Get the identifying parameters.""" return { "n_predict": self.n_predict, "n_threads": self.n_threads, "repeat_last_n": self.repeat_last_n, "repeat_penalty": self.repeat_penalty, "top_k": self.top_k, "top_p": self.top_p, "temp": self.temp, } def _gptj_default_params(self) -> Dict[str, Any]: """Get the identifying parameters.""" return { "n_predict": self.n_predict, "n_threads": self.n_threads, "top_k": self.top_k, "top_p": self.top_p, "temp": self.temp, } @staticmethod def _llama_param_names() -> Set[str]:
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,628
GPT4All Python Bindings out of date [move to new multiplatform bindings]
### Feature request The official gpt4all python bindings now exist in the `gpt4all` pip package. [Langchain](https://python.langchain.com/en/latest/modules/models/llms/integrations/gpt4all.html) currently relies on the no-longer maintained pygpt4all package. Langchain should use the `gpt4all` python package with source found here: https://github.com/nomic-ai/gpt4all/tree/main/gpt4all-bindings/python ### Motivation The source at https://github.com/nomic-ai/gpt4all/tree/main/gpt4all-bindings/python supports multiple OS's and platforms (other bindings do not). Nomic AI will be officially maintaining these bindings. ### Your contribution I will be happy to review a pull request and ensure that future changes are PR'd upstream to langchains :)
https://github.com/langchain-ai/langchain/issues/4628
https://github.com/langchain-ai/langchain/pull/4567
e2d7677526bd649461db38396c0c3b21f663f10e
c9e2a0187549f6fa2661b943c13af9d63d44eee1
"2023-05-13T15:15:06Z"
python
"2023-05-18T16:38:54Z"
langchain/llms/gpt4all.py
"""Get the identifying parameters.""" return { "seed", "n_ctx", "n_parts", "f16_kv", "logits_all", "vocab_only", "use_mlock", "embedding", } @staticmethod def _gptj_param_names() -> Set[str]: """Get the identifying parameters.""" return set() @staticmethod def _model_param_names(backend: str) -> Set[str]: if backend == "llama": return GPT4All._llama_param_names() else: return GPT4All._gptj_param_names() def _default_params(self) -> Dict[str, Any]: if self.backend == "llama": return self._llama_default_params() else: return self._gptj_default_params() @root_validator() def validate_environment(cls, values: Dict) -> Dict:
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,628
GPT4All Python Bindings out of date [move to new multiplatform bindings]
### Feature request The official gpt4all python bindings now exist in the `gpt4all` pip package. [Langchain](https://python.langchain.com/en/latest/modules/models/llms/integrations/gpt4all.html) currently relies on the no-longer maintained pygpt4all package. Langchain should use the `gpt4all` python package with source found here: https://github.com/nomic-ai/gpt4all/tree/main/gpt4all-bindings/python ### Motivation The source at https://github.com/nomic-ai/gpt4all/tree/main/gpt4all-bindings/python supports multiple OS's and platforms (other bindings do not). Nomic AI will be officially maintaining these bindings. ### Your contribution I will be happy to review a pull request and ensure that future changes are PR'd upstream to langchains :)
https://github.com/langchain-ai/langchain/issues/4628
https://github.com/langchain-ai/langchain/pull/4567
e2d7677526bd649461db38396c0c3b21f663f10e
c9e2a0187549f6fa2661b943c13af9d63d44eee1
"2023-05-13T15:15:06Z"
python
"2023-05-18T16:38:54Z"
langchain/llms/gpt4all.py
"""Validate that the python package exists in the environment.""" try: backend = values["backend"] if backend == "llama": from pygpt4all import GPT4All as GPT4AllModel elif backend == "gptj": from pygpt4all import GPT4All_J as GPT4AllModel else: raise ValueError(f"Incorrect gpt4all backend {cls.backend}") model_kwargs = { k: v for k, v in values.items() if k in GPT4All._model_param_names(backend) } values["client"] = GPT4AllModel( model_path=values["model"], **model_kwargs, ) except ImportError: raise ValueError( "Could not import pygpt4all python package. " "Please install it with `pip install pygpt4all`." ) return values @property def _identifying_params(self) -> Mapping[str, Any]:
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,628
GPT4All Python Bindings out of date [move to new multiplatform bindings]
### Feature request The official gpt4all python bindings now exist in the `gpt4all` pip package. [Langchain](https://python.langchain.com/en/latest/modules/models/llms/integrations/gpt4all.html) currently relies on the no-longer maintained pygpt4all package. Langchain should use the `gpt4all` python package with source found here: https://github.com/nomic-ai/gpt4all/tree/main/gpt4all-bindings/python ### Motivation The source at https://github.com/nomic-ai/gpt4all/tree/main/gpt4all-bindings/python supports multiple OS's and platforms (other bindings do not). Nomic AI will be officially maintaining these bindings. ### Your contribution I will be happy to review a pull request and ensure that future changes are PR'd upstream to langchains :)
https://github.com/langchain-ai/langchain/issues/4628
https://github.com/langchain-ai/langchain/pull/4567
e2d7677526bd649461db38396c0c3b21f663f10e
c9e2a0187549f6fa2661b943c13af9d63d44eee1
"2023-05-13T15:15:06Z"
python
"2023-05-18T16:38:54Z"
langchain/llms/gpt4all.py
"""Get the identifying parameters.""" return { "model": self.model, **self._default_params(), **{ k: v for k, v in self.__dict__.items() if k in self._model_param_names(self.backend) }, } @property def _llm_type(self) -> str: """Return the type of llm.""" return "gpt4all" def _call(
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,628
GPT4All Python Bindings out of date [move to new multiplatform bindings]
### Feature request The official gpt4all python bindings now exist in the `gpt4all` pip package. [Langchain](https://python.langchain.com/en/latest/modules/models/llms/integrations/gpt4all.html) currently relies on the no-longer maintained pygpt4all package. Langchain should use the `gpt4all` python package with source found here: https://github.com/nomic-ai/gpt4all/tree/main/gpt4all-bindings/python ### Motivation The source at https://github.com/nomic-ai/gpt4all/tree/main/gpt4all-bindings/python supports multiple OS's and platforms (other bindings do not). Nomic AI will be officially maintaining these bindings. ### Your contribution I will be happy to review a pull request and ensure that future changes are PR'd upstream to langchains :)
https://github.com/langchain-ai/langchain/issues/4628
https://github.com/langchain-ai/langchain/pull/4567
e2d7677526bd649461db38396c0c3b21f663f10e
c9e2a0187549f6fa2661b943c13af9d63d44eee1
"2023-05-13T15:15:06Z"
python
"2023-05-18T16:38:54Z"
langchain/llms/gpt4all.py
self, prompt: str, stop: Optional[List[str]] = None, run_manager: Optional[CallbackManagerForLLMRun] = None, ) -> str: r"""Call out to GPT4All's generate method. Args: prompt: The prompt to pass into the model. stop: A list of strings to stop generation when encountered. Returns: The string generated by the model. Example: .. code-block:: python prompt = "Once upon a time, " response = model(prompt, n_predict=55) """ text_callback = None if run_manager: text_callback = partial(run_manager.on_llm_new_token, verbose=self.verbose) text = "" for token in self.client.generate(prompt, **self._default_params()): if text_callback: text_callback(token) text += token if stop is not None: text = enforce_stop_tokens(text, stop) return text
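For contrast with the pygpt4all-backed _call above, issue 4,628 proposes moving to the official gpt4all package. A hedged sketch of that package's surface as its README described it at the time; the model name is a placeholder and the exact signature should be checked against the package:

```python
# Assumed API of the `gpt4all` pip package (not pygpt4all):
# GPT4All(model_name) loads a model; generate() returns a string.
from gpt4all import GPT4All

model = GPT4All("ggml-gpt4all-j-v1.3-groovy")  # placeholder model name
print(model.generate("Once upon a time, ", max_tokens=64))
```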
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
3,839
Unable to use gpt4all model
Hi Team, I am getting the below error while trying to use the gpt4all model. Can someone please advise? Error: ``` File "/home/ubuntu/.local/share/virtualenvs/local-conversational-ai-chatbot-using-gpt4-6TvxabtR/lib/python3.10/site-packages/langchain/llms/gpt4all.py", line 181, in _call text = self.client.generate( TypeError: Model.generate() got an unexpected keyword argument 'new_text_callback' ``` Code: ``` from langchain import PromptTemplate, LLMChain from langchain.llms import GPT4All from langchain.callbacks.base import CallbackManager from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler template = """Question: {question} Answer: Let's think step by step.""" prompt = PromptTemplate(template=template, input_variables=["question"]) local_path = './models/ggjt-model.bin' # Callbacks support token-wise streaming callback_manager = CallbackManager([StreamingStdOutCallbackHandler()]) # Verbose is required to pass to the callback manager llm = GPT4All(model=local_path, callback_manager=callback_manager, verbose=True) llm_chain = LLMChain(prompt=prompt, llm=llm) question = "What NFL team won the Super Bowl in the year Justin Bieber was born?" llm_chain.run(question) ```
https://github.com/langchain-ai/langchain/issues/3839
https://github.com/langchain-ai/langchain/pull/4567
e2d7677526bd649461db38396c0c3b21f663f10e
c9e2a0187549f6fa2661b943c13af9d63d44eee1
"2023-04-30T17:49:59Z"
python
"2023-05-18T16:38:54Z"
langchain/llms/gpt4all.py
"""Wrapper for the GPT4All model.""" from functools import partial from typing import Any, Dict, List, Mapping, Optional, Set from pydantic import Extra, Field, root_validator from langchain.callbacks.manager import CallbackManagerForLLMRun from langchain.llms.base import LLM from langchain.llms.utils import enforce_stop_tokens class GPT4All(LLM):
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
3,839
Unable to use gpt4all model
Hi Team, I am getting the below error while trying to use the gpt4all model. Can someone please advise? Error: ``` File "/home/ubuntu/.local/share/virtualenvs/local-conversational-ai-chatbot-using-gpt4-6TvxabtR/lib/python3.10/site-packages/langchain/llms/gpt4all.py", line 181, in _call text = self.client.generate( TypeError: Model.generate() got an unexpected keyword argument 'new_text_callback' ``` Code: ``` from langchain import PromptTemplate, LLMChain from langchain.llms import GPT4All from langchain.callbacks.base import CallbackManager from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler template = """Question: {question} Answer: Let's think step by step.""" prompt = PromptTemplate(template=template, input_variables=["question"]) local_path = './models/ggjt-model.bin' # Callbacks support token-wise streaming callback_manager = CallbackManager([StreamingStdOutCallbackHandler()]) # Verbose is required to pass to the callback manager llm = GPT4All(model=local_path, callback_manager=callback_manager, verbose=True) llm_chain = LLMChain(prompt=prompt, llm=llm) question = "What NFL team won the Super Bowl in the year Justin Bieber was born?" llm_chain.run(question) ```
https://github.com/langchain-ai/langchain/issues/3839
https://github.com/langchain-ai/langchain/pull/4567
e2d7677526bd649461db38396c0c3b21f663f10e
c9e2a0187549f6fa2661b943c13af9d63d44eee1
"2023-04-30T17:49:59Z"
python
"2023-05-18T16:38:54Z"
langchain/llms/gpt4all.py
r"""Wrapper around GPT4All language models. To use, you should have the ``pygpt4all`` python package installed, the pre-trained model file, and the model's config information. Example: .. code-block:: python from langchain.llms import GPT4All model = GPT4All(model="./models/gpt4all-model.bin", n_ctx=512, n_threads=8) # Simplest invocation response = model("Once upon a time, ") """ model: str """Path to the pre-trained GPT4All model file.""" backend: str = Field("llama", alias="backend") n_ctx: int = Field(512, alias="n_ctx") """Token context window.""" n_parts: int = Field(-1, alias="n_parts") """Number of parts to split the model into. If -1, the number of parts is automatically determined.""" seed: int = Field(0, alias="seed") """Seed. If -1, a random seed is used.""" f16_kv: bool = Field(False, alias="f16_kv") """Use half-precision for key/value cache.""" logits_all: bool = Field(False, alias="logits_all") """Return logits for all tokens, not just the last token."""
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
3,839
Unable to use gpt4all model
Hi Team, I am getting the below error while trying to use the gpt4all model. Can someone please advise? Error: ``` File "/home/ubuntu/.local/share/virtualenvs/local-conversational-ai-chatbot-using-gpt4-6TvxabtR/lib/python3.10/site-packages/langchain/llms/gpt4all.py", line 181, in _call text = self.client.generate( TypeError: Model.generate() got an unexpected keyword argument 'new_text_callback' ``` Code: ``` from langchain import PromptTemplate, LLMChain from langchain.llms import GPT4All from langchain.callbacks.base import CallbackManager from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler template = """Question: {question} Answer: Let's think step by step.""" prompt = PromptTemplate(template=template, input_variables=["question"]) local_path = './models/ggjt-model.bin' # Callbacks support token-wise streaming callback_manager = CallbackManager([StreamingStdOutCallbackHandler()]) # Verbose is required to pass to the callback manager llm = GPT4All(model=local_path, callback_manager=callback_manager, verbose=True) llm_chain = LLMChain(prompt=prompt, llm=llm) question = "What NFL team won the Super Bowl in the year Justin Bieber was born?" llm_chain.run(question) ```
https://github.com/langchain-ai/langchain/issues/3839
https://github.com/langchain-ai/langchain/pull/4567
e2d7677526bd649461db38396c0c3b21f663f10e
c9e2a0187549f6fa2661b943c13af9d63d44eee1
"2023-04-30T17:49:59Z"
python
"2023-05-18T16:38:54Z"
langchain/llms/gpt4all.py
vocab_only: bool = Field(False, alias="vocab_only") """Only load the vocabulary, no weights.""" use_mlock: bool = Field(False, alias="use_mlock") """Force system to keep model in RAM.""" embedding: bool = Field(False, alias="embedding") """Use embedding mode only.""" n_threads: Optional[int] = Field(4, alias="n_threads") """Number of threads to use.""" n_predict: Optional[int] = 256 """The maximum number of tokens to generate.""" temp: Optional[float] = 0.8 """The temperature to use for sampling.""" top_p: Optional[float] = 0.95 """The top-p value to use for sampling.""" top_k: Optional[int] = 40 """The top-k value to use for sampling.""" echo: Optional[bool] = False """Whether to echo the prompt.""" stop: Optional[List[str]] = [] """A list of strings to stop generation when encountered.""" repeat_last_n: Optional[int] = 64 "Last n tokens to penalize" repeat_penalty: Optional[float] = 1.3 """The penalty to apply to repeated tokens.""" n_batch: int = Field(1, alias="n_batch") """Batch size for prompt processing.""" streaming: bool = False """Whether to stream the results or not.""" client: Any = None class Config:
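The stop field above is enforced after generation by enforce_stop_tokens (imported at the top of this file), which truncates the output at the first stop string. A tiny illustration:

```python
from langchain.llms.utils import enforce_stop_tokens

text = "Answer: 42\nQuestion: what next?"
print(enforce_stop_tokens(text, ["\nQuestion:"]))  # -> "Answer: 42"
```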
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
3,839
Unable to use gpt4all model
Hi Team, I am getting the below error while trying to use the gpt4all model. Can someone please advise? Error: ``` File "/home/ubuntu/.local/share/virtualenvs/local-conversational-ai-chatbot-using-gpt4-6TvxabtR/lib/python3.10/site-packages/langchain/llms/gpt4all.py", line 181, in _call text = self.client.generate( TypeError: Model.generate() got an unexpected keyword argument 'new_text_callback' ``` Code: ``` from langchain import PromptTemplate, LLMChain from langchain.llms import GPT4All from langchain.callbacks.base import CallbackManager from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler template = """Question: {question} Answer: Let's think step by step.""" prompt = PromptTemplate(template=template, input_variables=["question"]) local_path = './models/ggjt-model.bin' # Callbacks support token-wise streaming callback_manager = CallbackManager([StreamingStdOutCallbackHandler()]) # Verbose is required to pass to the callback manager llm = GPT4All(model=local_path, callback_manager=callback_manager, verbose=True) llm_chain = LLMChain(prompt=prompt, llm=llm) question = "What NFL team won the Super Bowl in the year Justin Bieber was born?" llm_chain.run(question) ```
https://github.com/langchain-ai/langchain/issues/3839
https://github.com/langchain-ai/langchain/pull/4567
e2d7677526bd649461db38396c0c3b21f663f10e
c9e2a0187549f6fa2661b943c13af9d63d44eee1
"2023-04-30T17:49:59Z"
python
"2023-05-18T16:38:54Z"
langchain/llms/gpt4all.py
"""Configuration for this pydantic object.""" extra = Extra.forbid def _llama_default_params(self) -> Dict[str, Any]: """Get the identifying parameters.""" return { "n_predict": self.n_predict, "n_threads": self.n_threads, "repeat_last_n": self.repeat_last_n, "repeat_penalty": self.repeat_penalty, "top_k": self.top_k, "top_p": self.top_p, "temp": self.temp, } def _gptj_default_params(self) -> Dict[str, Any]: """Get the identifying parameters.""" return { "n_predict": self.n_predict, "n_threads": self.n_threads, "top_k": self.top_k, "top_p": self.top_p, "temp": self.temp, } @staticmethod def _llama_param_names() -> Set[str]:
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
3,839
Unable to use gpt4all model
Hi Team, I am getting the below error while trying to use the gpt4all model. Can someone please advise? Error: ``` File "/home/ubuntu/.local/share/virtualenvs/local-conversational-ai-chatbot-using-gpt4-6TvxabtR/lib/python3.10/site-packages/langchain/llms/gpt4all.py", line 181, in _call text = self.client.generate( TypeError: Model.generate() got an unexpected keyword argument 'new_text_callback' ``` Code: ``` from langchain import PromptTemplate, LLMChain from langchain.llms import GPT4All from langchain.callbacks.base import CallbackManager from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler template = """Question: {question} Answer: Let's think step by step.""" prompt = PromptTemplate(template=template, input_variables=["question"]) local_path = './models/ggjt-model.bin' # Callbacks support token-wise streaming callback_manager = CallbackManager([StreamingStdOutCallbackHandler()]) # Verbose is required to pass to the callback manager llm = GPT4All(model=local_path, callback_manager=callback_manager, verbose=True) llm_chain = LLMChain(prompt=prompt, llm=llm) question = "What NFL team won the Super Bowl in the year Justin Bieber was born?" llm_chain.run(question) ```
https://github.com/langchain-ai/langchain/issues/3839
https://github.com/langchain-ai/langchain/pull/4567
e2d7677526bd649461db38396c0c3b21f663f10e
c9e2a0187549f6fa2661b943c13af9d63d44eee1
"2023-04-30T17:49:59Z"
python
"2023-05-18T16:38:54Z"
langchain/llms/gpt4all.py
"""Get the identifying parameters.""" return { "seed", "n_ctx", "n_parts", "f16_kv", "logits_all", "vocab_only", "use_mlock", "embedding", } @staticmethod def _gptj_param_names() -> Set[str]: """Get the identifying parameters.""" return set() @staticmethod def _model_param_names(backend: str) -> Set[str]: if backend == "llama": return GPT4All._llama_param_names() else: return GPT4All._gptj_param_names() def _default_params(self) -> Dict[str, Any]: if self.backend == "llama": return self._llama_default_params() else: return self._gptj_default_params() @root_validator() def validate_environment(cls, values: Dict) -> Dict:
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
3,839
Unable to use gpt4all model
Hi Team, I am getting the below error while trying to use the gpt4all model. Can someone please advise? Error: ``` File "/home/ubuntu/.local/share/virtualenvs/local-conversational-ai-chatbot-using-gpt4-6TvxabtR/lib/python3.10/site-packages/langchain/llms/gpt4all.py", line 181, in _call text = self.client.generate( TypeError: Model.generate() got an unexpected keyword argument 'new_text_callback' ``` Code: ``` from langchain import PromptTemplate, LLMChain from langchain.llms import GPT4All from langchain.callbacks.base import CallbackManager from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler template = """Question: {question} Answer: Let's think step by step.""" prompt = PromptTemplate(template=template, input_variables=["question"]) local_path = './models/ggjt-model.bin' # Callbacks support token-wise streaming callback_manager = CallbackManager([StreamingStdOutCallbackHandler()]) # Verbose is required to pass to the callback manager llm = GPT4All(model=local_path, callback_manager=callback_manager, verbose=True) llm_chain = LLMChain(prompt=prompt, llm=llm) question = "What NFL team won the Super Bowl in the year Justin Bieber was born?" llm_chain.run(question) ```
https://github.com/langchain-ai/langchain/issues/3839
https://github.com/langchain-ai/langchain/pull/4567
e2d7677526bd649461db38396c0c3b21f663f10e
c9e2a0187549f6fa2661b943c13af9d63d44eee1
"2023-04-30T17:49:59Z"
python
"2023-05-18T16:38:54Z"
langchain/llms/gpt4all.py
"""Validate that the python package exists in the environment.""" try: backend = values["backend"] if backend == "llama": from pygpt4all import GPT4All as GPT4AllModel elif backend == "gptj": from pygpt4all import GPT4All_J as GPT4AllModel else: raise ValueError(f"Incorrect gpt4all backend {cls.backend}") model_kwargs = { k: v for k, v in values.items() if k in GPT4All._model_param_names(backend) } values["client"] = GPT4AllModel( model_path=values["model"], **model_kwargs, ) except ImportError: raise ValueError( "Could not import pygpt4all python package. " "Please install it with `pip install pygpt4all`." ) return values @property def _identifying_params(self) -> Mapping[str, Any]:
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
3,839
Unable to use gpt4all model
Hi Team, I am getting the below error while trying to use the gpt4all model. Can someone please advise? Error: ``` File "/home/ubuntu/.local/share/virtualenvs/local-conversational-ai-chatbot-using-gpt4-6TvxabtR/lib/python3.10/site-packages/langchain/llms/gpt4all.py", line 181, in _call text = self.client.generate( TypeError: Model.generate() got an unexpected keyword argument 'new_text_callback' ``` Code: ``` from langchain import PromptTemplate, LLMChain from langchain.llms import GPT4All from langchain.callbacks.base import CallbackManager from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler template = """Question: {question} Answer: Let's think step by step.""" prompt = PromptTemplate(template=template, input_variables=["question"]) local_path = './models/ggjt-model.bin' # Callbacks support token-wise streaming callback_manager = CallbackManager([StreamingStdOutCallbackHandler()]) # Verbose is required to pass to the callback manager llm = GPT4All(model=local_path, callback_manager=callback_manager, verbose=True) llm_chain = LLMChain(prompt=prompt, llm=llm) question = "What NFL team won the Super Bowl in the year Justin Bieber was born?" llm_chain.run(question) ```
https://github.com/langchain-ai/langchain/issues/3839
https://github.com/langchain-ai/langchain/pull/4567
e2d7677526bd649461db38396c0c3b21f663f10e
c9e2a0187549f6fa2661b943c13af9d63d44eee1
"2023-04-30T17:49:59Z"
python
"2023-05-18T16:38:54Z"
langchain/llms/gpt4all.py
"""Get the identifying parameters.""" return { "model": self.model, **self._default_params(), **{ k: v for k, v in self.__dict__.items() if k in self._model_param_names(self.backend) }, } @property def _llm_type(self) -> str: """Return the type of llm.""" return "gpt4all" def _call(
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
3,839
Unable to use gpt4all model
Hi Team, I am getting the below error while trying to use the gpt4all model. Can someone please advise? Error: ``` File "/home/ubuntu/.local/share/virtualenvs/local-conversational-ai-chatbot-using-gpt4-6TvxabtR/lib/python3.10/site-packages/langchain/llms/gpt4all.py", line 181, in _call text = self.client.generate( TypeError: Model.generate() got an unexpected keyword argument 'new_text_callback' ``` Code: ``` from langchain import PromptTemplate, LLMChain from langchain.llms import GPT4All from langchain.callbacks.base import CallbackManager from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler template = """Question: {question} Answer: Let's think step by step.""" prompt = PromptTemplate(template=template, input_variables=["question"]) local_path = './models/ggjt-model.bin' # Callbacks support token-wise streaming callback_manager = CallbackManager([StreamingStdOutCallbackHandler()]) # Verbose is required to pass to the callback manager llm = GPT4All(model=local_path, callback_manager=callback_manager, verbose=True) llm_chain = LLMChain(prompt=prompt, llm=llm) question = "What NFL team won the Super Bowl in the year Justin Bieber was born?" llm_chain.run(question) ```
https://github.com/langchain-ai/langchain/issues/3839
https://github.com/langchain-ai/langchain/pull/4567
e2d7677526bd649461db38396c0c3b21f663f10e
c9e2a0187549f6fa2661b943c13af9d63d44eee1
"2023-04-30T17:49:59Z"
python
"2023-05-18T16:38:54Z"
langchain/llms/gpt4all.py
self, prompt: str, stop: Optional[List[str]] = None, run_manager: Optional[CallbackManagerForLLMRun] = None, ) -> str: r"""Call out to GPT4All's generate method. Args: prompt: The prompt to pass into the model. stop: A list of strings to stop generation when encountered. Returns: The string generated by the model. Example: .. code-block:: python prompt = "Once upon a time, " response = model(prompt, n_predict=55) """ text_callback = None if run_manager: text_callback = partial(run_manager.on_llm_new_token, verbose=self.verbose) text = "" for token in self.client.generate(prompt, **self._default_params()): if text_callback: text_callback(token) text += token if stop is not None: text = enforce_stop_tokens(text, stop) return text
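With the rewritten _call above, the new_text_callback keyword from issue 3,839's traceback no longer exists; streaming flows through the run manager instead. A hedged sketch of the updated call site for that issue's repro (the callbacks keyword matched langchain at this version; treat the import paths as assumptions elsewhere):

```python
# Sketch: stream tokens via callback handlers instead of new_text_callback.
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.llms import GPT4All

llm = GPT4All(
    model="./models/ggjt-model.bin",  # placeholder path from the issue
    callbacks=[StreamingStdOutCallbackHandler()],
    verbose=True,  # _call passes verbose through to on_llm_new_token
)
llm("Tell me a joke")
```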
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,830
GPTCache keep creating new gptcache cache_obj
### System Info Langchain Version: 0.0.170 Platform: Linux X86_64 Python: 3.9 ### Who can help? @SimFG _No response_ ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction Steps to reproduce the behaviour: ```python from gptcache import Cache from gptcache.adapter.api import init_similar_cache from langchain.cache import GPTCache # Avoid multiple caches using the same file, causing different llm model caches to affect each other def init_gptcache(cache_obj: Cache, llm: str): init_similar_cache(cache_obj=cache_obj, data_dir=f"similar_cache_{llm}") langchain.llm_cache = GPTCache(init_gptcache) llm = OpenAI(model_name="text-davinci-002", temperature=0.2) llm("tell me a joke") print("cached:", langchain.llm_cache.lookup("tell me a joke", llm_string)) # cached: None ``` the cache won't hit ### Expected behavior the gptcache should have a hit
https://github.com/langchain-ai/langchain/issues/4830
https://github.com/langchain-ai/langchain/pull/4827
c9e2a0187549f6fa2661b943c13af9d63d44eee1
a8ded21b6963b0041e9931f6e397573cb498cbaf
"2023-05-17T03:26:37Z"
python
"2023-05-18T16:42:35Z"
langchain/cache.py
"""Beta Feature: base interface for cache.""" import hashlib import inspect import json from abc import ABC, abstractmethod from typing import Any, Callable, Dict, List, Optional, Tuple, Type, Union, cast from sqlalchemy import Column, Integer, String, create_engine, select from sqlalchemy.engine.base import Engine from sqlalchemy.orm import Session try: from sqlalchemy.orm import declarative_base except ImportError: from sqlalchemy.ext.declarative import declarative_base from langchain.embeddings.base import Embeddings from langchain.schema import Generation from langchain.vectorstores.redis import Redis as RedisVectorstore RETURN_VAL_TYPE = List[Generation] def _hash(_input: str) -> str: """Use a deterministic hashing approach.""" return hashlib.md5(_input.encode()).hexdigest() class BaseCache(ABC): """Base interface for cache.""" @abstractmethod def lookup(self, prompt: str, llm_string: str) -> Optional[RETURN_VAL_TYPE]:
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,830
GPTCache keep creating new gptcache cache_obj
### System Info Langchain Version: 0.0.170 Platform: Linux X86_64 Python: 3.9 ### Who can help? @SimFG _No response_ ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction Steps to reproduce the behaviour: ```python from gptcache import Cache from gptcache.adapter.api import init_similar_cache from langchain.cache import GPTCache # Avoid multiple caches using the same file, causing different llm model caches to affect each other def init_gptcache(cache_obj: Cache, llm: str): init_similar_cache(cache_obj=cache_obj, data_dir=f"similar_cache_{llm}") langchain.llm_cache = GPTCache(init_gptcache) llm = OpenAI(model_name="text-davinci-002", temperature=0.2) llm("tell me a joke") print("cached:", langchain.llm_cache.lookup("tell me a joke", llm_string)) # cached: None ``` the cache won't hit ### Expected behavior the gptcache should have a hit
https://github.com/langchain-ai/langchain/issues/4830
https://github.com/langchain-ai/langchain/pull/4827
c9e2a0187549f6fa2661b943c13af9d63d44eee1
a8ded21b6963b0041e9931f6e397573cb498cbaf
"2023-05-17T03:26:37Z"
python
"2023-05-18T16:42:35Z"
langchain/cache.py
"""Look up based on prompt and llm_string.""" @abstractmethod def update(self, prompt: str, llm_string: str, return_val: RETURN_VAL_TYPE) -> None: """Update cache based on prompt and llm_string.""" @abstractmethod def clear(self, **kwargs: Any) -> None: """Clear cache that can take additional keyword arguments.""" class InMemoryCache(BaseCache): """Cache that stores things in memory.""" def __init__(self) -> None: """Initialize with empty cache.""" self._cache: Dict[Tuple[str, str], RETURN_VAL_TYPE] = {} def lookup(self, prompt: str, llm_string: str) -> Optional[RETURN_VAL_TYPE]: """Look up based on prompt and llm_string.""" return self._cache.get((prompt, llm_string), None) def update(self, prompt: str, llm_string: str, return_val: RETURN_VAL_TYPE) -> None: """Update cache based on prompt and llm_string.""" self._cache[(prompt, llm_string)] = return_val def clear(self, **kwargs: Any) -> None: """Clear cache.""" self._cache = {} Base = declarative_base() class FullLLMCache(Base): """SQLite table for full LLM Cache (all generations).""" __tablename__ = "full_llm_cache" prompt = Column(String, primary_key=True) llm = Column(String, primary_key=True) idx = Column(Integer, primary_key=True) response = Column(String) class SQLAlchemyCache(BaseCache):
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,830
GPTCache keep creating new gptcache cache_obj
### System Info Langchain Version: 0.0.170 Platform: Linux X86_64 Python: 3.9 ### Who can help? @SimFG _No response_ ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction Steps to reproduce the behaviour: ```python from gptcache import Cache from gptcache.adapter.api import init_similar_cache from langchain.cache import GPTCache # Avoid multiple caches using the same file, causing different llm model caches to affect each other def init_gptcache(cache_obj: Cache, llm: str): init_similar_cache(cache_obj=cache_obj, data_dir=f"similar_cache_{llm}") langchain.llm_cache = GPTCache(init_gptcache) llm = OpenAI(model_name="text-davinci-002", temperature=0.2) llm("tell me a joke") print("cached:", langchain.llm_cache.lookup("tell me a joke", llm_string)) # cached: None ``` the cache won't hit ### Expected behavior the gptcache should have a hit
https://github.com/langchain-ai/langchain/issues/4830
https://github.com/langchain-ai/langchain/pull/4827
c9e2a0187549f6fa2661b943c13af9d63d44eee1
a8ded21b6963b0041e9931f6e397573cb498cbaf
"2023-05-17T03:26:37Z"
python
"2023-05-18T16:42:35Z"
langchain/cache.py
"""Cache that uses SQAlchemy as a backend.""" def __init__(self, engine: Engine, cache_schema: Type[FullLLMCache] = FullLLMCache): """Initialize by creating all tables.""" self.engine = engine self.cache_schema = cache_schema self.cache_schema.metadata.create_all(self.engine) def lookup(self, prompt: str, llm_string: str) -> Optional[RETURN_VAL_TYPE]: """Look up based on prompt and llm_string.""" stmt = ( select(self.cache_schema.response) .where(self.cache_schema.prompt == prompt) .where(self.cache_schema.llm == llm_string) .order_by(self.cache_schema.idx) ) with Session(self.engine) as session: rows = session.execute(stmt).fetchall() if rows: return [Generation(text=row[0]) for row in rows] return None def update(self, prompt: str, llm_string: str, return_val: RETURN_VAL_TYPE) -> None: """Update based on prompt and llm_string.""" items = [ self.cache_schema(prompt=prompt, llm=llm_string, response=gen.text, idx=i) for i, gen in enumerate(return_val) ] with Session(self.engine) as session, session.begin(): for item in items: session.merge(item) def clear(self, **kwargs: Any) -> None:
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,830
GPTCache keep creating new gptcache cache_obj
### System Info Langchain Version: 0.0.170 Platform: Linux X86_64 Python: 3.9 ### Who can help? @SimFG _No response_ ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction Steps to reproduce the behaviour: ```python from gptcache import Cache from gptcache.adapter.api import init_similar_cache from langchain.cache import GPTCache # Avoid multiple caches using the same file, causing different llm model caches to affect each other def init_gptcache(cache_obj: Cache, llm: str): init_similar_cache(cache_obj=cache_obj, data_dir=f"similar_cache_{llm}") langchain.llm_cache = GPTCache(init_gptcache) llm = OpenAI(model_name="text-davinci-002", temperature=0.2) llm("tell me a joke") print("cached:", langchain.llm_cache.lookup("tell me a joke", llm_string)) # cached: None ``` the cache won't hit ### Expected behavior the gptcache should have a hit
https://github.com/langchain-ai/langchain/issues/4830
https://github.com/langchain-ai/langchain/pull/4827
c9e2a0187549f6fa2661b943c13af9d63d44eee1
a8ded21b6963b0041e9931f6e397573cb498cbaf
"2023-05-17T03:26:37Z"
python
"2023-05-18T16:42:35Z"
langchain/cache.py
"""Clear cache.""" with Session(self.engine) as session: session.execute(self.cache_schema.delete()) class SQLiteCache(SQLAlchemyCache): """Cache that uses SQLite as a backend.""" def __init__(self, database_path: str = ".langchain.db"): """Initialize by creating the engine and all tables.""" engine = create_engine(f"sqlite:///{database_path}") super().__init__(engine) class RedisCache(BaseCache): """Cache that uses Redis as a backend.""" def __init__(self, redis_: Any): """Initialize by passing in Redis instance.""" try: from redis import Redis except ImportError: raise ValueError( "Could not import redis python package. " "Please install it with `pip install redis`." ) if not isinstance(redis_, Redis): raise ValueError("Please pass in Redis object.") self.redis = redis_ def _key(self, prompt: str, llm_string: str) -> str:
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,830
GPTCache keep creating new gptcache cache_obj
### System Info Langchain Version: 0.0.170 Platform: Linux X86_64 Python: 3.9 ### Who can help? @SimFG _No response_ ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction Steps to reproduce the behaviour: ```python from gptcache import Cache from gptcache.adapter.api import init_similar_cache from langchain.cache import GPTCache # Avoid multiple caches using the same file, causing different llm model caches to affect each other def init_gptcache(cache_obj: Cache, llm: str): init_similar_cache(cache_obj=cache_obj, data_dir=f"similar_cache_{llm}") langchain.llm_cache = GPTCache(init_gptcache) llm = OpenAI(model_name="text-davinci-002", temperature=0.2) llm("tell me a joke") print("cached:", langchain.llm_cache.lookup("tell me a joke", llm_string)) # cached: None ``` the cache won't hit ### Expected behavior the gptcache should have a hit
https://github.com/langchain-ai/langchain/issues/4830
https://github.com/langchain-ai/langchain/pull/4827
c9e2a0187549f6fa2661b943c13af9d63d44eee1
a8ded21b6963b0041e9931f6e397573cb498cbaf
"2023-05-17T03:26:37Z"
python
"2023-05-18T16:42:35Z"
langchain/cache.py
"""Compute key from prompt and llm_string""" return _hash(prompt + llm_string) def lookup(self, prompt: str, llm_string: str) -> Optional[RETURN_VAL_TYPE]: """Look up based on prompt and llm_string.""" generations = [] results = self.redis.hgetall(self._key(prompt, llm_string)) if results: for _, text in results.items(): generations.append(Generation(text=text)) return generations if generations else None def update(self, prompt: str, llm_string: str, return_val: RETURN_VAL_TYPE) -> None: """Update cache based on prompt and llm_string.""" key = self._key(prompt, llm_string) self.redis.hset( key, mapping={ str(idx): generation.text for idx, generation in enumerate(return_val) }, ) def clear(self, **kwargs: Any) -> None: """Clear cache. If `asynchronous` is True, flush asynchronously.""" asynchronous = kwargs.get("asynchronous", False) self.redis.flushdb(asynchronous=asynchronous, **kwargs) class RedisSemanticCache(BaseCache):
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,830
GPTCache keep creating new gptcache cache_obj
### System Info Langchain Version: 0.0.170 Platform: Linux X86_64 Python: 3.9 ### Who can help? @SimFG _No response_ ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction Steps to reproduce the behaviour: ```python from gptcache import Cache from gptcache.adapter.api import init_similar_cache from langchain.cache import GPTCache # Avoid multiple caches using the same file, causing different llm model caches to affect each other def init_gptcache(cache_obj: Cache, llm: str): init_similar_cache(cache_obj=cache_obj, data_dir=f"similar_cache_{llm}") langchain.llm_cache = GPTCache(init_gptcache) llm = OpenAI(model_name="text-davinci-002", temperature=0.2) llm("tell me a joke") print("cached:", langchain.llm_cache.lookup("tell me a joke", llm_string)) # cached: None ``` the cache won't hit ### Expected behavior the gptcache should have a hit
https://github.com/langchain-ai/langchain/issues/4830
https://github.com/langchain-ai/langchain/pull/4827
c9e2a0187549f6fa2661b943c13af9d63d44eee1
a8ded21b6963b0041e9931f6e397573cb498cbaf
"2023-05-17T03:26:37Z"
python
"2023-05-18T16:42:35Z"
langchain/cache.py
"""Cache that uses Redis as a vector-store backend.""" def __init__( self, redis_url: str, embedding: Embeddings, score_threshold: float = 0.2 ): """Initialize by passing in the `init` GPTCache func Args: redis_url (str): URL to connect to Redis. embedding (Embedding): Embedding provider for semantic encoding and search. score_threshold (float, 0.2): Example: .. code-block:: python import langchain from langchain.cache import RedisSemanticCache from langchain.embeddings import OpenAIEmbeddings langchain.llm_cache = RedisSemanticCache( redis_url="redis://localhost:6379", embedding=OpenAIEmbeddings() ) """ self._cache_dict: Dict[str, RedisVectorstore] = {} self.redis_url = redis_url self.embedding = embedding self.score_threshold = score_threshold def _index_name(self, llm_string: str) -> str: hashed_index = _hash(llm_string) return f"cache:{hashed_index}" def _get_llm_cache(self, llm_string: str) -> RedisVectorstore:
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,830
GPTCache keep creating new gptcache cache_obj
### System Info Langchain Version: 0.0.170 Platform: Linux X86_64 Python: 3.9 ### Who can help? @SimFG _No response_ ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction Steps to reproduce the behaviour: ```python from gptcache import Cache from gptcache.adapter.api import init_similar_cache from langchain.cache import GPTCache # Avoid multiple caches using the same file, causing different llm model caches to affect each other def init_gptcache(cache_obj: Cache, llm: str): init_similar_cache(cache_obj=cache_obj, data_dir=f"similar_cache_{llm}") langchain.llm_cache = GPTCache(init_gptcache) llm = OpenAI(model_name="text-davinci-002", temperature=0.2) llm("tell me a joke") print("cached:", langchain.llm_cache.lookup("tell me a joke", llm_string)) # cached: None ``` the cache won't hit ### Expected behavior the gptcache should have a hit
https://github.com/langchain-ai/langchain/issues/4830
https://github.com/langchain-ai/langchain/pull/4827
c9e2a0187549f6fa2661b943c13af9d63d44eee1
a8ded21b6963b0041e9931f6e397573cb498cbaf
"2023-05-17T03:26:37Z"
python
"2023-05-18T16:42:35Z"
langchain/cache.py
index_name = self._index_name(llm_string) if index_name in self._cache_dict: return self._cache_dict[index_name] try: self._cache_dict[index_name] = RedisVectorstore.from_existing_index( embedding=self.embedding, index_name=index_name, redis_url=self.redis_url, ) except ValueError: redis = RedisVectorstore( embedding_function=self.embedding.embed_query, index_name=index_name, redis_url=self.redis_url, ) _embedding = self.embedding.embed_query(text="test") redis._create_index(dim=len(_embedding)) self._cache_dict[index_name] = redis return self._cache_dict[index_name] def clear(self, **kwargs: Any) -> None: """Clear semantic cache for a given llm_string.""" index_name = self._index_name(kwargs["llm_string"]) if index_name in self._cache_dict: self._cache_dict[index_name].drop_index( index_name=index_name, delete_documents=True, redis_url=self.redis_url ) del self._cache_dict[index_name] def lookup(self, prompt: str, llm_string: str) -> Optional[RETURN_VAL_TYPE]:
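The _cache_dict pattern above — build the per-index store once, then reuse it on later lookups — has the same shape as the fix for the GPTCache records below, which stop re-initialising a fresh gptcache Cache on every call. A hedged, corrected version of the issue's repro (gptcache's init_similar_cache is its documented API; the rest is a sketch):

```python
# Sketch: one gptcache Cache per llm string, initialised once and then reused.
import langchain
from gptcache import Cache
from gptcache.adapter.api import init_similar_cache
from langchain.cache import GPTCache
from langchain.llms import OpenAI


def init_gptcache(cache_obj: Cache, llm: str) -> None:
    # Separate data_dir per llm so different models don't share cache files.
    init_similar_cache(cache_obj=cache_obj, data_dir=f"similar_cache_{llm}")


langchain.llm_cache = GPTCache(init_gptcache)
llm = OpenAI(model_name="text-davinci-002", temperature=0.2)
llm("tell me a joke")
llm("tell me a joke")  # with the fix, this second call is a cache hit
```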
"""Look up based on prompt and llm_string.""" llm_cache = self._get_llm_cache(llm_string) generations = [] results = llm_cache.similarity_search_limit_score( query=prompt, k=1, score_threshold=self.score_threshold, ) if results: for document in results: for text in document.metadata["return_val"]: generations.append(Generation(text=text)) return generations if generations else None def update(self, prompt: str, llm_string: str, return_val: RETURN_VAL_TYPE) -> None: """Update cache based on prompt and llm_string.""" llm_cache = self._get_llm_cache(llm_string) metadata = { "llm_string": llm_string, "prompt": prompt, "return_val": [generation.text for generation in return_val], } llm_cache.add_texts(texts=[prompt], metadatas=[metadata]) class GPTCache(BaseCache):
"""Cache that uses GPTCache as a backend.""" def __init__( self, init_func: Union[ Callable[[Any, str], None], Callable[[Any], None], None ] = None, ): """Initialize by passing in init function (default: `None`). Args: init_func (Optional[Callable[[Any], None]]): init `GPTCache` function (default: `None`) Example: .. code-block:: python # Initialize GPTCache with a custom init function import gptcache from gptcache.processor.pre import get_prompt from gptcache.manager.factory import get_data_manager # Avoid multiple caches using the same file, causing different llm model caches to affect each other def init_gptcache(cache_obj: gptcache.Cache, llm str): cache_obj.init( pre_embedding_func=get_prompt, data_manager=manager_factory( manager="map", data_dir=f"map_cache_{llm}" ), ) langchain.llm_cache = GPTCache(init_gptcache) """
        try:
            import gptcache
        except ImportError:
            raise ValueError(
                "Could not import gptcache python package. "
                "Please install it with `pip install gptcache`."
            )

        self.init_gptcache_func: Union[
            Callable[[Any, str], None], Callable[[Any], None], None
        ] = init_func
        self.gptcache_dict: Dict[str, Any] = {}

    def _new_gptcache(self, llm_string: str) -> Any:
        """New gptcache object"""
        from gptcache import Cache
        from gptcache.manager.factory import get_data_manager
        from gptcache.processor.pre import get_prompt

        _gptcache = Cache()
        if self.init_gptcache_func is not None:
            # Support both one- and two-argument init functions.
            sig = inspect.signature(self.init_gptcache_func)
            if len(sig.parameters) == 2:
                self.init_gptcache_func(_gptcache, llm_string)
            else:
                self.init_gptcache_func(_gptcache)
        else:
            _gptcache.init(
                pre_embedding_func=get_prompt,
                data_manager=get_data_manager(data_path=llm_string),
            )
        # Remember the cache object so later calls reuse it instead of creating
        # a fresh, empty cache every time (the bug reported in issue #4830).
        self.gptcache_dict[llm_string] = _gptcache
        return _gptcache

    def _get_gptcache(self, llm_string: str) -> Any:
"""Get a cache object. When the corresponding llm model cache does not exist, it will be created.""" return self.gptcache_dict.get(llm_string, self._new_gptcache(llm_string)) def lookup(self, prompt: str, llm_string: str) -> Optional[RETURN_VAL_TYPE]: """Look up the cache data. First, retrieve the corresponding cache object using the `llm_string` parameter, and then retrieve the data from the cache based on the `prompt`. """ from gptcache.adapter.api import get _gptcache = self.gptcache_dict.get(llm_string, None) if _gptcache is None: return None res = get(prompt, cache_obj=_gptcache) if res: return [ Generation(**generation_dict) for generation_dict in json.loads(res) ] return None def update(self, prompt: str, llm_string: str, return_val: RETURN_VAL_TYPE) -> None:
"""Update cache. First, retrieve the corresponding cache object using the `llm_string` parameter, and then store the `prompt` and `return_val` in the cache object. """ from gptcache.adapter.api import put _gptcache = self._get_gptcache(llm_string) handled_data = json.dumps([generation.dict() for generation in return_val]) put(prompt, handled_data, cache_obj=_gptcache) return None def clear(self, **kwargs: Any) -> None: """Clear cache.""" from gptcache import Cache for gptcache_instance in self.gptcache_dict.values(): gptcache_instance = cast(Cache, gptcache_instance) gptcache_instance.flush() self.gptcache_dict.clear()
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
4,325
Power BI Dataset Agent Issue
### System Info

We are using the Power BI Agent guide below to try to connect to a Power BI dashboard: [Power BI Dataset Agent](https://python.langchain.com/en/latest/modules/agents/toolkits/examples/powerbi.html)

We are able to connect to the OpenAI API but are facing issues with the following line of code:

`powerbi=PowerBIDataset(dataset_id="<dataset_id>", table_names=['table1', 'table2'], credential=DefaultAzureCredential())`

Error:

> ConfigError: field "credential" not yet prepared so type is still a ForwardRef, you might need to call PowerBIDataset.update_forward_refs().

We have searched for a solution with no luck so far. Is there any configuration we are missing? Can you share more details? Is there any specific configuration or access required on the Power BI side? Thanks in advance.

### Who can help?

_No response_

### Information

- [X] The official example notebooks/scripts
- [ ] My own modified scripts

### Related Components

- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction

Same steps as mentioned in the official Power BI Dataset Agent documentation.

### Expected behavior

We should be able to connect to Power BI.
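For reference, a minimal sketch of the workaround the error message points at; the explicit `update_forward_refs` call and the `azure-identity` dependency are assumptions, not part of the original report:

```python
# Hypothetical workaround sketch: give pydantic the concrete TokenCredential
# type so it can resolve the ForwardRef on the `credential` field.
from azure.core.credentials import TokenCredential
from azure.identity import DefaultAzureCredential

from langchain.utilities.powerbi import PowerBIDataset

PowerBIDataset.update_forward_refs(TokenCredential=TokenCredential)

powerbi = PowerBIDataset(
    dataset_id="<dataset_id>",
    table_names=["table1", "table2"],
    credential=DefaultAzureCredential(),
)
```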
https://github.com/langchain-ai/langchain/issues/4325
https://github.com/langchain-ai/langchain/pull/4983
e68dfa70625b6bf7cfeb4c8da77f68069fb9cb95
06e524416c18543d5fd4dcbebb9cdf4b56c47db4
"2023-05-08T07:57:11Z"
python
"2023-05-19T15:25:52Z"
langchain/utilities/powerbi.py
"""Wrapper around a Power BI endpoint.""" from __future__ import annotations import logging import os from copy import deepcopy from typing import TYPE_CHECKING, Any, Dict, Iterable, List, Optional, Union import aiohttp import requests from aiohttp import ServerTimeoutError from pydantic import BaseModel, Field, root_validator from requests.exceptions import Timeout _LOGGER = logging.getLogger(__name__) BASE_URL = os.getenv("POWERBI_BASE_URL", "https://api.powerbi.com/v1.0/myorg") if TYPE_CHECKING: from azure.core.credentials import TokenCredential class PowerBIDataset(BaseModel):
"""Create PowerBI engine from dataset ID and credential or token. Use either the credential or a supplied token to authenticate. If both are supplied the credential is used to generate a token. The impersonated_user_name is the UPN of a user to be impersonated. If the model is not RLS enabled, this will be ignored. """ dataset_id: str table_names: List[str] group_id: Optional[str] = None credential: Optional[TokenCredential] = None token: Optional[str] = None impersonated_user_name: Optional[str] = None sample_rows_in_table_info: int = Field(default=1, gt=0, le=10) aiosession: Optional[aiohttp.ClientSession] = None schemas: Dict[str, str] = Field(default_factory=dict, init=False) class Config: """Configuration for this pydantic object.""" arbitrary_types_allowed = True @root_validator(pre=True, allow_reuse=True) def token_or_credential_present(cls, values: Dict[str, Any]) -> Dict[str, Any]: """Validate that at least one of token and credentials is present.""" if "token" in values or "credential" in values: return values raise ValueError("Please provide either a credential or a token.") @property def request_url(self) -> str:
"""Get the request url.""" if self.group_id: return f"{BASE_URL}/groups/{self.group_id}/datasets/{self.dataset_id}/executeQueries" return f"{BASE_URL}/datasets/{self.dataset_id}/executeQueries" @property def headers(self) -> Dict[str, str]: """Get the token.""" if self.token: return { "Content-Type": "application/json", "Authorization": "Bearer " + self.token, } from azure.core.exceptions import ( ClientAuthenticationError, ) if self.credential: try: token = self.credential.get_token( "https://analysis.windows.net/powerbi/api/.default" ).token return { "Content-Type": "application/json", "Authorization": "Bearer " + token, } except Exception as exc: raise ClientAuthenticationError( "Could not get a token from the supplied credentials." ) from exc raise ClientAuthenticationError("No credential or token supplied.") def get_table_names(self) -> Iterable[str]:
"""Get names of tables available.""" return self.table_names def get_schemas(self) -> str: """Get the available schema's.""" if self.schemas: return ", ".join([f"{key}: {value}" for key, value in self.schemas.items()]) return "No known schema's yet. Use the schema_powerbi tool first." @property def table_info(self) -> str:
"""Information about all tables in the database.""" return self.get_table_info() def _get_tables_to_query( self, table_names: Optional[Union[List[str], str]] = None ) -> List[str]: """Get the tables names that need to be queried.""" if table_names is not None: if ( isinstance(table_names, list) and len(table_names) > 0 and table_names[0] != "" ): return table_names if isinstance(table_names, str) and table_names != "": return [table_names] return self.table_names def _get_tables_todo(self, tables_todo: List[str]) -> List[str]: """Get the tables that still need to be queried.""" todo = deepcopy(tables_todo) for table in todo: if table not in self.table_names: _LOGGER.warning("Table %s not found in dataset.", table) todo.remove(table) continue if table in self.schemas: todo.remove(table) return todo def _get_schema_for_tables(self, table_names: List[str]) -> str:
"""Create a string of the table schemas for the supplied tables.""" schemas = [ schema for table, schema in self.schemas.items() if table in table_names ] return ", ".join(schemas) def get_table_info( self, table_names: Optional[Union[List[str], str]] = None ) -> str: """Get information about specified tables.""" tables_requested = self._get_tables_to_query(table_names) tables_todo = self._get_tables_todo(tables_requested) for table in tables_todo: if " " in table and not table.startswith("'") and not table.endswith("'"): table = f"'{table}'" try: result = self.run( f"EVALUATE TOPN({self.sample_rows_in_table_info}, {table})" ) except Timeout: _LOGGER.warning("Timeout while getting table info for %s", table) self.schemas[table] = "unknown" continue except Exception as exc: _LOGGER.warning("Error while getting table info for %s: %s", table, exc) self.schemas[table] = "unknown" continue self.schemas[table] = json_to_md(result["results"][0]["tables"][0]["rows"]) return self._get_schema_for_tables(tables_requested) async def aget_table_info(
        self, table_names: Optional[Union[List[str], str]] = None
    ) -> str:
        """Get information about specified tables."""
        tables_requested = self._get_tables_to_query(table_names)
        tables_todo = self._get_tables_todo(tables_requested)
        for table in tables_todo:
            if " " in table and not table.startswith("'") and not table.endswith("'"):
                table = f"'{table}'"
            try:
                result = await self.arun(
                    f"EVALUATE TOPN({self.sample_rows_in_table_info}, {table})"
                )
            except ServerTimeoutError:
                _LOGGER.warning("Timeout while getting table info for %s", table)
                self.schemas[table] = "unknown"
                continue
            except Exception as exc:
                _LOGGER.warning("Error while getting table info for %s: %s", table, exc)
                self.schemas[table] = "unknown"
                continue
            self.schemas[table] = json_to_md(result["results"][0]["tables"][0]["rows"])
        return self._get_schema_for_tables(tables_requested)

    def _create_json_content(self, command: str) -> dict[str, Any]:
"""Create the json content for the request.""" return { "queries": [{"query": rf"{command}"}], "impersonatedUserName": self.impersonated_user_name, "serializerSettings": {"includeNulls": True}, } def run(self, command: str) -> Any: """Execute a DAX command and return a json representing the results.""" _LOGGER.debug("Running command: %s", command) result = requests.post( self.request_url, json=self._create_json_content(command), headers=self.headers, timeout=10, ) return result.json() async def arun(self, command: str) -> Any:
"""Execute a DAX command and return the result asynchronously.""" _LOGGER.debug("Running command: %s", command) if self.aiosession: async with self.aiosession.post( self.request_url, headers=self.headers, json=self._create_json_content(command), timeout=10, ) as response: response_json = await response.json() return response_json async with aiohttp.ClientSession() as session: async with session.post( self.request_url, headers=self.headers, json=self._create_json_content(command), timeout=10, ) as response: response_json = await response.json() return response_json def json_to_md(