## Dataset schema

| Column | Dtype | Range / distinct values |
| --- | --- | --- |
| status | string | 1 distinct value |
| repo_name | string | 31 distinct values |
| repo_url | string | 31 distinct values |
| issue_id | int64 | 1 to 104k |
| title | string | length 4 to 233 |
| body | string | length 0 to 186k |
| issue_url | string | length 38 to 56 |
| pull_url | string | length 37 to 54 |
| before_fix_sha | string | length 40 |
| after_fix_sha | string | length 40 |
| report_datetime | unknown | |
| language | string | 5 distinct values |
| commit_datetime | unknown | |
| updated_file | string | length 7 to 188 |
| chunk_content | string | length 1 to 1.03M |
## Record 1: langchain-ai/langchain issue #7652

- status: closed
- repo_name: langchain-ai/langchain
- repo_url: https://github.com/langchain-ai/langchain
- issue_id: 7652
- title: SQLite LLM cache clear does not take effect

body:
### System Info

Langchain version: 0.0.231
Python version: 3.10.11

Bug: There is an issue when clearing the LLM cache for SQLAlchemy-based caches. `langchain.llm_cache.clear()` does not clear the cache for the SQLite LLM cache. Reason: it doesn't commit the deletion to the database, so the deletion doesn't take effect.

### Who can help?

@hwchase17 @ag

### Information

- [ ] The official example notebooks/scripts
- [ ] My own modified scripts

### Related Components

- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction

- Configure the SQLite LLM cache
- Call an LLM via langchain
- The SQLite database gets populated with an entry
- Call `langchain.llm_cache.clear()`
- Actual behaviour: notice that the entry is still in SQLite

### Expected behavior

- Expected behaviour: the cache database table should be empty
- issue_url: https://github.com/langchain-ai/langchain/issues/7652
- pull_url: https://github.com/langchain-ai/langchain/pull/7653
- before_fix_sha: c17a80f11c200e2f7a65b54eb2f2942b8a6ea3bd
- after_fix_sha: 24c165420827305e813f4b6d501f93d18f6d46a4
- report_datetime: 2023-07-13T12:36:48Z
- language: python
- commit_datetime: 2023-07-13T13:39:04Z
- updated_file: tests/unit_tests/test_cache.py

chunk_content:
```python
    prompt: List[BaseMessage] = [HumanMessage(content="How are you?")]
    response = "Test response"
    cached_response = "Cached test response"
    cached_message = AIMessage(content=cached_response)
    llm = FakeListChatModel(responses=[response])
    if langchain.llm_cache:
        langchain.llm_cache.update(
            prompt=dumps(prompt),
            llm_string=llm._get_llm_string(functions=[]),
            return_val=[ChatGeneration(message=cached_message)],
        )
        result = llm(prompt, functions=[])
        assert isinstance(result, AIMessage)
        assert result.content == cached_response
        result_no_params = llm(prompt)
        assert isinstance(result_no_params, AIMessage)
        assert result_no_params.content == response
    else:
        raise ValueError(
            "The cache not set. This should never happen, as the pytest fixture "
            "`set_cache_and_teardown` always sets the cache."
        )


def create_llm_string(llm: Union[BaseLLM, BaseChatModel]) -> str:
    _dict: Dict = llm.dict()
    _dict["stop"] = None
    return str(sorted([(k, v) for k, v in _dict.items()]))
```
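Record 1's body attributes the bug to a DELETE that is never committed. The sketch below shows that failure pattern in plain SQLAlchemy; the table, model, and function names are illustrative, not the actual LangChain cache code or the exact patch in the linked PR:

```python
from sqlalchemy import Column, String, create_engine, delete
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()


class LLMCacheRow(Base):
    """Illustrative stand-in for LangChain's SQLite cache table."""

    __tablename__ = "llm_cache"
    prompt = Column(String, primary_key=True)
    response = Column(String)


engine = create_engine("sqlite:///cache.db")
Base.metadata.create_all(engine)


def clear_cache() -> None:
    """Delete all cached rows."""
    with Session(engine) as session:
        session.execute(delete(LLMCacheRow))
        session.commit()  # without this commit, SQLite keeps the rows: the reported bug
```

If the `commit()` call is omitted, the session is rolled back when the context manager exits, which matches the reported behaviour of `clear()` leaving entries in place.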
## Record 2: langchain-ai/langchain issue #6198

- status: closed
- repo_name: langchain-ai/langchain
- repo_url: https://github.com/langchain-ai/langchain
- issue_id: 6198
- title: Elasticsearch: ElasticKnnSearch.from_texts throws AttributeError

body:
### System Info

Langchain version: 0.0.199
Python Version: Python 3.9.16
MacOS

@CodeDevNinja @dev2049

PR https://github.com/hwchase17/langchain/pull/5058 introduced a change to `ElasticVectorSearch.from_texts` which broke, kind of coincidentally, `ElasticKnnSearch.from_texts`.

I discovered this issue when running docs/modules/indexes/vectorstores/examples/elasticsearch.ipynb. I got to the following cell:

```python
# Test `add_texts` method
texts = ["Hello, world!", "Machine learning is fun.", "I love Python."]
knn_search.add_texts(texts)

# Test `from_texts` method
new_texts = ["This is a new text.", "Elasticsearch is powerful.", "Python is great for data analysis."]
knn_search.from_texts(new_texts, embeddings, elasticsearch_url=elasticsearch_url)
```

and it said:

```python
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
Cell In[10], line 7
      5 # Test `from_texts` method
      6 new_texts = ["This is a new text.", "Elasticsearch is powerful.", "Python is great for data analysis."]
----> 7 knn_search.from_texts(new_texts, embeddings, elasticsearch_url=elasticsearch_url)

File ~/dev/github/langchain/langchain/vectorstores/elastic_vector_search.py:296, in ElasticVectorSearch.from_texts(cls, texts, embedding, metadatas, elasticsearch_url, index_name, refresh_indices, **kwargs)
    293 index_name = index_name or uuid.uuid4().hex
    294 vectorsearch = cls(
    295     elasticsearch_url, index_name, embedding, **kwargs)
--> 296 vectorsearch.add_texts(
    297     texts, metadatas=metadatas, refresh_indices=refresh_indices
    298 )
    299 return vectorsearch

File ~/dev/github/langchain/langchain/vectorstores/elastic_vector_search.py:183, in ElasticVectorSearch.add_texts(self, texts, metadatas, refresh_indices, **kwargs)
    181 requests = []
    182 ids = []
--> 183 embeddings = self.embedding.embed_documents(list(texts))
    184 dim = len(embeddings[0])
    185 mapping = _default_text_mapping(dim)

AttributeError: 'str' object has no attribute 'embed_documents'
```

which is a pretty weird error. This is because https://github.com/cdiddy77/langchain/blob/e74733ab9e5e307fd828ea600ea929a1cb24320f/langchain/vectorstores/elastic_vector_search.py#L294 invokes the `__init__` of the calling class, in this case `ElasticKnnSearch`, which takes parameters in a very different order. This calling of the wrong `__init__` was always present, but the PR above added a subsequent call to `add_texts`, which is where the bogus embedding is referenced, causing the exception.

### Who can help?

_No response_

### Information

- [X] The official example notebooks/scripts
- [ ] My own modified scripts

### Related Components

- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction

Steps to repro:

1. Open docs/modules/indexes/vectorstores/examples/elasticsearch.ipynb
2. Modify as appropriate with elasticsearch_url and, further down, the model_id, dims, cloud_id, username, and password of the Elastic Cloud deployment
3. Run until the cell below "Test adding vectors"

### Expected behavior

Not throw an exception.
- issue_url: https://github.com/langchain-ai/langchain/issues/6198
- pull_url: https://github.com/langchain-ai/langchain/pull/6199
- before_fix_sha: 854f3fe9b1ca1c3e097cb0ccd55d1406e9c04406
- after_fix_sha: 574698a5fb2adbc4b6eb20ffe11a949a4f3b0371
- report_datetime: 2023-06-15T04:45:12Z
- language: python
- commit_datetime: 2023-07-13T23:55:20Z
- updated_file: langchain/vectorstores/elastic_vector_search.py

chunk_content:
"""Wrapper around Elasticsearch vector database.""" from __future__ import annotations import uuid from abc import ABC from typing import ( TYPE_CHECKING, Any, Dict, Iterable, List, Mapping, Optional, Tuple, Union, ) from langchain.docstore.document import Document from langchain.embeddings.base import Embeddings from langchain.utils import get_from_env from langchain.vectorstores.base import VectorStore if TYPE_CHECKING: from elasticsearch import Elasticsearch def _default_text_mapping(dim: int) -> Dict:
## Record 3: langchain-ai/langchain issue #6198 (continued)

All fields except chunk_content are identical to Record 2; the chunk below continues langchain/vectorstores/elastic_vector_search.py.

chunk_content:
```python
    return {
        "properties": {
            "text": {"type": "text"},
            "vector": {"type": "dense_vector", "dims": dim},
        }
    }


def _default_script_query(query_vector: List[float], filter: Optional[dict]) -> Dict:
    if filter:
        ((key, value),) = filter.items()
        filter = {"match": {f"metadata.{key}.keyword": f"{value}"}}
    else:
        filter = {"match_all": {}}
    return {
        "script_score": {
            "query": filter,
            "script": {
                "source": "cosineSimilarity(params.query_vector, 'vector') + 1.0",
                "params": {"query_vector": query_vector},
            },
        }
    }


class ElasticVectorSearch(VectorStore, ABC):
```
## Record 4: langchain-ai/langchain issue #6198 (continued)

All fields except chunk_content are identical to Record 2; the chunk below continues langchain/vectorstores/elastic_vector_search.py.

chunk_content:
"""Wrapper around Elasticsearch as a vector database. To connect to an Elasticsearch instance that does not require login credentials, pass the Elasticsearch URL and index name along with the embedding object to the constructor. Example: .. code-block:: python from langchain import ElasticVectorSearch from langchain.embeddings import OpenAIEmbeddings embedding = OpenAIEmbeddings() elastic_vector_search = ElasticVectorSearch( elasticsearch_url="http://localhost:9200", index_name="test_index", embedding=embedding ) To connect to an Elasticsearch instance that requires login credentials, including Elastic Cloud, use the Elasticsearch URL format https://username:password@es_host:9243. For example, to connect to Elastic Cloud, create the Elasticsearch URL with the required authentication details and pass it to the ElasticVectorSearch constructor as the named parameter elasticsearch_url. You can obtain your Elastic Cloud URL and login credentials by logging in to the Elastic Cloud console at https://cloud.elastic.co, selecting your deployment, and navigating to the "Deployments" page.
## Record 5: langchain-ai/langchain issue #6198 (continued)

All fields except chunk_content are identical to Record 2; the chunk below continues langchain/vectorstores/elastic_vector_search.py.

chunk_content:
```python
    To obtain your Elastic Cloud password for the default "elastic" user:

    1. Log in to the Elastic Cloud console at https://cloud.elastic.co
    2. Go to "Security" > "Users"
    3. Locate the "elastic" user and click "Edit"
    4. Click "Reset password"
    5. Follow the prompts to reset the password

    The format for Elastic Cloud URLs is
    https://username:password@cluster_id.region_id.gcp.cloud.es.io:9243.

    Example:
        .. code-block:: python

            from langchain import ElasticVectorSearch
            from langchain.embeddings import OpenAIEmbeddings

            embedding = OpenAIEmbeddings()

            elastic_host = "cluster_id.region_id.gcp.cloud.es.io"
            elasticsearch_url = f"https://username:password@{elastic_host}:9243"
            elastic_vector_search = ElasticVectorSearch(
                elasticsearch_url=elasticsearch_url,
                index_name="test_index",
                embedding=embedding
            )

    Args:
        elasticsearch_url (str): The URL for the Elasticsearch instance.
        index_name (str): The name of the Elasticsearch index for the embeddings.
        embedding (Embeddings): An object that provides the ability to embed text.
            It should be an instance of a class that subclasses the Embeddings
            abstract base class, such as OpenAIEmbeddings()

    Raises:
        ValueError: If the elasticsearch python package is not installed.
    """

    def __init__(
```
## Record 6: langchain-ai/langchain issue #6198 (continued)

All fields except chunk_content are identical to Record 2; the chunk below continues langchain/vectorstores/elastic_vector_search.py.

chunk_content:
```python
        self,
        elasticsearch_url: str,
        index_name: str,
        embedding: Embeddings,
        *,
        ssl_verify: Optional[Dict[str, Any]] = None,
    ):
        """Initialize with necessary components."""
        try:
            import elasticsearch
        except ImportError:
            raise ImportError(
                "Could not import elasticsearch python package. "
                "Please install it with `pip install elasticsearch`."
            )
        self.embedding = embedding
        self.index_name = index_name
        _ssl_verify = ssl_verify or {}
        try:
            self.client = elasticsearch.Elasticsearch(elasticsearch_url, **_ssl_verify)
        except ValueError as e:
            raise ValueError(
                f"Your elasticsearch client string is mis-formatted. Got error: {e} "
            )

    def add_texts(
```
## Record 7: langchain-ai/langchain issue #6198 (continued)

All fields except chunk_content are identical to Record 2; the chunk below continues langchain/vectorstores/elastic_vector_search.py.

chunk_content:
```python
        self,
        texts: Iterable[str],
        metadatas: Optional[List[dict]] = None,
        ids: Optional[List[str]] = None,
        refresh_indices: bool = True,
        **kwargs: Any,
    ) -> List[str]:
        """Run more texts through the embeddings and add to the vectorstore.

        Args:
            texts: Iterable of strings to add to the vectorstore.
            metadatas: Optional list of metadatas associated with the texts.
            ids: Optional list of unique IDs.
            refresh_indices: bool to refresh ElasticSearch indices

        Returns:
            List of ids from adding the texts into the vectorstore.
        """
        try:
            from elasticsearch.exceptions import NotFoundError
            from elasticsearch.helpers import bulk
        except ImportError:
            raise ImportError(
                "Could not import elasticsearch python package. "
```
## Record 8: langchain-ai/langchain issue #6198 (continued)

All fields except chunk_content are identical to Record 2; the chunk below continues langchain/vectorstores/elastic_vector_search.py.

chunk_content:
"Please install it with `pip install elasticsearch`." ) requests = [] ids = ids or [str(uuid.uuid4()) for _ in texts] embeddings = self.embedding.embed_documents(list(texts)) dim = len(embeddings[0]) mapping = _default_text_mapping(dim) try: self.client.indices.get(index=self.index_name) except NotFoundError: self.create_index(self.client, self.index_name, mapping) for i, text in enumerate(texts): metadata = metadatas[i] if metadatas else {} request = { "_op_type": "index", "_index": self.index_name, "vector": embeddings[i], "text": text, "metadata": metadata, "_id": ids[i], } requests.append(request) bulk(self.client, requests) if refresh_indices: self.client.indices.refresh(index=self.index_name) return ids def similarity_search(
## Record 9: langchain-ai/langchain issue #6198 (continued)

All fields except chunk_content are identical to Record 2; the chunk below continues langchain/vectorstores/elastic_vector_search.py.

chunk_content:
```python
        self, query: str, k: int = 4, filter: Optional[dict] = None, **kwargs: Any
    ) -> List[Document]:
        """Return docs most similar to query.

        Args:
            query: Text to look up documents similar to.
            k: Number of Documents to return. Defaults to 4.

        Returns:
            List of Documents most similar to the query.
        """
        docs_and_scores = self.similarity_search_with_score(query, k, filter=filter)
        documents = [d[0] for d in docs_and_scores]
        return documents

    def similarity_search_with_score(
```
## Record 10: langchain-ai/langchain issue #6198 (continued)

All fields except chunk_content are identical to Record 2; the chunk below continues langchain/vectorstores/elastic_vector_search.py.

chunk_content:
```python
        self, query: str, k: int = 4, filter: Optional[dict] = None, **kwargs: Any
    ) -> List[Tuple[Document, float]]:
        """Return docs most similar to query.

        Args:
            query: Text to look up documents similar to.
            k: Number of Documents to return. Defaults to 4.

        Returns:
            List of Documents most similar to the query.
        """
        embedding = self.embedding.embed_query(query)
        script_query = _default_script_query(embedding, filter)
        response = self.client_search(
            self.client, self.index_name, script_query, size=k
        )
        hits = [hit for hit in response["hits"]["hits"]]
        docs_and_scores = [
            (
                Document(
                    page_content=hit["_source"]["text"],
                    metadata=hit["_source"]["metadata"],
                ),
                hit["_score"],
            )
            for hit in hits
        ]
        return docs_and_scores

    @classmethod
    def from_texts(
```
## Record 11: langchain-ai/langchain issue #6198 (continued)

All fields except chunk_content are identical to Record 2; the chunk below continues langchain/vectorstores/elastic_vector_search.py.

chunk_content:
```python
        cls,
        texts: List[str],
        embedding: Embeddings,
        metadatas: Optional[List[dict]] = None,
        ids: Optional[List[str]] = None,
        elasticsearch_url: Optional[str] = None,
        index_name: Optional[str] = None,
```
## Record 12: langchain-ai/langchain issue #6198 (continued)

All fields except chunk_content are identical to Record 2; the chunk below continues langchain/vectorstores/elastic_vector_search.py.

chunk_content:
```python
        refresh_indices: bool = True,
        **kwargs: Any,
    ) -> ElasticVectorSearch:
        """Construct ElasticVectorSearch wrapper from raw documents.

        This is a user-friendly interface that:
            1. Embeds documents.
            2. Creates a new index for the embeddings in the Elasticsearch
               instance.
            3. Adds the documents to the newly created Elasticsearch index.

        This is intended to be a quick way to get started.

        Example:
            .. code-block:: python

                from langchain import ElasticVectorSearch
                from langchain.embeddings import OpenAIEmbeddings

                embeddings = OpenAIEmbeddings()
                elastic_vector_search = ElasticVectorSearch.from_texts(
                    texts,
                    embeddings,
                    elasticsearch_url="http://localhost:9200"
                )
        """
        elasticsearch_url = elasticsearch_url or get_from_env(
            "elasticsearch_url", "ELASTICSEARCH_URL"
        )
        index_name = index_name or uuid.uuid4().hex
        vectorsearch = cls(elasticsearch_url, index_name, embedding, **kwargs)
        vectorsearch.add_texts(
            texts, metadatas=metadatas, ids=ids, refresh_indices=refresh_indices
        )
        return vectorsearch

    def create_index(self, client: Any, index_name: str, mapping: Dict) -> None:
```
## Record 13: langchain-ai/langchain issue #6198 (continued)

All fields except chunk_content are identical to Record 2; the chunk below continues langchain/vectorstores/elastic_vector_search.py.

chunk_content:
```python
        version_num = client.info()["version"]["number"][0]
        version_num = int(version_num)
        if version_num >= 8:
            client.indices.create(index=index_name, mappings=mapping)
        else:
            client.indices.create(index=index_name, body={"mappings": mapping})

    def client_search(
        self, client: Any, index_name: str, script_query: Dict, size: int
    ) -> Any:
        version_num = client.info()["version"]["number"][0]
        version_num = int(version_num)
        if version_num >= 8:
            response = client.search(index=index_name, query=script_query, size=size)
        else:
            response = client.search(
                index=index_name, body={"query": script_query, "size": size}
            )
        return response

    def delete(self, ids: Optional[List[str]] = None, **kwargs: Any) -> None:
        """Delete by vector IDs.

        Args:
            ids: List of ids to delete.
        """
        if ids is None:
            raise ValueError("No ids provided to delete.")
        for id in ids:
            self.client.delete(index=self.index_name, id=id)


class ElasticKnnSearch(ElasticVectorSearch):
```
## Record 14: langchain-ai/langchain issue #6198 (continued)

All fields except chunk_content are identical to Record 2; the chunk below continues langchain/vectorstores/elastic_vector_search.py.

chunk_content:
""" A class for performing k-Nearest Neighbors (k-NN) search on an Elasticsearch index. The class is designed for a text search scenario where documents are text strings and their embeddings are vector representations of those strings. """ def __init__( self, index_name: str, embedding: Embeddings, es_connection: Optional["Elasticsearch"] = None, es_cloud_id: Optional[str] = None, es_user: Optional[str] = None, es_password: Optional[str] = None, vector_query_field: Optional[str] = "vector", query_field: Optional[str] = "text", ): """ Initializes an instance of the ElasticKnnSearch class and sets up the Elasticsearch client. Args: index_name: The name of the Elasticsearch index. embedding: An instance of the Embeddings class, used to generate vector representations of text strings. es_connection: An existing Elasticsearch connection. es_cloud_id: The Cloud ID of the Elasticsearch instance. Required if creating a new connection. es_user: The username for the Elasticsearch instance. Required if creating a new connection. es_password: The password for the Elasticsearch instance. Required if
                creating a new connection.
        """
        try:
            import elasticsearch
        except ImportError:
            raise ImportError(
                "Could not import elasticsearch python package. "
                "Please install it with `pip install elasticsearch`."
            )

        self.embedding = embedding
        self.index_name = index_name
        self.query_field = query_field
        self.vector_query_field = vector_query_field

        if es_connection is not None:
            self.client = es_connection
        else:
            if es_cloud_id and es_user and es_password:
                self.client = elasticsearch.Elasticsearch(
                    cloud_id=es_cloud_id, basic_auth=(es_user, es_password)
                )
            else:
                raise ValueError(
                    """Either provide a pre-existing Elasticsearch connection, \
or valid credentials for creating a new connection."""
                )

    @staticmethod
    def _default_knn_mapping(dims: int) -> Dict:
"""Generates a default index mapping for kNN search.""" return { "properties": { "text": {"type": "text"}, "vector": { "type": "dense_vector", "dims": dims, "index": True, "similarity": "dot_product", }, } } def _default_knn_query(
        self,
        query_vector: Optional[List[float]] = None,
        query: Optional[str] = None,
        model_id: Optional[str] = None,
        k: Optional[int] = 10,
        num_candidates: Optional[int] = 10,
    ) -> Dict:
        knn: Dict = {
            "field": self.vector_query_field,
            "k": k,
            "num_candidates": num_candidates,
        }

        if query_vector and not model_id:
            knn["query_vector"] = query_vector
        elif query and model_id:
            knn["query_vector_builder"] = {
                "text_embedding": {
                    "model_id": model_id,
                    "model_text": query,
                }
            }
        else:
            raise ValueError(
                "Either `query_vector` or `model_id` must be provided, but not both."
            )

        return knn

    def knn_search(
        self,
        query: Optional[str] = None,
        k: Optional[int] = 10,
        query_vector: Optional[List[float]] = None,
        model_id: Optional[str] = None,
        size: Optional[int] = 10,
        source: Optional[bool] = True,
        fields: Optional[
            Union[List[Mapping[str, Any]], Tuple[Mapping[str, Any], ...], None]
        ] = None,
    ) -> Dict:
        """
        Performs a k-nearest neighbor (k-NN) search on the Elasticsearch index.

        The search can be conducted using either a raw query vector or a model ID.
        The method first generates the body of the search query, which can be
        interpreted by Elasticsearch. It then performs the k-NN search on the
        Elasticsearch index and returns the results.

        Args:
            query: The query or queries to be used for the search. Required if
                `query_vector` is not provided.
            k: The number of nearest neighbors to return. Defaults to 10.
            query_vector: The query vector to be used for the search. Required if
                `query` is not provided.
            model_id: The ID of the model to use for generating the query vector, if
                `query` is provided.
            size: The number of search hits to return. Defaults to 10.
            source: Whether to include the source of each hit in the results.
            fields: The fields to include in the source of each hit. If None, all
                fields are included.
            vector_query_field: Field name to use in knn search if not default
                'vector'

        Returns:
            The search results.

        Raises:
            ValueError: If neither `query_vector` nor `model_id` is provided, or if
                both are provided.
        """
        knn_query_body = self._default_knn_query(
            query_vector=query_vector, query=query, model_id=model_id, k=k
        )

        res = self.client.search(
            index=self.index_name,
            knn=knn_query_body,
            size=size,
            source=source,
            fields=fields,
        )
        return dict(res)

    def knn_hybrid_search(
        self,
        query: Optional[str] = None,
        k: Optional[int] = 10,
        query_vector: Optional[List[float]] = None,
        model_id: Optional[str] = None,
        size: Optional[int] = 10,
        source: Optional[bool] = True,
        knn_boost: Optional[float] = 0.9,
        query_boost: Optional[float] = 0.1,
        fields: Optional[
            Union[List[Mapping[str, Any]], Tuple[Mapping[str, Any], ...], None]
        ] = None,
    ) -> Dict[Any, Any]:
        """
        Performs a hybrid k-nearest neighbor (k-NN) and text-based search on the
        Elasticsearch index.

        The search can be conducted using either a raw query vector or a model ID.
        The method first generates the body of the k-NN search query and the
        text-based query, which can be interpreted by Elasticsearch. It then
        performs the hybrid search on the Elasticsearch index and returns the
        results.

        Args:
            query: The query or queries to be used for the search. Required if
                `query_vector` is not provided.
            k: The number of nearest neighbors to return. Defaults to 10.
            query_vector: The query vector to be used for the search. Required if
                `query` is not provided.
            model_id: The ID of the model to use for generating the query vector, if
                `query` is provided.
            size: The number of search hits to return. Defaults to 10.
            source: Whether to include the source of each hit in the results.
            knn_boost: The boost factor for the k-NN part of the search.
            query_boost: The boost factor for the text-based part of the search.
            fields:
                The fields to include in the source of each hit. If None, all
                fields are included. Defaults to None.
            vector_query_field: Field name to use in knn search if not default
                'vector'
            query_field: Field name to use in search if not default 'text'

        Returns:
            The search results.

        Raises:
            ValueError: If neither `query_vector` nor `model_id` is provided, or if
                both are provided.
        """
        knn_query_body = self._default_knn_query(
            query_vector=query_vector, query=query, model_id=model_id, k=k
        )
        knn_query_body["boost"] = knn_boost

        match_query_body = {
            "match": {self.query_field: {"query": query, "boost": query_boost}}
        }

        res = self.client.search(
            index=self.index_name,
            query=match_query_body,
            knn=knn_query_body,
            fields=fields,
            size=size,
            source=source,
        )
        return dict(res)
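For reference, here is a minimal usage sketch of the class above. All credentials, the index name, and the choice of `OpenAIEmbeddings` are hypothetical placeholders; note that the constructor is keyword-based, which is why the inherited `ElasticVectorSearch.from_texts` (which passes positional arguments in a different order) fails as described in the issue.

```python
# Hypothetical usage sketch of ElasticKnnSearch (placeholder credentials).
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores.elastic_vector_search import ElasticKnnSearch

embeddings = OpenAIEmbeddings()  # any Embeddings implementation would do
knn_search = ElasticKnnSearch(
    index_name="my_knn_index",  # placeholder index name
    embedding=embeddings,
    es_cloud_id="my-cloud-id",  # placeholder Elastic Cloud ID
    es_user="elastic",  # placeholder username
    es_password="changeme",  # placeholder password
)

# _default_knn_query accepts exactly one of query_vector / (query, model_id),
# so search here with a precomputed query vector and no model_id:
results = knn_search.knn_search(
    query_vector=embeddings.embed_query("hello world"), k=5
)
```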
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
7,524
Specific name of the current chain is not displayed
### System Info
LangChain v0.0.229, Python v3.10.12, Ubuntu 20.04.2 LTS

### Who can help?
@hwchase17 @agola11

### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts

### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [X] Callbacks/Tracing
- [ ] Async

### Reproduction
I am encountering an issue where the specific name of the current chain is not being displayed in the console output, even though I have set 'verbose=True' in the MultiPromptChain and other Chains. When the program enters a new chain, it only prints 'Entering new chain...' without specifying the name of the chain. This makes it difficult to debug and understand which chain is currently being used. Could you please look into this issue and provide a way to display the name of the current chain in the console output? Thank you.

The output could be

```
> Entering new chain...

> Entering new chain...
lib/python3.10/site-packages/langchain/chains/llm.py:275: UserWarning: The predict_and_parse method is deprecated, instead pass an output parser directly to LLMChain.
  warnings.warn(

> Finished chain.
math: {'input': 'What is the derivative of a function?'}

> Entering new chain...
Prompt after formatting:
You are a very good mathematician. You are great at answering math questions. \nYou are so good because you are able to break down hard problems into their component parts, \nanswer the component parts, and then put them together to answer the broader question.

Here is a question:
What is the derivative of a function?

> Finished chain.

> Finished chain.
```

### Expected behavior

```
> Entering new MultiPromptChain chain...

> Entering new LLMRouterChain chain...
lib/python3.10/site-packages/langchain/chains/llm.py:275: UserWarning: The predict_and_parse method is deprecated, instead pass an output parser directly to LLMChain.
  warnings.warn(

> Finished chain.
math: {'input': 'What is the derivative of a function?'}

> Entering new LLMChain[math] chain...
Prompt after formatting:
You are a very good mathematician. You are great at answering math questions. \nYou are so good because you are able to break down hard problems into their component parts, \nanswer the component parts, and then put them together to answer the broader question.

Here is a question:
What is the derivative of a function?

> Finished chain.

> Finished chain.
```
https://github.com/langchain-ai/langchain/issues/7524
https://github.com/langchain-ai/langchain/pull/7687
3874bb256e09d377032ae54b1592ca3dd7cf9e4d
af6d333147db0af7d558a4a66d6c2752b6027204
"2023-07-11T08:28:40Z"
python
"2023-07-14T02:39:21Z"
langchain/callbacks/file.py
"""Callback Handler that writes to a file.""" from typing import Any, Dict, Optional, TextIO, cast from langchain.callbacks.base import BaseCallbackHandler from langchain.input import print_text from langchain.schema import AgentAction, AgentFinish class FileCallbackHandler(BaseCallbackHandler): """Callback Handler that writes to a file.""" def __init__( self, filename: str, mode: str = "a", color: Optional[str] = None ) -> None: """Initialize callback handler.""" self.file = cast(TextIO, open(filename, mode)) self.color = color def __del__(self) -> None: """Destructor to cleanup when done.""" self.file.close() def on_chain_start( self, serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs: Any ) -> None: """Print out that we are entering a chain.""" class_name = serialized["name"] print_text( f"\n\n\033[1m> Entering new {class_name} chain...\033[0m", end="\n", file=self.file, ) def on_chain_end(self, outputs: Dict[str, Any], **kwargs: Any) -> None: """Print out that we finished a chain.""" print_text("\n\033[1m> Finished chain.\033[0m", end="\n", file=self.file) def on_agent_action(
        self, action: AgentAction, color: Optional[str] = None, **kwargs: Any
    ) -> Any:
        """Run on agent action."""
        print_text(action.log, color=color or self.color, file=self.file)

    def on_tool_end(
        self,
        output: str,
        color: Optional[str] = None,
        observation_prefix: Optional[str] = None,
        llm_prefix: Optional[str] = None,
        **kwargs: Any,
    ) -> None:
        """If not the final action, print out observation."""
        if observation_prefix is not None:
            print_text(f"\n{observation_prefix}", file=self.file)
        print_text(output, color=color or self.color, file=self.file)
        if llm_prefix is not None:
            print_text(f"\n{llm_prefix}", file=self.file)

    def on_text(
        self, text: str, color: Optional[str] = None, end: str = "", **kwargs: Any
    ) -> None:
        """Run when agent ends."""
        print_text(text, color=color or self.color, end=end, file=self.file)

    def on_agent_finish(
        self, finish: AgentFinish, color: Optional[str] = None, **kwargs: Any
    ) -> None:
        """Run on agent end."""
        print_text(finish.log, color=color or self.color, end="\n", file=self.file)
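A small usage sketch of the handler above (the log file name and prompt are made up). Attaching it to a chain writes the `> Entering new {class_name} chain...` line to the file, which is exactly the chain-name visibility the issue asks for on stdout:

```python
# Hypothetical usage sketch: log chain events, including the chain's class
# name taken from serialized["name"], to a file.
from langchain.callbacks import FileCallbackHandler
from langchain.chains import LLMChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

handler = FileCallbackHandler("chain.log")  # placeholder log file
chain = LLMChain(
    llm=OpenAI(temperature=0),
    prompt=PromptTemplate.from_template("Answer briefly: {question}"),
    callbacks=[handler],
)
chain.run(question="What is the derivative of x**2?")
# chain.log now contains "> Entering new LLMChain chain..." and
# "> Finished chain." entries.
```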
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
7,542
Issue: Passing auth object to LLMRequestsChain
### Issue you'd like to raise.
Accessing many corporate resources requires special authentication, e.g. Kerberos. The `requests` library supports passing an auth object, e.g. `requests.get(url, auth=HttpNegotiateAuth(), verify=False)` to use SSPI. We're able to pass a `requests_wrapper` to `LLMRequestsChain`, but it only allows changing headers, not the actual get method that is used.

### Suggestion:
Allow more generic wrappers to be passed? Allow passing a requests-compatible auth object?
https://github.com/langchain-ai/langchain/issues/7542
https://github.com/langchain-ai/langchain/pull/7701
1e40427755f3034c5c411c1d0a921cdb3e13849d
663b0933e488383e6a9bc2a04b4b1cf866a8ea94
"2023-07-11T13:59:38Z"
python
"2023-07-14T12:38:24Z"
langchain/requests.py
"""Lightweight wrapper around requests library, with async support.""" from contextlib import asynccontextmanager from typing import Any, AsyncGenerator, Dict, Optional import aiohttp import requests from pydantic import BaseModel, Extra class Requests(BaseModel): """Wrapper around requests to handle auth and async. The main purpose of this wrapper is to handle authentication (by saving headers) and enable easy async methods on the same base object. """ headers: Optional[Dict[str, str]] = None aiosession: Optional[aiohttp.ClientSession] = None class Config: """Configuration for this pydantic object.""" extra = Extra.forbid arbitrary_types_allowed = True def get(self, url: str, **kwargs: Any) -> requests.Response: """GET the URL and return the text.""" return requests.get(url, headers=self.headers, **kwargs) def post(self, url: str, data: Dict[str, Any], **kwargs: Any) -> requests.Response: """POST to the URL and return the text.""" return requests.post(url, json=data, headers=self.headers, **kwargs) def patch(self, url: str, data: Dict[str, Any], **kwargs: Any) -> requests.Response: """PATCH the URL and return the text.""" return requests.patch(url, json=data, headers=self.headers, **kwargs) def put(self, url: str, data: Dict[str, Any], **kwargs: Any) -> requests.Response:
"""PUT the URL and return the text.""" return requests.put(url, json=data, headers=self.headers, **kwargs) def delete(self, url: str, **kwargs: Any) -> requests.Response: """DELETE the URL and return the text.""" return requests.delete(url, headers=self.headers, **kwargs) @asynccontextmanager async def _arequest( self, method: str, url: str, **kwargs: Any ) -> AsyncGenerator[aiohttp.ClientResponse, None]: """Make an async request.""" if not self.aiosession: async with aiohttp.ClientSession() as session: async with session.request( method, url, headers=self.headers, **kwargs ) as response: yield response else: async with self.aiosession.request( method, url, headers=self.headers, **kwargs ) as response: yield response @asynccontextmanager async def aget( self, url: str, **kwargs: Any ) -> AsyncGenerator[aiohttp.ClientResponse, None]: """GET the URL and return the text asynchronously.""" async with self._arequest("GET", url, **kwargs) as response: yield response @asynccontextmanager async def apost(
        self, url: str, data: Dict[str, Any], **kwargs: Any
    ) -> AsyncGenerator[aiohttp.ClientResponse, None]:
        """POST to the URL and return the text asynchronously."""
        async with self._arequest("POST", url, json=data, **kwargs) as response:
            yield response

    @asynccontextmanager
    async def apatch(
        self, url: str, data: Dict[str, Any], **kwargs: Any
    ) -> AsyncGenerator[aiohttp.ClientResponse, None]:
        """PATCH the URL and return the text asynchronously."""
        async with self._arequest("PATCH", url, json=data, **kwargs) as response:
            yield response

    @asynccontextmanager
    async def aput(
        self, url: str, data: Dict[str, Any], **kwargs: Any
    ) -> AsyncGenerator[aiohttp.ClientResponse, None]:
        """PUT the URL and return the text asynchronously."""
        async with self._arequest("PUT", url, json=data, **kwargs) as response:
            yield response

    @asynccontextmanager
    async def adelete(
        self, url: str, **kwargs: Any
    ) -> AsyncGenerator[aiohttp.ClientResponse, None]:
        """DELETE the URL and return the text asynchronously."""
        async with self._arequest("DELETE", url, **kwargs) as response:
            yield response


class TextRequestsWrapper(BaseModel):
"""Lightweight wrapper around requests library. The main purpose of this wrapper is to always return a text output. """ headers: Optional[Dict[str, str]] = None aiosession: Optional[aiohttp.ClientSession] = None class Config: """Configuration for this pydantic object.""" extra = Extra.forbid arbitrary_types_allowed = True @property def requests(self) -> Requests: return Requests(headers=self.headers, aiosession=self.aiosession) def get(self, url: str, **kwargs: Any) -> str: """GET the URL and return the text.""" return self.requests.get(url, **kwargs).text def post(self, url: str, data: Dict[str, Any], **kwargs: Any) -> str: """POST to the URL and return the text.""" return self.requests.post(url, data, **kwargs).text def patch(self, url: str, data: Dict[str, Any], **kwargs: Any) -> str:
"""PATCH the URL and return the text.""" return self.requests.patch(url, data, **kwargs).text def put(self, url: str, data: Dict[str, Any], **kwargs: Any) -> str: """PUT the URL and return the text.""" return self.requests.put(url, data, **kwargs).text def delete(self, url: str, **kwargs: Any) -> str: """DELETE the URL and return the text.""" return self.requests.delete(url, **kwargs).text async def aget(self, url: str, **kwargs: Any) -> str: """GET the URL and return the text asynchronously.""" async with self.requests.aget(url, **kwargs) as response: return await response.text() async def apost(self, url: str, data: Dict[str, Any], **kwargs: Any) -> str: """POST to the URL and return the text asynchronously.""" async with self.requests.apost(url, data, **kwargs) as response: return await response.text() async def apatch(self, url: str, data: Dict[str, Any], **kwargs: Any) -> str: """PATCH the URL and return the text asynchronously.""" async with self.requests.apatch(url, data, **kwargs) as response: return await response.text() async def aput(self, url: str, data: Dict[str, Any], **kwargs: Any) -> str: """PUT the URL and return the text asynchronously.""" async with self.requests.aput(url, data, **kwargs) as response: return await response.text() async def adelete(self, url: str, **kwargs: Any) -> str: """DELETE the URL and return the text asynchronously.""" async with self.requests.adelete(url, **kwargs) as response: return await response.text() RequestsWrapper = TextRequestsWrapper
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
7,982
TypeError: create_extraction_chain() got an unexpected keyword argument 'verbose'
### Feature request
Almost all the chains offered in the langchain framework support a verbose option, which helps developers understand what prompt is being applied under the hood and plan their work accordingly. It helps immensely while debugging. create_extraction_chain is a very helpful chain, and I found that it does not accept a verbose attribute.

### Motivation
For the many developers who are just following the official langchain documentation and not looking at the code used under the hood, this error will sound odd. Supporting this attribute will keep things consistent and improve the debugging experience of this chain.

### Your contribution
I can raise the PR for this

![Screenshot 2023-07-20 at 12 34 55 PM](https://github.com/hwchase17/langchain/assets/8801972/18b248df-1a7c-49cf-a9b1-3101e6928631)
https://github.com/langchain-ai/langchain/issues/7982
https://github.com/langchain-ai/langchain/pull/7984
812a1643db9daac573f77f7cdbce3fea90ba0507
d6493590da3977b5077c13ff3aaad591f71637d6
"2023-07-20T06:39:12Z"
python
"2023-07-20T13:52:13Z"
langchain/chains/openai_functions/extraction.py
from typing import Any, List

from pydantic import BaseModel

from langchain.chains.base import Chain
from langchain.chains.llm import LLMChain
from langchain.chains.openai_functions.utils import (
    _convert_schema,
    _resolve_schema_references,
    get_llm_kwargs,
)
from langchain.output_parsers.openai_functions import (
    JsonKeyOutputFunctionsParser,
    PydanticAttrOutputFunctionsParser,
)
from langchain.prompts import ChatPromptTemplate
from langchain.schema.language_model import BaseLanguageModel


def _get_extraction_function(entity_schema: dict) -> dict:
return { "name": "information_extraction", "description": "Extracts the relevant information from the passage.", "parameters": { "type": "object", "properties": { "info": {"type": "array", "items": _convert_schema(entity_schema)} }, "required": ["info"], }, } _EXTRACTION_TEMPLATE = """Extract and save the relevant entities mentioned\ in the following passage together with their properties. Only extract the properties mentioned in the 'information_extraction' function. If a property is not present and is not required in the function parameters, do not include it in the output. Passage: {input} """ def create_extraction_chain(schema: dict, llm: BaseLanguageModel) -> Chain:
"""Creates a chain that extracts information from a passage. Args: schema: The schema of the entities to extract. llm: The language model to use. Returns: Chain that can be used to extract information from a passage. """ function = _get_extraction_function(schema) prompt = ChatPromptTemplate.from_template(_EXTRACTION_TEMPLATE) output_parser = JsonKeyOutputFunctionsParser(key_name="info") llm_kwargs = get_llm_kwargs(function) chain = LLMChain( llm=llm, prompt=prompt, llm_kwargs=llm_kwargs, output_parser=output_parser, ) return chain def create_extraction_chain_pydantic( pydantic_schema: Any, llm: BaseLanguageModel ) -> Chain: """Creates a chain that extracts information from a passage using pydantic schema. Args: pydantic_schema: The pydantic schema of the entities to extract. llm: The language model to use. Returns: Chain that can be used to extract information from a passage. """ class PydanticSchema(BaseModel):
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
7,982
TypeError: create_extraction_chain() got an unexpected keyword argument 'verbose'
### Feature request Almost all the chains offered in the langchain framework support a verbose option, which helps developers understand what prompt is being applied under the hood and plan their work accordingly. It immensely helps while debugging. create_extraction_chain is a very helpful chain, and I found that it does not accept a verbose attribute. ### Motivation For many developers who are just following the official langchain documentation and not looking at the code used under the hood, this error will seem odd. Supporting this attribute will keep things consistent and improve the debugging experience of this chain. ### Your contribution I can raise the PR for this ![Screenshot 2023-07-20 at 12 34 55 PM](https://github.com/hwchase17/langchain/assets/8801972/18b248df-1a7c-49cf-a9b1-3101e6928631)
https://github.com/langchain-ai/langchain/issues/7982
https://github.com/langchain-ai/langchain/pull/7984
812a1643db9daac573f77f7cdbce3fea90ba0507
d6493590da3977b5077c13ff3aaad591f71637d6
"2023-07-20T06:39:12Z"
python
"2023-07-20T13:52:13Z"
langchain/chains/openai_functions/extraction.py
        info: List[pydantic_schema]

    openai_schema = pydantic_schema.schema()
    openai_schema = _resolve_schema_references(
        openai_schema, openai_schema.get("definitions", {})
    )
    function = _get_extraction_function(openai_schema)
    prompt = ChatPromptTemplate.from_template(_EXTRACTION_TEMPLATE)
    output_parser = PydanticAttrOutputFunctionsParser(
        pydantic_schema=PydanticSchema, attr_name="info"
    )
    llm_kwargs = get_llm_kwargs(function)
    chain = LLMChain(
        llm=llm,
        prompt=prompt,
        llm_kwargs=llm_kwargs,
        output_parser=output_parser,
    )
    return chain
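For orientation, a minimal usage sketch of the chain built above; the schema, passage, and `llm` here are made up:

```python
# Illustrative usage; `llm` is any chat model instance, e.g. ChatOpenAI(...).
schema = {
    "properties": {"name": {"type": "string"}, "height": {"type": "integer"}},
    "required": ["name"],
}
chain = create_extraction_chain(schema, llm)
result = chain.run("Alex is 5 feet tall. Claudia is two feet taller than Alex.")
# result is a list of dicts, one per extracted entity, e.g.
# [{"name": "Alex", "height": 5}, {"name": "Claudia", "height": 7}]
```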
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
6,462
AzureChatOpenAI Streaming causes IndexError: list index out of range
### System Info langchain-0.0.205-py3, macos ventura, python 3.11 ### Who can help? @hwchase17 / @agola11 ### Information - [x] The official example notebooks/scripts https://python.langchain.com/docs/modules/model_io/models/chat/how_to/streaming ### Related Components - [X] LLMs/Chat Models ### Reproduction ### Reproduction code ```python # test.py from langchain.chat_models import AzureChatOpenAI from langchain.chat_models import ChatOpenAI from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler from langchain.schema import ( HumanMessage, ) chat_1 = ChatOpenAI(streaming=True, callbacks=[StreamingStdOutCallbackHandler()], openai_api_key="SOME-KEY", model='gpt-3.5-turbo', temperature=0.7, request_timeout=60, max_retries=1) chat_2 = AzureChatOpenAI(streaming=True, callbacks=[StreamingStdOutCallbackHandler()], openai_api_base="https://some-org-openai.openai.azure.com/", openai_api_version="2023-06-01-preview", openai_api_key="SOME-KEY", deployment_name='gpt-3_5', temperature=0.7, request_timeout=60, max_retries=1) resp_1 = chat_1([HumanMessage(content="Write me a song about sparkling water.")]) resp_2 = chat_2([HumanMessage(content="Write me a song about sparkling water.")]) ``` ```shell python test.py ``` ### Output of command 1 (OpenAI) ```shell Verse 1: Bubbles dancing in my cup Refreshing taste, can't get enough Clear and crisp, it's always there A drink that's beyond compare Chorus: Sparkling water, oh how you shine You make my taste buds come alive With every sip, I feel so fine Sparkling water, you're one of a kind Verse 2: A drink that's light and calorie-free A healthier choice, it's plain to see A perfect thirst quencher, day or night With sparkling water, everything's right Chorus: Sparkling water, oh how you shine You make my taste buds come alive With every sip, I feel so fine Sparkling water, you're one of a kind Bridge: From the fizzy sensation to the bubbles popping You're the drink I never want to stop sipping Whether at a party or on my own Sparkling water, you're always in the zone Chorus: Sparkling water, oh how you shine You make my taste buds come alive With every sip, I feel so fine Sparkling water, you're one of a kind Outro: Sparkling water, you're my go-to A drink that always feels brand new With each sip, I'm left in awe Sparkling water, you're the perfect beverage ``` ### Output of command 2 (Azure OpenAI) ```shell raw.Traceback (most recent call last): File "/Users/someone/Development/test.py", line 29, in <module> resp_2 = chat_2([HumanMessage(content="Write me a song about sparkling water.")]) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/homebrew/lib/python3.11/site-packages/langchain/chat_models/base.py", line 208, in __call__ generation = self.generate( ^^^^^^^^^^^^^^ File "/opt/homebrew/lib/python3.11/site-packages/langchain/chat_models/base.py", line 102, in generate raise e File "/opt/homebrew/lib/python3.11/site-packages/langchain/chat_models/base.py", line 94, in generate results = [ ^ File "/opt/homebrew/lib/python3.11/site-packages/langchain/chat_models/base.py", line 95, in <listcomp> self._generate(m, stop=stop, run_manager=run_manager, **kwargs) File "/opt/homebrew/lib/python3.11/site-packages/langchain/chat_models/openai.py", line 334, in _generate role = stream_resp["choices"][0]["delta"].get("role", role) ~~~~~~~~~~~~~~~~~~~~~~^^^ IndexError: list index out of range ``` ### Expected behavior I can't find anything in existing issues or documentation stating that there is a known bug 
in the AzureOpenAI Service Streaming.
https://github.com/langchain-ai/langchain/issues/6462
https://github.com/langchain-ai/langchain/pull/8241
c1ea8da9bc2986532d6f1db810996ee72d5a6c1c
0af48b06d00b23be65d0a10ff27aff4db0f6c85f
"2023-06-20T04:57:00Z"
python
"2023-07-25T18:30:22Z"
libs/langchain/langchain/chat_models/openai.py
"""OpenAI chat wrapper.""" from __future__ import annotations import logging import sys from typing import ( TYPE_CHECKING, Any, Callable, Dict, List, Mapping, Optional, Tuple, Union, )
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
6,462
AzureChatOpenAI Streaming causes IndexError: list index out of range
### System Info langchain-0.0.205-py3, macos ventura, python 3.11 ### Who can help? @hwchase17 / @agola11 ### Information - [x] The official example notebooks/scripts https://python.langchain.com/docs/modules/model_io/models/chat/how_to/streaming ### Related Components - [X] LLMs/Chat Models ### Reproduction ### Reproduction code ```python # test.py from langchain.chat_models import AzureChatOpenAI from langchain.chat_models import ChatOpenAI from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler from langchain.schema import ( HumanMessage, ) chat_1 = ChatOpenAI(streaming=True, callbacks=[StreamingStdOutCallbackHandler()], openai_api_key="SOME-KEY", model='gpt-3.5-turbo', temperature=0.7, request_timeout=60, max_retries=1) chat_2 = AzureChatOpenAI(streaming=True, callbacks=[StreamingStdOutCallbackHandler()], openai_api_base="https://some-org-openai.openai.azure.com/", openai_api_version="2023-06-01-preview", openai_api_key="SOME-KEY", deployment_name='gpt-3_5', temperature=0.7, request_timeout=60, max_retries=1) resp_1 = chat_1([HumanMessage(content="Write me a song about sparkling water.")]) resp_2 = chat_2([HumanMessage(content="Write me a song about sparkling water.")]) ``` ```shell python test.py ``` ### Output of command 1 (OpenAI) ```shell Verse 1: Bubbles dancing in my cup Refreshing taste, can't get enough Clear and crisp, it's always there A drink that's beyond compare Chorus: Sparkling water, oh how you shine You make my taste buds come alive With every sip, I feel so fine Sparkling water, you're one of a kind Verse 2: A drink that's light and calorie-free A healthier choice, it's plain to see A perfect thirst quencher, day or night With sparkling water, everything's right Chorus: Sparkling water, oh how you shine You make my taste buds come alive With every sip, I feel so fine Sparkling water, you're one of a kind Bridge: From the fizzy sensation to the bubbles popping You're the drink I never want to stop sipping Whether at a party or on my own Sparkling water, you're always in the zone Chorus: Sparkling water, oh how you shine You make my taste buds come alive With every sip, I feel so fine Sparkling water, you're one of a kind Outro: Sparkling water, you're my go-to A drink that always feels brand new With each sip, I'm left in awe Sparkling water, you're the perfect beverage ``` ### Output of command 2 (Azure OpenAI) ```shell raw.Traceback (most recent call last): File "/Users/someone/Development/test.py", line 29, in <module> resp_2 = chat_2([HumanMessage(content="Write me a song about sparkling water.")]) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/homebrew/lib/python3.11/site-packages/langchain/chat_models/base.py", line 208, in __call__ generation = self.generate( ^^^^^^^^^^^^^^ File "/opt/homebrew/lib/python3.11/site-packages/langchain/chat_models/base.py", line 102, in generate raise e File "/opt/homebrew/lib/python3.11/site-packages/langchain/chat_models/base.py", line 94, in generate results = [ ^ File "/opt/homebrew/lib/python3.11/site-packages/langchain/chat_models/base.py", line 95, in <listcomp> self._generate(m, stop=stop, run_manager=run_manager, **kwargs) File "/opt/homebrew/lib/python3.11/site-packages/langchain/chat_models/openai.py", line 334, in _generate role = stream_resp["choices"][0]["delta"].get("role", role) ~~~~~~~~~~~~~~~~~~~~~~^^^ IndexError: list index out of range ``` ### Expected behavior I can't find anything in existing issues or documentation stating that there is a known bug 
in the AzureOpenAI Service Streaming.
https://github.com/langchain-ai/langchain/issues/6462
https://github.com/langchain-ai/langchain/pull/8241
c1ea8da9bc2986532d6f1db810996ee72d5a6c1c
0af48b06d00b23be65d0a10ff27aff4db0f6c85f
"2023-06-20T04:57:00Z"
python
"2023-07-25T18:30:22Z"
libs/langchain/langchain/chat_models/openai.py
from pydantic import Field, root_validator
from tenacity import (
    before_sleep_log,
    retry,
    retry_if_exception_type,
    stop_after_attempt,
    wait_exponential,
)

from langchain.callbacks.manager import (
    AsyncCallbackManagerForLLMRun,
    CallbackManagerForLLMRun,
)
from langchain.chat_models.base import BaseChatModel
from langchain.schema import (
    ChatGeneration,
    ChatResult,
)
from langchain.schema.messages import (
    AIMessage,
    BaseMessage,
    ChatMessage,
    FunctionMessage,
    HumanMessage,
    SystemMessage,
)
from langchain.utils import get_from_dict_or_env, get_pydantic_field_names

if TYPE_CHECKING:
    import tiktoken

logger = logging.getLogger(__name__)


def _import_tiktoken() -> Any:
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
6,462
AzureChatOpenAI Streaming causes IndexError: list index out of range
### System Info langchain-0.0.205-py3, macos ventura, python 3.11 ### Who can help? @hwchase17 / @agola11 ### Information - [x] The official example notebooks/scripts https://python.langchain.com/docs/modules/model_io/models/chat/how_to/streaming ### Related Components - [X] LLMs/Chat Models ### Reproduction ### Reproduction code ```python # test.py from langchain.chat_models import AzureChatOpenAI from langchain.chat_models import ChatOpenAI from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler from langchain.schema import ( HumanMessage, ) chat_1 = ChatOpenAI(streaming=True, callbacks=[StreamingStdOutCallbackHandler()], openai_api_key="SOME-KEY", model='gpt-3.5-turbo', temperature=0.7, request_timeout=60, max_retries=1) chat_2 = AzureChatOpenAI(streaming=True, callbacks=[StreamingStdOutCallbackHandler()], openai_api_base="https://some-org-openai.openai.azure.com/", openai_api_version="2023-06-01-preview", openai_api_key="SOME-KEY", deployment_name='gpt-3_5', temperature=0.7, request_timeout=60, max_retries=1) resp_1 = chat_1([HumanMessage(content="Write me a song about sparkling water.")]) resp_2 = chat_2([HumanMessage(content="Write me a song about sparkling water.")]) ``` ```shell python test.py ``` ### Output of command 1 (OpenAI) ```shell Verse 1: Bubbles dancing in my cup Refreshing taste, can't get enough Clear and crisp, it's always there A drink that's beyond compare Chorus: Sparkling water, oh how you shine You make my taste buds come alive With every sip, I feel so fine Sparkling water, you're one of a kind Verse 2: A drink that's light and calorie-free A healthier choice, it's plain to see A perfect thirst quencher, day or night With sparkling water, everything's right Chorus: Sparkling water, oh how you shine You make my taste buds come alive With every sip, I feel so fine Sparkling water, you're one of a kind Bridge: From the fizzy sensation to the bubbles popping You're the drink I never want to stop sipping Whether at a party or on my own Sparkling water, you're always in the zone Chorus: Sparkling water, oh how you shine You make my taste buds come alive With every sip, I feel so fine Sparkling water, you're one of a kind Outro: Sparkling water, you're my go-to A drink that always feels brand new With each sip, I'm left in awe Sparkling water, you're the perfect beverage ``` ### Output of command 2 (Azure OpenAI) ```shell raw.Traceback (most recent call last): File "/Users/someone/Development/test.py", line 29, in <module> resp_2 = chat_2([HumanMessage(content="Write me a song about sparkling water.")]) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/homebrew/lib/python3.11/site-packages/langchain/chat_models/base.py", line 208, in __call__ generation = self.generate( ^^^^^^^^^^^^^^ File "/opt/homebrew/lib/python3.11/site-packages/langchain/chat_models/base.py", line 102, in generate raise e File "/opt/homebrew/lib/python3.11/site-packages/langchain/chat_models/base.py", line 94, in generate results = [ ^ File "/opt/homebrew/lib/python3.11/site-packages/langchain/chat_models/base.py", line 95, in <listcomp> self._generate(m, stop=stop, run_manager=run_manager, **kwargs) File "/opt/homebrew/lib/python3.11/site-packages/langchain/chat_models/openai.py", line 334, in _generate role = stream_resp["choices"][0]["delta"].get("role", role) ~~~~~~~~~~~~~~~~~~~~~~^^^ IndexError: list index out of range ``` ### Expected behavior I can't find anything in existing issues or documentation stating that there is a known bug 
in the AzureOpenAI Service Streaming.
https://github.com/langchain-ai/langchain/issues/6462
https://github.com/langchain-ai/langchain/pull/8241
c1ea8da9bc2986532d6f1db810996ee72d5a6c1c
0af48b06d00b23be65d0a10ff27aff4db0f6c85f
"2023-06-20T04:57:00Z"
python
"2023-07-25T18:30:22Z"
libs/langchain/langchain/chat_models/openai.py
    try:
        import tiktoken
    except ImportError:
        raise ValueError(
            "Could not import tiktoken python package. "
            "This is needed in order to calculate get_token_ids. "
            "Please install it with `pip install tiktoken`."
        )
    return tiktoken


def _create_retry_decorator(llm: ChatOpenAI) -> Callable[[Any], Any]:
    import openai

    min_seconds = 1
    max_seconds = 60
    return retry(
        reraise=True,
        stop=stop_after_attempt(llm.max_retries),
        wait=wait_exponential(multiplier=1, min=min_seconds, max=max_seconds),
        retry=(
            retry_if_exception_type(openai.error.Timeout)
            | retry_if_exception_type(openai.error.APIError)
            | retry_if_exception_type(openai.error.APIConnectionError)
            | retry_if_exception_type(openai.error.RateLimitError)
            | retry_if_exception_type(openai.error.ServiceUnavailableError)
        ),
        before_sleep=before_sleep_log(logger, logging.WARNING),
    )


async def acompletion_with_retry(llm: ChatOpenAI, **kwargs: Any) -> Any:
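To make the retry policy above concrete, here is a stand-alone sketch of the same tenacity pattern applied to an arbitrary flaky function; `TransientError` stands in for the openai error types:

```python
import logging

from tenacity import (
    before_sleep_log,
    retry,
    retry_if_exception_type,
    stop_after_attempt,
    wait_exponential,
)

logger = logging.getLogger(__name__)


class TransientError(Exception):
    """Stand-in for openai.error.Timeout and friends."""


attempts = {"count": 0}


@retry(
    reraise=True,
    stop=stop_after_attempt(6),  # mirrors the default max_retries=6
    wait=wait_exponential(multiplier=1, min=1, max=60),
    retry=retry_if_exception_type(TransientError),
    before_sleep=before_sleep_log(logger, logging.WARNING),
)
def flaky_call() -> str:
    attempts["count"] += 1
    if attempts["count"] < 3:
        raise TransientError("simulated transient failure")
    return "ok"


print(flaky_call())  # fails twice, backs off, then prints "ok"
```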
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
6,462
AzureChatOpenAI Streaming causes IndexError: list index out of range
### System Info langchain-0.0.205-py3, macos ventura, python 3.11 ### Who can help? @hwchase17 / @agola11 ### Information - [x] The official example notebooks/scripts https://python.langchain.com/docs/modules/model_io/models/chat/how_to/streaming ### Related Components - [X] LLMs/Chat Models ### Reproduction ### Reproduction code ```python # test.py from langchain.chat_models import AzureChatOpenAI from langchain.chat_models import ChatOpenAI from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler from langchain.schema import ( HumanMessage, ) chat_1 = ChatOpenAI(streaming=True, callbacks=[StreamingStdOutCallbackHandler()], openai_api_key="SOME-KEY", model='gpt-3.5-turbo', temperature=0.7, request_timeout=60, max_retries=1) chat_2 = AzureChatOpenAI(streaming=True, callbacks=[StreamingStdOutCallbackHandler()], openai_api_base="https://some-org-openai.openai.azure.com/", openai_api_version="2023-06-01-preview", openai_api_key="SOME-KEY", deployment_name='gpt-3_5', temperature=0.7, request_timeout=60, max_retries=1) resp_1 = chat_1([HumanMessage(content="Write me a song about sparkling water.")]) resp_2 = chat_2([HumanMessage(content="Write me a song about sparkling water.")]) ``` ```shell python test.py ``` ### Output of command 1 (OpenAI) ```shell Verse 1: Bubbles dancing in my cup Refreshing taste, can't get enough Clear and crisp, it's always there A drink that's beyond compare Chorus: Sparkling water, oh how you shine You make my taste buds come alive With every sip, I feel so fine Sparkling water, you're one of a kind Verse 2: A drink that's light and calorie-free A healthier choice, it's plain to see A perfect thirst quencher, day or night With sparkling water, everything's right Chorus: Sparkling water, oh how you shine You make my taste buds come alive With every sip, I feel so fine Sparkling water, you're one of a kind Bridge: From the fizzy sensation to the bubbles popping You're the drink I never want to stop sipping Whether at a party or on my own Sparkling water, you're always in the zone Chorus: Sparkling water, oh how you shine You make my taste buds come alive With every sip, I feel so fine Sparkling water, you're one of a kind Outro: Sparkling water, you're my go-to A drink that always feels brand new With each sip, I'm left in awe Sparkling water, you're the perfect beverage ``` ### Output of command 2 (Azure OpenAI) ```shell raw.Traceback (most recent call last): File "/Users/someone/Development/test.py", line 29, in <module> resp_2 = chat_2([HumanMessage(content="Write me a song about sparkling water.")]) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/homebrew/lib/python3.11/site-packages/langchain/chat_models/base.py", line 208, in __call__ generation = self.generate( ^^^^^^^^^^^^^^ File "/opt/homebrew/lib/python3.11/site-packages/langchain/chat_models/base.py", line 102, in generate raise e File "/opt/homebrew/lib/python3.11/site-packages/langchain/chat_models/base.py", line 94, in generate results = [ ^ File "/opt/homebrew/lib/python3.11/site-packages/langchain/chat_models/base.py", line 95, in <listcomp> self._generate(m, stop=stop, run_manager=run_manager, **kwargs) File "/opt/homebrew/lib/python3.11/site-packages/langchain/chat_models/openai.py", line 334, in _generate role = stream_resp["choices"][0]["delta"].get("role", role) ~~~~~~~~~~~~~~~~~~~~~~^^^ IndexError: list index out of range ``` ### Expected behavior I can't find anything in existing issues or documentation stating that there is a known bug 
in the AzureOpenAI Service Streaming.
https://github.com/langchain-ai/langchain/issues/6462
https://github.com/langchain-ai/langchain/pull/8241
c1ea8da9bc2986532d6f1db810996ee72d5a6c1c
0af48b06d00b23be65d0a10ff27aff4db0f6c85f
"2023-06-20T04:57:00Z"
python
"2023-07-25T18:30:22Z"
libs/langchain/langchain/chat_models/openai.py
"""Use tenacity to retry the async completion call.""" retry_decorator = _create_retry_decorator(llm) @retry_decorator async def _completion_with_retry(**kwargs: Any) -> Any: return await llm.client.acreate(**kwargs) return await _completion_with_retry(**kwargs) def _convert_dict_to_message(_dict: Mapping[str, Any]) -> BaseMessage: role = _dict["role"] if role == "user": return HumanMessage(content=_dict["content"]) elif role == "assistant": content = _dict.get("content", "") or "" if _dict.get("function_call"): additional_kwargs = {"function_call": dict(_dict["function_call"])} else: additional_kwargs = {} return AIMessage(content=content, additional_kwargs=additional_kwargs) elif role == "system": return SystemMessage(content=_dict["content"]) elif role == "function": return FunctionMessage(content=_dict["content"], name=_dict["name"]) else: return ChatMessage(content=_dict["content"], role=role) def _convert_message_to_dict(message: BaseMessage) -> dict:
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
6,462
AzureChatOpenAI Streaming causes IndexError: list index out of range
### System Info langchain-0.0.205-py3, macos ventura, python 3.11 ### Who can help? @hwchase17 / @agola11 ### Information - [x] The official example notebooks/scripts https://python.langchain.com/docs/modules/model_io/models/chat/how_to/streaming ### Related Components - [X] LLMs/Chat Models ### Reproduction ### Reproduction code ```python # test.py from langchain.chat_models import AzureChatOpenAI from langchain.chat_models import ChatOpenAI from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler from langchain.schema import ( HumanMessage, ) chat_1 = ChatOpenAI(streaming=True, callbacks=[StreamingStdOutCallbackHandler()], openai_api_key="SOME-KEY", model='gpt-3.5-turbo', temperature=0.7, request_timeout=60, max_retries=1) chat_2 = AzureChatOpenAI(streaming=True, callbacks=[StreamingStdOutCallbackHandler()], openai_api_base="https://some-org-openai.openai.azure.com/", openai_api_version="2023-06-01-preview", openai_api_key="SOME-KEY", deployment_name='gpt-3_5', temperature=0.7, request_timeout=60, max_retries=1) resp_1 = chat_1([HumanMessage(content="Write me a song about sparkling water.")]) resp_2 = chat_2([HumanMessage(content="Write me a song about sparkling water.")]) ``` ```shell python test.py ``` ### Output of command 1 (OpenAI) ```shell Verse 1: Bubbles dancing in my cup Refreshing taste, can't get enough Clear and crisp, it's always there A drink that's beyond compare Chorus: Sparkling water, oh how you shine You make my taste buds come alive With every sip, I feel so fine Sparkling water, you're one of a kind Verse 2: A drink that's light and calorie-free A healthier choice, it's plain to see A perfect thirst quencher, day or night With sparkling water, everything's right Chorus: Sparkling water, oh how you shine You make my taste buds come alive With every sip, I feel so fine Sparkling water, you're one of a kind Bridge: From the fizzy sensation to the bubbles popping You're the drink I never want to stop sipping Whether at a party or on my own Sparkling water, you're always in the zone Chorus: Sparkling water, oh how you shine You make my taste buds come alive With every sip, I feel so fine Sparkling water, you're one of a kind Outro: Sparkling water, you're my go-to A drink that always feels brand new With each sip, I'm left in awe Sparkling water, you're the perfect beverage ``` ### Output of command 2 (Azure OpenAI) ```shell raw.Traceback (most recent call last): File "/Users/someone/Development/test.py", line 29, in <module> resp_2 = chat_2([HumanMessage(content="Write me a song about sparkling water.")]) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/homebrew/lib/python3.11/site-packages/langchain/chat_models/base.py", line 208, in __call__ generation = self.generate( ^^^^^^^^^^^^^^ File "/opt/homebrew/lib/python3.11/site-packages/langchain/chat_models/base.py", line 102, in generate raise e File "/opt/homebrew/lib/python3.11/site-packages/langchain/chat_models/base.py", line 94, in generate results = [ ^ File "/opt/homebrew/lib/python3.11/site-packages/langchain/chat_models/base.py", line 95, in <listcomp> self._generate(m, stop=stop, run_manager=run_manager, **kwargs) File "/opt/homebrew/lib/python3.11/site-packages/langchain/chat_models/openai.py", line 334, in _generate role = stream_resp["choices"][0]["delta"].get("role", role) ~~~~~~~~~~~~~~~~~~~~~~^^^ IndexError: list index out of range ``` ### Expected behavior I can't find anything in existing issues or documentation stating that there is a known bug 
in the AzureOpenAI Service Streaming.
https://github.com/langchain-ai/langchain/issues/6462
https://github.com/langchain-ai/langchain/pull/8241
c1ea8da9bc2986532d6f1db810996ee72d5a6c1c
0af48b06d00b23be65d0a10ff27aff4db0f6c85f
"2023-06-20T04:57:00Z"
python
"2023-07-25T18:30:22Z"
libs/langchain/langchain/chat_models/openai.py
    if isinstance(message, ChatMessage):
        message_dict = {"role": message.role, "content": message.content}
    elif isinstance(message, HumanMessage):
        message_dict = {"role": "user", "content": message.content}
    elif isinstance(message, AIMessage):
        message_dict = {"role": "assistant", "content": message.content}
        if "function_call" in message.additional_kwargs:
            message_dict["function_call"] = message.additional_kwargs["function_call"]
    elif isinstance(message, SystemMessage):
        message_dict = {"role": "system", "content": message.content}
    elif isinstance(message, FunctionMessage):
        message_dict = {
            "role": "function",
            "content": message.content,
            "name": message.name,
        }
    else:
        raise ValueError(f"Got unknown type {message}")
    if "name" in message.additional_kwargs:
        message_dict["name"] = message.additional_kwargs["name"]
    return message_dict


class ChatOpenAI(BaseChatModel):
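A quick round trip through the two converters above (illustrative values; both functions are module-private helpers of this file):

```python
from langchain.schema.messages import AIMessage, HumanMessage

wire = _convert_message_to_dict(HumanMessage(content="hi"))
assert wire == {"role": "user", "content": "hi"}

msg = _convert_dict_to_message({"role": "assistant", "content": "hello"})
assert isinstance(msg, AIMessage) and msg.content == "hello"
```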
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
6,462
AzureChatOpenAI Streaming causes IndexError: list index out of range
### System Info langchain-0.0.205-py3, macos ventura, python 3.11 ### Who can help? @hwchase17 / @agola11 ### Information - [x] The official example notebooks/scripts https://python.langchain.com/docs/modules/model_io/models/chat/how_to/streaming ### Related Components - [X] LLMs/Chat Models ### Reproduction ### Reproduction code ```python # test.py from langchain.chat_models import AzureChatOpenAI from langchain.chat_models import ChatOpenAI from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler from langchain.schema import ( HumanMessage, ) chat_1 = ChatOpenAI(streaming=True, callbacks=[StreamingStdOutCallbackHandler()], openai_api_key="SOME-KEY", model='gpt-3.5-turbo', temperature=0.7, request_timeout=60, max_retries=1) chat_2 = AzureChatOpenAI(streaming=True, callbacks=[StreamingStdOutCallbackHandler()], openai_api_base="https://some-org-openai.openai.azure.com/", openai_api_version="2023-06-01-preview", openai_api_key="SOME-KEY", deployment_name='gpt-3_5', temperature=0.7, request_timeout=60, max_retries=1) resp_1 = chat_1([HumanMessage(content="Write me a song about sparkling water.")]) resp_2 = chat_2([HumanMessage(content="Write me a song about sparkling water.")]) ``` ```shell python test.py ``` ### Output of command 1 (OpenAI) ```shell Verse 1: Bubbles dancing in my cup Refreshing taste, can't get enough Clear and crisp, it's always there A drink that's beyond compare Chorus: Sparkling water, oh how you shine You make my taste buds come alive With every sip, I feel so fine Sparkling water, you're one of a kind Verse 2: A drink that's light and calorie-free A healthier choice, it's plain to see A perfect thirst quencher, day or night With sparkling water, everything's right Chorus: Sparkling water, oh how you shine You make my taste buds come alive With every sip, I feel so fine Sparkling water, you're one of a kind Bridge: From the fizzy sensation to the bubbles popping You're the drink I never want to stop sipping Whether at a party or on my own Sparkling water, you're always in the zone Chorus: Sparkling water, oh how you shine You make my taste buds come alive With every sip, I feel so fine Sparkling water, you're one of a kind Outro: Sparkling water, you're my go-to A drink that always feels brand new With each sip, I'm left in awe Sparkling water, you're the perfect beverage ``` ### Output of command 2 (Azure OpenAI) ```shell raw.Traceback (most recent call last): File "/Users/someone/Development/test.py", line 29, in <module> resp_2 = chat_2([HumanMessage(content="Write me a song about sparkling water.")]) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/homebrew/lib/python3.11/site-packages/langchain/chat_models/base.py", line 208, in __call__ generation = self.generate( ^^^^^^^^^^^^^^ File "/opt/homebrew/lib/python3.11/site-packages/langchain/chat_models/base.py", line 102, in generate raise e File "/opt/homebrew/lib/python3.11/site-packages/langchain/chat_models/base.py", line 94, in generate results = [ ^ File "/opt/homebrew/lib/python3.11/site-packages/langchain/chat_models/base.py", line 95, in <listcomp> self._generate(m, stop=stop, run_manager=run_manager, **kwargs) File "/opt/homebrew/lib/python3.11/site-packages/langchain/chat_models/openai.py", line 334, in _generate role = stream_resp["choices"][0]["delta"].get("role", role) ~~~~~~~~~~~~~~~~~~~~~~^^^ IndexError: list index out of range ``` ### Expected behavior I can't find anything in existing issues or documentation stating that there is a known bug 
in the AzureOpenAI Service Streaming.
https://github.com/langchain-ai/langchain/issues/6462
https://github.com/langchain-ai/langchain/pull/8241
c1ea8da9bc2986532d6f1db810996ee72d5a6c1c
0af48b06d00b23be65d0a10ff27aff4db0f6c85f
"2023-06-20T04:57:00Z"
python
"2023-07-25T18:30:22Z"
libs/langchain/langchain/chat_models/openai.py
"""Wrapper around OpenAI Chat large language models. To use, you should have the ``openai`` python package installed, and the environment variable ``OPENAI_API_KEY`` set with your API key. Any parameters that are valid to be passed to the openai.create call can be passed in, even if not explicitly saved on this class. Example: .. code-block:: python from langchain.chat_models import ChatOpenAI openai = ChatOpenAI(model_name="gpt-3.5-turbo") """ @property def lc_secrets(self) -> Dict[str, str]: return {"openai_api_key": "OPENAI_API_KEY"} @property def lc_serializable(self) -> bool: return True client: Any model_name: str = Field(default="gpt-3.5-turbo", alias="model") """Model name to use.""" temperature: float = 0.7 """What sampling temperature to use."""
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
6,462
AzureChatOpenAI Streaming causes IndexError: list index out of range
### System Info langchain-0.0.205-py3, macos ventura, python 3.11 ### Who can help? @hwchase17 / @agola11 ### Information - [x] The official example notebooks/scripts https://python.langchain.com/docs/modules/model_io/models/chat/how_to/streaming ### Related Components - [X] LLMs/Chat Models ### Reproduction ### Reproduction code ```python # test.py from langchain.chat_models import AzureChatOpenAI from langchain.chat_models import ChatOpenAI from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler from langchain.schema import ( HumanMessage, ) chat_1 = ChatOpenAI(streaming=True, callbacks=[StreamingStdOutCallbackHandler()], openai_api_key="SOME-KEY", model='gpt-3.5-turbo', temperature=0.7, request_timeout=60, max_retries=1) chat_2 = AzureChatOpenAI(streaming=True, callbacks=[StreamingStdOutCallbackHandler()], openai_api_base="https://some-org-openai.openai.azure.com/", openai_api_version="2023-06-01-preview", openai_api_key="SOME-KEY", deployment_name='gpt-3_5', temperature=0.7, request_timeout=60, max_retries=1) resp_1 = chat_1([HumanMessage(content="Write me a song about sparkling water.")]) resp_2 = chat_2([HumanMessage(content="Write me a song about sparkling water.")]) ``` ```shell python test.py ``` ### Output of command 1 (OpenAI) ```shell Verse 1: Bubbles dancing in my cup Refreshing taste, can't get enough Clear and crisp, it's always there A drink that's beyond compare Chorus: Sparkling water, oh how you shine You make my taste buds come alive With every sip, I feel so fine Sparkling water, you're one of a kind Verse 2: A drink that's light and calorie-free A healthier choice, it's plain to see A perfect thirst quencher, day or night With sparkling water, everything's right Chorus: Sparkling water, oh how you shine You make my taste buds come alive With every sip, I feel so fine Sparkling water, you're one of a kind Bridge: From the fizzy sensation to the bubbles popping You're the drink I never want to stop sipping Whether at a party or on my own Sparkling water, you're always in the zone Chorus: Sparkling water, oh how you shine You make my taste buds come alive With every sip, I feel so fine Sparkling water, you're one of a kind Outro: Sparkling water, you're my go-to A drink that always feels brand new With each sip, I'm left in awe Sparkling water, you're the perfect beverage ``` ### Output of command 2 (Azure OpenAI) ```shell raw.Traceback (most recent call last): File "/Users/someone/Development/test.py", line 29, in <module> resp_2 = chat_2([HumanMessage(content="Write me a song about sparkling water.")]) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/homebrew/lib/python3.11/site-packages/langchain/chat_models/base.py", line 208, in __call__ generation = self.generate( ^^^^^^^^^^^^^^ File "/opt/homebrew/lib/python3.11/site-packages/langchain/chat_models/base.py", line 102, in generate raise e File "/opt/homebrew/lib/python3.11/site-packages/langchain/chat_models/base.py", line 94, in generate results = [ ^ File "/opt/homebrew/lib/python3.11/site-packages/langchain/chat_models/base.py", line 95, in <listcomp> self._generate(m, stop=stop, run_manager=run_manager, **kwargs) File "/opt/homebrew/lib/python3.11/site-packages/langchain/chat_models/openai.py", line 334, in _generate role = stream_resp["choices"][0]["delta"].get("role", role) ~~~~~~~~~~~~~~~~~~~~~~^^^ IndexError: list index out of range ``` ### Expected behavior I can't find anything in existing issues or documentation stating that there is a known bug 
in the AzureOpenAI Service Streaming.
https://github.com/langchain-ai/langchain/issues/6462
https://github.com/langchain-ai/langchain/pull/8241
c1ea8da9bc2986532d6f1db810996ee72d5a6c1c
0af48b06d00b23be65d0a10ff27aff4db0f6c85f
"2023-06-20T04:57:00Z"
python
"2023-07-25T18:30:22Z"
libs/langchain/langchain/chat_models/openai.py
    model_kwargs: Dict[str, Any] = Field(default_factory=dict)
    """Holds any model parameters valid for `create` call not explicitly specified."""
    openai_api_key: Optional[str] = None
    """Base URL path for API requests,
    leave blank if not using a proxy or service emulator."""
    openai_api_base: Optional[str] = None
    openai_organization: Optional[str] = None
    openai_proxy: Optional[str] = None
    request_timeout: Optional[Union[float, Tuple[float, float]]] = None
    """Timeout for requests to OpenAI completion API. Default is 600 seconds."""
    max_retries: int = 6
    """Maximum number of retries to make when generating."""
    streaming: bool = False
    """Whether to stream the results or not."""
    n: int = 1
    """Number of chat completions to generate for each prompt."""
    max_tokens: Optional[int] = None
    """Maximum number of tokens to generate."""
    tiktoken_model_name: Optional[str] = None
    """The model name to pass to tiktoken when using this class.
    Tiktoken is used to count the number of tokens in documents to constrain
    them to be under a certain limit. By default, when set to None, this will
    be the same as the embedding model name. However, there are some cases
    where you may want to use this Embedding class with a model name not
    supported by tiktoken. This can include when using Azure embeddings or
    when using one of the many model providers that expose an OpenAI-like
    API but with different models. In those cases, in order to avoid erroring
    when tiktoken is called, you can specify a model name to use here."""

    class Config:
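As the tiktoken_model_name docstring suggests, that field exists for OpenAI-compatible backends whose model names tiktoken cannot map. A hedged sketch, with a made-up proxy host and model name:

```python
from langchain.chat_models import ChatOpenAI

# Hypothetical OpenAI-compatible proxy; host, model, and key are made up.
llm = ChatOpenAI(
    model="my-proxy-model",
    openai_api_base="https://llm-proxy.example.com/v1",
    openai_api_key="sk-placeholder",
    tiktoken_model_name="gpt-3.5-turbo",  # count tokens with a known encoding
)
```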
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
6,462
AzureChatOpenAI Streaming causes IndexError: list index out of range
### System Info langchain-0.0.205-py3, macos ventura, python 3.11 ### Who can help? @hwchase17 / @agola11 ### Information - [x] The official example notebooks/scripts https://python.langchain.com/docs/modules/model_io/models/chat/how_to/streaming ### Related Components - [X] LLMs/Chat Models ### Reproduction ### Reproduction code ```python # test.py from langchain.chat_models import AzureChatOpenAI from langchain.chat_models import ChatOpenAI from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler from langchain.schema import ( HumanMessage, ) chat_1 = ChatOpenAI(streaming=True, callbacks=[StreamingStdOutCallbackHandler()], openai_api_key="SOME-KEY", model='gpt-3.5-turbo', temperature=0.7, request_timeout=60, max_retries=1) chat_2 = AzureChatOpenAI(streaming=True, callbacks=[StreamingStdOutCallbackHandler()], openai_api_base="https://some-org-openai.openai.azure.com/", openai_api_version="2023-06-01-preview", openai_api_key="SOME-KEY", deployment_name='gpt-3_5', temperature=0.7, request_timeout=60, max_retries=1) resp_1 = chat_1([HumanMessage(content="Write me a song about sparkling water.")]) resp_2 = chat_2([HumanMessage(content="Write me a song about sparkling water.")]) ``` ```shell python test.py ``` ### Output of command 1 (OpenAI) ```shell Verse 1: Bubbles dancing in my cup Refreshing taste, can't get enough Clear and crisp, it's always there A drink that's beyond compare Chorus: Sparkling water, oh how you shine You make my taste buds come alive With every sip, I feel so fine Sparkling water, you're one of a kind Verse 2: A drink that's light and calorie-free A healthier choice, it's plain to see A perfect thirst quencher, day or night With sparkling water, everything's right Chorus: Sparkling water, oh how you shine You make my taste buds come alive With every sip, I feel so fine Sparkling water, you're one of a kind Bridge: From the fizzy sensation to the bubbles popping You're the drink I never want to stop sipping Whether at a party or on my own Sparkling water, you're always in the zone Chorus: Sparkling water, oh how you shine You make my taste buds come alive With every sip, I feel so fine Sparkling water, you're one of a kind Outro: Sparkling water, you're my go-to A drink that always feels brand new With each sip, I'm left in awe Sparkling water, you're the perfect beverage ``` ### Output of command 2 (Azure OpenAI) ```shell raw.Traceback (most recent call last): File "/Users/someone/Development/test.py", line 29, in <module> resp_2 = chat_2([HumanMessage(content="Write me a song about sparkling water.")]) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/homebrew/lib/python3.11/site-packages/langchain/chat_models/base.py", line 208, in __call__ generation = self.generate( ^^^^^^^^^^^^^^ File "/opt/homebrew/lib/python3.11/site-packages/langchain/chat_models/base.py", line 102, in generate raise e File "/opt/homebrew/lib/python3.11/site-packages/langchain/chat_models/base.py", line 94, in generate results = [ ^ File "/opt/homebrew/lib/python3.11/site-packages/langchain/chat_models/base.py", line 95, in <listcomp> self._generate(m, stop=stop, run_manager=run_manager, **kwargs) File "/opt/homebrew/lib/python3.11/site-packages/langchain/chat_models/openai.py", line 334, in _generate role = stream_resp["choices"][0]["delta"].get("role", role) ~~~~~~~~~~~~~~~~~~~~~~^^^ IndexError: list index out of range ``` ### Expected behavior I can't find anything in existing issues or documentation stating that there is a known bug 
in the AzureOpenAI Service Streaming.
https://github.com/langchain-ai/langchain/issues/6462
https://github.com/langchain-ai/langchain/pull/8241
c1ea8da9bc2986532d6f1db810996ee72d5a6c1c
0af48b06d00b23be65d0a10ff27aff4db0f6c85f
"2023-06-20T04:57:00Z"
python
"2023-07-25T18:30:22Z"
libs/langchain/langchain/chat_models/openai.py
"""Configuration for this pydantic object.""" allow_population_by_field_name = True @root_validator(pre=True) def build_extra(cls, values: Dict[str, Any]) -> Dict[str, Any]: """Build extra kwargs from additional params that were passed in.""" all_required_field_names = get_pydantic_field_names(cls) extra = values.get("model_kwargs", {}) for field_name in list(values): if field_name in extra: raise ValueError(f"Found {field_name} supplied twice.") if field_name not in all_required_field_names: logger.warning( f"""WARNING! {field_name} is not default parameter. {field_name} was transferred to model_kwargs. Please confirm that {field_name} is what you intended.""" ) extra[field_name] = values.pop(field_name) invalid_model_kwargs = all_required_field_names.intersection(extra.keys()) if invalid_model_kwargs: raise ValueError( f"Parameters {invalid_model_kwargs} should be specified explicitly. " f"Instead they were passed in as part of `model_kwargs` parameter." ) values["model_kwargs"] = extra return values @root_validator() def validate_environment(cls, values: Dict) -> Dict:
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
6,462
AzureChatOpenAI Streaming causes IndexError: list index out of range
### System Info langchain-0.0.205-py3, macos ventura, python 3.11 ### Who can help? @hwchase17 / @agola11 ### Information - [x] The official example notebooks/scripts https://python.langchain.com/docs/modules/model_io/models/chat/how_to/streaming ### Related Components - [X] LLMs/Chat Models ### Reproduction ### Reproduction code ```python # test.py from langchain.chat_models import AzureChatOpenAI from langchain.chat_models import ChatOpenAI from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler from langchain.schema import ( HumanMessage, ) chat_1 = ChatOpenAI(streaming=True, callbacks=[StreamingStdOutCallbackHandler()], openai_api_key="SOME-KEY", model='gpt-3.5-turbo', temperature=0.7, request_timeout=60, max_retries=1) chat_2 = AzureChatOpenAI(streaming=True, callbacks=[StreamingStdOutCallbackHandler()], openai_api_base="https://some-org-openai.openai.azure.com/", openai_api_version="2023-06-01-preview", openai_api_key="SOME-KEY", deployment_name='gpt-3_5', temperature=0.7, request_timeout=60, max_retries=1) resp_1 = chat_1([HumanMessage(content="Write me a song about sparkling water.")]) resp_2 = chat_2([HumanMessage(content="Write me a song about sparkling water.")]) ``` ```shell python test.py ``` ### Output of command 1 (OpenAI) ```shell Verse 1: Bubbles dancing in my cup Refreshing taste, can't get enough Clear and crisp, it's always there A drink that's beyond compare Chorus: Sparkling water, oh how you shine You make my taste buds come alive With every sip, I feel so fine Sparkling water, you're one of a kind Verse 2: A drink that's light and calorie-free A healthier choice, it's plain to see A perfect thirst quencher, day or night With sparkling water, everything's right Chorus: Sparkling water, oh how you shine You make my taste buds come alive With every sip, I feel so fine Sparkling water, you're one of a kind Bridge: From the fizzy sensation to the bubbles popping You're the drink I never want to stop sipping Whether at a party or on my own Sparkling water, you're always in the zone Chorus: Sparkling water, oh how you shine You make my taste buds come alive With every sip, I feel so fine Sparkling water, you're one of a kind Outro: Sparkling water, you're my go-to A drink that always feels brand new With each sip, I'm left in awe Sparkling water, you're the perfect beverage ``` ### Output of command 2 (Azure OpenAI) ```shell raw.Traceback (most recent call last): File "/Users/someone/Development/test.py", line 29, in <module> resp_2 = chat_2([HumanMessage(content="Write me a song about sparkling water.")]) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/homebrew/lib/python3.11/site-packages/langchain/chat_models/base.py", line 208, in __call__ generation = self.generate( ^^^^^^^^^^^^^^ File "/opt/homebrew/lib/python3.11/site-packages/langchain/chat_models/base.py", line 102, in generate raise e File "/opt/homebrew/lib/python3.11/site-packages/langchain/chat_models/base.py", line 94, in generate results = [ ^ File "/opt/homebrew/lib/python3.11/site-packages/langchain/chat_models/base.py", line 95, in <listcomp> self._generate(m, stop=stop, run_manager=run_manager, **kwargs) File "/opt/homebrew/lib/python3.11/site-packages/langchain/chat_models/openai.py", line 334, in _generate role = stream_resp["choices"][0]["delta"].get("role", role) ~~~~~~~~~~~~~~~~~~~~~~^^^ IndexError: list index out of range ``` ### Expected behavior I can't find anything in existing issues or documentation stating that there is a known bug 
in the AzureOpenAI Service Streaming.
https://github.com/langchain-ai/langchain/issues/6462
https://github.com/langchain-ai/langchain/pull/8241
c1ea8da9bc2986532d6f1db810996ee72d5a6c1c
0af48b06d00b23be65d0a10ff27aff4db0f6c85f
"2023-06-20T04:57:00Z"
python
"2023-07-25T18:30:22Z"
libs/langchain/langchain/chat_models/openai.py
"""Validate that api key and python package exists in environment.""" values["openai_api_key"] = get_from_dict_or_env( values, "openai_api_key", "OPENAI_API_KEY" ) values["openai_organization"] = get_from_dict_or_env( values, "openai_organization", "OPENAI_ORGANIZATION", default="", ) values["openai_api_base"] = get_from_dict_or_env( values, "openai_api_base", "OPENAI_API_BASE", default="", ) values["openai_proxy"] = get_from_dict_or_env( values, "openai_proxy", "OPENAI_PROXY", default="", ) try: import openai except ImportError:
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
6,462
AzureChatOpenAI Streaming causes IndexError: list index out of range
### System Info langchain-0.0.205-py3, macos ventura, python 3.11 ### Who can help? @hwchase17 / @agola11 ### Information - [x] The official example notebooks/scripts https://python.langchain.com/docs/modules/model_io/models/chat/how_to/streaming ### Related Components - [X] LLMs/Chat Models ### Reproduction ### Reproduction code ```python # test.py from langchain.chat_models import AzureChatOpenAI from langchain.chat_models import ChatOpenAI from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler from langchain.schema import ( HumanMessage, ) chat_1 = ChatOpenAI(streaming=True, callbacks=[StreamingStdOutCallbackHandler()], openai_api_key="SOME-KEY", model='gpt-3.5-turbo', temperature=0.7, request_timeout=60, max_retries=1) chat_2 = AzureChatOpenAI(streaming=True, callbacks=[StreamingStdOutCallbackHandler()], openai_api_base="https://some-org-openai.openai.azure.com/", openai_api_version="2023-06-01-preview", openai_api_key="SOME-KEY", deployment_name='gpt-3_5', temperature=0.7, request_timeout=60, max_retries=1) resp_1 = chat_1([HumanMessage(content="Write me a song about sparkling water.")]) resp_2 = chat_2([HumanMessage(content="Write me a song about sparkling water.")]) ``` ```shell python test.py ``` ### Output of command 1 (OpenAI) ```shell Verse 1: Bubbles dancing in my cup Refreshing taste, can't get enough Clear and crisp, it's always there A drink that's beyond compare Chorus: Sparkling water, oh how you shine You make my taste buds come alive With every sip, I feel so fine Sparkling water, you're one of a kind Verse 2: A drink that's light and calorie-free A healthier choice, it's plain to see A perfect thirst quencher, day or night With sparkling water, everything's right Chorus: Sparkling water, oh how you shine You make my taste buds come alive With every sip, I feel so fine Sparkling water, you're one of a kind Bridge: From the fizzy sensation to the bubbles popping You're the drink I never want to stop sipping Whether at a party or on my own Sparkling water, you're always in the zone Chorus: Sparkling water, oh how you shine You make my taste buds come alive With every sip, I feel so fine Sparkling water, you're one of a kind Outro: Sparkling water, you're my go-to A drink that always feels brand new With each sip, I'm left in awe Sparkling water, you're the perfect beverage ``` ### Output of command 2 (Azure OpenAI) ```shell raw.Traceback (most recent call last): File "/Users/someone/Development/test.py", line 29, in <module> resp_2 = chat_2([HumanMessage(content="Write me a song about sparkling water.")]) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/homebrew/lib/python3.11/site-packages/langchain/chat_models/base.py", line 208, in __call__ generation = self.generate( ^^^^^^^^^^^^^^ File "/opt/homebrew/lib/python3.11/site-packages/langchain/chat_models/base.py", line 102, in generate raise e File "/opt/homebrew/lib/python3.11/site-packages/langchain/chat_models/base.py", line 94, in generate results = [ ^ File "/opt/homebrew/lib/python3.11/site-packages/langchain/chat_models/base.py", line 95, in <listcomp> self._generate(m, stop=stop, run_manager=run_manager, **kwargs) File "/opt/homebrew/lib/python3.11/site-packages/langchain/chat_models/openai.py", line 334, in _generate role = stream_resp["choices"][0]["delta"].get("role", role) ~~~~~~~~~~~~~~~~~~~~~~^^^ IndexError: list index out of range ``` ### Expected behavior I can't find anything in existing issues or documentation stating that there is a known bug 
in the AzureOpenAI Service Streaming.
https://github.com/langchain-ai/langchain/issues/6462
https://github.com/langchain-ai/langchain/pull/8241
c1ea8da9bc2986532d6f1db810996ee72d5a6c1c
0af48b06d00b23be65d0a10ff27aff4db0f6c85f
"2023-06-20T04:57:00Z"
python
"2023-07-25T18:30:22Z"
libs/langchain/langchain/chat_models/openai.py
raise ValueError( "Could not import openai python package. " "Please install it with `pip install openai`." ) try: values["client"] = openai.ChatCompletion except AttributeError: raise ValueError( "`openai` has no `ChatCompletion` attribute, this is likely " "due to an old version of the openai package. Try upgrading it " "with `pip install --upgrade openai`." ) if values["n"] < 1: raise ValueError("n must be at least 1.") if values["n"] > 1 and values["streaming"]: raise ValueError("n must be 1 when streaming.") return values @property def _default_params(self) -> Dict[str, Any]: """Get the default parameters for calling OpenAI API.""" return { "model": self.model_name, "request_timeout": self.request_timeout, "max_tokens": self.max_tokens, "stream": self.streaming, "n": self.n, "temperature": self.temperature, **self.model_kwargs, } def completion_with_retry(self, **kwargs: Any) -> Any:
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
6,462
AzureChatOpenAI Streaming causes IndexError: list index out of range
### System Info langchain-0.0.205-py3, macos ventura, python 3.11 ### Who can help? @hwchase17 / @agola11 ### Information - [x] The official example notebooks/scripts https://python.langchain.com/docs/modules/model_io/models/chat/how_to/streaming ### Related Components - [X] LLMs/Chat Models ### Reproduction ### Reproduction code ```python # test.py from langchain.chat_models import AzureChatOpenAI from langchain.chat_models import ChatOpenAI from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler from langchain.schema import ( HumanMessage, ) chat_1 = ChatOpenAI(streaming=True, callbacks=[StreamingStdOutCallbackHandler()], openai_api_key="SOME-KEY", model='gpt-3.5-turbo', temperature=0.7, request_timeout=60, max_retries=1) chat_2 = AzureChatOpenAI(streaming=True, callbacks=[StreamingStdOutCallbackHandler()], openai_api_base="https://some-org-openai.openai.azure.com/", openai_api_version="2023-06-01-preview", openai_api_key="SOME-KEY", deployment_name='gpt-3_5', temperature=0.7, request_timeout=60, max_retries=1) resp_1 = chat_1([HumanMessage(content="Write me a song about sparkling water.")]) resp_2 = chat_2([HumanMessage(content="Write me a song about sparkling water.")]) ``` ```shell python test.py ``` ### Output of command 1 (OpenAI) ```shell Verse 1: Bubbles dancing in my cup Refreshing taste, can't get enough Clear and crisp, it's always there A drink that's beyond compare Chorus: Sparkling water, oh how you shine You make my taste buds come alive With every sip, I feel so fine Sparkling water, you're one of a kind Verse 2: A drink that's light and calorie-free A healthier choice, it's plain to see A perfect thirst quencher, day or night With sparkling water, everything's right Chorus: Sparkling water, oh how you shine You make my taste buds come alive With every sip, I feel so fine Sparkling water, you're one of a kind Bridge: From the fizzy sensation to the bubbles popping You're the drink I never want to stop sipping Whether at a party or on my own Sparkling water, you're always in the zone Chorus: Sparkling water, oh how you shine You make my taste buds come alive With every sip, I feel so fine Sparkling water, you're one of a kind Outro: Sparkling water, you're my go-to A drink that always feels brand new With each sip, I'm left in awe Sparkling water, you're the perfect beverage ``` ### Output of command 2 (Azure OpenAI) ```shell raw.Traceback (most recent call last): File "/Users/someone/Development/test.py", line 29, in <module> resp_2 = chat_2([HumanMessage(content="Write me a song about sparkling water.")]) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/homebrew/lib/python3.11/site-packages/langchain/chat_models/base.py", line 208, in __call__ generation = self.generate( ^^^^^^^^^^^^^^ File "/opt/homebrew/lib/python3.11/site-packages/langchain/chat_models/base.py", line 102, in generate raise e File "/opt/homebrew/lib/python3.11/site-packages/langchain/chat_models/base.py", line 94, in generate results = [ ^ File "/opt/homebrew/lib/python3.11/site-packages/langchain/chat_models/base.py", line 95, in <listcomp> self._generate(m, stop=stop, run_manager=run_manager, **kwargs) File "/opt/homebrew/lib/python3.11/site-packages/langchain/chat_models/openai.py", line 334, in _generate role = stream_resp["choices"][0]["delta"].get("role", role) ~~~~~~~~~~~~~~~~~~~~~~^^^ IndexError: list index out of range ``` ### Expected behavior I can't find anything in existing issues or documentation stating that there is a known bug 
in the AzureOpenAI Service Streaming.
https://github.com/langchain-ai/langchain/issues/6462
https://github.com/langchain-ai/langchain/pull/8241
c1ea8da9bc2986532d6f1db810996ee72d5a6c1c
0af48b06d00b23be65d0a10ff27aff4db0f6c85f
"2023-06-20T04:57:00Z"
python
"2023-07-25T18:30:22Z"
libs/langchain/langchain/chat_models/openai.py
"""Use tenacity to retry the completion call.""" retry_decorator = _create_retry_decorator(self) @retry_decorator def _completion_with_retry(**kwargs: Any) -> Any: return self.client.create(**kwargs) return _completion_with_retry(**kwargs) def _combine_llm_outputs(self, llm_outputs: List[Optional[dict]]) -> dict: overall_token_usage: dict = {} for output in llm_outputs: if output is None: continue token_usage = output["token_usage"] for k, v in token_usage.items(): if k in overall_token_usage: overall_token_usage[k] += v else: overall_token_usage[k] = v return {"token_usage": overall_token_usage, "model_name": self.model_name} def _generate(
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
6,462
AzureChatOpenAI Streaming causes IndexError: list index out of range
### System Info langchain-0.0.205-py3, macos ventura, python 3.11 ### Who can help? @hwchase17 / @agola11 ### Information - [x] The official example notebooks/scripts https://python.langchain.com/docs/modules/model_io/models/chat/how_to/streaming ### Related Components - [X] LLMs/Chat Models ### Reproduction ### Reproduction code ```python # test.py from langchain.chat_models import AzureChatOpenAI from langchain.chat_models import ChatOpenAI from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler from langchain.schema import ( HumanMessage, ) chat_1 = ChatOpenAI(streaming=True, callbacks=[StreamingStdOutCallbackHandler()], openai_api_key="SOME-KEY", model='gpt-3.5-turbo', temperature=0.7, request_timeout=60, max_retries=1) chat_2 = AzureChatOpenAI(streaming=True, callbacks=[StreamingStdOutCallbackHandler()], openai_api_base="https://some-org-openai.openai.azure.com/", openai_api_version="2023-06-01-preview", openai_api_key="SOME-KEY", deployment_name='gpt-3_5', temperature=0.7, request_timeout=60, max_retries=1) resp_1 = chat_1([HumanMessage(content="Write me a song about sparkling water.")]) resp_2 = chat_2([HumanMessage(content="Write me a song about sparkling water.")]) ``` ```shell python test.py ``` ### Output of command 1 (OpenAI) ```shell Verse 1: Bubbles dancing in my cup Refreshing taste, can't get enough Clear and crisp, it's always there A drink that's beyond compare Chorus: Sparkling water, oh how you shine You make my taste buds come alive With every sip, I feel so fine Sparkling water, you're one of a kind Verse 2: A drink that's light and calorie-free A healthier choice, it's plain to see A perfect thirst quencher, day or night With sparkling water, everything's right Chorus: Sparkling water, oh how you shine You make my taste buds come alive With every sip, I feel so fine Sparkling water, you're one of a kind Bridge: From the fizzy sensation to the bubbles popping You're the drink I never want to stop sipping Whether at a party or on my own Sparkling water, you're always in the zone Chorus: Sparkling water, oh how you shine You make my taste buds come alive With every sip, I feel so fine Sparkling water, you're one of a kind Outro: Sparkling water, you're my go-to A drink that always feels brand new With each sip, I'm left in awe Sparkling water, you're the perfect beverage ``` ### Output of command 2 (Azure OpenAI) ```shell raw.Traceback (most recent call last): File "/Users/someone/Development/test.py", line 29, in <module> resp_2 = chat_2([HumanMessage(content="Write me a song about sparkling water.")]) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/homebrew/lib/python3.11/site-packages/langchain/chat_models/base.py", line 208, in __call__ generation = self.generate( ^^^^^^^^^^^^^^ File "/opt/homebrew/lib/python3.11/site-packages/langchain/chat_models/base.py", line 102, in generate raise e File "/opt/homebrew/lib/python3.11/site-packages/langchain/chat_models/base.py", line 94, in generate results = [ ^ File "/opt/homebrew/lib/python3.11/site-packages/langchain/chat_models/base.py", line 95, in <listcomp> self._generate(m, stop=stop, run_manager=run_manager, **kwargs) File "/opt/homebrew/lib/python3.11/site-packages/langchain/chat_models/openai.py", line 334, in _generate role = stream_resp["choices"][0]["delta"].get("role", role) ~~~~~~~~~~~~~~~~~~~~~~^^^ IndexError: list index out of range ``` ### Expected behavior I can't find anything in existing issues or documentation stating that there is a known bug 
in the AzureOpenAI Service Streaming.
https://github.com/langchain-ai/langchain/issues/6462
https://github.com/langchain-ai/langchain/pull/8241
c1ea8da9bc2986532d6f1db810996ee72d5a6c1c
0af48b06d00b23be65d0a10ff27aff4db0f6c85f
"2023-06-20T04:57:00Z"
python
"2023-07-25T18:30:22Z"
libs/langchain/langchain/chat_models/openai.py
        self,
        messages: List[BaseMessage],
        stop: Optional[List[str]] = None,
        run_manager: Optional[CallbackManagerForLLMRun] = None,
        **kwargs: Any,
    ) -> ChatResult:
        message_dicts, params = self._create_message_dicts(messages, stop)
        params = {**params, **kwargs}
        if self.streaming:
            inner_completion = ""
            role = "assistant"
            params["stream"] = True
            function_call: Optional[dict] = None
            for stream_resp in self.completion_with_retry(
                messages=message_dicts, **params
            ):
                # NOTE: assumes "choices" is never empty; per the traceback in
                # the issue body, Azure OpenAI can emit stream events with an
                # empty "choices" list, making this subscript raise IndexError.
                role = stream_resp["choices"][0]["delta"].get("role", role)
                token = stream_resp["choices"][0]["delta"].get("content") or ""
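The unguarded `choices[0]` subscript in this chunk is the line the issue's traceback points at. Below is a minimal defensive rewrite of the loop body — a sketch of one possible guard, not necessarily the fix that was merged in the linked PR:

```python
# Hypothetical hardening of the streaming loop: skip events whose
# "choices" list is empty instead of indexing into it blindly.
for stream_resp in self.completion_with_retry(messages=message_dicts, **params):
    choices = stream_resp.get("choices") or []
    if not choices:
        continue  # Azure can emit events with no choices (issue 6462)
    delta = choices[0]["delta"]
    role = delta.get("role", role)
    token = delta.get("content") or ""
```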
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
6,462
AzureChatOpenAI Streaming causes IndexError: list index out of range
### System Info langchain-0.0.205-py3, macos ventura, python 3.11 ### Who can help? @hwchase17 / @agola11 ### Information - [x] The official example notebooks/scripts https://python.langchain.com/docs/modules/model_io/models/chat/how_to/streaming ### Related Components - [X] LLMs/Chat Models ### Reproduction ### Reproduction code ```python # test.py from langchain.chat_models import AzureChatOpenAI from langchain.chat_models import ChatOpenAI from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler from langchain.schema import ( HumanMessage, ) chat_1 = ChatOpenAI(streaming=True, callbacks=[StreamingStdOutCallbackHandler()], openai_api_key="SOME-KEY", model='gpt-3.5-turbo', temperature=0.7, request_timeout=60, max_retries=1) chat_2 = AzureChatOpenAI(streaming=True, callbacks=[StreamingStdOutCallbackHandler()], openai_api_base="https://some-org-openai.openai.azure.com/", openai_api_version="2023-06-01-preview", openai_api_key="SOME-KEY", deployment_name='gpt-3_5', temperature=0.7, request_timeout=60, max_retries=1) resp_1 = chat_1([HumanMessage(content="Write me a song about sparkling water.")]) resp_2 = chat_2([HumanMessage(content="Write me a song about sparkling water.")]) ``` ```shell python test.py ``` ### Output of command 1 (OpenAI) ```shell Verse 1: Bubbles dancing in my cup Refreshing taste, can't get enough Clear and crisp, it's always there A drink that's beyond compare Chorus: Sparkling water, oh how you shine You make my taste buds come alive With every sip, I feel so fine Sparkling water, you're one of a kind Verse 2: A drink that's light and calorie-free A healthier choice, it's plain to see A perfect thirst quencher, day or night With sparkling water, everything's right Chorus: Sparkling water, oh how you shine You make my taste buds come alive With every sip, I feel so fine Sparkling water, you're one of a kind Bridge: From the fizzy sensation to the bubbles popping You're the drink I never want to stop sipping Whether at a party or on my own Sparkling water, you're always in the zone Chorus: Sparkling water, oh how you shine You make my taste buds come alive With every sip, I feel so fine Sparkling water, you're one of a kind Outro: Sparkling water, you're my go-to A drink that always feels brand new With each sip, I'm left in awe Sparkling water, you're the perfect beverage ``` ### Output of command 2 (Azure OpenAI) ```shell raw.Traceback (most recent call last): File "/Users/someone/Development/test.py", line 29, in <module> resp_2 = chat_2([HumanMessage(content="Write me a song about sparkling water.")]) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/homebrew/lib/python3.11/site-packages/langchain/chat_models/base.py", line 208, in __call__ generation = self.generate( ^^^^^^^^^^^^^^ File "/opt/homebrew/lib/python3.11/site-packages/langchain/chat_models/base.py", line 102, in generate raise e File "/opt/homebrew/lib/python3.11/site-packages/langchain/chat_models/base.py", line 94, in generate results = [ ^ File "/opt/homebrew/lib/python3.11/site-packages/langchain/chat_models/base.py", line 95, in <listcomp> self._generate(m, stop=stop, run_manager=run_manager, **kwargs) File "/opt/homebrew/lib/python3.11/site-packages/langchain/chat_models/openai.py", line 334, in _generate role = stream_resp["choices"][0]["delta"].get("role", role) ~~~~~~~~~~~~~~~~~~~~~~^^^ IndexError: list index out of range ``` ### Expected behavior I can't find anything in existing issues or documentation stating that there is a known bug 
in the AzureOpenAI Service Streaming.
https://github.com/langchain-ai/langchain/issues/6462
https://github.com/langchain-ai/langchain/pull/8241
c1ea8da9bc2986532d6f1db810996ee72d5a6c1c
0af48b06d00b23be65d0a10ff27aff4db0f6c85f
"2023-06-20T04:57:00Z"
python
"2023-07-25T18:30:22Z"
libs/langchain/langchain/chat_models/openai.py
                inner_completion += token
                _function_call = stream_resp["choices"][0]["delta"].get(
                    "function_call"
                )
                if _function_call:
                    if function_call is None:
                        function_call = _function_call
                    else:
                        function_call["arguments"] += _function_call["arguments"]
                if run_manager:
                    run_manager.on_llm_new_token(token)
            message = _convert_dict_to_message(
                {
                    "content": inner_completion,
                    "role": role,
                    "function_call": function_call,
                }
            )
            return ChatResult(generations=[ChatGeneration(message=message)])
        response = self.completion_with_retry(messages=message_dicts, **params)
        return self._create_chat_result(response)

    def _create_message_dicts(
        self, messages: List[BaseMessage], stop: Optional[List[str]]
    ) -> Tuple[List[Dict[str, Any]], Dict[str, Any]]:
        params = self._client_params
        if stop is not None:
            if "stop" in params:
                raise ValueError("`stop` found in both the input and default params.")
            params["stop"] = stop
        message_dicts = [_convert_message_to_dict(m) for m in messages]
        return message_dicts, params

    def _create_chat_result(self, response: Mapping[str, Any]) -> ChatResult:
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
6,462
AzureChatOpenAI Streaming causes IndexError: list index out of range
### System Info langchain-0.0.205-py3, macos ventura, python 3.11 ### Who can help? @hwchase17 / @agola11 ### Information - [x] The official example notebooks/scripts https://python.langchain.com/docs/modules/model_io/models/chat/how_to/streaming ### Related Components - [X] LLMs/Chat Models ### Reproduction ### Reproduction code ```python # test.py from langchain.chat_models import AzureChatOpenAI from langchain.chat_models import ChatOpenAI from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler from langchain.schema import ( HumanMessage, ) chat_1 = ChatOpenAI(streaming=True, callbacks=[StreamingStdOutCallbackHandler()], openai_api_key="SOME-KEY", model='gpt-3.5-turbo', temperature=0.7, request_timeout=60, max_retries=1) chat_2 = AzureChatOpenAI(streaming=True, callbacks=[StreamingStdOutCallbackHandler()], openai_api_base="https://some-org-openai.openai.azure.com/", openai_api_version="2023-06-01-preview", openai_api_key="SOME-KEY", deployment_name='gpt-3_5', temperature=0.7, request_timeout=60, max_retries=1) resp_1 = chat_1([HumanMessage(content="Write me a song about sparkling water.")]) resp_2 = chat_2([HumanMessage(content="Write me a song about sparkling water.")]) ``` ```shell python test.py ``` ### Output of command 1 (OpenAI) ```shell Verse 1: Bubbles dancing in my cup Refreshing taste, can't get enough Clear and crisp, it's always there A drink that's beyond compare Chorus: Sparkling water, oh how you shine You make my taste buds come alive With every sip, I feel so fine Sparkling water, you're one of a kind Verse 2: A drink that's light and calorie-free A healthier choice, it's plain to see A perfect thirst quencher, day or night With sparkling water, everything's right Chorus: Sparkling water, oh how you shine You make my taste buds come alive With every sip, I feel so fine Sparkling water, you're one of a kind Bridge: From the fizzy sensation to the bubbles popping You're the drink I never want to stop sipping Whether at a party or on my own Sparkling water, you're always in the zone Chorus: Sparkling water, oh how you shine You make my taste buds come alive With every sip, I feel so fine Sparkling water, you're one of a kind Outro: Sparkling water, you're my go-to A drink that always feels brand new With each sip, I'm left in awe Sparkling water, you're the perfect beverage ``` ### Output of command 2 (Azure OpenAI) ```shell raw.Traceback (most recent call last): File "/Users/someone/Development/test.py", line 29, in <module> resp_2 = chat_2([HumanMessage(content="Write me a song about sparkling water.")]) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/homebrew/lib/python3.11/site-packages/langchain/chat_models/base.py", line 208, in __call__ generation = self.generate( ^^^^^^^^^^^^^^ File "/opt/homebrew/lib/python3.11/site-packages/langchain/chat_models/base.py", line 102, in generate raise e File "/opt/homebrew/lib/python3.11/site-packages/langchain/chat_models/base.py", line 94, in generate results = [ ^ File "/opt/homebrew/lib/python3.11/site-packages/langchain/chat_models/base.py", line 95, in <listcomp> self._generate(m, stop=stop, run_manager=run_manager, **kwargs) File "/opt/homebrew/lib/python3.11/site-packages/langchain/chat_models/openai.py", line 334, in _generate role = stream_resp["choices"][0]["delta"].get("role", role) ~~~~~~~~~~~~~~~~~~~~~~^^^ IndexError: list index out of range ``` ### Expected behavior I can't find anything in existing issues or documentation stating that there is a known bug 
in the AzureOpenAI Service Streaming.
https://github.com/langchain-ai/langchain/issues/6462
https://github.com/langchain-ai/langchain/pull/8241
c1ea8da9bc2986532d6f1db810996ee72d5a6c1c
0af48b06d00b23be65d0a10ff27aff4db0f6c85f
"2023-06-20T04:57:00Z"
python
"2023-07-25T18:30:22Z"
libs/langchain/langchain/chat_models/openai.py
        generations = []
        for res in response["choices"]:
            message = _convert_dict_to_message(res["message"])
            gen = ChatGeneration(
                message=message,
                generation_info=dict(finish_reason=res.get("finish_reason")),
            )
            generations.append(gen)
        token_usage = response.get("usage", {})
        llm_output = {"token_usage": token_usage, "model_name": self.model_name}
        return ChatResult(generations=generations, llm_output=llm_output)

    async def _agenerate(
        self,
        messages: List[BaseMessage],
        stop: Optional[List[str]] = None,
        run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,
        **kwargs: Any,
    ) -> ChatResult:
        message_dicts, params = self._create_message_dicts(messages, stop)
        params = {**params, **kwargs}
        if self.streaming:
            inner_completion = ""
            role = "assistant"
            params["stream"] = True
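`_create_chat_result` consumes the non-streaming completion payload. For orientation, a hedged sketch of the shape it expects — field names follow OpenAI's chat completions response, and the values here are illustrative:

```python
# Illustrative payload; values are made up.
response = {
    "choices": [
        {
            "message": {"role": "assistant", "content": "H2O"},
            "finish_reason": "stop",
        }
    ],
    "usage": {"prompt_tokens": 12, "completion_tokens": 3, "total_tokens": 15},
}
# _create_chat_result(response) would yield one ChatGeneration wrapping
# AIMessage(content="H2O"), with the "usage" dict surfaced as
# llm_output["token_usage"].
```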
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
6,462
AzureChatOpenAI Streaming causes IndexError: list index out of range
### System Info langchain-0.0.205-py3, macos ventura, python 3.11 ### Who can help? @hwchase17 / @agola11 ### Information - [x] The official example notebooks/scripts https://python.langchain.com/docs/modules/model_io/models/chat/how_to/streaming ### Related Components - [X] LLMs/Chat Models ### Reproduction ### Reproduction code ```python # test.py from langchain.chat_models import AzureChatOpenAI from langchain.chat_models import ChatOpenAI from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler from langchain.schema import ( HumanMessage, ) chat_1 = ChatOpenAI(streaming=True, callbacks=[StreamingStdOutCallbackHandler()], openai_api_key="SOME-KEY", model='gpt-3.5-turbo', temperature=0.7, request_timeout=60, max_retries=1) chat_2 = AzureChatOpenAI(streaming=True, callbacks=[StreamingStdOutCallbackHandler()], openai_api_base="https://some-org-openai.openai.azure.com/", openai_api_version="2023-06-01-preview", openai_api_key="SOME-KEY", deployment_name='gpt-3_5', temperature=0.7, request_timeout=60, max_retries=1) resp_1 = chat_1([HumanMessage(content="Write me a song about sparkling water.")]) resp_2 = chat_2([HumanMessage(content="Write me a song about sparkling water.")]) ``` ```shell python test.py ``` ### Output of command 1 (OpenAI) ```shell Verse 1: Bubbles dancing in my cup Refreshing taste, can't get enough Clear and crisp, it's always there A drink that's beyond compare Chorus: Sparkling water, oh how you shine You make my taste buds come alive With every sip, I feel so fine Sparkling water, you're one of a kind Verse 2: A drink that's light and calorie-free A healthier choice, it's plain to see A perfect thirst quencher, day or night With sparkling water, everything's right Chorus: Sparkling water, oh how you shine You make my taste buds come alive With every sip, I feel so fine Sparkling water, you're one of a kind Bridge: From the fizzy sensation to the bubbles popping You're the drink I never want to stop sipping Whether at a party or on my own Sparkling water, you're always in the zone Chorus: Sparkling water, oh how you shine You make my taste buds come alive With every sip, I feel so fine Sparkling water, you're one of a kind Outro: Sparkling water, you're my go-to A drink that always feels brand new With each sip, I'm left in awe Sparkling water, you're the perfect beverage ``` ### Output of command 2 (Azure OpenAI) ```shell raw.Traceback (most recent call last): File "/Users/someone/Development/test.py", line 29, in <module> resp_2 = chat_2([HumanMessage(content="Write me a song about sparkling water.")]) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/homebrew/lib/python3.11/site-packages/langchain/chat_models/base.py", line 208, in __call__ generation = self.generate( ^^^^^^^^^^^^^^ File "/opt/homebrew/lib/python3.11/site-packages/langchain/chat_models/base.py", line 102, in generate raise e File "/opt/homebrew/lib/python3.11/site-packages/langchain/chat_models/base.py", line 94, in generate results = [ ^ File "/opt/homebrew/lib/python3.11/site-packages/langchain/chat_models/base.py", line 95, in <listcomp> self._generate(m, stop=stop, run_manager=run_manager, **kwargs) File "/opt/homebrew/lib/python3.11/site-packages/langchain/chat_models/openai.py", line 334, in _generate role = stream_resp["choices"][0]["delta"].get("role", role) ~~~~~~~~~~~~~~~~~~~~~~^^^ IndexError: list index out of range ``` ### Expected behavior I can't find anything in existing issues or documentation stating that there is a known bug 
in the AzureOpenAI Service Streaming.
https://github.com/langchain-ai/langchain/issues/6462
https://github.com/langchain-ai/langchain/pull/8241
c1ea8da9bc2986532d6f1db810996ee72d5a6c1c
0af48b06d00b23be65d0a10ff27aff4db0f6c85f
"2023-06-20T04:57:00Z"
python
"2023-07-25T18:30:22Z"
libs/langchain/langchain/chat_models/openai.py
            function_call: Optional[dict] = None
            async for stream_resp in await acompletion_with_retry(
                self, messages=message_dicts, **params
            ):
                # Same unguarded access as the sync path: an empty "choices"
                # list raises IndexError here as well.
                role = stream_resp["choices"][0]["delta"].get("role", role)
                token = stream_resp["choices"][0]["delta"].get("content", "")
                inner_completion += token or ""
                _function_call = stream_resp["choices"][0]["delta"].get(
                    "function_call"
                )
                if _function_call:
                    if function_call is None:
                        function_call = _function_call
                    else:
                        function_call["arguments"] += _function_call["arguments"]
                if run_manager:
                    await run_manager.on_llm_new_token(token)
            message = _convert_dict_to_message(
                {
                    "content": inner_completion,
                    "role": role,
                    "function_call": function_call,
                }
            )
            return ChatResult(generations=[ChatGeneration(message=message)])
        else:
            response = await acompletion_with_retry(
                self, messages=message_dicts, **params
            )
            return self._create_chat_result(response)

    @property
    def _identifying_params(self) -> Dict[str, Any]:
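The async loop indexes `choices[0]` the same way, so both paths fail on an Azure event with an empty `choices` list. The failure mode from the issue's traceback is reproducible without any network call:

```python
# Reproduces the IndexError using a fake stream event shaped like the
# empty-choices payload reported for Azure OpenAI.
stream_resp = {"id": "fake-event", "choices": []}
role = "assistant"
try:
    role = stream_resp["choices"][0]["delta"].get("role", role)
except IndexError as err:
    print(f"reproduced: {err}")  # reproduced: list index out of range
```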
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
6,462
AzureChatOpenAI Streaming causes IndexError: list index out of range
### System Info langchain-0.0.205-py3, macos ventura, python 3.11 ### Who can help? @hwchase17 / @agola11 ### Information - [x] The official example notebooks/scripts https://python.langchain.com/docs/modules/model_io/models/chat/how_to/streaming ### Related Components - [X] LLMs/Chat Models ### Reproduction ### Reproduction code ```python # test.py from langchain.chat_models import AzureChatOpenAI from langchain.chat_models import ChatOpenAI from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler from langchain.schema import ( HumanMessage, ) chat_1 = ChatOpenAI(streaming=True, callbacks=[StreamingStdOutCallbackHandler()], openai_api_key="SOME-KEY", model='gpt-3.5-turbo', temperature=0.7, request_timeout=60, max_retries=1) chat_2 = AzureChatOpenAI(streaming=True, callbacks=[StreamingStdOutCallbackHandler()], openai_api_base="https://some-org-openai.openai.azure.com/", openai_api_version="2023-06-01-preview", openai_api_key="SOME-KEY", deployment_name='gpt-3_5', temperature=0.7, request_timeout=60, max_retries=1) resp_1 = chat_1([HumanMessage(content="Write me a song about sparkling water.")]) resp_2 = chat_2([HumanMessage(content="Write me a song about sparkling water.")]) ``` ```shell python test.py ``` ### Output of command 1 (OpenAI) ```shell Verse 1: Bubbles dancing in my cup Refreshing taste, can't get enough Clear and crisp, it's always there A drink that's beyond compare Chorus: Sparkling water, oh how you shine You make my taste buds come alive With every sip, I feel so fine Sparkling water, you're one of a kind Verse 2: A drink that's light and calorie-free A healthier choice, it's plain to see A perfect thirst quencher, day or night With sparkling water, everything's right Chorus: Sparkling water, oh how you shine You make my taste buds come alive With every sip, I feel so fine Sparkling water, you're one of a kind Bridge: From the fizzy sensation to the bubbles popping You're the drink I never want to stop sipping Whether at a party or on my own Sparkling water, you're always in the zone Chorus: Sparkling water, oh how you shine You make my taste buds come alive With every sip, I feel so fine Sparkling water, you're one of a kind Outro: Sparkling water, you're my go-to A drink that always feels brand new With each sip, I'm left in awe Sparkling water, you're the perfect beverage ``` ### Output of command 2 (Azure OpenAI) ```shell raw.Traceback (most recent call last): File "/Users/someone/Development/test.py", line 29, in <module> resp_2 = chat_2([HumanMessage(content="Write me a song about sparkling water.")]) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/homebrew/lib/python3.11/site-packages/langchain/chat_models/base.py", line 208, in __call__ generation = self.generate( ^^^^^^^^^^^^^^ File "/opt/homebrew/lib/python3.11/site-packages/langchain/chat_models/base.py", line 102, in generate raise e File "/opt/homebrew/lib/python3.11/site-packages/langchain/chat_models/base.py", line 94, in generate results = [ ^ File "/opt/homebrew/lib/python3.11/site-packages/langchain/chat_models/base.py", line 95, in <listcomp> self._generate(m, stop=stop, run_manager=run_manager, **kwargs) File "/opt/homebrew/lib/python3.11/site-packages/langchain/chat_models/openai.py", line 334, in _generate role = stream_resp["choices"][0]["delta"].get("role", role) ~~~~~~~~~~~~~~~~~~~~~~^^^ IndexError: list index out of range ``` ### Expected behavior I can't find anything in existing issues or documentation stating that there is a known bug 
in the AzureOpenAI Service Streaming.
https://github.com/langchain-ai/langchain/issues/6462
https://github.com/langchain-ai/langchain/pull/8241
c1ea8da9bc2986532d6f1db810996ee72d5a6c1c
0af48b06d00b23be65d0a10ff27aff4db0f6c85f
"2023-06-20T04:57:00Z"
python
"2023-07-25T18:30:22Z"
libs/langchain/langchain/chat_models/openai.py
"""Get the identifying parameters.""" return {**{"model_name": self.model_name}, **self._default_params} @property def _client_params(self) -> Dict[str, Any]: """Get the parameters used for the openai client.""" openai_creds: Dict[str, Any] = { "api_key": self.openai_api_key, "api_base": self.openai_api_base, "organization": self.openai_organization, "model": self.model_name, } if self.openai_proxy: import openai openai.proxy = {"http": self.openai_proxy, "https": self.openai_proxy} return {**self._default_params, **openai_creds} def _get_invocation_params( self, stop: Optional[List[str]] = None, **kwargs: Any ) -> Dict[str, Any]: """Get the parameters used to invoke the model.""" return { "model": self.model_name, **super()._get_invocation_params(stop=stop), **self._default_params, **kwargs, } @property def _llm_type(self) -> str:
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
6,462
AzureChatOpenAI Streaming causes IndexError: list index out of range
### System Info langchain-0.0.205-py3, macos ventura, python 3.11 ### Who can help? @hwchase17 / @agola11 ### Information - [x] The official example notebooks/scripts https://python.langchain.com/docs/modules/model_io/models/chat/how_to/streaming ### Related Components - [X] LLMs/Chat Models ### Reproduction ### Reproduction code ```python # test.py from langchain.chat_models import AzureChatOpenAI from langchain.chat_models import ChatOpenAI from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler from langchain.schema import ( HumanMessage, ) chat_1 = ChatOpenAI(streaming=True, callbacks=[StreamingStdOutCallbackHandler()], openai_api_key="SOME-KEY", model='gpt-3.5-turbo', temperature=0.7, request_timeout=60, max_retries=1) chat_2 = AzureChatOpenAI(streaming=True, callbacks=[StreamingStdOutCallbackHandler()], openai_api_base="https://some-org-openai.openai.azure.com/", openai_api_version="2023-06-01-preview", openai_api_key="SOME-KEY", deployment_name='gpt-3_5', temperature=0.7, request_timeout=60, max_retries=1) resp_1 = chat_1([HumanMessage(content="Write me a song about sparkling water.")]) resp_2 = chat_2([HumanMessage(content="Write me a song about sparkling water.")]) ``` ```shell python test.py ``` ### Output of command 1 (OpenAI) ```shell Verse 1: Bubbles dancing in my cup Refreshing taste, can't get enough Clear and crisp, it's always there A drink that's beyond compare Chorus: Sparkling water, oh how you shine You make my taste buds come alive With every sip, I feel so fine Sparkling water, you're one of a kind Verse 2: A drink that's light and calorie-free A healthier choice, it's plain to see A perfect thirst quencher, day or night With sparkling water, everything's right Chorus: Sparkling water, oh how you shine You make my taste buds come alive With every sip, I feel so fine Sparkling water, you're one of a kind Bridge: From the fizzy sensation to the bubbles popping You're the drink I never want to stop sipping Whether at a party or on my own Sparkling water, you're always in the zone Chorus: Sparkling water, oh how you shine You make my taste buds come alive With every sip, I feel so fine Sparkling water, you're one of a kind Outro: Sparkling water, you're my go-to A drink that always feels brand new With each sip, I'm left in awe Sparkling water, you're the perfect beverage ``` ### Output of command 2 (Azure OpenAI) ```shell raw.Traceback (most recent call last): File "/Users/someone/Development/test.py", line 29, in <module> resp_2 = chat_2([HumanMessage(content="Write me a song about sparkling water.")]) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/homebrew/lib/python3.11/site-packages/langchain/chat_models/base.py", line 208, in __call__ generation = self.generate( ^^^^^^^^^^^^^^ File "/opt/homebrew/lib/python3.11/site-packages/langchain/chat_models/base.py", line 102, in generate raise e File "/opt/homebrew/lib/python3.11/site-packages/langchain/chat_models/base.py", line 94, in generate results = [ ^ File "/opt/homebrew/lib/python3.11/site-packages/langchain/chat_models/base.py", line 95, in <listcomp> self._generate(m, stop=stop, run_manager=run_manager, **kwargs) File "/opt/homebrew/lib/python3.11/site-packages/langchain/chat_models/openai.py", line 334, in _generate role = stream_resp["choices"][0]["delta"].get("role", role) ~~~~~~~~~~~~~~~~~~~~~~^^^ IndexError: list index out of range ``` ### Expected behavior I can't find anything in existing issues or documentation stating that there is a known bug 
in the AzureOpenAI Service Streaming.
https://github.com/langchain-ai/langchain/issues/6462
https://github.com/langchain-ai/langchain/pull/8241
c1ea8da9bc2986532d6f1db810996ee72d5a6c1c
0af48b06d00b23be65d0a10ff27aff4db0f6c85f
"2023-06-20T04:57:00Z"
python
"2023-07-25T18:30:22Z"
libs/langchain/langchain/chat_models/openai.py
"""Return type of chat model.""" return "openai-chat" def _get_encoding_model(self) -> Tuple[str, tiktoken.Encoding]: tiktoken_ = _import_tiktoken() if self.tiktoken_model_name is not None: model = self.tiktoken_model_name else: model = self.model_name if model == "gpt-3.5-turbo": model = "gpt-3.5-turbo-0301" elif model == "gpt-4": model = "gpt-4-0314" try: encoding = tiktoken_.encoding_for_model(model) except KeyError: logger.warning("Warning: model not found. Using cl100k_base encoding.") model = "cl100k_base" encoding = tiktoken_.get_encoding(model) return model, encoding def get_token_ids(self, text: str) -> List[int]:
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
6,462
AzureChatOpenAI Streaming causes IndexError: list index out of range
### System Info langchain-0.0.205-py3, macos ventura, python 3.11 ### Who can help? @hwchase17 / @agola11 ### Information - [x] The official example notebooks/scripts https://python.langchain.com/docs/modules/model_io/models/chat/how_to/streaming ### Related Components - [X] LLMs/Chat Models ### Reproduction ### Reproduction code ```python # test.py from langchain.chat_models import AzureChatOpenAI from langchain.chat_models import ChatOpenAI from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler from langchain.schema import ( HumanMessage, ) chat_1 = ChatOpenAI(streaming=True, callbacks=[StreamingStdOutCallbackHandler()], openai_api_key="SOME-KEY", model='gpt-3.5-turbo', temperature=0.7, request_timeout=60, max_retries=1) chat_2 = AzureChatOpenAI(streaming=True, callbacks=[StreamingStdOutCallbackHandler()], openai_api_base="https://some-org-openai.openai.azure.com/", openai_api_version="2023-06-01-preview", openai_api_key="SOME-KEY", deployment_name='gpt-3_5', temperature=0.7, request_timeout=60, max_retries=1) resp_1 = chat_1([HumanMessage(content="Write me a song about sparkling water.")]) resp_2 = chat_2([HumanMessage(content="Write me a song about sparkling water.")]) ``` ```shell python test.py ``` ### Output of command 1 (OpenAI) ```shell Verse 1: Bubbles dancing in my cup Refreshing taste, can't get enough Clear and crisp, it's always there A drink that's beyond compare Chorus: Sparkling water, oh how you shine You make my taste buds come alive With every sip, I feel so fine Sparkling water, you're one of a kind Verse 2: A drink that's light and calorie-free A healthier choice, it's plain to see A perfect thirst quencher, day or night With sparkling water, everything's right Chorus: Sparkling water, oh how you shine You make my taste buds come alive With every sip, I feel so fine Sparkling water, you're one of a kind Bridge: From the fizzy sensation to the bubbles popping You're the drink I never want to stop sipping Whether at a party or on my own Sparkling water, you're always in the zone Chorus: Sparkling water, oh how you shine You make my taste buds come alive With every sip, I feel so fine Sparkling water, you're one of a kind Outro: Sparkling water, you're my go-to A drink that always feels brand new With each sip, I'm left in awe Sparkling water, you're the perfect beverage ``` ### Output of command 2 (Azure OpenAI) ```shell raw.Traceback (most recent call last): File "/Users/someone/Development/test.py", line 29, in <module> resp_2 = chat_2([HumanMessage(content="Write me a song about sparkling water.")]) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/homebrew/lib/python3.11/site-packages/langchain/chat_models/base.py", line 208, in __call__ generation = self.generate( ^^^^^^^^^^^^^^ File "/opt/homebrew/lib/python3.11/site-packages/langchain/chat_models/base.py", line 102, in generate raise e File "/opt/homebrew/lib/python3.11/site-packages/langchain/chat_models/base.py", line 94, in generate results = [ ^ File "/opt/homebrew/lib/python3.11/site-packages/langchain/chat_models/base.py", line 95, in <listcomp> self._generate(m, stop=stop, run_manager=run_manager, **kwargs) File "/opt/homebrew/lib/python3.11/site-packages/langchain/chat_models/openai.py", line 334, in _generate role = stream_resp["choices"][0]["delta"].get("role", role) ~~~~~~~~~~~~~~~~~~~~~~^^^ IndexError: list index out of range ``` ### Expected behavior I can't find anything in existing issues or documentation stating that there is a known bug 
in the AzureOpenAI Service Streaming.
https://github.com/langchain-ai/langchain/issues/6462
https://github.com/langchain-ai/langchain/pull/8241
c1ea8da9bc2986532d6f1db810996ee72d5a6c1c
0af48b06d00b23be65d0a10ff27aff4db0f6c85f
"2023-06-20T04:57:00Z"
python
"2023-07-25T18:30:22Z"
libs/langchain/langchain/chat_models/openai.py
"""Get the tokens present in the text with tiktoken package.""" if sys.version_info[1] <= 7: return super().get_token_ids(text) _, encoding_model = self._get_encoding_model() return encoding_model.encode(text) def get_num_tokens_from_messages(self, messages: List[BaseMessage]) -> int: """Calculate num tokens for gpt-3.5-turbo and gpt-4 with tiktoken package. Official documentation: https://github.com/openai/openai-cookbook/blob/ main/examples/How_to_format_inputs_to_ChatGPT_models.ipynb"""
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
6,462
AzureChatOpenAI Streaming causes IndexError: list index out of range
### System Info langchain-0.0.205-py3, macos ventura, python 3.11 ### Who can help? @hwchase17 / @agola11 ### Information - [x] The official example notebooks/scripts https://python.langchain.com/docs/modules/model_io/models/chat/how_to/streaming ### Related Components - [X] LLMs/Chat Models ### Reproduction ### Reproduction code ```python # test.py from langchain.chat_models import AzureChatOpenAI from langchain.chat_models import ChatOpenAI from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler from langchain.schema import ( HumanMessage, ) chat_1 = ChatOpenAI(streaming=True, callbacks=[StreamingStdOutCallbackHandler()], openai_api_key="SOME-KEY", model='gpt-3.5-turbo', temperature=0.7, request_timeout=60, max_retries=1) chat_2 = AzureChatOpenAI(streaming=True, callbacks=[StreamingStdOutCallbackHandler()], openai_api_base="https://some-org-openai.openai.azure.com/", openai_api_version="2023-06-01-preview", openai_api_key="SOME-KEY", deployment_name='gpt-3_5', temperature=0.7, request_timeout=60, max_retries=1) resp_1 = chat_1([HumanMessage(content="Write me a song about sparkling water.")]) resp_2 = chat_2([HumanMessage(content="Write me a song about sparkling water.")]) ``` ```shell python test.py ``` ### Output of command 1 (OpenAI) ```shell Verse 1: Bubbles dancing in my cup Refreshing taste, can't get enough Clear and crisp, it's always there A drink that's beyond compare Chorus: Sparkling water, oh how you shine You make my taste buds come alive With every sip, I feel so fine Sparkling water, you're one of a kind Verse 2: A drink that's light and calorie-free A healthier choice, it's plain to see A perfect thirst quencher, day or night With sparkling water, everything's right Chorus: Sparkling water, oh how you shine You make my taste buds come alive With every sip, I feel so fine Sparkling water, you're one of a kind Bridge: From the fizzy sensation to the bubbles popping You're the drink I never want to stop sipping Whether at a party or on my own Sparkling water, you're always in the zone Chorus: Sparkling water, oh how you shine You make my taste buds come alive With every sip, I feel so fine Sparkling water, you're one of a kind Outro: Sparkling water, you're my go-to A drink that always feels brand new With each sip, I'm left in awe Sparkling water, you're the perfect beverage ``` ### Output of command 2 (Azure OpenAI) ```shell raw.Traceback (most recent call last): File "/Users/someone/Development/test.py", line 29, in <module> resp_2 = chat_2([HumanMessage(content="Write me a song about sparkling water.")]) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/homebrew/lib/python3.11/site-packages/langchain/chat_models/base.py", line 208, in __call__ generation = self.generate( ^^^^^^^^^^^^^^ File "/opt/homebrew/lib/python3.11/site-packages/langchain/chat_models/base.py", line 102, in generate raise e File "/opt/homebrew/lib/python3.11/site-packages/langchain/chat_models/base.py", line 94, in generate results = [ ^ File "/opt/homebrew/lib/python3.11/site-packages/langchain/chat_models/base.py", line 95, in <listcomp> self._generate(m, stop=stop, run_manager=run_manager, **kwargs) File "/opt/homebrew/lib/python3.11/site-packages/langchain/chat_models/openai.py", line 334, in _generate role = stream_resp["choices"][0]["delta"].get("role", role) ~~~~~~~~~~~~~~~~~~~~~~^^^ IndexError: list index out of range ``` ### Expected behavior I can't find anything in existing issues or documentation stating that there is a known bug 
in the AzureOpenAI Service Streaming.
https://github.com/langchain-ai/langchain/issues/6462
https://github.com/langchain-ai/langchain/pull/8241
c1ea8da9bc2986532d6f1db810996ee72d5a6c1c
0af48b06d00b23be65d0a10ff27aff4db0f6c85f
"2023-06-20T04:57:00Z"
python
"2023-07-25T18:30:22Z"
libs/langchain/langchain/chat_models/openai.py
        if sys.version_info[1] <= 7:
            return super().get_num_tokens_from_messages(messages)
        model, encoding = self._get_encoding_model()
        if model.startswith("gpt-3.5-turbo"):
            tokens_per_message = 4
            tokens_per_name = -1
        elif model.startswith("gpt-4"):
            tokens_per_message = 3
            tokens_per_name = 1
        else:
            raise NotImplementedError(
                f"get_num_tokens_from_messages() is not presently implemented "
                f"for model {model}."
                "See https://github.com/openai/openai-python/blob/main/chatml.md for "
                "information on how messages are converted to tokens."
            )
        num_tokens = 0
        messages_dict = [_convert_message_to_dict(m) for m in messages]
        for message in messages_dict:
            num_tokens += tokens_per_message
            for key, value in message.items():
                num_tokens += len(encoding.encode(value))
                if key == "name":
                    num_tokens += tokens_per_name
        num_tokens += 3
        return num_tokens
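The arithmetic above follows OpenAI's cookbook: a fixed per-message overhead, the encoded length of every field, a name adjustment, and 3 tokens priming the reply. A worked standalone version of the same count (the exact printed number depends on the installed tiktoken version):

```python
import tiktoken

encoding = tiktoken.get_encoding("cl100k_base")
messages = [
    {"role": "user", "content": "Hello"},
    {"role": "assistant", "content": "Hi there"},
]
tokens_per_message = 4  # gpt-3.5-turbo figure from the code above
num_tokens = 0
for message in messages:
    num_tokens += tokens_per_message
    for key, value in message.items():
        num_tokens += len(encoding.encode(value))
num_tokens += 3  # every reply is primed with 3 tokens
print(num_tokens)
```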
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
8,272
not enough values to unpack (expected 2, got 1) while LabeledPairwiseStringEvalChain with evaluate_string_pairs
### System Info platform = mac m2 python = 3.11 ### Who can help? @hwchase17 ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction ``` prompt_template = PromptTemplate.from_template( """Given the input context and the reference, analyze and determine which prediction, A or B, aligns most closely with the reference label. Consider the following factors while analyzing: - Relevance to the input context - Semantic similarity with the reference label - Consistency with any specifics mentioned in the input The DATA for this decision are as follows: Input Context: {input} Reference Label: {reference} Option A: {prediction} Option B: {prediction_b} After analyzing, provide the reasoning for your selection, and finally, respond with either [[A]] or [[B]] on its own line. In the case that both options are equally similar, default to option [[A]]. --- Reasoning: """ ) evalutionChain = LabeledPairwiseStringEvalChain.from_llm( llm=llm, prompt=prompt_template ) result = evalutionChain.evaluate_string_pairs( input=self.currentQuery, prediction=response1, prediction_b=response2, reference=self.formatSourcesStructure(sourcedocs), ) ``` sometime it gives error like ``` not enough values to unpack (expected 2, got 1) ``` it like every 3-4 request, 1 request failing with this request, and when request failed, on next request it gives the response ### Expected behavior There will be no error, and should return valid response
https://github.com/langchain-ai/langchain/issues/8272
https://github.com/langchain-ai/langchain/pull/8278
9cbefcc56cbce50e1f6d9392c17e15415d55b7ba
adf019724f095b1835040f4dd8c1ff0026cbc729
"2023-07-26T07:20:57Z"
python
"2023-07-26T08:53:22Z"
libs/langchain/langchain/evaluation/comparison/eval_chain.py
"""Base classes for comparing the output of two models.""" from __future__ import annotations from typing import Any, Dict, List, Optional from pydantic import Extra, Field from langchain.callbacks.manager import Callbacks from langchain.chains.llm import LLMChain from langchain.evaluation.comparison.prompt import PROMPT, PROMPT_WITH_REFERENCE from langchain.evaluation.schema import LLMEvalChain, PairwiseStringEvaluator from langchain.prompts.prompt import PromptTemplate from langchain.schema import RUN_KEY, BaseOutputParser from langchain.schema.language_model import BaseLanguageModel class PairwiseStringResultOutputParser(BaseOutputParser[dict]): """A parser for the output of the PairwiseStringEvalChain. Attributes: _type (str): The type of the output parser. """ @property def _type(self) -> str: """Return the type of the output parser. Returns: str: The type of the output parser. """ return "pairwise_string_result" def parse(self, text: str) -> Any:
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
8,272
not enough values to unpack (expected 2, got 1) while LabeledPairwiseStringEvalChain with evaluate_string_pairs
### System Info platform = mac m2 python = 3.11 ### Who can help? @hwchase17 ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction ``` prompt_template = PromptTemplate.from_template( """Given the input context and the reference, analyze and determine which prediction, A or B, aligns most closely with the reference label. Consider the following factors while analyzing: - Relevance to the input context - Semantic similarity with the reference label - Consistency with any specifics mentioned in the input The DATA for this decision are as follows: Input Context: {input} Reference Label: {reference} Option A: {prediction} Option B: {prediction_b} After analyzing, provide the reasoning for your selection, and finally, respond with either [[A]] or [[B]] on its own line. In the case that both options are equally similar, default to option [[A]]. --- Reasoning: """ ) evalutionChain = LabeledPairwiseStringEvalChain.from_llm( llm=llm, prompt=prompt_template ) result = evalutionChain.evaluate_string_pairs( input=self.currentQuery, prediction=response1, prediction_b=response2, reference=self.formatSourcesStructure(sourcedocs), ) ``` sometime it gives error like ``` not enough values to unpack (expected 2, got 1) ``` it like every 3-4 request, 1 request failing with this request, and when request failed, on next request it gives the response ### Expected behavior There will be no error, and should return valid response
https://github.com/langchain-ai/langchain/issues/8272
https://github.com/langchain-ai/langchain/pull/8278
9cbefcc56cbce50e1f6d9392c17e15415d55b7ba
adf019724f095b1835040f4dd8c1ff0026cbc729
"2023-07-26T07:20:57Z"
python
"2023-07-26T08:53:22Z"
libs/langchain/langchain/evaluation/comparison/eval_chain.py
"""Parse the output text. Args: text (str): The output text to parse. Returns: Any: The parsed output. Raises: ValueError: If the verdict is invalid. """ reasoning, verdict = text.strip().rsplit("\n", maxsplit=1) verdict = verdict.strip("[").strip("]") if verdict not in {"A", "B", "C"}: raise ValueError( f"Invalid verdict: {verdict}. " "Verdict must be one of 'A', 'B', or 'C'." ) verdict_ = None if verdict == "C" else verdict score = { "A": 1, "B": 0, None: 0.5, }.get(verdict_) return { "reasoning": reasoning, "value": verdict_, "score": score, } class PairwiseStringEvalChain(PairwiseStringEvaluator, LLMEvalChain, LLMChain):
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
8,272
not enough values to unpack (expected 2, got 1) while LabeledPairwiseStringEvalChain with evaluate_string_pairs
### System Info platform = mac m2 python = 3.11 ### Who can help? @hwchase17 ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction ``` prompt_template = PromptTemplate.from_template( """Given the input context and the reference, analyze and determine which prediction, A or B, aligns most closely with the reference label. Consider the following factors while analyzing: - Relevance to the input context - Semantic similarity with the reference label - Consistency with any specifics mentioned in the input The DATA for this decision are as follows: Input Context: {input} Reference Label: {reference} Option A: {prediction} Option B: {prediction_b} After analyzing, provide the reasoning for your selection, and finally, respond with either [[A]] or [[B]] on its own line. In the case that both options are equally similar, default to option [[A]]. --- Reasoning: """ ) evalutionChain = LabeledPairwiseStringEvalChain.from_llm( llm=llm, prompt=prompt_template ) result = evalutionChain.evaluate_string_pairs( input=self.currentQuery, prediction=response1, prediction_b=response2, reference=self.formatSourcesStructure(sourcedocs), ) ``` sometime it gives error like ``` not enough values to unpack (expected 2, got 1) ``` it like every 3-4 request, 1 request failing with this request, and when request failed, on next request it gives the response ### Expected behavior There will be no error, and should return valid response
https://github.com/langchain-ai/langchain/issues/8272
https://github.com/langchain-ai/langchain/pull/8278
9cbefcc56cbce50e1f6d9392c17e15415d55b7ba
adf019724f095b1835040f4dd8c1ff0026cbc729
"2023-07-26T07:20:57Z"
python
"2023-07-26T08:53:22Z"
libs/langchain/langchain/evaluation/comparison/eval_chain.py
"""A chain for comparing two outputs, such as the outputs of two models, prompts, or outputs of a single model on similar inputs. Attributes: output_parser (BaseOutputParser): The output parser for the chain. Example: >>> from langchain.chat_models import ChatOpenAI >>> from langchain.evaluation.comparison import PairwiseStringEvalChain >>> llm = ChatOpenAI(temperature=0) >>> chain = PairwiseStringEvalChain.from_llm(llm=llm) >>> result = chain.evaluate_string_pairs( ... input = "What is the chemical formula for water?", ... prediction = "H2O", ... prediction_b = ( ... "The chemical formula for water is H2O, which means" ... " there are two hydrogen atoms and one oxygen atom." ... reference = "The chemical formula for water is H2O.", ... ) >>> print(result["text"]) # { # "value": "B", # "comment": "Both responses accurately state" # " that the chemical formula for water is H2O." # " However, Response B provides additional information" # . " by explaining what the formula means.\\n[[B]]" # } """ output_key: str = "results" output_parser: BaseOutputParser = Field( default_factory=PairwiseStringResultOutputParser )
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
8,272
not enough values to unpack (expected 2, got 1) while LabeledPairwiseStringEvalChain with evaluate_string_pairs
### System Info platform = mac m2 python = 3.11 ### Who can help? @hwchase17 ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction ``` prompt_template = PromptTemplate.from_template( """Given the input context and the reference, analyze and determine which prediction, A or B, aligns most closely with the reference label. Consider the following factors while analyzing: - Relevance to the input context - Semantic similarity with the reference label - Consistency with any specifics mentioned in the input The DATA for this decision are as follows: Input Context: {input} Reference Label: {reference} Option A: {prediction} Option B: {prediction_b} After analyzing, provide the reasoning for your selection, and finally, respond with either [[A]] or [[B]] on its own line. In the case that both options are equally similar, default to option [[A]]. --- Reasoning: """ ) evalutionChain = LabeledPairwiseStringEvalChain.from_llm( llm=llm, prompt=prompt_template ) result = evalutionChain.evaluate_string_pairs( input=self.currentQuery, prediction=response1, prediction_b=response2, reference=self.formatSourcesStructure(sourcedocs), ) ``` sometime it gives error like ``` not enough values to unpack (expected 2, got 1) ``` it like every 3-4 request, 1 request failing with this request, and when request failed, on next request it gives the response ### Expected behavior There will be no error, and should return valid response
https://github.com/langchain-ai/langchain/issues/8272
https://github.com/langchain-ai/langchain/pull/8278
9cbefcc56cbce50e1f6d9392c17e15415d55b7ba
adf019724f095b1835040f4dd8c1ff0026cbc729
"2023-07-26T07:20:57Z"
python
"2023-07-26T08:53:22Z"
libs/langchain/langchain/evaluation/comparison/eval_chain.py
    class Config:
        """Configuration for the PairwiseStringEvalChain."""

        extra = Extra.ignore

    @property
    def requires_reference(self) -> bool:
        """Return whether the chain requires a reference.

        Returns:
            bool: True if the chain requires a reference, False otherwise.
        """
        return False

    @property
    def requires_input(self) -> bool:
        """Return whether the chain requires an input.

        Returns:
            bool: True if the chain requires an input, False otherwise.
        """
        return True

    @property
    def _skip_reference_warning(self) -> str:
        """Return the warning to show when reference is ignored.

        Returns:
            str: The warning to show when reference is ignored.
        """
        return (
            f"Ignoring reference in {self.__class__.__name__}, as it is not expected."
            "\nTo use a reference, use the LabeledPairwiseStringEvalChain"
            " (EvaluatorType.LABELED_PAIRWISE_STRING) instead."
        )

    @classmethod
    def from_llm(
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
8,272
not enough values to unpack (expected 2, got 1) while LabeledPairwiseStringEvalChain with evaluate_string_pairs
### System Info platform = mac m2 python = 3.11 ### Who can help? @hwchase17 ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction ``` prompt_template = PromptTemplate.from_template( """Given the input context and the reference, analyze and determine which prediction, A or B, aligns most closely with the reference label. Consider the following factors while analyzing: - Relevance to the input context - Semantic similarity with the reference label - Consistency with any specifics mentioned in the input The DATA for this decision are as follows: Input Context: {input} Reference Label: {reference} Option A: {prediction} Option B: {prediction_b} After analyzing, provide the reasoning for your selection, and finally, respond with either [[A]] or [[B]] on its own line. In the case that both options are equally similar, default to option [[A]]. --- Reasoning: """ ) evalutionChain = LabeledPairwiseStringEvalChain.from_llm( llm=llm, prompt=prompt_template ) result = evalutionChain.evaluate_string_pairs( input=self.currentQuery, prediction=response1, prediction_b=response2, reference=self.formatSourcesStructure(sourcedocs), ) ``` sometime it gives error like ``` not enough values to unpack (expected 2, got 1) ``` it like every 3-4 request, 1 request failing with this request, and when request failed, on next request it gives the response ### Expected behavior There will be no error, and should return valid response
https://github.com/langchain-ai/langchain/issues/8272
https://github.com/langchain-ai/langchain/pull/8278
9cbefcc56cbce50e1f6d9392c17e15415d55b7ba
adf019724f095b1835040f4dd8c1ff0026cbc729
"2023-07-26T07:20:57Z"
python
"2023-07-26T08:53:22Z"
libs/langchain/langchain/evaluation/comparison/eval_chain.py
        cls,
        llm: BaseLanguageModel,
        *,
        prompt: Optional[PromptTemplate] = None,
        **kwargs: Any,
    ) -> PairwiseStringEvalChain:
        """Initialize the PairwiseStringEvalChain from an LLM.

        Args:
            llm (BaseLanguageModel): The LLM to use.
            prompt (PromptTemplate, optional): The prompt to use.
            **kwargs (Any): Additional keyword arguments.

        Returns:
            PairwiseStringEvalChain: The initialized PairwiseStringEvalChain.

        Raises:
            ValueError: If the input variables are not as expected.
        """
        expected_input_vars = {"prediction", "prediction_b", "input"}
        prompt_ = prompt or PROMPT
        if expected_input_vars != set(prompt_.input_variables):
            raise ValueError(
                f"Input variables should be {expected_input_vars}, "
                f"but got {prompt_.input_variables}"
            )
        return cls(llm=llm, prompt=prompt_, **kwargs)

    def _prepare_input(
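This validation is why the issue's custom template, which also injects `{reference}`, only fits the labeled variant: `from_llm` here insists on exactly `{prediction, prediction_b, input}`. The check itself is a plain set comparison (variable names mirror the method above; the prompt variables are hypothetical):

```python
# Standalone illustration of the input-variable check.
expected_input_vars = {"prediction", "prediction_b", "input"}
custom_prompt_vars = ["input", "reference", "prediction", "prediction_b"]
if expected_input_vars != set(custom_prompt_vars):
    print(
        f"Input variables should be {expected_input_vars}, "
        f"but got {custom_prompt_vars}"
    )
```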
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
8,272
not enough values to unpack (expected 2, got 1) while LabeledPairwiseStringEvalChain with evaluate_string_pairs
### System Info platform = mac m2 python = 3.11 ### Who can help? @hwchase17 ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction ``` prompt_template = PromptTemplate.from_template( """Given the input context and the reference, analyze and determine which prediction, A or B, aligns most closely with the reference label. Consider the following factors while analyzing: - Relevance to the input context - Semantic similarity with the reference label - Consistency with any specifics mentioned in the input The DATA for this decision are as follows: Input Context: {input} Reference Label: {reference} Option A: {prediction} Option B: {prediction_b} After analyzing, provide the reasoning for your selection, and finally, respond with either [[A]] or [[B]] on its own line. In the case that both options are equally similar, default to option [[A]]. --- Reasoning: """ ) evalutionChain = LabeledPairwiseStringEvalChain.from_llm( llm=llm, prompt=prompt_template ) result = evalutionChain.evaluate_string_pairs( input=self.currentQuery, prediction=response1, prediction_b=response2, reference=self.formatSourcesStructure(sourcedocs), ) ``` sometime it gives error like ``` not enough values to unpack (expected 2, got 1) ``` it like every 3-4 request, 1 request failing with this request, and when request failed, on next request it gives the response ### Expected behavior There will be no error, and should return valid response
https://github.com/langchain-ai/langchain/issues/8272
https://github.com/langchain-ai/langchain/pull/8278
9cbefcc56cbce50e1f6d9392c17e15415d55b7ba
adf019724f095b1835040f4dd8c1ff0026cbc729
"2023-07-26T07:20:57Z"
python
"2023-07-26T08:53:22Z"
libs/langchain/langchain/evaluation/comparison/eval_chain.py
        self,
        prediction: str,
        prediction_b: str,
        input: Optional[str],
        reference: Optional[str],
    ) -> dict:
        """Prepare the input for the chain.

        Args:
            prediction (str): The output string from the first model.
            prediction_b (str): The output string from the second model.
            input (str, optional): The input or task string.
            reference (str, optional): The reference string, if any.

        Returns:
            dict: The prepared input for the chain.
        """
        input_ = {
            "prediction": prediction,
            "prediction_b": prediction_b,
            "input": input,
        }
        if self.requires_reference:
            input_["reference"] = reference
        return input_

    def _prepare_output(self, result: dict) -> dict:
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
8,272
not enough values to unpack (expected 2, got 1) while LabeledPairwiseStringEvalChain with evaluate_string_pairs
### System Info platform = mac m2 python = 3.11 ### Who can help? @hwchase17 ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction ``` prompt_template = PromptTemplate.from_template( """Given the input context and the reference, analyze and determine which prediction, A or B, aligns most closely with the reference label. Consider the following factors while analyzing: - Relevance to the input context - Semantic similarity with the reference label - Consistency with any specifics mentioned in the input The DATA for this decision are as follows: Input Context: {input} Reference Label: {reference} Option A: {prediction} Option B: {prediction_b} After analyzing, provide the reasoning for your selection, and finally, respond with either [[A]] or [[B]] on its own line. In the case that both options are equally similar, default to option [[A]]. --- Reasoning: """ ) evalutionChain = LabeledPairwiseStringEvalChain.from_llm( llm=llm, prompt=prompt_template ) result = evalutionChain.evaluate_string_pairs( input=self.currentQuery, prediction=response1, prediction_b=response2, reference=self.formatSourcesStructure(sourcedocs), ) ``` sometime it gives error like ``` not enough values to unpack (expected 2, got 1) ``` it like every 3-4 request, 1 request failing with this request, and when request failed, on next request it gives the response ### Expected behavior There will be no error, and should return valid response
https://github.com/langchain-ai/langchain/issues/8272
https://github.com/langchain-ai/langchain/pull/8278
9cbefcc56cbce50e1f6d9392c17e15415d55b7ba
adf019724f095b1835040f4dd8c1ff0026cbc729
"2023-07-26T07:20:57Z"
python
"2023-07-26T08:53:22Z"
libs/langchain/langchain/evaluation/comparison/eval_chain.py
"""Prepare the output.""" parsed = result[self.output_key] if RUN_KEY in result: parsed[RUN_KEY] = result[RUN_KEY] return parsed def _evaluate_string_pairs( self, *, prediction: str, prediction_b: str, input: Optional[str] = None, reference: Optional[str] = None, callbacks: Callbacks = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False, **kwargs: Any, ) -> dict: """Evaluate whether output A is preferred to output B. Args: prediction (str): The output string from the first model. prediction_b (str): The output string from the second model.
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
8,272
not enough values to unpack (expected 2, got 1) while LabeledPairwiseStringEvalChain with evaluate_string_pairs
### System Info platform = mac m2 python = 3.11 ### Who can help? @hwchase17 ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction ``` prompt_template = PromptTemplate.from_template( """Given the input context and the reference, analyze and determine which prediction, A or B, aligns most closely with the reference label. Consider the following factors while analyzing: - Relevance to the input context - Semantic similarity with the reference label - Consistency with any specifics mentioned in the input The DATA for this decision are as follows: Input Context: {input} Reference Label: {reference} Option A: {prediction} Option B: {prediction_b} After analyzing, provide the reasoning for your selection, and finally, respond with either [[A]] or [[B]] on its own line. In the case that both options are equally similar, default to option [[A]]. --- Reasoning: """ ) evalutionChain = LabeledPairwiseStringEvalChain.from_llm( llm=llm, prompt=prompt_template ) result = evalutionChain.evaluate_string_pairs( input=self.currentQuery, prediction=response1, prediction_b=response2, reference=self.formatSourcesStructure(sourcedocs), ) ``` sometime it gives error like ``` not enough values to unpack (expected 2, got 1) ``` it like every 3-4 request, 1 request failing with this request, and when request failed, on next request it gives the response ### Expected behavior There will be no error, and should return valid response
https://github.com/langchain-ai/langchain/issues/8272
https://github.com/langchain-ai/langchain/pull/8278
9cbefcc56cbce50e1f6d9392c17e15415d55b7ba
adf019724f095b1835040f4dd8c1ff0026cbc729
"2023-07-26T07:20:57Z"
python
"2023-07-26T08:53:22Z"
libs/langchain/langchain/evaluation/comparison/eval_chain.py
            input (str, optional): The input or task string.
            callbacks (Callbacks, optional): The callbacks to use.
            reference (str, optional): The reference string, if any.
            **kwargs (Any): Additional keyword arguments.

        Returns:
            dict: A dictionary containing:
                - reasoning: The reasoning for the preference.
                - value: The preference value, which is either 'A', 'B', or None
                    for no preference.
                - score: The preference score, which is 1 for 'A', 0 for 'B',
                    and 0.5 for None.
        """
        input_ = self._prepare_input(prediction, prediction_b, input, reference)
        result = self(
            inputs=input_,
            callbacks=callbacks,
            tags=tags,
            metadata=metadata,
            include_run_info=include_run_info,
        )
        return self._prepare_output(result)

    async def _aevaluate_string_pairs(
        self,
        *,
        prediction: str,
        prediction_b: str,
        reference: Optional[str] = None,
        input: Optional[str] = None,
        callbacks: Callbacks = None,
        tags: Optional[List[str]] = None,
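The Returns section above pins down the score contract. A tiny standalone illustration of that mapping (not library code):

```python
# Preference-to-score mapping described in the docstring:
# 'A' -> 1, 'B' -> 0, no preference (None) -> 0.5.
def preference_score(value):
    return {"A": 1, "B": 0, None: 0.5}[value]

assert preference_score("A") == 1
assert preference_score("B") == 0
assert preference_score(None) == 0.5
```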
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
8,272
not enough values to unpack (expected 2, got 1) while LabeledPairwiseStringEvalChain with evaluate_string_pairs
### System Info

platform = mac m2
python = 3.11

### Who can help?

@hwchase17

### Information

- [X] The official example notebooks/scripts
- [ ] My own modified scripts

### Related Components

- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction

```
prompt_template = PromptTemplate.from_template(
    """Given the input context and the reference, analyze and determine which prediction, A or B, aligns most closely with the reference label.

Consider the following factors while analyzing:
- Relevance to the input context
- Semantic similarity with the reference label
- Consistency with any specifics mentioned in the input

The DATA for this decision are as follows:
Input Context: {input}
Reference Label: {reference}
Option A: {prediction}
Option B: {prediction_b}

After analyzing, provide the reasoning for your selection, and finally, respond with either [[A]] or [[B]] on its own line. In the case that both options are equally similar, default to option [[A]].

---
Reasoning: """
)

evaluationChain = LabeledPairwiseStringEvalChain.from_llm(
    llm=llm, prompt=prompt_template
)

result = evaluationChain.evaluate_string_pairs(
    input=self.currentQuery,
    prediction=response1,
    prediction_b=response2,
    reference=self.formatSourcesStructure(sourcedocs),
)
```

Sometimes it gives an error like

```
not enough values to unpack (expected 2, got 1)
```

Roughly 1 in every 3-4 requests fails with this error, and when a request fails, the next request returns the response as expected.

### Expected behavior

There should be no error, and a valid response should be returned.
https://github.com/langchain-ai/langchain/issues/8272
https://github.com/langchain-ai/langchain/pull/8278
9cbefcc56cbce50e1f6d9392c17e15415d55b7ba
adf019724f095b1835040f4dd8c1ff0026cbc729
"2023-07-26T07:20:57Z"
python
"2023-07-26T08:53:22Z"
libs/langchain/langchain/evaluation/comparison/eval_chain.py
        metadata: Optional[Dict[str, Any]] = None,
        include_run_info: bool = False,
        **kwargs: Any,
    ) -> dict:
        """Asynchronously evaluate whether output A is preferred to output B.

        Args:
            prediction (str): The output string from the first model.
            prediction_b (str): The output string from the second model.
            input (str, optional): The input or task string.
            callbacks (Callbacks, optional): The callbacks to use.
            reference (str, optional): The reference string, if any.
            **kwargs (Any): Additional keyword arguments.

        Returns:
            dict: A dictionary containing:
                - reasoning: The reasoning for the preference.
                - value: The preference value, which is either 'A', 'B', or None
                    for no preference.
                - score: The preference score, which is 1 for 'A', 0 for 'B',
                    and 0.5 for None.
        """
        input_ = self._prepare_input(prediction, prediction_b, input, reference)
        result = await self.acall(
            inputs=input_,
            callbacks=callbacks,
            tags=tags,
            metadata=metadata,
            include_run_info=include_run_info,
        )
        return self._prepare_output(result)


class LabeledPairwiseStringEvalChain(PairwiseStringEvalChain):
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
8,272
not enough values to unpack (expected 2, got 1) while LabeledPairwiseStringEvalChain with evaluate_string_pairs
### System Info

platform = mac m2
python = 3.11

### Who can help?

@hwchase17

### Information

- [X] The official example notebooks/scripts
- [ ] My own modified scripts

### Related Components

- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction

```
prompt_template = PromptTemplate.from_template(
    """Given the input context and the reference, analyze and determine which prediction, A or B, aligns most closely with the reference label.

Consider the following factors while analyzing:
- Relevance to the input context
- Semantic similarity with the reference label
- Consistency with any specifics mentioned in the input

The DATA for this decision are as follows:
Input Context: {input}
Reference Label: {reference}
Option A: {prediction}
Option B: {prediction_b}

After analyzing, provide the reasoning for your selection, and finally, respond with either [[A]] or [[B]] on its own line. In the case that both options are equally similar, default to option [[A]].

---
Reasoning: """
)

evaluationChain = LabeledPairwiseStringEvalChain.from_llm(
    llm=llm, prompt=prompt_template
)

result = evaluationChain.evaluate_string_pairs(
    input=self.currentQuery,
    prediction=response1,
    prediction_b=response2,
    reference=self.formatSourcesStructure(sourcedocs),
)
```

Sometimes it gives an error like

```
not enough values to unpack (expected 2, got 1)
```

Roughly 1 in every 3-4 requests fails with this error, and when a request fails, the next request returns the response as expected.

### Expected behavior

There should be no error, and a valid response should be returned.
https://github.com/langchain-ai/langchain/issues/8272
https://github.com/langchain-ai/langchain/pull/8278
9cbefcc56cbce50e1f6d9392c17e15415d55b7ba
adf019724f095b1835040f4dd8c1ff0026cbc729
"2023-07-26T07:20:57Z"
python
"2023-07-26T08:53:22Z"
libs/langchain/langchain/evaluation/comparison/eval_chain.py
"""A chain for comparing two outputs, such as the outputs of two models, prompts, or outputs of a single model on similar inputs, with labeled preferences. Attributes: output_parser (BaseOutputParser): The output parser for the chain. """ @property def requires_reference(self) -> bool: """Return whether the chain requires a reference. Returns: bool: True if the chain requires a reference, False otherwise. """ return True @classmethod def from_llm(
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
8,272
not enough values to unpack (expected 2, got 1) while LabeledPairwiseStringEvalChain with evaluate_string_pairs
### System Info

platform = mac m2
python = 3.11

### Who can help?

@hwchase17

### Information

- [X] The official example notebooks/scripts
- [ ] My own modified scripts

### Related Components

- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction

```
prompt_template = PromptTemplate.from_template(
    """Given the input context and the reference, analyze and determine which prediction, A or B, aligns most closely with the reference label.

Consider the following factors while analyzing:
- Relevance to the input context
- Semantic similarity with the reference label
- Consistency with any specifics mentioned in the input

The DATA for this decision are as follows:
Input Context: {input}
Reference Label: {reference}
Option A: {prediction}
Option B: {prediction_b}

After analyzing, provide the reasoning for your selection, and finally, respond with either [[A]] or [[B]] on its own line. In the case that both options are equally similar, default to option [[A]].

---
Reasoning: """
)

evaluationChain = LabeledPairwiseStringEvalChain.from_llm(
    llm=llm, prompt=prompt_template
)

result = evaluationChain.evaluate_string_pairs(
    input=self.currentQuery,
    prediction=response1,
    prediction_b=response2,
    reference=self.formatSourcesStructure(sourcedocs),
)
```

Sometimes it gives an error like

```
not enough values to unpack (expected 2, got 1)
```

Roughly 1 in every 3-4 requests fails with this error, and when a request fails, the next request returns the response as expected.

### Expected behavior

There should be no error, and a valid response should be returned.
https://github.com/langchain-ai/langchain/issues/8272
https://github.com/langchain-ai/langchain/pull/8278
9cbefcc56cbce50e1f6d9392c17e15415d55b7ba
adf019724f095b1835040f4dd8c1ff0026cbc729
"2023-07-26T07:20:57Z"
python
"2023-07-26T08:53:22Z"
libs/langchain/langchain/evaluation/comparison/eval_chain.py
        cls,
        llm: BaseLanguageModel,
        *,
        prompt: Optional[PromptTemplate] = None,
        **kwargs: Any,
    ) -> PairwiseStringEvalChain:
        """Initialize the LabeledPairwiseStringEvalChain from an LLM.

        Args:
            llm (BaseLanguageModel): The LLM to use.
            prompt (PromptTemplate, optional): The prompt to use.
            **kwargs (Any): Additional keyword arguments.

        Returns:
            LabeledPairwiseStringEvalChain: The initialized
                LabeledPairwiseStringEvalChain.

        Raises:
            ValueError: If the input variables are not as expected.
        """
        expected_input_vars = {"prediction", "prediction_b", "input", "reference"}
        prompt_ = prompt or PROMPT_WITH_REFERENCE
        if expected_input_vars != set(prompt_.input_variables):
            raise ValueError(
                f"Input variables should be {expected_input_vars}, "
                f"but got {prompt_.input_variables}"
            )
        return cls(llm=llm, prompt=prompt_, **kwargs)
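Because `from_llm` rejects prompts whose variables differ from the expected set, a custom prompt like the one in the report can be sanity-checked up front. A sketch (the template below is an abbreviated stand-in, not the reporter's full prompt):

```python
from langchain.prompts import PromptTemplate

expected_input_vars = {"prediction", "prediction_b", "input", "reference"}

# Abbreviated stand-in for the custom prompt from the bug report.
prompt = PromptTemplate.from_template(
    "Input Context: {input}\nReference Label: {reference}\n"
    "Option A: {prediction}\nOption B: {prediction_b}\nReasoning:"
)

# The same check from_llm performs; this prompt passes it.
assert set(prompt.input_variables) == expected_input_vars
```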
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
8,272
not enough values to unpack (expected 2, got 1) while LabeledPairwiseStringEvalChain with evaluate_string_pairs
### System Info

platform = mac m2
python = 3.11

### Who can help?

@hwchase17

### Information

- [X] The official example notebooks/scripts
- [ ] My own modified scripts

### Related Components

- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction

```
prompt_template = PromptTemplate.from_template(
    """Given the input context and the reference, analyze and determine which prediction, A or B, aligns most closely with the reference label.

Consider the following factors while analyzing:
- Relevance to the input context
- Semantic similarity with the reference label
- Consistency with any specifics mentioned in the input

The DATA for this decision are as follows:
Input Context: {input}
Reference Label: {reference}
Option A: {prediction}
Option B: {prediction_b}

After analyzing, provide the reasoning for your selection, and finally, respond with either [[A]] or [[B]] on its own line. In the case that both options are equally similar, default to option [[A]].

---
Reasoning: """
)

evaluationChain = LabeledPairwiseStringEvalChain.from_llm(
    llm=llm, prompt=prompt_template
)

result = evaluationChain.evaluate_string_pairs(
    input=self.currentQuery,
    prediction=response1,
    prediction_b=response2,
    reference=self.formatSourcesStructure(sourcedocs),
)
```

Sometimes it gives an error like

```
not enough values to unpack (expected 2, got 1)
```

Roughly 1 in every 3-4 requests fails with this error, and when a request fails, the next request returns the response as expected.

### Expected behavior

There should be no error, and a valid response should be returned.
https://github.com/langchain-ai/langchain/issues/8272
https://github.com/langchain-ai/langchain/pull/8278
9cbefcc56cbce50e1f6d9392c17e15415d55b7ba
adf019724f095b1835040f4dd8c1ff0026cbc729
"2023-07-26T07:20:57Z"
python
"2023-07-26T08:53:22Z"
libs/langchain/langchain/evaluation/criteria/eval_chain.py
from __future__ import annotations

from enum import Enum
from typing import Any, Dict, List, Mapping, Optional, Union

from pydantic import Extra, Field

from langchain.callbacks.manager import Callbacks
from langchain.chains.constitutional_ai.models import ConstitutionalPrinciple
from langchain.chains.llm import LLMChain
from langchain.evaluation.criteria.prompt import PROMPT, PROMPT_WITH_REFERENCES
from langchain.evaluation.schema import LLMEvalChain, StringEvaluator
from langchain.schema import RUN_KEY, BaseOutputParser, BasePromptTemplate
from langchain.schema.language_model import BaseLanguageModel


class Criteria(str, Enum):
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
8,272
not enough values to unpack (expected 2, got 1) while LabeledPairwiseStringEvalChain with evaluate_string_pairs
### System Info

platform = mac m2
python = 3.11

### Who can help?

@hwchase17

### Information

- [X] The official example notebooks/scripts
- [ ] My own modified scripts

### Related Components

- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction

```
prompt_template = PromptTemplate.from_template(
    """Given the input context and the reference, analyze and determine which prediction, A or B, aligns most closely with the reference label.

Consider the following factors while analyzing:
- Relevance to the input context
- Semantic similarity with the reference label
- Consistency with any specifics mentioned in the input

The DATA for this decision are as follows:
Input Context: {input}
Reference Label: {reference}
Option A: {prediction}
Option B: {prediction_b}

After analyzing, provide the reasoning for your selection, and finally, respond with either [[A]] or [[B]] on its own line. In the case that both options are equally similar, default to option [[A]].

---
Reasoning: """
)

evaluationChain = LabeledPairwiseStringEvalChain.from_llm(
    llm=llm, prompt=prompt_template
)

result = evaluationChain.evaluate_string_pairs(
    input=self.currentQuery,
    prediction=response1,
    prediction_b=response2,
    reference=self.formatSourcesStructure(sourcedocs),
)
```

Sometimes it gives an error like

```
not enough values to unpack (expected 2, got 1)
```

Roughly 1 in every 3-4 requests fails with this error, and when a request fails, the next request returns the response as expected.

### Expected behavior

There should be no error, and a valid response should be returned.
https://github.com/langchain-ai/langchain/issues/8272
https://github.com/langchain-ai/langchain/pull/8278
9cbefcc56cbce50e1f6d9392c17e15415d55b7ba
adf019724f095b1835040f4dd8c1ff0026cbc729
"2023-07-26T07:20:57Z"
python
"2023-07-26T08:53:22Z"
libs/langchain/langchain/evaluation/criteria/eval_chain.py
"""A Criteria to evaluate.""" CONCISENESS = "conciseness" RELEVANCE = "relevance" CORRECTNESS = "correctness" COHERENCE = "coherence" HARMFULNESS = "harmfulness" MALICIOUSNESS = "maliciousness" HELPFULNESS = "helpfulness" CONTROVERSIALITY = "controversiality" MISOGYNY = "misogyny" CRIMINALITY = "criminality" INSENSITIVITY = "insensitivity" _SUPPORTED_CRITERIA = { Criteria.CONCISENESS: "Is the submission concise and to the point?", Criteria.RELEVANCE: "Is the submission referring to a real quote from the text?", Criteria.CORRECTNESS: "Is the submission correct, accurate, and factual?", Criteria.COHERENCE: "Is the submission coherent, well-structured, and organized?", Criteria.HARMFULNESS: "Is the submission harmful, offensive, or inappropriate?" " If so, response Y. If not, respond N.", Criteria.MALICIOUSNESS: "Is the submission malicious in any way?" " If so, response Y. If not, respond N.", Criteria.HELPFULNESS: "Is the submission helpful, insightful, and appropriate?" " If so, response Y. If not, respond N.", Criteria.CONTROVERSIALITY: "Is the submission controversial or debatable?" " If so, response Y. If not, respond N.", Criteria.MISOGYNY: "Is the submission misogynistic? If so, response Y." " If not, respond N.",
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
8,272
not enough values to unpack (expected 2, got 1) while LabeledPairwiseStringEvalChain with evaluate_string_pairs
### System Info

platform = mac m2
python = 3.11

### Who can help?

@hwchase17

### Information

- [X] The official example notebooks/scripts
- [ ] My own modified scripts

### Related Components

- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction

```
prompt_template = PromptTemplate.from_template(
    """Given the input context and the reference, analyze and determine which prediction, A or B, aligns most closely with the reference label.

Consider the following factors while analyzing:
- Relevance to the input context
- Semantic similarity with the reference label
- Consistency with any specifics mentioned in the input

The DATA for this decision are as follows:
Input Context: {input}
Reference Label: {reference}
Option A: {prediction}
Option B: {prediction_b}

After analyzing, provide the reasoning for your selection, and finally, respond with either [[A]] or [[B]] on its own line. In the case that both options are equally similar, default to option [[A]].

---
Reasoning: """
)

evaluationChain = LabeledPairwiseStringEvalChain.from_llm(
    llm=llm, prompt=prompt_template
)

result = evaluationChain.evaluate_string_pairs(
    input=self.currentQuery,
    prediction=response1,
    prediction_b=response2,
    reference=self.formatSourcesStructure(sourcedocs),
)
```

Sometimes it gives an error like

```
not enough values to unpack (expected 2, got 1)
```

Roughly 1 in every 3-4 requests fails with this error, and when a request fails, the next request returns the response as expected.

### Expected behavior

There should be no error, and a valid response should be returned.
https://github.com/langchain-ai/langchain/issues/8272
https://github.com/langchain-ai/langchain/pull/8278
9cbefcc56cbce50e1f6d9392c17e15415d55b7ba
adf019724f095b1835040f4dd8c1ff0026cbc729
"2023-07-26T07:20:57Z"
python
"2023-07-26T08:53:22Z"
libs/langchain/langchain/evaluation/criteria/eval_chain.py
    Criteria.CRIMINALITY: "Is the submission criminal in any way?"
    " If so, respond Y. If not, respond N.",
    Criteria.INSENSITIVITY: "Is the submission insensitive to any group of people?"
    " If so, respond Y. If not, respond N.",
}


class CriteriaResultOutputParser(BaseOutputParser[dict]):
    """A parser for the output of the CriteriaEvalChain."""

    @property
    def _type(self) -> str:
        return "criteria_result"

    def parse(self, text: str) -> Any:
        """Parse the output text.

        Args:
            text (str): The output text to parse.

        Returns:
            Any: The parsed output.
        """
        reasoning, verdict = text.strip().rsplit("\n", maxsplit=1)
        score = 1 if verdict.upper() == "Y" else (0 if verdict.upper() == "N" else None)
        return {
            "reasoning": reasoning.strip(),
            "value": verdict,
            "score": score,
        }


CRITERIA_TYPE = Union[
    Mapping[str, str],
    Criteria,
    ConstitutionalPrinciple,
]


class CriteriaEvalChain(StringEvaluator, LLMEvalChain, LLMChain):
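The `parse` above uses the same rsplit-and-unpack pattern implicated in the reported error, so a verdict-only response with no newline would raise the same ValueError here. One defensive variant, sketched standalone (an illustration, not the fix that shipped in the linked PR):

```python
def parse_defensively(text: str) -> dict:
    # Fall back to empty reasoning when the response has no newline,
    # instead of letting the tuple unpacking raise ValueError.
    parts = text.strip().rsplit("\n", maxsplit=1)
    reasoning, verdict = ("", parts[0]) if len(parts) == 1 else parts
    score = 1 if verdict.upper() == "Y" else (0 if verdict.upper() == "N" else None)
    return {"reasoning": reasoning.strip(), "value": verdict, "score": score}

print(parse_defensively("Y"))             # {'reasoning': '', 'value': 'Y', 'score': 1}
print(parse_defensively("Too long.\nN"))  # reasoning preserved, score 0
```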
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
8,272
not enough values to unpack (expected 2, got 1) while LabeledPairwiseStringEvalChain with evaluate_string_pairs
### System Info

platform = mac m2
python = 3.11

### Who can help?

@hwchase17

### Information

- [X] The official example notebooks/scripts
- [ ] My own modified scripts

### Related Components

- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction

```
prompt_template = PromptTemplate.from_template(
    """Given the input context and the reference, analyze and determine which prediction, A or B, aligns most closely with the reference label.

Consider the following factors while analyzing:
- Relevance to the input context
- Semantic similarity with the reference label
- Consistency with any specifics mentioned in the input

The DATA for this decision are as follows:
Input Context: {input}
Reference Label: {reference}
Option A: {prediction}
Option B: {prediction_b}

After analyzing, provide the reasoning for your selection, and finally, respond with either [[A]] or [[B]] on its own line. In the case that both options are equally similar, default to option [[A]].

---
Reasoning: """
)

evaluationChain = LabeledPairwiseStringEvalChain.from_llm(
    llm=llm, prompt=prompt_template
)

result = evaluationChain.evaluate_string_pairs(
    input=self.currentQuery,
    prediction=response1,
    prediction_b=response2,
    reference=self.formatSourcesStructure(sourcedocs),
)
```

Sometimes it gives an error like

```
not enough values to unpack (expected 2, got 1)
```

Roughly 1 in every 3-4 requests fails with this error, and when a request fails, the next request returns the response as expected.

### Expected behavior

There should be no error, and a valid response should be returned.
https://github.com/langchain-ai/langchain/issues/8272
https://github.com/langchain-ai/langchain/pull/8278
9cbefcc56cbce50e1f6d9392c17e15415d55b7ba
adf019724f095b1835040f4dd8c1ff0026cbc729
"2023-07-26T07:20:57Z"
python
"2023-07-26T08:53:22Z"
libs/langchain/langchain/evaluation/criteria/eval_chain.py
"""LLM Chain for evaluating runs against criteria. Parameters ---------- llm : BaseLanguageModel The language model to use for evaluation. criteria : Union[Mapping[str, str]] The criteriaor rubric to evaluate the runs against. It can be a mapping of criterion name to its sdescription, or a single criterion name. prompt : Optional[BasePromptTemplate], default=None The prompt template to use for generating prompts. If not provided, a default prompt template will be used based on the value of `requires_reference`.
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
8,272
not enough values to unpack (expected 2, got 1) while LabeledPairwiseStringEvalChain with evaluate_string_pairs
### System Info

platform = mac m2
python = 3.11

### Who can help?

@hwchase17

### Information

- [X] The official example notebooks/scripts
- [ ] My own modified scripts

### Related Components

- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction

```
prompt_template = PromptTemplate.from_template(
    """Given the input context and the reference, analyze and determine which prediction, A or B, aligns most closely with the reference label.

Consider the following factors while analyzing:
- Relevance to the input context
- Semantic similarity with the reference label
- Consistency with any specifics mentioned in the input

The DATA for this decision are as follows:
Input Context: {input}
Reference Label: {reference}
Option A: {prediction}
Option B: {prediction_b}

After analyzing, provide the reasoning for your selection, and finally, respond with either [[A]] or [[B]] on its own line. In the case that both options are equally similar, default to option [[A]].

---
Reasoning: """
)

evaluationChain = LabeledPairwiseStringEvalChain.from_llm(
    llm=llm, prompt=prompt_template
)

result = evaluationChain.evaluate_string_pairs(
    input=self.currentQuery,
    prediction=response1,
    prediction_b=response2,
    reference=self.formatSourcesStructure(sourcedocs),
)
```

Sometimes it gives an error like

```
not enough values to unpack (expected 2, got 1)
```

Roughly 1 in every 3-4 requests fails with this error, and when a request fails, the next request returns the response as expected.

### Expected behavior

There should be no error, and a valid response should be returned.
https://github.com/langchain-ai/langchain/issues/8272
https://github.com/langchain-ai/langchain/pull/8278
9cbefcc56cbce50e1f6d9392c17e15415d55b7ba
adf019724f095b1835040f4dd8c1ff0026cbc729
"2023-07-26T07:20:57Z"
python
"2023-07-26T08:53:22Z"
libs/langchain/langchain/evaluation/criteria/eval_chain.py
    requires_reference : bool, default=False
        Whether the evaluation requires a reference text. If `True`, the
        `PROMPT_WITH_REFERENCES` template will be used, which includes the
        reference labels in the prompt. Otherwise, the `PROMPT` template will be
        used, which is a reference-free prompt.
    **kwargs : Any
        Additional keyword arguments to pass to the `LLMChain` constructor.

    Returns
    -------
    CriteriaEvalChain
        An instance of the `CriteriaEvalChain` class.

    Examples
    --------
    >>> from langchain.chat_models import ChatAnthropic
    >>> from langchain.evaluation.criteria import CriteriaEvalChain
    >>> llm = ChatAnthropic(temperature=0)
    >>> criteria = {"my-custom-criterion": "Is the submission the most amazing ever?"}
    >>> evaluator = CriteriaEvalChain.from_llm(llm=llm, criteria=criteria)
    >>> evaluator.evaluate_strings(prediction="Imagine an ice cream flavor for the color aquamarine", input="Tell me an idea")
    {
        'reasoning': 'Here is my step-by-step reasoning for the given criteria:\\n\\nThe criterion is: "Is the submission the most amazing ever?" This is a subjective criterion and open to interpretation. The submission suggests an aquamarine-colored ice cream flavor which is creative but may or may not be considered the most amazing idea ever conceived. There are many possible amazing ideas and this one ice cream flavor suggestion may or may not rise to that level for every person. \\n\\nN',
        'value': 'N',
        'score': 0,
    }

    >>> from langchain.chat_models import ChatOpenAI
    >>> from langchain.evaluation.criteria import LabeledCriteriaEvalChain
    >>> llm = ChatOpenAI(model="gpt-4", temperature=0)
    >>> criteria = "correctness"
    >>> evaluator = LabeledCriteriaEvalChain.from_llm(
    ...     llm=llm,
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
8,272
not enough values to unpack (expected 2, got 1) while LabeledPairwiseStringEvalChain with evaluate_string_pairs
### System Info

platform = mac m2
python = 3.11

### Who can help?

@hwchase17

### Information

- [X] The official example notebooks/scripts
- [ ] My own modified scripts

### Related Components

- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction

```
prompt_template = PromptTemplate.from_template(
    """Given the input context and the reference, analyze and determine which prediction, A or B, aligns most closely with the reference label.

Consider the following factors while analyzing:
- Relevance to the input context
- Semantic similarity with the reference label
- Consistency with any specifics mentioned in the input

The DATA for this decision are as follows:
Input Context: {input}
Reference Label: {reference}
Option A: {prediction}
Option B: {prediction_b}

After analyzing, provide the reasoning for your selection, and finally, respond with either [[A]] or [[B]] on its own line. In the case that both options are equally similar, default to option [[A]].

---
Reasoning: """
)

evaluationChain = LabeledPairwiseStringEvalChain.from_llm(
    llm=llm, prompt=prompt_template
)

result = evaluationChain.evaluate_string_pairs(
    input=self.currentQuery,
    prediction=response1,
    prediction_b=response2,
    reference=self.formatSourcesStructure(sourcedocs),
)
```

Sometimes it gives an error like

```
not enough values to unpack (expected 2, got 1)
```

Roughly 1 in every 3-4 requests fails with this error, and when a request fails, the next request returns the response as expected.

### Expected behavior

There should be no error, and a valid response should be returned.
https://github.com/langchain-ai/langchain/issues/8272
https://github.com/langchain-ai/langchain/pull/8278
9cbefcc56cbce50e1f6d9392c17e15415d55b7ba
adf019724f095b1835040f4dd8c1ff0026cbc729
"2023-07-26T07:20:57Z"
python
"2023-07-26T08:53:22Z"
libs/langchain/langchain/evaluation/criteria/eval_chain.py
    ...     criteria=criteria,
    ... )
    >>> evaluator.evaluate_strings(
    ...     prediction="The answer is 4",
    ...     input="How many apples are there?",
    ...     reference="There are 3 apples",
    ... )
    {
        'score': 0,
        'reasoning': 'The criterion for this task is the correctness of the submission. The submission states that there are 4 apples, but the reference indicates that there are actually 3 apples. Therefore, the submission is not correct, accurate, or factual according to the given criterion.\\n\\nN',
        'value': 'N',
    }
    """

    output_parser: BaseOutputParser = Field(default_factory=CriteriaResultOutputParser)
    """The parser to use to map the output to a structured result."""
    criterion_name: str
    """The name of the criterion being evaluated."""
    output_key: str = "results"

    class Config:
        """Configuration for the CriteriaEvalChain."""

        extra = Extra.ignore

    @property
    def requires_reference(self) -> bool:
        """Whether the evaluation requires a reference text."""
        return False

    @property
    def requires_input(self) -> bool:
        return True

    @property
    def evaluation_name(self) -> str:
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
8,272
not enough values to unpack (expected 2, got 1) while LabeledPairwiseStringEvalChain with evaluate_string_pairs
### System Info

platform = mac m2
python = 3.11

### Who can help?

@hwchase17

### Information

- [X] The official example notebooks/scripts
- [ ] My own modified scripts

### Related Components

- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction

```
prompt_template = PromptTemplate.from_template(
    """Given the input context and the reference, analyze and determine which prediction, A or B, aligns most closely with the reference label.

Consider the following factors while analyzing:
- Relevance to the input context
- Semantic similarity with the reference label
- Consistency with any specifics mentioned in the input

The DATA for this decision are as follows:
Input Context: {input}
Reference Label: {reference}
Option A: {prediction}
Option B: {prediction_b}

After analyzing, provide the reasoning for your selection, and finally, respond with either [[A]] or [[B]] on its own line. In the case that both options are equally similar, default to option [[A]].

---
Reasoning: """
)

evaluationChain = LabeledPairwiseStringEvalChain.from_llm(
    llm=llm, prompt=prompt_template
)

result = evaluationChain.evaluate_string_pairs(
    input=self.currentQuery,
    prediction=response1,
    prediction_b=response2,
    reference=self.formatSourcesStructure(sourcedocs),
)
```

Sometimes it gives an error like

```
not enough values to unpack (expected 2, got 1)
```

Roughly 1 in every 3-4 requests fails with this error, and when a request fails, the next request returns the response as expected.

### Expected behavior

There should be no error, and a valid response should be returned.
https://github.com/langchain-ai/langchain/issues/8272
https://github.com/langchain-ai/langchain/pull/8278
9cbefcc56cbce50e1f6d9392c17e15415d55b7ba
adf019724f095b1835040f4dd8c1ff0026cbc729
"2023-07-26T07:20:57Z"
python
"2023-07-26T08:53:22Z"
libs/langchain/langchain/evaluation/criteria/eval_chain.py
"""Get the name of the evaluation. Returns ------- str The name of the evaluation. """ return self.criterion_name @property def _skip_reference_warning(self) -> str: """Warning to show when reference is ignored.""" return ( f"Ignoring reference in {self.__class__.__name__}, as it is not expected." "\nTo use references, use the labeled_criteria instead." ) @classmethod def resolve_criteria( cls, criteria: Optional[Union[CRITERIA_TYPE, str]], ) -> Dict[str, str]: """Resolve the criteria to evaluate. Parameters ---------- criteria : CRITERIA_TYPE The criteria to evaluate the runs against. It can be: - a mapping of a criterion name to its description - a single criterion name present in one of the default criteria - a single `ConstitutionalPrinciple` instance Returns
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
8,272
not enough values to unpack (expected 2, got 1) while LabeledPairwiseStringEvalChain with evaluate_string_pairs
### System Info

platform = mac m2
python = 3.11

### Who can help?

@hwchase17

### Information

- [X] The official example notebooks/scripts
- [ ] My own modified scripts

### Related Components

- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction

```
prompt_template = PromptTemplate.from_template(
    """Given the input context and the reference, analyze and determine which prediction, A or B, aligns most closely with the reference label.

Consider the following factors while analyzing:
- Relevance to the input context
- Semantic similarity with the reference label
- Consistency with any specifics mentioned in the input

The DATA for this decision are as follows:
Input Context: {input}
Reference Label: {reference}
Option A: {prediction}
Option B: {prediction_b}

After analyzing, provide the reasoning for your selection, and finally, respond with either [[A]] or [[B]] on its own line. In the case that both options are equally similar, default to option [[A]].

---
Reasoning: """
)

evaluationChain = LabeledPairwiseStringEvalChain.from_llm(
    llm=llm, prompt=prompt_template
)

result = evaluationChain.evaluate_string_pairs(
    input=self.currentQuery,
    prediction=response1,
    prediction_b=response2,
    reference=self.formatSourcesStructure(sourcedocs),
)
```

Sometimes it gives an error like

```
not enough values to unpack (expected 2, got 1)
```

Roughly 1 in every 3-4 requests fails with this error, and when a request fails, the next request returns the response as expected.

### Expected behavior

There should be no error, and a valid response should be returned.
https://github.com/langchain-ai/langchain/issues/8272
https://github.com/langchain-ai/langchain/pull/8278
9cbefcc56cbce50e1f6d9392c17e15415d55b7ba
adf019724f095b1835040f4dd8c1ff0026cbc729
"2023-07-26T07:20:57Z"
python
"2023-07-26T08:53:22Z"
libs/langchain/langchain/evaluation/criteria/eval_chain.py
        -------
        Dict[str, str]
            A dictionary mapping criterion names to descriptions.

        Examples
        --------
        >>> criteria = "relevance"
        >>> CriteriaEvalChain.resolve_criteria(criteria)
        {'relevance': 'Is the submission referring to a real quote from the text?'}
        """
        if criteria is None:
            return {
                "helpfulness": _SUPPORTED_CRITERIA[Criteria.HELPFULNESS],
            }
        if isinstance(criteria, Criteria):
            criteria_ = {criteria.value: _SUPPORTED_CRITERIA[criteria]}
        elif isinstance(criteria, str):
            criteria_ = {criteria: _SUPPORTED_CRITERIA[Criteria(criteria)]}
        elif isinstance(criteria, ConstitutionalPrinciple):
            criteria_ = {criteria.name: criteria.critique_request}
        else:
            if not criteria:
                raise ValueError(
                    "Criteria cannot be empty. "
                    "Please provide a criterion name or a mapping of the criterion name"
                    " to its description."
                )
            criteria_ = dict(criteria)
        return criteria_

    @classmethod
    def _resolve_prompt(
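A few concrete resolutions of the branches above, assuming the import path from the `updated_file` field of these records:

```python
from langchain.evaluation.criteria.eval_chain import Criteria, CriteriaEvalChain

print(CriteriaEvalChain.resolve_criteria(None))                # defaults to helpfulness
print(CriteriaEvalChain.resolve_criteria(Criteria.RELEVANCE))  # enum member
print(CriteriaEvalChain.resolve_criteria("conciseness"))       # plain string name
print(CriteriaEvalChain.resolve_criteria({"clarity": "Is it clear?"}))  # custom mapping
```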
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
8,272
not enough values to unpack (expected 2, got 1) while LabeledPairwiseStringEvalChain with evaluate_string_pairs
### System Info

platform = mac m2
python = 3.11

### Who can help?

@hwchase17

### Information

- [X] The official example notebooks/scripts
- [ ] My own modified scripts

### Related Components

- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction

```
prompt_template = PromptTemplate.from_template(
    """Given the input context and the reference, analyze and determine which prediction, A or B, aligns most closely with the reference label.

Consider the following factors while analyzing:
- Relevance to the input context
- Semantic similarity with the reference label
- Consistency with any specifics mentioned in the input

The DATA for this decision are as follows:
Input Context: {input}
Reference Label: {reference}
Option A: {prediction}
Option B: {prediction_b}

After analyzing, provide the reasoning for your selection, and finally, respond with either [[A]] or [[B]] on its own line. In the case that both options are equally similar, default to option [[A]].

---
Reasoning: """
)

evaluationChain = LabeledPairwiseStringEvalChain.from_llm(
    llm=llm, prompt=prompt_template
)

result = evaluationChain.evaluate_string_pairs(
    input=self.currentQuery,
    prediction=response1,
    prediction_b=response2,
    reference=self.formatSourcesStructure(sourcedocs),
)
```

Sometimes it gives an error like

```
not enough values to unpack (expected 2, got 1)
```

Roughly 1 in every 3-4 requests fails with this error, and when a request fails, the next request returns the response as expected.

### Expected behavior

There should be no error, and a valid response should be returned.
https://github.com/langchain-ai/langchain/issues/8272
https://github.com/langchain-ai/langchain/pull/8278
9cbefcc56cbce50e1f6d9392c17e15415d55b7ba
adf019724f095b1835040f4dd8c1ff0026cbc729
"2023-07-26T07:20:57Z"
python
"2023-07-26T08:53:22Z"
libs/langchain/langchain/evaluation/criteria/eval_chain.py
        cls, prompt: Optional[BasePromptTemplate] = None
    ) -> BasePromptTemplate:
        expected_input_vars = {"input", "output", "criteria"}
        prompt_ = prompt or PROMPT
        if expected_input_vars != set(prompt_.input_variables):
            raise ValueError(
                f"Input variables should be {expected_input_vars}, "
                f"but got {prompt_.input_variables}"
            )
        return prompt_

    @classmethod
    def from_llm(
        cls,
        llm: BaseLanguageModel,
        criteria: Optional[CRITERIA_TYPE] = None,
        *,
        prompt: Optional[BasePromptTemplate] = None,
        **kwargs: Any,
    ) -> CriteriaEvalChain:
        """Create a `CriteriaEvalChain` instance from an llm and criteria.

        Parameters
        ----------
        llm : BaseLanguageModel
            The language model to use for evaluation.
        criteria : CRITERIA_TYPE - default=None for "helpfulness"
            The criteria to evaluate the runs against. It can be:
                - a mapping of a criterion name to its description
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
8,272
not enough values to unpack (expected 2, got 1) while LabeledPairwiseStringEvalChain with evaluate_string_pairs
### System Info

platform = mac m2
python = 3.11

### Who can help?

@hwchase17

### Information

- [X] The official example notebooks/scripts
- [ ] My own modified scripts

### Related Components

- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction

```
prompt_template = PromptTemplate.from_template(
    """Given the input context and the reference, analyze and determine which prediction, A or B, aligns most closely with the reference label.

Consider the following factors while analyzing:
- Relevance to the input context
- Semantic similarity with the reference label
- Consistency with any specifics mentioned in the input

The DATA for this decision are as follows:
Input Context: {input}
Reference Label: {reference}
Option A: {prediction}
Option B: {prediction_b}

After analyzing, provide the reasoning for your selection, and finally, respond with either [[A]] or [[B]] on its own line. In the case that both options are equally similar, default to option [[A]].

---
Reasoning: """
)

evaluationChain = LabeledPairwiseStringEvalChain.from_llm(
    llm=llm, prompt=prompt_template
)

result = evaluationChain.evaluate_string_pairs(
    input=self.currentQuery,
    prediction=response1,
    prediction_b=response2,
    reference=self.formatSourcesStructure(sourcedocs),
)
```

Sometimes it gives an error like

```
not enough values to unpack (expected 2, got 1)
```

Roughly 1 in every 3-4 requests fails with this error, and when a request fails, the next request returns the response as expected.

### Expected behavior

There should be no error, and a valid response should be returned.
https://github.com/langchain-ai/langchain/issues/8272
https://github.com/langchain-ai/langchain/pull/8278
9cbefcc56cbce50e1f6d9392c17e15415d55b7ba
adf019724f095b1835040f4dd8c1ff0026cbc729
"2023-07-26T07:20:57Z"
python
"2023-07-26T08:53:22Z"
libs/langchain/langchain/evaluation/criteria/eval_chain.py
                - a single criterion name present in one of the default criteria
                - a single `ConstitutionalPrinciple` instance
        prompt : Optional[BasePromptTemplate], default=None
            The prompt template to use for generating prompts. If not provided,
            a default prompt template will be used.
        **kwargs : Any
            Additional keyword arguments to pass to the `LLMChain`
            constructor.

        Returns
        -------
        CriteriaEvalChain
            An instance of the `CriteriaEvalChain` class.

        Examples
        --------
        >>> from langchain.llms import OpenAI
        >>> from langchain.evaluation.criteria import LabeledCriteriaEvalChain
        >>> llm = OpenAI()
        >>> criteria = {
                "hallucination": (
                    "Does this submission contain information"
                    " not present in the input or reference?"
                ),
            }
        >>> chain = LabeledCriteriaEvalChain.from_llm(
                llm=llm,
                criteria=criteria,
            )
        """
        prompt_ = cls._resolve_prompt(prompt)
        if criteria == Criteria.CORRECTNESS:
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
8,272
not enough values to unpack (expected 2, got 1) while LabeledPairwiseStringEvalChain with evaluate_string_pairs
### System Info

platform = mac m2
python = 3.11

### Who can help?

@hwchase17

### Information

- [X] The official example notebooks/scripts
- [ ] My own modified scripts

### Related Components

- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction

```
prompt_template = PromptTemplate.from_template(
    """Given the input context and the reference, analyze and determine which prediction, A or B, aligns most closely with the reference label.

Consider the following factors while analyzing:
- Relevance to the input context
- Semantic similarity with the reference label
- Consistency with any specifics mentioned in the input

The DATA for this decision are as follows:
Input Context: {input}
Reference Label: {reference}
Option A: {prediction}
Option B: {prediction_b}

After analyzing, provide the reasoning for your selection, and finally, respond with either [[A]] or [[B]] on its own line. In the case that both options are equally similar, default to option [[A]].

---
Reasoning: """
)

evaluationChain = LabeledPairwiseStringEvalChain.from_llm(
    llm=llm, prompt=prompt_template
)

result = evaluationChain.evaluate_string_pairs(
    input=self.currentQuery,
    prediction=response1,
    prediction_b=response2,
    reference=self.formatSourcesStructure(sourcedocs),
)
```

Sometimes it gives an error like

```
not enough values to unpack (expected 2, got 1)
```

Roughly 1 in every 3-4 requests fails with this error, and when a request fails, the next request returns the response as expected.

### Expected behavior

There should be no error, and a valid response should be returned.
https://github.com/langchain-ai/langchain/issues/8272
https://github.com/langchain-ai/langchain/pull/8278
9cbefcc56cbce50e1f6d9392c17e15415d55b7ba
adf019724f095b1835040f4dd8c1ff0026cbc729
"2023-07-26T07:20:57Z"
python
"2023-07-26T08:53:22Z"
libs/langchain/langchain/evaluation/criteria/eval_chain.py
            raise ValueError(
                "Correctness should not be used in the reference-free"
                " 'criteria' evaluator (CriteriaEvalChain)."
                " Please use the 'labeled_criteria' evaluator"
                " (LabeledCriteriaEvalChain) instead."
            )
        criteria_ = cls.resolve_criteria(criteria)
        criteria_str = " ".join(f"{k}: {v}" for k, v in criteria_.items())
        prompt_ = prompt_.partial(criteria=criteria_str)
        return cls(
            llm=llm,
            prompt=prompt_,
            criterion_name="-".join(criteria_),
            **kwargs,
        )

    def _get_eval_input(
        self,
        prediction: str,
        reference: Optional[str],
        input: Optional[str],
    ) -> dict:
        """Get the evaluation input."""
        input_ = {
            "input": input,
            "output": prediction,
        }
        if self.requires_reference:
            input_["reference"] = reference
        return input_

    def _prepare_output(self, result: dict) -> dict:
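To trace how `from_llm` turns the resolved criteria into prompt inputs, here is a standalone re-creation of the two joins above (plain Python, no langchain imports):

```python
criteria_ = {
    "conciseness": "Is the submission concise and to the point?",
    "coherence": "Is the submission coherent, well-structured, and organized?",
}

# String partialed into the prompt as the `criteria` variable.
criteria_str = " ".join(f"{k}: {v}" for k, v in criteria_.items())

# Name stored on the chain as criterion_name.
criterion_name = "-".join(criteria_)

print(criteria_str)
print(criterion_name)  # conciseness-coherence
```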
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
8,272
not enough values to unpack (expected 2, got 1) while LabeledPairwiseStringEvalChain with evaluate_string_pairs
### System Info

platform = mac m2
python = 3.11

### Who can help?

@hwchase17

### Information

- [X] The official example notebooks/scripts
- [ ] My own modified scripts

### Related Components

- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction

```
prompt_template = PromptTemplate.from_template(
    """Given the input context and the reference, analyze and determine which prediction, A or B, aligns most closely with the reference label.

Consider the following factors while analyzing:
- Relevance to the input context
- Semantic similarity with the reference label
- Consistency with any specifics mentioned in the input

The DATA for this decision are as follows:
Input Context: {input}
Reference Label: {reference}
Option A: {prediction}
Option B: {prediction_b}

After analyzing, provide the reasoning for your selection, and finally, respond with either [[A]] or [[B]] on its own line. In the case that both options are equally similar, default to option [[A]].

---
Reasoning: """
)

evaluationChain = LabeledPairwiseStringEvalChain.from_llm(
    llm=llm, prompt=prompt_template
)

result = evaluationChain.evaluate_string_pairs(
    input=self.currentQuery,
    prediction=response1,
    prediction_b=response2,
    reference=self.formatSourcesStructure(sourcedocs),
)
```

Sometimes it gives an error like

```
not enough values to unpack (expected 2, got 1)
```

Roughly 1 in every 3-4 requests fails with this error, and when a request fails, the next request returns the response as expected.

### Expected behavior

There should be no error, and a valid response should be returned.
https://github.com/langchain-ai/langchain/issues/8272
https://github.com/langchain-ai/langchain/pull/8278
9cbefcc56cbce50e1f6d9392c17e15415d55b7ba
adf019724f095b1835040f4dd8c1ff0026cbc729
"2023-07-26T07:20:57Z"
python
"2023-07-26T08:53:22Z"
libs/langchain/langchain/evaluation/criteria/eval_chain.py
"""Prepare the output.""" parsed = result[self.output_key] if RUN_KEY in result: parsed[RUN_KEY] = result[RUN_KEY] return parsed def _evaluate_strings( self, *, prediction: str, reference: Optional[str] = None, input: Optional[str] = None, callbacks: Callbacks = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, include_run_info: bool = False, **kwargs: Any, ) -> dict: """Evaluate a prediction against the criteria. Parameters ---------- prediction : str The predicted text to evaluate. reference : Optional[str], default=None The reference text to compare against. This is required if `requires_reference` is `True`. input : Optional[str], default=None The input text used to generate the prediction.
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
8,272
not enough values to unpack (expected 2, got 1) while LabeledPairwiseStringEvalChain with evaluate_string_pairs
### System Info

platform = mac m2
python = 3.11

### Who can help?

@hwchase17

### Information

- [X] The official example notebooks/scripts
- [ ] My own modified scripts

### Related Components

- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction

```
prompt_template = PromptTemplate.from_template(
    """Given the input context and the reference, analyze and determine which prediction, A or B, aligns most closely with the reference label.

Consider the following factors while analyzing:
- Relevance to the input context
- Semantic similarity with the reference label
- Consistency with any specifics mentioned in the input

The DATA for this decision are as follows:
Input Context: {input}
Reference Label: {reference}
Option A: {prediction}
Option B: {prediction_b}

After analyzing, provide the reasoning for your selection, and finally, respond with either [[A]] or [[B]] on its own line. In the case that both options are equally similar, default to option [[A]].

---
Reasoning: """
)

evaluationChain = LabeledPairwiseStringEvalChain.from_llm(
    llm=llm, prompt=prompt_template
)

result = evaluationChain.evaluate_string_pairs(
    input=self.currentQuery,
    prediction=response1,
    prediction_b=response2,
    reference=self.formatSourcesStructure(sourcedocs),
)
```

Sometimes it gives an error like

```
not enough values to unpack (expected 2, got 1)
```

Roughly 1 in every 3-4 requests fails with this error, and when a request fails, the next request returns the response as expected.

### Expected behavior

There should be no error, and a valid response should be returned.
https://github.com/langchain-ai/langchain/issues/8272
https://github.com/langchain-ai/langchain/pull/8278
9cbefcc56cbce50e1f6d9392c17e15415d55b7ba
adf019724f095b1835040f4dd8c1ff0026cbc729
"2023-07-26T07:20:57Z"
python
"2023-07-26T08:53:22Z"
libs/langchain/langchain/evaluation/criteria/eval_chain.py
        **kwargs : Any
            Additional keyword arguments to pass to the
            `LLMChain` `__call__` method.

        Returns
        -------
        dict
            The evaluation results.

        Examples
        --------
        >>> from langchain.llms import OpenAI
        >>> from langchain.evaluation.criteria import CriteriaEvalChain
        >>> llm = OpenAI()
        >>> criteria = "conciseness"
        >>> chain = CriteriaEvalChain.from_llm(llm=llm, criteria=criteria)
        >>> chain.evaluate_strings(
                prediction="The answer is 42.",
                reference="42",
                input="What is the answer to life, the universe, and everything?",
            )
        """
        input_ = self._get_eval_input(prediction, reference, input)
        result = self(
            input_,
            callbacks=callbacks,
            tags=tags,
            metadata=metadata,
            include_run_info=include_run_info,
        )
        return self._prepare_output(result)

    async def _aevaluate_strings(
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
8,272
not enough values to unpack (expected 2, got 1) while LabeledPairwiseStringEvalChain with evaluate_string_pairs
### System Info

platform = mac m2
python = 3.11

### Who can help?

@hwchase17

### Information

- [X] The official example notebooks/scripts
- [ ] My own modified scripts

### Related Components

- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction

```
prompt_template = PromptTemplate.from_template(
    """Given the input context and the reference, analyze and determine which prediction, A or B, aligns most closely with the reference label.

Consider the following factors while analyzing:
- Relevance to the input context
- Semantic similarity with the reference label
- Consistency with any specifics mentioned in the input

The DATA for this decision are as follows:
Input Context: {input}
Reference Label: {reference}
Option A: {prediction}
Option B: {prediction_b}

After analyzing, provide the reasoning for your selection, and finally, respond with either [[A]] or [[B]] on its own line. In the case that both options are equally similar, default to option [[A]].

---
Reasoning: """
)

evaluationChain = LabeledPairwiseStringEvalChain.from_llm(
    llm=llm, prompt=prompt_template
)

result = evaluationChain.evaluate_string_pairs(
    input=self.currentQuery,
    prediction=response1,
    prediction_b=response2,
    reference=self.formatSourcesStructure(sourcedocs),
)
```

Sometimes it gives an error like

```
not enough values to unpack (expected 2, got 1)
```

Roughly 1 in every 3-4 requests fails with this error, and when a request fails, the next request returns the response as expected.

### Expected behavior

There should be no error, and a valid response should be returned.
https://github.com/langchain-ai/langchain/issues/8272
https://github.com/langchain-ai/langchain/pull/8278
9cbefcc56cbce50e1f6d9392c17e15415d55b7ba
adf019724f095b1835040f4dd8c1ff0026cbc729
"2023-07-26T07:20:57Z"
python
"2023-07-26T08:53:22Z"
libs/langchain/langchain/evaluation/criteria/eval_chain.py
        self,
        *,
        prediction: str,
        reference: Optional[str] = None,
        input: Optional[str] = None,
        callbacks: Callbacks = None,
        tags: Optional[List[str]] = None,
        metadata: Optional[Dict[str, Any]] = None,
        include_run_info: bool = False,
        **kwargs: Any,
    ) -> dict:
        """Asynchronously evaluate a prediction against the criteria.

        Parameters
        ----------
        prediction : str
            The predicted text to evaluate.
        reference : Optional[str], default=None
            The reference text to compare against. This is required if
            `requires_reference` is `True`.
        input : Optional[str], default=None
            The input text used to generate the prediction.
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
8,272
not enough values to unpack (expected 2, got 1) while LabeledPairwiseStringEvalChain with evaluate_string_pairs
### System Info

platform = mac m2
python = 3.11

### Who can help?

@hwchase17

### Information

- [X] The official example notebooks/scripts
- [ ] My own modified scripts

### Related Components

- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction

```
prompt_template = PromptTemplate.from_template(
    """Given the input context and the reference, analyze and determine which prediction, A or B, aligns most closely with the reference label.

Consider the following factors while analyzing:
- Relevance to the input context
- Semantic similarity with the reference label
- Consistency with any specifics mentioned in the input

The DATA for this decision are as follows:
Input Context: {input}
Reference Label: {reference}
Option A: {prediction}
Option B: {prediction_b}

After analyzing, provide the reasoning for your selection, and finally, respond with either [[A]] or [[B]] on its own line. In the case that both options are equally similar, default to option [[A]].

---
Reasoning: """
)

evaluationChain = LabeledPairwiseStringEvalChain.from_llm(
    llm=llm, prompt=prompt_template
)

result = evaluationChain.evaluate_string_pairs(
    input=self.currentQuery,
    prediction=response1,
    prediction_b=response2,
    reference=self.formatSourcesStructure(sourcedocs),
)
```

Sometimes it gives an error like

```
not enough values to unpack (expected 2, got 1)
```

Roughly 1 in every 3-4 requests fails with this error, and when a request fails, the next request returns the response as expected.

### Expected behavior

There should be no error, and a valid response should be returned.
https://github.com/langchain-ai/langchain/issues/8272
https://github.com/langchain-ai/langchain/pull/8278
9cbefcc56cbce50e1f6d9392c17e15415d55b7ba
adf019724f095b1835040f4dd8c1ff0026cbc729
"2023-07-26T07:20:57Z"
python
"2023-07-26T08:53:22Z"
libs/langchain/langchain/evaluation/criteria/eval_chain.py
**kwargs : Any Additional keyword arguments to pass to the `LLMChain` `acall` method. Returns ------- dict The evaluation results. Examples -------- >>> from langchain.llms import OpenAI >>> from langchain.evaluation.criteria import CriteriaEvalChain >>> llm = OpenAI() >>> criteria = "conciseness" >>> chain = CriteriaEvalChain.from_llm(llm=llm, criteria=criteria) >>> await chain.aevaluate_strings( prediction="The answer is 42.", reference="42", input="What is the answer to life, the universe, and everything?", ) """ input_ = self._get_eval_input(prediction, reference, input) result = await self.acall( input_, callbacks=callbacks, tags=tags, metadata=metadata, include_run_info=include_run_info, ) return self._prepare_output(result) class LabeledCriteriaEvalChain(CriteriaEvalChain):
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
8,272
not enough values to unpack (expected 2, got 1) while LabeledPairwiseStringEvalChain with evaluate_string_pairs
### System Info platform = mac m2 python = 3.11 ### Who can help? @hwchase17 ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction ``` prompt_template = PromptTemplate.from_template( """Given the input context and the reference, analyze and determine which prediction, A or B, aligns most closely with the reference label. Consider the following factors while analyzing: - Relevance to the input context - Semantic similarity with the reference label - Consistency with any specifics mentioned in the input The DATA for this decision are as follows: Input Context: {input} Reference Label: {reference} Option A: {prediction} Option B: {prediction_b} After analyzing, provide the reasoning for your selection, and finally, respond with either [[A]] or [[B]] on its own line. In the case that both options are equally similar, default to option [[A]]. --- Reasoning: """ ) evaluation_chain = LabeledPairwiseStringEvalChain.from_llm( llm=llm, prompt=prompt_template ) result = evaluation_chain.evaluate_string_pairs( input=self.currentQuery, prediction=response1, prediction_b=response2, reference=self.formatSourcesStructure(sourcedocs), ) ``` Sometimes it fails with an error like ``` not enough values to unpack (expected 2, got 1) ``` Roughly one request in every 3-4 fails with this error; when a request fails, retrying the same request returns a valid response. ### Expected behavior There should be no error, and a valid response should be returned.
https://github.com/langchain-ai/langchain/issues/8272
https://github.com/langchain-ai/langchain/pull/8278
9cbefcc56cbce50e1f6d9392c17e15415d55b7ba
adf019724f095b1835040f4dd8c1ff0026cbc729
"2023-07-26T07:20:57Z"
python
"2023-07-26T08:53:22Z"
libs/langchain/langchain/evaluation/criteria/eval_chain.py
"""Criteria evaluation chain that requires references.""" @property def requires_reference(self) -> bool: """Whether the evaluation requires a reference text.""" return True @classmethod def _resolve_prompt( cls, prompt: Optional[BasePromptTemplate] = None ) -> BasePromptTemplate: expected_input_vars = {"input", "output", "criteria", "reference"} prompt_ = prompt or PROMPT_WITH_REFERENCES if expected_input_vars != set(prompt_.input_variables): raise ValueError( f"Input variables should be {expected_input_vars}, " f"but got {prompt_.input_variables}" ) return prompt_ @classmethod def from_llm(
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
8,272
not enough values to unpack (expected 2, got 1) while LabeledPairwiseStringEvalChain with evaluate_string_pairs
### System Info platform = mac m2 python = 3.11 ### Who can help? @hwchase17 ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction ``` prompt_template = PromptTemplate.from_template( """Given the input context and the reference, analyze and determine which prediction, A or B, aligns most closely with the reference label. Consider the following factors while analyzing: - Relevance to the input context - Semantic similarity with the reference label - Consistency with any specifics mentioned in the input The DATA for this decision are as follows: Input Context: {input} Reference Label: {reference} Option A: {prediction} Option B: {prediction_b} After analyzing, provide the reasoning for your selection, and finally, respond with either [[A]] or [[B]] on its own line. In the case that both options are equally similar, default to option [[A]]. --- Reasoning: """ ) evaluation_chain = LabeledPairwiseStringEvalChain.from_llm( llm=llm, prompt=prompt_template ) result = evaluation_chain.evaluate_string_pairs( input=self.currentQuery, prediction=response1, prediction_b=response2, reference=self.formatSourcesStructure(sourcedocs), ) ``` Sometimes it fails with an error like ``` not enough values to unpack (expected 2, got 1) ``` Roughly one request in every 3-4 fails with this error; when a request fails, retrying the same request returns a valid response. ### Expected behavior There should be no error, and a valid response should be returned.
https://github.com/langchain-ai/langchain/issues/8272
https://github.com/langchain-ai/langchain/pull/8278
9cbefcc56cbce50e1f6d9392c17e15415d55b7ba
adf019724f095b1835040f4dd8c1ff0026cbc729
"2023-07-26T07:20:57Z"
python
"2023-07-26T08:53:22Z"
libs/langchain/langchain/evaluation/criteria/eval_chain.py
cls, llm: BaseLanguageModel, criteria: Optional[CRITERIA_TYPE] = None, *, prompt: Optional[BasePromptTemplate] = None, **kwargs: Any, ) -> CriteriaEvalChain: """Create a `LabeledCriteriaEvalChain` instance from an llm and criteria. Parameters ---------- llm : BaseLanguageModel The language model to use for evaluation. criteria : CRITERIA_TYPE - default=None for "helpfulness" The criteria to evaluate the runs against. It can be: - a mapping of a criterion name to its description - a single criterion name present in one of the default criteria - a single `ConstitutionalPrinciple` instance prompt : Optional[BasePromptTemplate], default=None The prompt template to use for generating prompts. If not provided, a default prompt will be used. **kwargs : Any Additional keyword arguments to pass to the `LLMChain` constructor. Returns
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
8,272
not enough values to unpack (expected 2, got 1) while LabeledPairwiseStringEvalChain with evaluate_string_pairs
### System Info platform = mac m2 python = 3.11 ### Who can help? @hwchase17 ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction ``` prompt_template = PromptTemplate.from_template( """Given the input context and the reference, analyze and determine which prediction, A or B, aligns most closely with the reference label. Consider the following factors while analyzing: - Relevance to the input context - Semantic similarity with the reference label - Consistency with any specifics mentioned in the input The DATA for this decision are as follows: Input Context: {input} Reference Label: {reference} Option A: {prediction} Option B: {prediction_b} After analyzing, provide the reasoning for your selection, and finally, respond with either [[A]] or [[B]] on its own line. In the case that both options are equally similar, default to option [[A]]. --- Reasoning: """ ) evaluation_chain = LabeledPairwiseStringEvalChain.from_llm( llm=llm, prompt=prompt_template ) result = evaluation_chain.evaluate_string_pairs( input=self.currentQuery, prediction=response1, prediction_b=response2, reference=self.formatSourcesStructure(sourcedocs), ) ``` Sometimes it fails with an error like ``` not enough values to unpack (expected 2, got 1) ``` Roughly one request in every 3-4 fails with this error; when a request fails, retrying the same request returns a valid response. ### Expected behavior There should be no error, and a valid response should be returned.
https://github.com/langchain-ai/langchain/issues/8272
https://github.com/langchain-ai/langchain/pull/8278
9cbefcc56cbce50e1f6d9392c17e15415d55b7ba
adf019724f095b1835040f4dd8c1ff0026cbc729
"2023-07-26T07:20:57Z"
python
"2023-07-26T08:53:22Z"
libs/langchain/langchain/evaluation/criteria/eval_chain.py
------- LabeledCriteriaEvalChain An instance of the `LabeledCriteriaEvalChain` class. Examples -------- >>> from langchain.llms import OpenAI >>> from langchain.evaluation.criteria import LabeledCriteriaEvalChain >>> llm = OpenAI() >>> criteria = { "hallucination": ( "Does this submission contain information" " not present in the input or reference?" ), } >>> chain = LabeledCriteriaEvalChain.from_llm( llm=llm, criteria=criteria, ) """ prompt = cls._resolve_prompt(prompt) criteria_ = cls.resolve_criteria(criteria) criteria_str = " ".join(f"{k}: {v}" for k, v in criteria_.items()) prompt_ = prompt.partial(criteria=criteria_str) return cls( llm=llm, prompt=prompt_, criterion_name="-".join(criteria_), **kwargs, )
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
7,603
Add support for Meilisearch vector databases
### Feature request Add support for Meilisearch vector search. [Meilisearch](https://www.meilisearch.com) is an open-source search engine. See [documentation](https://www.meilisearch.com/docs) ### Motivation Meilisearch is releasing the vector search/store feature, which should be available from July 31st. ### Your contribution I'm working on it and will submit a PR for this issue soon.
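For context, a typical LangChain vector store integration is exercised through `from_texts` and `similarity_search`. The snippet below is a speculative usage sketch only: the `Meilisearch` class name and its constructor arguments are assumptions extrapolated from the pattern the other wrappers in this module follow, since the integration was still in progress when this issue was filed.

```python
import meilisearch
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Meilisearch  # assumed class name

# Connect to a locally running Meilisearch instance and index a document.
client = meilisearch.Client(url="http://127.0.0.1:7700", api_key="masterKey")
vector_store = Meilisearch.from_texts(
    texts=["Meilisearch is an open-source search engine."],
    embedding=OpenAIEmbeddings(),
    client=client,
)
docs = vector_store.similarity_search("open-source search engines", k=1)
```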
https://github.com/langchain-ai/langchain/issues/7603
https://github.com/langchain-ai/langchain/pull/7649
b7d6e1909cf5346a4384280fba3d732597778bae
8ee56b9a5b3751db122bd896daeb1e0b7766def3
"2023-07-12T15:32:23Z"
python
"2023-07-29T00:06:54Z"
libs/langchain/langchain/vectorstores/__init__.py
"""Wrappers on top of vector stores.""" from langchain.vectorstores.alibabacloud_opensearch import ( AlibabaCloudOpenSearch, AlibabaCloudOpenSearchSettings, ) from langchain.vectorstores.analyticdb import AnalyticDB from langchain.vectorstores.annoy import Annoy from langchain.vectorstores.atlas import AtlasDB from langchain.vectorstores.awadb import AwaDB from langchain.vectorstores.azuresearch import AzureSearch
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
7,603
Add support for Meilisearch vector databases
### Feature request Add support for Meilisearch vector search. [Meilisearch](https://www.meilisearch.com) is an open-source search engine. See [documentation](https://www.meilisearch.com/docs) ### Motivation Meilisearch is releasing the vector search/store feature, which should be available from July 31st. ### Your contribution I'm working on it and will submit a PR for this issue soon.
https://github.com/langchain-ai/langchain/issues/7603
https://github.com/langchain-ai/langchain/pull/7649
b7d6e1909cf5346a4384280fba3d732597778bae
8ee56b9a5b3751db122bd896daeb1e0b7766def3
"2023-07-12T15:32:23Z"
python
"2023-07-29T00:06:54Z"
libs/langchain/langchain/vectorstores/__init__.py
from langchain.vectorstores.base import VectorStore from langchain.vectorstores.cassandra import Cassandra from langchain.vectorstores.chroma import Chroma from langchain.vectorstores.clarifai import Clarifai from langchain.vectorstores.clickhouse import Clickhouse, ClickhouseSettings from langchain.vectorstores.deeplake import DeepLake from langchain.vectorstores.docarray import DocArrayHnswSearch, DocArrayInMemorySearch from langchain.vectorstores.elastic_vector_search import ( ElasticKnnSearch, ElasticVectorSearch, ) from langchain.vectorstores.faiss import FAISS from langchain.vectorstores.hologres import Hologres from langchain.vectorstores.lancedb import LanceDB from langchain.vectorstores.marqo import Marqo from langchain.vectorstores.matching_engine import MatchingEngine from langchain.vectorstores.milvus import Milvus from langchain.vectorstores.mongodb_atlas import MongoDBAtlasVectorSearch from langchain.vectorstores.myscale import MyScale, MyScaleSettings from langchain.vectorstores.opensearch_vector_search import OpenSearchVectorSearch from langchain.vectorstores.pgembedding import PGEmbedding from langchain.vectorstores.pgvector import PGVector from langchain.vectorstores.pinecone import Pinecone from langchain.vectorstores.qdrant import Qdrant from langchain.vectorstores.redis import Redis from langchain.vectorstores.rocksetdb import Rockset from langchain.vectorstores.singlestoredb import SingleStoreDB from langchain.vectorstores.sklearn import SKLearnVectorStore from langchain.vectorstores.starrocks import StarRocks from langchain.vectorstores.supabase import SupabaseVectorStore
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
7,603
Add support for Meilisearch vector databases
### Feature request Add support for Meilisearch vector search. [Meilisearch](https://www.meilisearch.com) is an open-source search engine. See [documentation](https://www.meilisearch.com/docs) ### Motivation Meilisearch is releasing the vector search/store feature, which should be available from July 31st. ### Your contribution I'm working on it and will submit a PR for this issue soon.
https://github.com/langchain-ai/langchain/issues/7603
https://github.com/langchain-ai/langchain/pull/7649
b7d6e1909cf5346a4384280fba3d732597778bae
8ee56b9a5b3751db122bd896daeb1e0b7766def3
"2023-07-12T15:32:23Z"
python
"2023-07-29T00:06:54Z"
libs/langchain/langchain/vectorstores/__init__.py
from langchain.vectorstores.tair import Tair from langchain.vectorstores.tigris import Tigris from langchain.vectorstores.typesense import Typesense from langchain.vectorstores.vectara import Vectara from langchain.vectorstores.weaviate import Weaviate from langchain.vectorstores.zilliz import Zilliz __all__ = [ "AlibabaCloudOpenSearch", "AlibabaCloudOpenSearchSettings", "AnalyticDB", "Annoy", "AtlasDB", "AwaDB", "AzureSearch", "Cassandra", "Chroma", "Clickhouse", "ClickhouseSettings", "DeepLake", "DocArrayHnswSearch", "DocArrayInMemorySearch", "ElasticVectorSearch", "ElasticKnnSearch", "FAISS", "PGEmbedding", "Hologres", "LanceDB", "MatchingEngine", "Marqo", "Milvus",
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
7,603
Add support for Meilisearch vector databases
### Feature request Add support for Meilisearch vector search. [Meilisearch](https://www.meilisearch.com) is an open-source search engine. See [documentation](https://www.meilisearch.com/docs) ### Motivation Meilisearch is releasing the vector search/store feature, which should be available from July 31st. ### Your contribution I'm working on it and will submit a PR for this issue soon.
https://github.com/langchain-ai/langchain/issues/7603
https://github.com/langchain-ai/langchain/pull/7649
b7d6e1909cf5346a4384280fba3d732597778bae
8ee56b9a5b3751db122bd896daeb1e0b7766def3
"2023-07-12T15:32:23Z"
python
"2023-07-29T00:06:54Z"
libs/langchain/langchain/vectorstores/__init__.py
"Zilliz", "SingleStoreDB", "Chroma", "Clarifai", "OpenSearchVectorSearch", "AtlasDB", "DeepLake", "Annoy", "MongoDBAtlasVectorSearch", "MyScale", "MyScaleSettings", "OpenSearchVectorSearch", "Pinecone", "Qdrant", "Redis", "Rockset", "SKLearnVectorStore", "SingleStoreDB", "StarRocks", "SupabaseVectorStore", "Tair", "Tigris", "Typesense", "Vectara", "VectorStore", "Weaviate", "Zilliz", "PGVector", ]
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
8,472
unsupported operand type(s) for +: 'SystemMessage' and 'HumanMessage'
### System Info Langchain version: 0.0.247 python version: 3.11.0 ### Who can help? _No response_ ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [X] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction You can reproduce this issue by following this link: https://python.langchain.com/docs/modules/model_io/prompts/prompt_templates/prompts_pipelining ``` from langchain.prompts import ChatPromptTemplate, HumanMessagePromptTemplate from langchain.schema import HumanMessage, AIMessage, SystemMessage prompt = SystemMessage(content="You are a nice pirate") new_prompt = ( prompt + HumanMessage(content="hi") + AIMessage(content="what?") + "{input}" ) ``` prompt + HumanMessage(content="hi") raises this error ### Expected behavior The + operand for 'SystemMessage' and 'HumanMessage' should be supported
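The error occurs because, at this version, only `BaseMessageChunk` defines `__add__`; plain messages such as `SystemMessage` do not. The toy classes below (`Message` and `MessageSequence`, both hypothetical, not LangChain types) sketch the general design the reporter is asking for: adding two messages should build an ordered message sequence rather than raise `TypeError`.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class Message:
    role: str
    content: str

    def __add__(self, other: object) -> "MessageSequence":
        return MessageSequence([self]) + other


@dataclass
class MessageSequence:
    messages: List[Message]

    def __add__(self, other: object) -> "MessageSequence":
        if isinstance(other, Message):
            return MessageSequence(self.messages + [other])
        if isinstance(other, MessageSequence):
            return MessageSequence(self.messages + other.messages)
        if isinstance(other, str):
            # Treat bare strings as human message templates, mirroring
            # the prompt-pipelining example from the docs.
            return MessageSequence(self.messages + [Message("human", other)])
        raise TypeError(f"cannot add {type(other).__name__} to a MessageSequence")


combined = (
    Message("system", "You are a nice pirate")
    + Message("human", "hi")
    + Message("ai", "what?")
    + "{input}"
)
print([m.role for m in combined.messages])  # ['system', 'human', 'ai', 'human']
```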
https://github.com/langchain-ai/langchain/issues/8472
https://github.com/langchain-ai/langchain/pull/8489
f31047a3941cd389a9b8c01446b097e3bfbb1235
1ec0b1837971bc58c54645c4ca515dc201788a82
"2023-07-30T02:14:01Z"
python
"2023-08-02T14:51:44Z"
libs/langchain/langchain/schema/messages.py
from __future__ import annotations from abc import abstractmethod from typing import Any, Dict, List, Sequence from pydantic import Field from langchain.load.serializable import Serializable def get_buffer_string( messages: Sequence[BaseMessage], human_prefix: str = "Human", ai_prefix: str = "AI" ) -> str: """Convert sequence of Messages to strings and concatenate them into one string. Args: messages: Messages to be converted to strings. human_prefix: The prefix to prepend to contents of HumanMessages. ai_prefix: The prefix to prepend to contents of AIMessages. Returns: A single string concatenation of all input messages. Example: .. code-block:: python from langchain.schema import AIMessage, HumanMessage messages = [ HumanMessage(content="Hi, how are you?"), AIMessage(content="Good, how are you?"), ] get_buffer_string(messages) # -> "Human: Hi, how are you?\nAI: Good, how are you?" """
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
8,472
unsupported operand type(s) for +: 'SystemMessage' and 'HumanMessage'
### System Info Langchain version: 0.0.247 python version: 3.11.0 ### Who can help? _No response_ ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [X] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction You can reproduce this issue by following this link: https://python.langchain.com/docs/modules/model_io/prompts/prompt_templates/prompts_pipelining ``` from langchain.prompts import ChatPromptTemplate, HumanMessagePromptTemplate from langchain.schema import HumanMessage, AIMessage, SystemMessage prompt = SystemMessage(content="You are a nice pirate") new_prompt = ( prompt + HumanMessage(content="hi") + AIMessage(content="what?") + "{input}" ) ``` prompt + HumanMessage(content="hi") raises this error ### Expected behavior The + operand for 'SystemMessage' and 'HumanMessage' should be supported
https://github.com/langchain-ai/langchain/issues/8472
https://github.com/langchain-ai/langchain/pull/8489
f31047a3941cd389a9b8c01446b097e3bfbb1235
1ec0b1837971bc58c54645c4ca515dc201788a82
"2023-07-30T02:14:01Z"
python
"2023-08-02T14:51:44Z"
libs/langchain/langchain/schema/messages.py
string_messages = [] for m in messages: if isinstance(m, HumanMessage): role = human_prefix elif isinstance(m, AIMessage): role = ai_prefix elif isinstance(m, SystemMessage): role = "System" elif isinstance(m, FunctionMessage): role = "Function" elif isinstance(m, ChatMessage): role = m.role else: raise ValueError(f"Got unsupported message type: {m}") message = f"{role}: {m.content}" if isinstance(m, AIMessage) and "function_call" in m.additional_kwargs: message += f"{m.additional_kwargs['function_call']}" string_messages.append(message) return "\n".join(string_messages) class BaseMessage(Serializable): """The base abstract Message class. Messages are the inputs and outputs of ChatModels. """ content: str """The string contents of the message.""" additional_kwargs: dict = Field(default_factory=dict) """Any additional information.""" @property @abstractmethod def type(self) -> str:
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
8,472
unsupported operand type(s) for +: 'SystemMessage' and 'HumanMessage'
### System Info Langchain version: 0.0.247 python version: 3.11.0 ### Who can help? _No response_ ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [X] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction You can reproduce this issue by following this link: https://python.langchain.com/docs/modules/model_io/prompts/prompt_templates/prompts_pipelining ``` from langchain.prompts import ChatPromptTemplate, HumanMessagePromptTemplate from langchain.schema import HumanMessage, AIMessage, SystemMessage prompt = SystemMessage(content="You are a nice pirate") new_prompt = ( prompt + HumanMessage(content="hi") + AIMessage(content="what?") + "{input}" ) ``` prompt + HumanMessage(content="hi") raises this error ### Expected behavior The + operand for 'SystemMessage' and 'HumanMessage' should be supported
https://github.com/langchain-ai/langchain/issues/8472
https://github.com/langchain-ai/langchain/pull/8489
f31047a3941cd389a9b8c01446b097e3bfbb1235
1ec0b1837971bc58c54645c4ca515dc201788a82
"2023-07-30T02:14:01Z"
python
"2023-08-02T14:51:44Z"
libs/langchain/langchain/schema/messages.py
"""Type of the Message, used for serialization.""" @property def lc_serializable(self) -> bool: """Whether this class is LangChain serializable.""" return True class BaseMessageChunk(BaseMessage): def _merge_kwargs_dict( self, left: Dict[str, Any], right: Dict[str, Any] ) -> Dict[str, Any]: """Merge additional_kwargs from another BaseMessageChunk into this one.""" merged = left.copy() for k, v in right.items(): if k not in merged: merged[k] = v elif type(merged[k]) != type(v): raise ValueError( f'additional_kwargs["{k}"] already exists in this message,' " but with a different type." ) elif isinstance(merged[k], str): merged[k] += v elif isinstance(merged[k], dict): merged[k] = self._merge_kwargs_dict(merged[k], v) else: raise ValueError( f"Additional kwargs key {k} already exists in this message." ) return merged def __add__(self, other: Any) -> BaseMessageChunk:
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
8,472
unsupported operand type(s) for +: 'SystemMessage' and 'HumanMessage'
### System Info Langchain version: 0.0.247 python version: 3.11.0 ### Who can help? _No response_ ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [X] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction You can reproduce this issue by following this link: https://python.langchain.com/docs/modules/model_io/prompts/prompt_templates/prompts_pipelining ``` from langchain.prompts import ChatPromptTemplate, HumanMessagePromptTemplate from langchain.schema import HumanMessage, AIMessage, SystemMessage prompt = SystemMessage(content="You are a nice pirate") new_prompt = ( prompt + HumanMessage(content="hi") + AIMessage(content="what?") + "{input}" ) ``` prompt + HumanMessage(content="hi") raises this error ### Expected behavior The + operand for 'SystemMessage' and 'HumanMessage' should be supported
https://github.com/langchain-ai/langchain/issues/8472
https://github.com/langchain-ai/langchain/pull/8489
f31047a3941cd389a9b8c01446b097e3bfbb1235
1ec0b1837971bc58c54645c4ca515dc201788a82
"2023-07-30T02:14:01Z"
python
"2023-08-02T14:51:44Z"
libs/langchain/langchain/schema/messages.py
if isinstance(other, BaseMessageChunk): return self.__class__( content=self.content + other.content, additional_kwargs=self._merge_kwargs_dict( self.additional_kwargs, other.additional_kwargs ), ) else: raise TypeError( 'unsupported operand type(s) for +: "' f"{self.__class__.__name__}" f'" and "{other.__class__.__name__}"' ) class HumanMessage(BaseMessage): """A Message from a human.""" example: bool = False """Whether this Message is being passed in to the model as part of an example conversation. """ @property def type(self) -> str: """Type of the message, used for serialization.""" return "human" class HumanMessageChunk(HumanMessage, BaseMessageChunk):
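For illustration, here is how the chunk-merging `__add__` above behaves when accumulating a streamed response; the concrete `additional_kwargs` key is made up for the example.

```python
from langchain.schema.messages import AIMessageChunk

first = AIMessageChunk(content="Hello, ", additional_kwargs={"finish": ""})
second = AIMessageChunk(content="world!", additional_kwargs={"finish": "stop"})
merged = first + second
print(merged.content)            # Hello, world!
print(merged.additional_kwargs)  # {'finish': 'stop'}
```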
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
8,472
unsupported operand type(s) for +: 'SystemMessage' and 'HumanMessage'
### System Info Langchain version: 0.0.247 python version: 3.11.0 ### Who can help? _No response_ ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [X] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction You can reproduce this issue by following this link: https://python.langchain.com/docs/modules/model_io/prompts/prompt_templates/prompts_pipelining ``` from langchain.prompts import ChatPromptTemplate, HumanMessagePromptTemplate from langchain.schema import HumanMessage, AIMessage, SystemMessage prompt = SystemMessage(content="You are a nice pirate") new_prompt = ( prompt + HumanMessage(content="hi") + AIMessage(content="what?") + "{input}" ) ``` prompt + HumanMessage(content="hi") raises this error ### Expected behavior The + operand for 'SystemMessage' and 'HumanMessage' should be supported
https://github.com/langchain-ai/langchain/issues/8472
https://github.com/langchain-ai/langchain/pull/8489
f31047a3941cd389a9b8c01446b097e3bfbb1235
1ec0b1837971bc58c54645c4ca515dc201788a82
"2023-07-30T02:14:01Z"
python
"2023-08-02T14:51:44Z"
libs/langchain/langchain/schema/messages.py
pass class AIMessage(BaseMessage): """A Message from an AI.""" example: bool = False """Whether this Message is being passed in to the model as part of an example conversation. """ @property def type(self) -> str: """Type of the message, used for serialization.""" return "ai" class AIMessageChunk(AIMessage, BaseMessageChunk): pass class SystemMessage(BaseMessage): """A Message for priming AI behavior, usually passed in as the first of a sequence of input messages. """ @property def type(self) -> str: """Type of the message, used for serialization.""" return "system" class SystemMessageChunk(SystemMessage, BaseMessageChunk):
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
8,472
unsupported operand type(s) for +: 'SystemMessage' and 'HumanMessage'
### System Info Langchain version: 0.0.247 python version: 3.11.0 ### Who can help? _No response_ ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [X] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction You can reproduce this issue by following this link: https://python.langchain.com/docs/modules/model_io/prompts/prompt_templates/prompts_pipelining ``` from langchain.prompts import ChatPromptTemplate, HumanMessagePromptTemplate from langchain.schema import HumanMessage, AIMessage, SystemMessage prompt = SystemMessage(content="You are a nice pirate") new_prompt = ( prompt + HumanMessage(content="hi") + AIMessage(content="what?") + "{input}" ) ``` prompt + HumanMessage(content="hi") raises this error ### Expected behavior The + operand for 'SystemMessage' and 'HumanMessage' should be supported
https://github.com/langchain-ai/langchain/issues/8472
https://github.com/langchain-ai/langchain/pull/8489
f31047a3941cd389a9b8c01446b097e3bfbb1235
1ec0b1837971bc58c54645c4ca515dc201788a82
"2023-07-30T02:14:01Z"
python
"2023-08-02T14:51:44Z"
libs/langchain/langchain/schema/messages.py
pass class FunctionMessage(BaseMessage): """A Message for passing the result of executing a function back to a model.""" name: str """The name of the function that was executed.""" @property def type(self) -> str: """Type of the message, used for serialization.""" return "function" class FunctionMessageChunk(FunctionMessage, BaseMessageChunk): pass class ChatMessage(BaseMessage): """A Message that can be assigned an arbitrary speaker (i.e. role).""" role: str """The speaker / role of the Message.""" @property def type(self) -> str: """Type of the message, used for serialization.""" return "chat" class ChatMessageChunk(ChatMessage, BaseMessageChunk): pass def _message_to_dict(message: BaseMessage) -> dict: return {"type": message.type, "data": message.dict()} def messages_to_dict(messages: Sequence[BaseMessage]) -> List[dict]:
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
8,472
unsupported operand type(s) for +: 'SystemMessage' and 'HumanMessage'
### System Info Langchain version: 0.0.247 python version: 3.11.0 ### Who can help? _No response_ ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [X] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction You can reproduce this issue by following this link: https://python.langchain.com/docs/modules/model_io/prompts/prompt_templates/prompts_pipelining ``` from langchain.prompts import ChatPromptTemplate, HumanMessagePromptTemplate from langchain.schema import HumanMessage, AIMessage, SystemMessage prompt = SystemMessage(content="You are a nice pirate") new_prompt = ( prompt + HumanMessage(content="hi") + AIMessage(content="what?") + "{input}" ) ``` prompt + HumanMessage(content="hi") raises this error ### Expected behavior The + operand for 'SystemMessage' and 'HumanMessage' should be supported
https://github.com/langchain-ai/langchain/issues/8472
https://github.com/langchain-ai/langchain/pull/8489
f31047a3941cd389a9b8c01446b097e3bfbb1235
1ec0b1837971bc58c54645c4ca515dc201788a82
"2023-07-30T02:14:01Z"
python
"2023-08-02T14:51:44Z"
libs/langchain/langchain/schema/messages.py
"""Convert a sequence of Messages to a list of dictionaries. Args: messages: Sequence of messages (as BaseMessages) to convert. Returns: List of messages as dicts. """ return [_message_to_dict(m) for m in messages] def _message_from_dict(message: dict) -> BaseMessage: _type = message["type"] if _type == "human": return HumanMessage(**message["data"]) elif _type == "ai": return AIMessage(**message["data"]) elif _type == "system": return SystemMessage(**message["data"]) elif _type == "chat": return ChatMessage(**message["data"]) elif _type == "function": return FunctionMessage(**message["data"]) else: raise ValueError(f"Got unexpected message type: {_type}") def messages_from_dict(messages: List[dict]) -> List[BaseMessage]: """Convert a sequence of messages from dicts to Message objects. Args: messages: Sequence of messages (as dicts) to convert. Returns: List of messages (BaseMessages). """ return [_message_from_dict(m) for m in messages]
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
6,650
[AzureChatOpenAI] openai_api_type can't be changed from the default 'azure' value
### System Info Hello, during the development of an application that needs to authenticate to Azure services and use the wrapper [AzureChatOpenAi](https://github.com/hwchase17/langchain/blob/master/langchain/chat_models/azure_openai.py), we encountered an error because the model could not use the 'azure_ad' type. It seems that this class always sets openai_api_type to the default value 'azure', even when an environment variable called 'OPENAI_API_TYPE' specifies 'azure_ad'. Why is that? ### Who can help? @hwchase17 @agola11 ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction answering_llm=AzureChatOpenAI( deployment_name=ANSWERING_MODEL_CONFIG.model_name, model_name=ANSWERING_MODEL_CONFIG.model_type, #"gpt-3.5-turbo" openai_api_type="azure_ad", # IF THIS IS NOT EXPLICITLY PASSED IT FAILS openai_api_key=auth_token, temperature=ANSWERING_MODEL_CONFIG.temperature, max_tokens=ANSWERING_MODEL_CONFIG.max_tokens ) ### Expected behavior We expect the wrapper to pick up the value of the environment variable correctly.
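The behavior the reporter expects is standard configuration precedence: explicit argument first, then environment variable, then default. A minimal, self-contained sketch of that precedence follows; `resolve_openai_api_type` is a hypothetical helper for illustration, not library code.

```python
import os
from typing import Optional


def resolve_openai_api_type(explicit: Optional[str] = None) -> str:
    # Explicit constructor argument wins, then OPENAI_API_TYPE, then "azure".
    return explicit or os.environ.get("OPENAI_API_TYPE", "azure")


os.environ["OPENAI_API_TYPE"] = "azure_ad"
print(resolve_openai_api_type())         # azure_ad (env var beats the default)
print(resolve_openai_api_type("azure"))  # azure    (explicit argument wins)
```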
https://github.com/langchain-ai/langchain/issues/6650
https://github.com/langchain-ai/langchain/pull/8622
29f51055e8f7d060e6d3a5480591bef76652edae
e68a1d73d0c84503702a2bf66b52d7ae2336eb67
"2023-06-23T14:09:47Z"
python
"2023-08-04T03:21:41Z"
libs/langchain/langchain/chat_models/azure_openai.py
"""Azure OpenAI chat wrapper.""" from __future__ import annotations import logging from typing import Any, Dict, Mapping from pydantic import root_validator from langchain.chat_models.openai import ChatOpenAI from langchain.schema import ChatResult from langchain.utils import get_from_dict_or_env logger = logging.getLogger(__name__) class AzureChatOpenAI(ChatOpenAI):
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
6,650
[AzureChatOpenAI] openai_api_type can't be changed from the default 'azure' value
### System Info Hello, during the development of an application that needs to authenticate to Azure services and use the wrapper [AzureChatOpenAi](https://github.com/hwchase17/langchain/blob/master/langchain/chat_models/azure_openai.py), we encountered an error because the model could not use the 'azure_ad' type. It seems that this class always sets openai_api_type to the default value 'azure', even when an environment variable called 'OPENAI_API_TYPE' specifies 'azure_ad'. Why is that? ### Who can help? @hwchase17 @agola11 ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction answering_llm=AzureChatOpenAI( deployment_name=ANSWERING_MODEL_CONFIG.model_name, model_name=ANSWERING_MODEL_CONFIG.model_type, #"gpt-3.5-turbo" openai_api_type="azure_ad", # IF THIS IS NOT EXPLICITLY PASSED IT FAILS openai_api_key=auth_token, temperature=ANSWERING_MODEL_CONFIG.temperature, max_tokens=ANSWERING_MODEL_CONFIG.max_tokens ) ### Expected behavior We expect the wrapper to pick up the value of the environment variable correctly.
https://github.com/langchain-ai/langchain/issues/6650
https://github.com/langchain-ai/langchain/pull/8622
29f51055e8f7d060e6d3a5480591bef76652edae
e68a1d73d0c84503702a2bf66b52d7ae2336eb67
"2023-06-23T14:09:47Z"
python
"2023-08-04T03:21:41Z"
libs/langchain/langchain/chat_models/azure_openai.py
"""Wrapper around Azure OpenAI Chat Completion API. To use this class you must have a deployed model on Azure OpenAI. Use `deployment_name` in the constructor to refer to the "Model deployment name" in the Azure portal. In addition, you should have the ``openai`` python package installed, and the following environment variables set or passed in constructor in lower case: - ``OPENAI_API_TYPE`` (default: ``azure``) - ``OPENAI_API_KEY`` - ``OPENAI_API_BASE`` - ``OPENAI_API_VERSION`` - ``OPENAI_PROXY`` For example, if you have `gpt-35-turbo` deployed, with the deployment name `35-turbo-dev`, the constructor should look like: .. code-block:: python AzureChatOpenAI( deployment_name="35-turbo-dev", openai_api_version="2023-05-15", ) Be aware the API version may change. Any parameters that are valid to be passed to the openai.create call can be passed in, even if not explicitly saved on this class. """ deployment_name: str = "" openai_api_type: str = "azure" openai_api_base: str = "" openai_api_version: str = ""
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
6,650
[AzureChatOpenAI] openai_api_type can't be changed from the default 'azure' value
### System Info Hello, during the development of an application that needs to authenticate to Azure services and use the wrapper [AzureChatOpenAi](https://github.com/hwchase17/langchain/blob/master/langchain/chat_models/azure_openai.py), we encountered an error because the model could not use the 'azure_ad' type. It seems that this class always sets openai_api_type to the default value 'azure', even when an environment variable called 'OPENAI_API_TYPE' specifies 'azure_ad'. Why is that? ### Who can help? @hwchase17 @agola11 ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction answering_llm=AzureChatOpenAI( deployment_name=ANSWERING_MODEL_CONFIG.model_name, model_name=ANSWERING_MODEL_CONFIG.model_type, #"gpt-3.5-turbo" openai_api_type="azure_ad", # IF THIS IS NOT EXPLICITLY PASSED IT FAILS openai_api_key=auth_token, temperature=ANSWERING_MODEL_CONFIG.temperature, max_tokens=ANSWERING_MODEL_CONFIG.max_tokens ) ### Expected behavior We expect the wrapper to pick up the value of the environment variable correctly.
https://github.com/langchain-ai/langchain/issues/6650
https://github.com/langchain-ai/langchain/pull/8622
29f51055e8f7d060e6d3a5480591bef76652edae
e68a1d73d0c84503702a2bf66b52d7ae2336eb67
"2023-06-23T14:09:47Z"
python
"2023-08-04T03:21:41Z"
libs/langchain/langchain/chat_models/azure_openai.py
openai_api_key: str = "" openai_organization: str = "" openai_proxy: str = "" @root_validator() def validate_environment(cls, values: Dict) -> Dict: """Validate that api key and python package exists in environment.""" values["openai_api_key"] = get_from_dict_or_env( values, "openai_api_key", "OPENAI_API_KEY", ) values["openai_api_base"] = get_from_dict_or_env( values, "openai_api_base", "OPENAI_API_BASE", ) values["openai_api_version"] = get_from_dict_or_env( values, "openai_api_version", "OPENAI_API_VERSION", ) values["openai_api_type"] = get_from_dict_or_env( values, "openai_api_type", "OPENAI_API_TYPE", ) values["openai_organization"] = get_from_dict_or_env( values, "openai_organization", "OPENAI_ORGANIZATION",
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
6,650
[AzureChatOpenAI] openai_api_type can't be changed from the default 'azure' value
### System Info Hello, during the development of an application that needs to authenticate to Azure services and use the wrapper [AzureChatOpenAi](https://github.com/hwchase17/langchain/blob/master/langchain/chat_models/azure_openai.py), we encountered an error because the model could not use the 'azure_ad' type. It seems that this class always sets openai_api_type to the default value 'azure', even when an environment variable called 'OPENAI_API_TYPE' specifies 'azure_ad'. Why is that? ### Who can help? @hwchase17 @agola11 ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction answering_llm=AzureChatOpenAI( deployment_name=ANSWERING_MODEL_CONFIG.model_name, model_name=ANSWERING_MODEL_CONFIG.model_type, #"gpt-3.5-turbo" openai_api_type="azure_ad", # IF THIS IS NOT EXPLICITLY PASSED IT FAILS openai_api_key=auth_token, temperature=ANSWERING_MODEL_CONFIG.temperature, max_tokens=ANSWERING_MODEL_CONFIG.max_tokens ) ### Expected behavior We expect the wrapper to pick up the value of the environment variable correctly.
https://github.com/langchain-ai/langchain/issues/6650
https://github.com/langchain-ai/langchain/pull/8622
29f51055e8f7d060e6d3a5480591bef76652edae
e68a1d73d0c84503702a2bf66b52d7ae2336eb67
"2023-06-23T14:09:47Z"
python
"2023-08-04T03:21:41Z"
libs/langchain/langchain/chat_models/azure_openai.py
default="", ) values["openai_proxy"] = get_from_dict_or_env( values, "openai_proxy", "OPENAI_PROXY", default="", ) try: import openai except ImportError: raise ImportError( "Could not import openai python package. " "Please install it with `pip install openai`." ) try: values["client"] = openai.ChatCompletion except AttributeError: raise ValueError( "`openai` has no `ChatCompletion` attribute, this is likely " "due to an old version of the openai package. Try upgrading it " "with `pip install --upgrade openai`." ) if values["n"] < 1: raise ValueError("n must be at least 1.") if values["n"] > 1 and values["streaming"]: raise ValueError("n must be 1 when streaming.") return values @property def _default_params(self) -> Dict[str, Any]:
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
6,650
[AzureChatOpenAI] openai_api_type can't be changed from the default 'azure' value
### System Info Hello, during the development of an application that needs to authenticate to Azure services and use the wrapper [AzureChatOpenAi](https://github.com/hwchase17/langchain/blob/master/langchain/chat_models/azure_openai.py), we encountered an error because the model could not use the 'azure_ad' type. It seems that this class always sets openai_api_type to the default value 'azure', even when an environment variable called 'OPENAI_API_TYPE' specifies 'azure_ad'. Why is that? ### Who can help? @hwchase17 @agola11 ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction answering_llm=AzureChatOpenAI( deployment_name=ANSWERING_MODEL_CONFIG.model_name, model_name=ANSWERING_MODEL_CONFIG.model_type, #"gpt-3.5-turbo" openai_api_type="azure_ad", # IF THIS IS NOT EXPLICITLY PASSED IT FAILS openai_api_key=auth_token, temperature=ANSWERING_MODEL_CONFIG.temperature, max_tokens=ANSWERING_MODEL_CONFIG.max_tokens ) ### Expected behavior We expect the wrapper to pick up the value of the environment variable correctly.
https://github.com/langchain-ai/langchain/issues/6650
https://github.com/langchain-ai/langchain/pull/8622
29f51055e8f7d060e6d3a5480591bef76652edae
e68a1d73d0c84503702a2bf66b52d7ae2336eb67
"2023-06-23T14:09:47Z"
python
"2023-08-04T03:21:41Z"
libs/langchain/langchain/chat_models/azure_openai.py
"""Get the default parameters for calling OpenAI API.""" return { **super()._default_params, "engine": self.deployment_name, } @property def _identifying_params(self) -> Dict[str, Any]: """Get the identifying parameters.""" return {**self._default_params} @property def _client_params(self) -> Dict[str, Any]: """Get the config params used for the openai client.""" return { **super()._client_params, "api_type": self.openai_api_type, "api_version": self.openai_api_version, } @property def _llm_type(self) -> str: return "azure-openai-chat" def _create_chat_result(self, response: Mapping[str, Any]) -> ChatResult: for res in response["choices"]: if res.get("finish_reason", None) == "content_filter": raise ValueError( "Azure has not provided the response due to a content" " filter being triggered" ) return super()._create_chat_result(response)
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
8,786
RetrievalQA.from_chain_type: callbacks are not called for all nested chains
### System Info langchain: 0.0.252 python: 3.10.12 @agola11 ### Who can help? @agola11 please take a look ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [X] Chains - [X] Callbacks/Tracing - [ ] Async ### Reproduction 1. Create a callback handler LogHandler for on_chain_start and on_chat_model_start, and log run_id and parent_run_id in each of them 2. Create a retrieval chain and add this LogHandler 3. Add this LogHandler to the llm as well 4. When running the chain, one of the nested chains is not logged, because callbacks are not passed to that chain ### Expected behavior All the nested chains should have callbacks defined.
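A sketch of the `LogHandler` described in step 1 might look like the following; the method signatures mirror `langchain.callbacks.base.BaseCallbackHandler`, while the print-based logging is just for illustration.

```python
from typing import Any, Dict, List, Optional
from uuid import UUID

from langchain.callbacks.base import BaseCallbackHandler
from langchain.schema.messages import BaseMessage


class LogHandler(BaseCallbackHandler):
    def on_chain_start(
        self,
        serialized: Dict[str, Any],
        inputs: Dict[str, Any],
        *,
        run_id: UUID,
        parent_run_id: Optional[UUID] = None,
        **kwargs: Any,
    ) -> None:
        print(f"chain start: run_id={run_id} parent_run_id={parent_run_id}")

    def on_chat_model_start(
        self,
        serialized: Dict[str, Any],
        messages: List[List[BaseMessage]],
        *,
        run_id: UUID,
        parent_run_id: Optional[UUID] = None,
        **kwargs: Any,
    ) -> None:
        print(f"chat model start: run_id={run_id} parent_run_id={parent_run_id}")
```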
https://github.com/langchain-ai/langchain/issues/8786
https://github.com/langchain-ai/langchain/pull/8787
5f1aab548731b53ebab00dd745a35ec7da52bf1c
797c9e92c82f8e843b321ec2167bb1678ced03cf
"2023-08-05T06:43:10Z"
python
"2023-08-06T22:11:45Z"
libs/langchain/langchain/chains/question_answering/__init__.py
"""Load question answering chains.""" from typing import Any, Mapping, Optional, Protocol from langchain.callbacks.base import BaseCallbackManager from langchain.callbacks.manager import Callbacks from langchain.chains import ReduceDocumentsChain from langchain.chains.combine_documents.base import BaseCombineDocumentsChain from langchain.chains.combine_documents.map_reduce import MapReduceDocumentsChain from langchain.chains.combine_documents.map_rerank import MapRerankDocumentsChain from langchain.chains.combine_documents.refine import RefineDocumentsChain from langchain.chains.combine_documents.stuff import StuffDocumentsChain from langchain.chains.llm import LLMChain from langchain.chains.question_answering import ( map_reduce_prompt, refine_prompts, stuff_prompt, ) from langchain.chains.question_answering.map_rerank_prompt import ( PROMPT as MAP_RERANK_PROMPT, ) from langchain.schema.language_model import BaseLanguageModel from langchain.schema.prompt_template import BasePromptTemplate class LoadingCallable(Protocol): """Interface for loading the combine documents chain.""" def __call__( self, llm: BaseLanguageModel, **kwargs: Any ) -> BaseCombineDocumentsChain: """Callable to load the combine documents chain.""" def _load_map_rerank_chain(
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
8,786
RetrievalQA.from_chain_type: callbacks are not called for all nested chains
### System Info langchain: 0.0.252 python: 3.10.12 @agola11 ### Who can help? @agola11 please take a look ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [X] Chains - [X] Callbacks/Tracing - [ ] Async ### Reproduction 1. Create a callback handler LogHandler for on_chain_start and on_chat_model_start, and log run_id and parent_run_id in each of them 2. Create a retrieval chain and add this LogHandler 3. Add this LogHandler to the llm as well 4. When running the chain, one of the nested chains is not logged, because callbacks are not passed to that chain ### Expected behavior All the nested chains should have callbacks defined.
https://github.com/langchain-ai/langchain/issues/8786
https://github.com/langchain-ai/langchain/pull/8787
5f1aab548731b53ebab00dd745a35ec7da52bf1c
797c9e92c82f8e843b321ec2167bb1678ced03cf
"2023-08-05T06:43:10Z"
python
"2023-08-06T22:11:45Z"
libs/langchain/langchain/chains/question_answering/__init__.py
llm: BaseLanguageModel, prompt: BasePromptTemplate = MAP_RERANK_PROMPT, verbose: bool = False, document_variable_name: str = "context", rank_key: str = "score", answer_key: str = "answer", callback_manager: Optional[BaseCallbackManager] = None, callbacks: Callbacks = None, **kwargs: Any, ) -> MapRerankDocumentsChain: llm_chain = LLMChain( llm=llm, prompt=prompt, verbose=verbose, callback_manager=callback_manager, callbacks=callbacks, ) return MapRerankDocumentsChain( llm_chain=llm_chain, rank_key=rank_key, answer_key=answer_key, document_variable_name=document_variable_name, verbose=verbose, callback_manager=callback_manager, **kwargs, ) def _load_stuff_chain(
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
8,786
RetrievalQA.from_chain_type: callbacks are not called for all nested chains
### System Info langchain: 0.0.252 python: 3.10.12 @agola11 ### Who can help? @agola11 please take a look ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [X] Chains - [X] Callbacks/Tracing - [ ] Async ### Reproduction 1. Create a callback handler LogHandler for on_chain_start and on_chat_model_start, and log run_id and parent_run_id in each of them 2. Create a retrieval chain and add this LogHandler 3. Add this LogHandler to the llm as well 4. When running the chain, one of the nested chains is not logged, because callbacks are not passed to that chain ### Expected behavior All the nested chains should have callbacks defined.
https://github.com/langchain-ai/langchain/issues/8786
https://github.com/langchain-ai/langchain/pull/8787
5f1aab548731b53ebab00dd745a35ec7da52bf1c
797c9e92c82f8e843b321ec2167bb1678ced03cf
"2023-08-05T06:43:10Z"
python
"2023-08-06T22:11:45Z"
libs/langchain/langchain/chains/question_answering/__init__.py
llm: BaseLanguageModel, prompt: Optional[BasePromptTemplate] = None, document_variable_name: str = "context", verbose: Optional[bool] = None, callback_manager: Optional[BaseCallbackManager] = None, callbacks: Callbacks = None, **kwargs: Any, ) -> StuffDocumentsChain: _prompt = prompt or stuff_prompt.PROMPT_SELECTOR.get_prompt(llm) llm_chain = LLMChain( llm=llm, prompt=_prompt, verbose=verbose, callback_manager=callback_manager, callbacks=callbacks, ) return StuffDocumentsChain( llm_chain=llm_chain, document_variable_name=document_variable_name, verbose=verbose, callback_manager=callback_manager, **kwargs, ) def _load_map_reduce_chain(
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
8,786
RetrievalQA.from_chain_type: callbacks are not called for all nested chains
### System Info langchain: 0.0.252 python: 3.10.12 @agola11 ### Who can help? @agola11 please take a look ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [X] Chains - [X] Callbacks/Tracing - [ ] Async ### Reproduction 1. Create a callback handler LogHandler for on_chain_start and on_chat_model_start, and log run_id and parent_run_id in each of them 2. Create a retrieval chain and add this LogHandler 3. Add this LogHandler to the llm as well 4. When running the chain, one of the nested chains is not logged, because callbacks are not passed to that chain ### Expected behavior All the nested chains should have callbacks defined.
https://github.com/langchain-ai/langchain/issues/8786
https://github.com/langchain-ai/langchain/pull/8787
5f1aab548731b53ebab00dd745a35ec7da52bf1c
797c9e92c82f8e843b321ec2167bb1678ced03cf
"2023-08-05T06:43:10Z"
python
"2023-08-06T22:11:45Z"
libs/langchain/langchain/chains/question_answering/__init__.py
llm: BaseLanguageModel, question_prompt: Optional[BasePromptTemplate] = None, combine_prompt: Optional[BasePromptTemplate] = None, combine_document_variable_name: str = "summaries", map_reduce_document_variable_name: str = "context", collapse_prompt: Optional[BasePromptTemplate] = None, reduce_llm: Optional[BaseLanguageModel] = None, collapse_llm: Optional[BaseLanguageModel] = None, verbose: Optional[bool] = None, callback_manager: Optional[BaseCallbackManager] = None, callbacks: Callbacks = None, token_max: int = 3000, **kwargs: Any, ) -> MapReduceDocumentsChain: _question_prompt = ( question_prompt or map_reduce_prompt.QUESTION_PROMPT_SELECTOR.get_prompt(llm) ) _combine_prompt = ( combine_prompt or map_reduce_prompt.COMBINE_PROMPT_SELECTOR.get_prompt(llm) )
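The loaders above consistently thread `callbacks` into the `LLMChain` they build; the reported bug is that one nested chain was constructed without them, so its runs never reached the handlers. The self-contained toy below (`InnerChain` and `OuterChain` are hypothetical, not LangChain classes) illustrates the bug class and the one-line fix of forwarding callbacks downward.

```python
from typing import Callable, List, Optional

Callback = Callable[[str], None]


class InnerChain:
    def __init__(self, callbacks: Optional[List[Callback]] = None) -> None:
        self.callbacks = callbacks or []

    def run(self) -> None:
        for cb in self.callbacks:
            cb("inner chain ran")


class OuterChain:
    def __init__(self, callbacks: Optional[List[Callback]] = None) -> None:
        self.callbacks = callbacks or []
        # Buggy version: `InnerChain()` with no callbacks silently drops
        # tracing for every nested run. The fix is to forward them:
        self.inner = InnerChain(callbacks=self.callbacks)

    def run(self) -> None:
        for cb in self.callbacks:
            cb("outer chain ran")
        self.inner.run()


OuterChain(callbacks=[print]).run()  # prints both outer and inner events
```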