diff --git a/docs/sphinx/source/examples/chat_with_your_pdfs_using_colbert_langchain_and_Vespa-cloud.ipynb b/docs/sphinx/source/examples/chat_with_your_pdfs_using_colbert_langchain_and_Vespa-cloud.ipynb index 4aa7ae75..1f8dc82a 100644 --- a/docs/sphinx/source/examples/chat_with_your_pdfs_using_colbert_langchain_and_Vespa-cloud.ipynb +++ b/docs/sphinx/source/examples/chat_with_your_pdfs_using_colbert_langchain_and_Vespa-cloud.ipynb @@ -490,7 +490,7 @@ "source": [ "### Processing PDFs with LangChain\n", "\n", - "[LangChain](https://python.langchain.com/) has a rich set of [document loaders](https://python.langchain.com/docs/how_to/#document-loaders) that can be used to load and process various file formats. In this notebook, we use the [PyPDFLoader](https://python.langchain.com/docs/how_to/document_loader_pdf/#using-pypdf).\n", + "[LangChain](https://python.langchain.com/) has a rich set of [document loaders](https://python.langchain.com/docs/how_to/#document-loaders) that can be used to load and process various file formats. In this notebook, we use the [PyPDFLoader](https://python.langchain.com/docs/how_to/document_loader_pdf/).\n", "\n", "We also want to split the extracted text into _contexts_ using a [text splitter](https://python.langchain.com/docs/how_to/#text-splitters). 
Most text embedding models have limited input lengths (typically less than 512 language model tokens), so splitting the text\n", "into multiple contexts that each fit into the context limit of the embedding model is a common strategy.\n", @@ -1222,4 +1222,4 @@ }, "nbformat": 4, "nbformat_minor": 5 -} +} \ No newline at end of file diff --git a/docs/sphinx/source/examples/turbocharge-rag-with-langchain-and-vespa-streaming-mode-cloud.ipynb b/docs/sphinx/source/examples/turbocharge-rag-with-langchain-and-vespa-streaming-mode-cloud.ipynb index 51f580af..40289be2 100644 --- a/docs/sphinx/source/examples/turbocharge-rag-with-langchain-and-vespa-streaming-mode-cloud.ipynb +++ b/docs/sphinx/source/examples/turbocharge-rag-with-langchain-and-vespa-streaming-mode-cloud.ipynb @@ -408,7 +408,7 @@ "source": [ "## Processing PDFs with LangChain\n", "\n", - "[LangChain](https://python.langchain.com/) has a rich set of [document loaders](https://python.langchain.com/v0.1/docs/modules/data_connection/document_loaders/) that can be used to load and process various file formats. In this notebook, we use the [PyPDFLoader](https://python.langchain.com/v0.1/docs/modules/data_connection/document_loaders/pdf#using-pypdf).\n", + "[LangChain](https://python.langchain.com/) has a rich set of [document loaders](https://python.langchain.com/v0.1/docs/modules/data_connection/document_loaders/) that can be used to load and process various file formats. In this notebook, we use the [PyPDFLoader](https://python.langchain.com/v0.1/docs/modules/data_connection/document_loaders/pdf).\n", "\n", "We also want to split the extracted text into _chunks_ using a [text splitter](https://python.langchain.com/v0.1/docs/modules/data_connection/document_transformers/). Most text embedding models have limited input lengths (typically less than 512 language model tokens), so splitting the text\n", "into multiple chunks that fit into the context limit of the embedding model is a common strategy.\n",
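Both notebooks split extracted text into pieces small enough for the embedding model's input limit, delegating the work to a LangChain text splitter. The idea can be sketched in plain Python with a character-based splitter using overlapping windows; the `chunk_size` and `overlap` values below are illustrative, not the notebooks' actual settings (which are token-based and use LangChain's splitter classes):

```python
def split_text(text: str, chunk_size: int = 400, overlap: int = 50) -> list[str]:
    """Split text into overlapping chunks so each fits a model's input limit.

    Overlap between consecutive chunks reduces the chance that a sentence
    relevant to a query is cut in half at a chunk boundary.
    """
    if chunk_size <= overlap:
        raise ValueError("chunk_size must be greater than overlap")
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(text), step):
        chunk = text[start : start + chunk_size]
        if chunk:
            chunks.append(chunk)
        if start + chunk_size >= len(text):
            break
    return chunks

# 1000 characters with chunk_size=400 and overlap=50 yields three chunks:
# [0:400], [350:750], [700:1000]
chunks = split_text("a" * 1000)
print(len(chunks))  # → 3
```

A production setup would split on token counts (as the embedding model sees them) rather than characters, and prefer breaking at paragraph or sentence boundaries, which is what LangChain's recursive text splitters do.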