Launch HN: Panora (YC S24) – Data Integration API for LLMs
7 by nael_ob | 0 comments on Hacker News.
Hey HN! We're Nael and Rachid, and we're building Panora ( https://ift.tt/DM7eZKC ), an open-source API that connects various data sources to LLMs, from third-party integrations to embedding and chunk generation. Here's a demo: https://www.youtube.com/watch?v=45QaN8mzAfg , our docs are at https://ift.tt/ucGYUst , and our GitHub repo is at https://ift.tt/DM7eZKC .

Building integrations by hand is tedious and time-consuming. You have to adapt to each API's documentation quirks and manage request retries, OAuth/API-key authorization, refresh tokens, rate limits, and data-sync freshness. On top of that, with the rise of AI-powered apps, you also have to embed and chunk all of that unstructured data while keeping up with a constant stream of new embedding models and chunking techniques.

The dominant player in this space is Merge.dev, but it has several drawbacks:

1. It's a black box for most developers, lacking transparency on data handling.

2. Strong vendor lock-in: once an end user connects their software, it's hard to access the authorization tokens if you want to perform requests on their behalf after leaving Merge.

3. Long time-to-deploy for the long tail of integrations, leading to lost opportunities as integrations become the backbone of LLM-based applications.

4. Unrealistic prices per connection (one end user connecting one tool).

5. It isn't positioned to serve LLM-based products that need RAG-ready data to power their use cases.

That's how Panora was born. We set out to address these pain points head-on and build something that is both developer-friendly and open-source. Our goal is to simplify the complex world of integrations and data preparation for LLMs, so developers can focus on building great products rather than wrestling with integration headaches. Panora is 100% open source under the Apache 2.0 license, and you can either use our cloud version or self-host the product.
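To make the "tedious bookkeeping" concrete, here's a minimal sketch of the retry-with-backoff loop that every hand-rolled integration ends up repeating per provider (and that an integration layer aims to absorb). `RateLimitError` and `flaky` are stand-ins for a provider's HTTP client, not Panora code:

```python
import time

class RateLimitError(Exception):
    """Stand-in for a provider's HTTP 429 response."""

def call_with_retry(fn, max_retries=4, base_delay=0.01):
    """Retry fn() with exponential backoff on rate limits."""
    for attempt in range(max_retries + 1):
        try:
            return fn()
        except RateLimitError:
            if attempt == max_retries:
                raise  # out of retries; surface the error
            time.sleep(base_delay * (2 ** attempt))  # 0.01s, 0.02s, 0.04s, ...

# Example: a flaky provider call that succeeds on the third attempt.
attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RateLimitError()
    return "ok"
```

Multiply this by refresh tokens, pagination, and per-provider quirks, and the per-integration maintenance cost adds up quickly.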
We provide two ways for your end users to connect their software:

1. A frontend SDK (React) that lets you embed the integrations catalog inside your app.

2. A magic link you can share with anyone, letting them connect their software.

You can use either your own OAuth clients or our managed ones. You receive one connection token per user per connected provider, which you use to retrieve and insert data through our universal API.

Software is grouped into categories such as CRM or File Storage, and every category is divided into entities following a standard data model (e.g., File Storage has File, Folder, Drive, Group, and User). Each response also gives you access to the remote data (the untransformed payload from the provider), so you can build custom, complex integrations on your end. If the standard data model plus the remote data isn't enough, you can create custom fields, via the API or our dashboard, to map your remote fields onto our model.

We're more than just integrations: we provide RAG-ready data for your applications by auto-generating embeddings and chunks for all your synced documents. You select your own vector database and embedding model in the dashboard; we then sync your documents and store the chunks and embeddings in the specified vector DB. We keep that data up to date, notify you through webhooks, and let you set a custom sync frequency (hourly, once a day, etc.) depending on your use case.

Developers use our API to access data fragmented across software such as file-storage systems (Google Drive, OneDrive, SharePoint) and retrieve the embeddings of their documents through a single API. Our backend SDKs are available for Python, TypeScript, Ruby, and Go.

Your honest feedback, suggestions, and wishes would be very helpful. We'd love to hear your integration stories, the challenges you've faced with data integration for LLMs, and any thoughts on our approach. Thanks, HN!
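To illustrate the connection-token flow, here's a sketch of what a unified read against the File Storage category might look like. Everything here is an illustrative guess, not the documented API: the base URL, path, header names, and the `remote_data` parameter are assumptions, so check the docs for the real shapes:

```python
def list_files_request(connection_token, base_url="https://api.panora.dev"):
    """Build a hypothetical request against a unified File Storage endpoint.

    The URL, path, header names, and params below are illustrative
    placeholders, not the documented Panora API.
    """
    return {
        "method": "GET",
        "url": f"{base_url}/filestorage/files",
        "headers": {
            "Authorization": "Bearer <your-panora-api-key>",
            # One token per end user per connected provider.
            "x-connection-token": connection_token,
        },
        # Ask for the provider's raw payload alongside the unified model.
        "params": {"remote_data": "true"},
    }

req = list_files_request("conn_tok_123")
```

The point of the shape: your app talks to one endpoint per category, and the per-user, per-provider connection token selects whose Google Drive, OneDrive, or SharePoint the unified response comes from.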
September 23, 2024 at 11:43PM