Content provided by SingleStore. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by SingleStore or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://player.fm/legal.

Revolutionizing Language Models and Data Processing with LlamaIndex

40:43

On this episode of How We Made That App, host Madhukar Kumar welcomes Jerry Liu, Co-Founder and CEO of LlamaIndex! Jerry takes us from the humble beginnings of GPT Index to the impactful rise of LlamaIndex, a game-changer in the data frameworks landscape. Prepare to be enthralled by how LlamaIndex is spearheading retrieval-augmented generation (RAG) technology, setting a new paradigm for developers to harness private data sources in crafting groundbreaking applications. Moreover, the adoption of LlamaIndex by leading companies underscores its pivotal role in reshaping the AI industry.

In the rapidly evolving world of language model providers, discover the agility of model-agnostic platforms that cater to the ever-changing landscape of AI applications. As Jerry illuminates, the shift from GPT-4 to Claude 3 Opus signifies a broader trend towards efficiency and adaptability. Jerry also explores the transformation of data processing, from vector databases to the advent of 'live RAG' systems, heralding a new era of real-time, user-facing applications that seamlessly integrate freshly ingested information. This is a testament to how LlamaIndex is at the forefront of AI's evolution, offering a powerful suite of tools that revolutionize data interaction.

Concluding our exploration, we turn to the orchestration of agents within AI frameworks, a domain teeming with complexity yet brimming with potential. Jerry delves into the multifaceted roles of agents, bridging simple LLM reasoning tasks with sophisticated query decomposition and stateful executions. We reflect on the future of software engineering as agent-oriented architectures redefine the sector and invite our community to contribute to the flourishing open-source initiative. Join the ranks of data enthusiasts and PDF parsing experts who are collectively sculpting the next chapter of AI interaction!

Key Quotes:

  • “If you're a fine-tuning API, you either have to cater to the ML researcher or the AI engineer. And to be honest, most AI engineers are not going to care about fine-tuning if they can just hack together some system initially that kind of works. And so I think for more AI engineers to do fine-tuning, it either has to be such a simple UX that it's basically brainless, you might as well just do it, and the cost and latency have to come down. And then also there has to be guaranteed metrics improvements. Right now it's just unclear. You'd have to take your data set, format it, and then actually send it to the LLM and then hope that actually improves the metrics in some way. And I think that whole process could probably use some improvement right now.”
  • “We realized the open source will always be an unopinionated toolkit that anybody can go and use and build their own applications. But what we really want with the cloud offering is something a bit more managed, where if you're an enterprise developer, we want to help solve that clean data problem for you so that you're able to easily load in your different data sources, connect it to a vector store of your choice. And then we can help make decisions for you so that you don't have to own and maintain that and that you can continue to write your application logic. So, LlamaCloud as it stands is basically a managed parsing and ingestion platform that focuses on getting users clean data to build performant RAG and LLM applications.”
  • “You have LLMs that do decision-making and tool calling and typically, if you just take a look at a standard agent implementation, it's some sort of query decomposition plus tool use. And then you make a loop a little bit so you run it multiple times, and by running it multiple times, that also means that you need to make this overall thing stateful, as opposed to stateless, so you have some way of tracking state throughout this whole execution run. And this includes, like, conversation memory, this includes just using a dictionary, but basically some way of, like, tracking state, and then you complete execution, right? And then you get back a response. And so that actually is a roughly general interface that we have a base abstraction for.”
  • “A lot of LLMs, more and more of them, are supporting function calling nowadays. So under the hood within the LLM, the API gives you the ability to just specify a set of tools so that the LLM API can decide to call tools for you. So it's actually just a really nice abstraction: instead of the user having to manually prompt the LLM to coerce it, a lot of these LLM providers just have the ability for you to specify functions under the hood, and if you just do a while loop over that, that's basically an agent, right? Because you just do a while loop until that function calling process is done, and that's basically, honestly, what the OpenAI Assistants agent is. And then if you go into some of the more recent agent papers, you can start doing things beyond just the next-step chain of thought at every stage: instead of just reasoning about what you're going to do next, reason about an entire map of what you're going to do, roll out different scenarios, get the value functions of each of them, and then make the best decision. And so you can get pretty complicated with the actual reasoning process, which then feeds into tool use and everything else.”
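The "while loop over function calling" pattern Jerry describes can be sketched in a few lines. Everything here is illustrative: `mock_llm`, `ToolCall`, and the tool registry are hypothetical stand-ins, not LlamaIndex or OpenAI APIs. In a real agent, the LLM call would go to a provider that supports function calling, and the state object would hold richer conversation memory.

```python
from dataclasses import dataclass, field

@dataclass
class ToolCall:
    name: str
    args: dict

@dataclass
class AgentState:
    # "some way of tracking state" -- here, a simple list of (tool, result)
    memory: list = field(default_factory=list)

def mock_llm(query: str, state: AgentState):
    """Stand-in for an LLM with function calling: returns a ToolCall
    until it has the information it needs, then a final string answer."""
    if not any(name == "multiply" for name, _ in state.memory):
        return ToolCall("multiply", {"a": 6, "b": 7})
    result = next(r for name, r in state.memory if name == "multiply")
    return f"The answer is {result}"

TOOLS = {"multiply": lambda a, b: a * b}

def run_agent(query: str) -> str:
    state = AgentState()
    # The agent loop: keep going until the LLM stops calling tools.
    while True:
        step = mock_llm(query, state)
        if isinstance(step, ToolCall):
            result = TOOLS[step.name](**step.args)
            state.memory.append((step.name, result))  # track state
        else:
            return step  # final answer ends the loop

print(run_agent("What is 6 times 7?"))
```

The loop is stateful in exactly the sense Jerry describes: each tool result is written into `state.memory`, and the next "LLM" call reads that state to decide whether to call another tool or finish.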

Timestamps

  • (1:25) LlamaIndex origins
  • (5:45) Building LLM applications with LlamaIndex
  • (10:35) Finding patterns and fine-tuning in LLM usage
  • (18:50) Keeping LlamaIndex in the open-source community
  • (23:46) LlamaCloud comprehensive evaluation capabilities
  • (31:45) The future of the modern data stack
  • (40:10) Best practices when building a new application

Links

Connect with Jerry

Visit LlamaIndex

Connect with Madhukar

Visit SingleStore

