Deep Papers is a podcast series featuring deep dives on today’s seminal AI papers and research. Hosted by Arize AI founders and engineers, each episode profiles the people and techniques behind cutting-edge breakthroughs in machine learning.
 
Meta just released Llama 3.1 405B. According to Meta, it’s “the first openly available model that rivals the top AI models when it comes to state-of-the-art capabilities in general knowledge, steerability, math, tool use, and multilingual translation.” Will the latest Llama herd ignite new applications and modeling paradigms like synthetic data gene…
 
Chaining language model (LM) calls as composable modules is fueling a new way of programming, but ensuring LMs adhere to important constraints requires heuristic “prompt engineering.” The paper this week introduces LM Assertions, a programming construct for expressing computational constraints that LMs should satisfy. The researchers integrated the…
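The LM Assertion idea described above can be sketched in a few lines: wrap an LM call so that a declared constraint is checked on every output, retrying with feedback folded into the prompt when the check fails. This is an illustrative sketch of the concept, not the paper's actual DSPy API; all names here (`with_assertion`, `toy_lm`) are hypothetical.

```python
# Minimal sketch of an "LM assertion": a guard that re-invokes an LM call
# until its output satisfies a declared constraint, surfacing the constraint
# as feedback on each retry. Names are illustrative, not the paper's API.

def with_assertion(lm_call, constraint, feedback, max_retries=2):
    """Return a function that re-invokes lm_call until constraint passes."""
    def guarded(prompt):
        attempt_prompt = prompt
        for _ in range(max_retries + 1):
            output = lm_call(attempt_prompt)
            if constraint(output):
                return output
            # On failure, append the violated constraint for the next attempt.
            attempt_prompt = prompt + "\n\nConstraint violated: " + feedback
        return output  # best effort after retries are exhausted
    return guarded

# Toy stand-in for an LM that ignores the constraint on its first call only.
calls = {"n": 0}
def toy_lm(prompt):
    calls["n"] += 1
    return "short" if calls["n"] == 1 else "a much longer answer"

guarded = with_assertion(
    toy_lm,
    constraint=lambda out: len(out.split()) >= 3,
    feedback="answer in at least three words",
)
print(guarded("Explain LM assertions."))  # retries once, then passes
```

The point is that the constraint lives alongside the LM call as a composable module, rather than being buried in hand-tuned prompt text.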
 
Adapting LLMs to specialized domains (e.g., recent news, enterprise private documents) is essential, so we discuss a paper that asks how to adapt pre-trained LLMs for RAG in specialized domains. SallyAnn DeLucia is joined by Sai Kolasani, researcher at UC Berkeley’s RISE Lab (and Arize AI Intern), to talk about his work on RAFT: Adapting Languag…
 
It’s been an exciting couple of weeks for GenAI! Join us as we discuss the latest research from OpenAI and Anthropic. We’re excited to chat about this significant step forward in understanding how LLMs work and the implications it has for a deeper understanding of the neural activity of language models. We take a closer look at some recent research from…
 
Foundational models like GPT-4, the large language model behind ChatGPT, have hoovered up content from publications like The New York Times and social media sites like Reddit, and OpenAI faces several lawsuits because of this. John Thompson, global head of artificial intelligence at EY and author of the book Data for All, has set up what is …
 
We break down the paper “Trustworthy LLMs: A Survey and Guideline for Evaluating Large Language Models' Alignment.” Ensuring alignment (i.e., making models behave in accordance with human intentions) has become a critical task before deploying LLMs in real-world applications. However, a major challenge faced by practitioners is the lack of clear guid…
 
Proof of identity is critical for many things, including being able to open a bank account, get a job, or obtain health care. Yet proving one’s identity is getting harder in a world of frequent data breaches. We asked Mariana Dahan, founder of the World Identity Network and chair of the Universal ID Council, what she thinks will solve this problem.…
 
Due to the cumbersome nature of human evaluation and limitations of code-based evaluation, Large Language Models (LLMs) are increasingly being used to assist humans in evaluating LLM outputs. Yet LLM-generated evaluators often inherit the problems of the LLMs they evaluate, requiring further human validation. This week’s paper explores EvalGen, a m…
 
Custodia Founder and CEO Caitlin Long says the Federal Reserve has rewritten the rules around accessing the government's payments system. The central bank and a federal court judge disagree. Editor’s note: This conversation was recorded on April 17. On April 26, Custodia Bank filed a notice of appeal, signaling that it will challenge the district c…
 
This week we explore ReAct, an approach that enhances the reasoning and decision-making capabilities of LLMs by combining step-by-step reasoning with the ability to take actions and gather information from external sources in a unified framework. To learn more about ML observability, join the Arize AI Slack community or get the latest on our Linked…
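The ReAct loop described above interleaves a reasoning "Thought", an "Action" that queries an external tool, and an "Observation" fed back into the model's context. A minimal sketch, assuming a toy lookup tool and a hard-coded policy in place of a real LLM (the paper drives this loop with prompted model generations):

```python
# Minimal sketch of a ReAct-style loop: alternate Thought -> Action ->
# Observation until the policy emits a Finish action. The policy and tool
# below are toy stand-ins for illustration, not the paper's prompting setup.

def react_loop(question, policy, tools, max_steps=5):
    context = [f"Question: {question}"]
    for _ in range(max_steps):
        thought, action, arg = policy(context)
        context.append(f"Thought: {thought}")
        if action == "Finish":
            return arg, context
        observation = tools[action](arg)      # gather external information
        context.append(f"Action: {action}[{arg}]")
        context.append(f"Observation: {observation}")
    return None, context

# Toy knowledge-lookup tool and a scripted two-step policy.
tools = {"Lookup": lambda t: {"Eiffel Tower": "located in Paris"}.get(t, "unknown")}

def policy(context):
    if not any(line.startswith("Observation") for line in context):
        return "I should look up the Eiffel Tower.", "Lookup", "Eiffel Tower"
    return "The observation answers the question.", "Finish", "Paris"

answer, trace = react_loop("Where is the Eiffel Tower?", policy, tools)
print(answer)  # Paris
```

The unified framework the episode discusses comes from the fact that reasoning traces and tool calls share one context, so each observation can steer the next reasoning step.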
 
This week, we’re covering Amazon’s time series model: Chronos. Developing accurate machine-learning-based forecasting models has traditionally required substantial dataset-specific tuning and model customization. Chronos, however, is built on a language model architecture and trained with billions of tokenized time series observations, enabling it t…
 
This week we dive into the latest buzz in the AI world: the arrival of Claude 3. Claude 3 is the newest family of models in the LLM space, and Claude 3 Opus (Anthropic's “most intelligent” Claude model) challenges the likes of GPT-4. The Claude 3 family of models, according to Anthropic, "sets new industry benchmarks," and includes "three state-o…
 
We’re exploring Reinforcement Learning in the Era of LLMs this week with Claire Longo, Arize’s Head of Customer Success. Recent advancements in Large Language Models (LLMs) have garnered wide attention and led to successful products such as ChatGPT and GPT-4. Their proficiency in adhering to instructions and delivering harmless, helpful, and honest…
 
This week, we discuss the implications of Text-to-Video Generation and speculate as to the possibilities (and limitations) of this incredible technology with some hot takes. Dat Ngo, ML Solutions Engineer at Arize, is joined by community member and AI Engineer Vibhu Sapra to review OpenAI’s technical report on their Text-To-Video Generation Model: …
 
This week, we’re discussing “RAG vs Fine-Tuning: Pipelines, Tradeoffs, and a Case Study on Agriculture.” This paper explores a pipeline for fine-tuning and RAG, and presents the tradeoffs of both for multiple popular LLMs, including Llama2-13B, GPT-3.5, and GPT-4. The authors propose a pipeline that consists of multiple stages, including extracting …
 
We dive into Phi-2 and some of the major differences and use cases for a small language model (SLM) versus an LLM. With only 2.7 billion parameters, Phi-2 surpasses the performance of Mistral and Llama-2 models at 7B and 13B parameters on various aggregated benchmarks. Notably, it achieves better performance compared to the 25x larger Llama-2-70B model…
 
We discuss HyDE: a thrilling zero-shot learning technique that combines GPT-3’s language understanding with contrastive text encoders. HyDE revolutionizes information retrieval and grounding in real-world data by generating hypothetical documents from queries and retrieving similar real-world documents. It outperforms traditional unsupervised retri…
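The HyDE retrieval step described above can be sketched simply: generate a hypothetical answer document for the query, embed it, and return the real document whose embedding is most similar. A minimal sketch with toy stand-ins: a bag-of-words embedding and a canned generator in place of a contrastive encoder and GPT-3, and all function names here are illustrative.

```python
# Minimal sketch of HyDE: retrieve by similarity to a *hypothetical* answer
# document rather than to the raw query. The embedding (bag-of-words) and
# generator below are toy stand-ins for a contrastive encoder and an LLM.
import math
from collections import Counter

def embed(text):
    return Counter(text.lower().split())  # toy bag-of-words "embedding"

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def hyde_retrieve(query, generate_hypothetical, corpus):
    hypo = generate_hypothetical(query)      # step 1: imagine an answer doc
    hypo_vec = embed(hypo)                   # step 2: embed the fake doc
    # step 3: return the real document nearest to the hypothetical one
    return max(corpus, key=lambda doc: cosine(hypo_vec, embed(doc)))

corpus = [
    "The mitochondria is the powerhouse of the cell.",
    "Paris is the capital of France.",
]
fake_generator = lambda q: "the capital of france is paris a city in europe"
print(hyde_retrieve("What is France's capital?", fake_generator, corpus))
```

The design point is that document-to-document similarity is often easier to learn than query-to-document similarity, so grounding retrieval in a generated document can beat matching against the short query directly.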
 
The fintech revolution has been more successful at working with banks than at trying to replace them, points out Gene Ludwig, former Comptroller of the Currency, chair of the Ludwig Institute for Shared Economic Prosperity, and co-founder of Canapi Ventures. Those with “must have” products will fare far better in 2024 than those with “nice to have”…
 
For the last paper read of the year, Arize CPO & Co-Founder, Aparna Dhinakaran, is joined by Dat Ngo (ML Solutions Architect) and Aman Khan (Product Manager) for an exploration of the new kids on the block: Gemini and Mixtral-8x7B. There's a lot to cover, so this week's paper read is Part I in a series about Mixtral and Gemini. In Part I, we prov…
 
We’re thrilled to be joined by Shuaichen Chang, LLM researcher and the author of this week’s paper to discuss his findings. Shuaichen’s research investigates the impact of prompt constructions on the performance of large language models (LLMs) in the text-to-SQL task, particularly focusing on zero-shot, single-domain, and cross-domain settings. Shu…
 
Over the past year, the national bank regulators’ oversight of Silicon Valley Bank, Signature Bank, Silvergate Capital and other banks that failed has been criticized. Reports of a toxic workplace at the FDIC have come to light. And the OCC hired a Deputy Comptroller and overseer of fintech who had easily discoverable falsehoods on his resume. Mich…
 
For this paper read, we’re joined by Samuel Marks, Postdoctoral Research Associate at Northeastern University, to discuss his paper, “The Geometry of Truth: Emergent Linear Structure in LLM Representation of True/False Datasets.” Samuel and his team curated high-quality datasets of true/false statements and used them to study in detail the structur…
 
In this paper read, we discuss “Towards Monosemanticity: Decomposing Language Models Into Understandable Components,” a paper from Anthropic that addresses the challenge of understanding the inner workings of neural networks, drawing parallels with the complexity of human brain function. It explores the concept of “features” (patterns of neuron ac…
 
Community banks sometimes feel that they lack the budget and staff to compete with larger banks and fintechs on things like mobile and online banking, virtual assistants and most recently generative AI. Jim Perry, senior strategist at Market Insights, suggests steps they can and should take to stay technologically relevant.…
 
The case is not really about cryptocurrency but about fraud, points out Seoyoung Kim, department chair and associate professor of finance and business analytics at the Leavey School of Business at Santa Clara University. But regulators and lawmakers are watching and the outcome of the trial will have repercussions throughout finance.…
 
We discuss RankVicuna, the first fully open-source LLM capable of performing high-quality listwise reranking in a zero-shot setting. While researchers have successfully applied LLMs such as ChatGPT to reranking in an information retrieval context, such work has mostly been built on proprietary models hidden behind opaque API endpoints. This approac…
 
Join Arize Co-Founder & CEO Jason Lopatecki, and ML Solutions Engineer, Sally-Ann DeLucia, as they discuss “Explaining Grokking Through Circuit Efficiency.” This paper explores novel predictions about grokking, providing significant evidence in favor of its explanation. Most strikingly, the research conducted in this paper demonstrates two novel an…
 
In this episode, we discuss the paper, “Large Content And Behavior Models To Understand, Simulate, And Optimize Content And Behavior.” This episode is …
 
Bell, an attorney, founded Ready Life to help reduce the racial wealth and homeownership gaps by showing lenders that credit-score-less consumers have been responsible with their money, based on their daily transactions. Now he and Bernice King, Martin Luther King Jr.'s daughter, are buying a community bank just outside of Salt Lake City.…
 
In this paper reading, we explore the ‘Skeleton-of-Thought’ (SoT) approach, aimed at reducing large language model latency while enhancing answer…
 
Financial institutions can trade billions of dollars per day and handle sensitive data for millions of customers across the globe, which makes them enormously attractive targets for cybercriminals. Their defenses must be top notch and ever evolving to keep up with this threat, but FIs' infrastructures are usually vast and complex, straddling old, le…
 
This episode is led by Aparna Dhinakaran (Chief Product Officer, Arize AI) and Michael Schiff (Chief Technology Officer, Arize AI), as they discuss th…
 
This episode is led by Sally-Ann DeLucia and Amber Roberts, as they discuss the paper "Lost in the Middle: How Language Models Use Long Contexts." This…
 
In this episode, hosted by AI Pub creator Brian Burns and Arize AI founders Jason Lopatecki and Aparna Dhinakaran, we talk about Orca. Recent research …
 