Ground Truth

Content provided by Arthur. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Arthur or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://player.fm/legal.
Brought to you by Arthur, the AI Performance Company, Ground Truth is a podcast featuring the brightest minds in AI, ML, and data science. Join us for engaging and thought-provoking conversations with industry leaders about everything from generative models to data privacy to fairness in AI.

4 episodes

All episodes

Jacopo Tagliabue is currently the founder of Bauplan and an adjunct professor of ML Systems at NYU. He was also the co-founder and CTO of Tooso, an NLP/IR startup in San Francisco acquired by Coveo. He led Coveo’s AI and MLOps roadmap from scale-up to IPO, and built out Coveo Labs, an applied R&D practice rooted in collaboration, open source, and open science.

Jacopo recently joined us at Arthur HQ for a chat with our Chief Scientist John Dickerson about:
- His experience from founder to acquisition through IPO
- What it’s like integrating an MLOps team into a larger company
- His new company, Bauplan
- Predictions for the future of natural language, and what’s overhyped in the space
- How recommender systems will impact the workforce, politics, and elections

Watch the video, including Jacopo’s presentation prior to the Q&A, on YouTube: https://youtu.be/EAjoTnyVGFs
Learn more about Arthur: https://www.arthur.ai
Learn more about Jacopo: https://jacopotagliabue.it/

[1:36] Tell us a bit about yourself: your path to tech, NLP, Tooso, what it was like going through an IPO, etc.
[3:21] What is it like integrating an MLOps team into a larger company? And how does MLOps scale during that scale-up phase?
[5:21] You’ve spoken a lot publicly about enabling applied research at a reasonable scale, or at a startup scale: not at Microsoft or Google scale, but at a strong company that doesn’t have the deepest pockets in the world. Would you like to share any learnings from that?
[9:37] Do you have suggestions for somebody going from zero to one building out their own MLOps stack right now?
[12:08] Were there any learnings or feedback from the community regarding RecList that you’d like to share or that impacted future versions?
[13:35] I know you’re founding a new company. Want to share a quick sentence or two on that?
[14:41] What’s hot or what’s overhyped right now in natural language? Any short- or long-term predictions?
[18:27] Audience Q&A
[18:36] How do you think about the causal effects of multiple recommendations, one after another, as in a bandit-style evaluation? Is that the kind of thing only big tech companies need to care about, or should smaller companies also think about it when evaluating their recommender systems?
[20:04] What’s the most exciting opportunity you see in recommender systems in the coming years?
[21:23] You mentioned you had a contentious opinion about using ChatGPT to do model deployment. Can you expand on that?
[22:17] What do you think about offline policy evaluation?
[24:20] We’re talking a lot about how language models will change the workforce at large and put people out of their jobs. How do you think advances in recommendations will impact larger career changes and the workforce?
[26:05] What about recommending in the political space, and how that is impacting elections, democracy, and so on?
 
Diego Oppenheimer is an entrepreneur, product developer, and investor with an extensive background in all things data. He’s currently a Partner at Factory HQ and was previously an EVP at DataRobot as well as the founder & CEO of Algorithmia.

Diego recently joined Arthur’s CEO, Adam Wenchel, for a fireside chat where they talked about:
- His founder journey at Algorithmia, which paved the way as one of the first MLOps companies
- His thoughts on the ChatGPT and LLM hype, and how he sees that playing out in the next few years
- The tooling needed to support those adopting foundation/language models
- His opinions on the term “LLMOps”
- What is currently missing from MLOps architecture that would allow people to scale their model production
- The safety risks and pros/cons of government involvement with AI

Watch the video of Diego’s fireside chat on YouTube: https://bit.ly/40oDOSe
Learn more about Arthur: https://bit.ly/40JXiC5

[1:22] As someone who has been deeply involved in the AI world for a long time, what’s your take on the current foundation model/LLM explosion? How big of a deal is it?
[2:12] Do you think the set of people who are going to be interacting with and using this technology will grow beyond the traditional ML engineers and data scientists?
[4:01] Tell us a little bit about Factory and what you all are doing.
[6:42] Before Factory, tell us a little bit more about your founder’s journey at Algorithmia.
[8:36] What were customer conversations like back in 2014, when no one was talking about MLOps?
[9:48] A lot of investment is now going toward companies that are automating currently human-based workflows through LLMs, and some are a very thin layer on top of OpenAI or another off-the-shelf foundation model. How do you see this playing out over the next 6-12 months, and where will the value be created? Which companies will become the pillars?
[16:11] How do you see the “LLMOps” ecosystem growing and extending beyond MLOps?
[19:44] In Arthur’s 4-year journey, we’ve seen customers go from having just a handful of models in production to, in some cases, hundreds—and soon enough, they’ll have tens of thousands. What, in your opinion, breaks when it comes to MLOps at this kind of scale?
[21:36] What still frustrates you about the space, and what can the community do to address some of that?
[23:19] Do you think what we’re seeing with foundation models will stay within the realm of unstructured data, or will we see it with structured data at some point as well?
[24:43] What do you think about clearing houses of shared models like HuggingFace? At what point can we stop creating so many similar models and just reuse existing ones that are close enough?
[26:19] What do you make of head-training wrappers for LLMs as a business model, basically creating domain experts for customers? Is this a short-lived trend?
[27:38] A letter came out recently from Elon Musk and some other AI folks saying we should pause training on “giant models” for the next 6 months. What are your thoughts on that, and how should we be thinking about AI safety with these models in general?
 
Joining us at Arthur HQ in this episode is Dr. Rachel Cummings, an Associate Professor of Industrial Engineering & Operations Research at Columbia University who is known for her principled and practical work in differential privacy and data privacy, spanning machine learning and public policy.

She sat down for a chat with our Chief Scientist John Dickerson about:
- Her work in differential privacy, which sits at the intersection of academia, industry, and policy
- How she got into the field of differential privacy, what she likes about it, and what she would change
- Her advice for people interested in academia, policy, or differential privacy in general
- What she’d say to present-day policymakers
- What areas of tech she thinks are overhyped and risky, and her predictions for the next decade
- Considerations around the tradeoff between privacy and accuracy

Watch the video, including Rachel’s research presentation prior to the Q&A, on YouTube: https://bit.ly/3Ki7lHc
Learn more about Arthur: https://bit.ly/40JXiC5
Learn more about Rachel: https://bit.ly/40Ml6oE

[2:48] Tell us more about yourself. What led you to Columbia and to economics, ML, and differential privacy?
[5:11] What advice do you have for someone who wants to move into differential privacy and privacy research writ large, whether with an academic/research focus or a policy/communications focus?
[7:15] Do you have any advice for present-day policymakers?
[8:52] If you could change one thing about the field, what would it be?
[11:00] What are your predictions for the next 1-3 years? What’s overhyped right now? Do you have some hope for breakthroughs? Where can we see business impact or policy impact immediately?
[12:42] What are your predictions for the next 10 years? What are the biggest opportunities or things you would focus on if you were just getting into the field?
[14:57] What are some risks or things to look out for?
Audience Q&A
[16:21] Differential privacy’s goal is to avoid having to reason about background knowledge or prior beliefs, but people think about privacy in a manner that requires reasoning about priors. Is it possible that differential privacy is just not compatible with what people want from privacy?
[20:39] How do you see the areas of cryptography and differential privacy cooperating in the future?
[22:48] When you were structuring your research with your collaborators, were there particular industry verticals or companies that you had in mind as potential adopters, or that you would be interested in partnering with as the research progresses further?
 
Welcome to Ground Truth, a new series featuring the brightest minds in AI, ML, and data science. We’re Arthur, the AI Performance Company. In this podcast, we’ll introduce you to the leaders, the builders, and the movers & shakers in both industry and academia by sharing insights from the Ground Truth events we host at our headquarters in New York City. In each episode, Arthur’s CEO Adam Wenchel, Chief Scientist John Dickerson, or Strategic Account Director Victoria Vassileva will invite the field’s leaders and luminaries to talk about topics ranging from generative models to data privacy to fairness and ethics in AI.

Learn more about Arthur: https://www.arthur.ai
Follow Arthur on Twitter: https://www.twitter.com/itsArthurAI
Follow Arthur on LinkedIn: https://www.linkedin.com/company/arthurai
 