The New Stack Podcast
Content provided by The New Stack Podcast and The New Stack. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by The New Stack Podcast and The New Stack or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://player.fm/legal.
The New Stack Podcast is all about the developers, software engineers and operations people who build at-scale architectures that change the way we develop and deploy software. For more content from The New Stack, subscribe on YouTube at: https://www.youtube.com/c/TheNewStack
301 episodes
All episodes
AI agents are set to transform software development, but software itself isn’t going anywhere—despite the dramatic predictions. On this episode of The New Stack Makers, Mark Hinkle, CEO and Founder of Peripety Labs, discusses how AI agents relate to serverless technologies, infrastructure-as-code (IaC), and configuration management. Hinkle envisions AI agents as “dumb robots” handling tasks like querying APIs and exchanging data, while the real intelligence remains in large language models (LLMs). These agents, likely implemented as serverless functions in Python or JavaScript, will automate software development processes dynamically. LLMs, leveraging vast amounts of open-source code, will enable AI agents to generate bespoke, task-specific tools on the fly—unlike traditional cloud tools from HashiCorp or configuration management tools like Chef and Puppet. As AI-generated tooling becomes more prevalent, managing and optimizing these agents will require strong observability and evaluation practices. According to Hinkle, this shift marks the future of software, where AI agents dynamically create, call, and manage tools for CI/CD, monitoring, and beyond. Check out the full episode for more insights.

Learn more from The New Stack about emerging trends in AI agents:
- Lessons From Kubernetes and the Cloud Should Steer the AI Revolution
- AI Agents: Why Workflows Are the LLM Use Case to Watch

Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
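Hinkle’s framing—agents as “dumb robots” that query APIs while the LLM holds the intelligence—maps to a simple loop. Below is a minimal, hypothetical Python sketch of that pattern; the `chat` client, its tool-call fields, and the `get_weather` tool are illustrative assumptions, not any specific vendor’s API.

```python
import json

# Hypothetical chat client; any LLM API with tool/function calling works here.
from my_llm_client import chat

# The "dumb robot" part: plain functions that query APIs or exchange data.
TOOLS = {"get_weather": lambda city: {"city": city, "temp_c": 21}}

def run_agent(task: str) -> str:
    """Loop until the LLM (the intelligent part) stops requesting tools."""
    messages = [{"role": "user", "content": task}]
    while True:
        reply = chat(messages, tools=list(TOOLS))  # assumed signature
        if reply.tool_call is None:
            return reply.content  # the model answered directly; we're done
        args = json.loads(reply.tool_call.arguments)
        result = TOOLS[reply.tool_call.name](**args)  # the agent does the legwork
        messages.append({"role": "tool", "content": json.dumps(result)})
```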
The transition from SaaS to Services as Software with AI agents is underway, necessitating new orchestration methods similar to Kubernetes for containers. AI agents will require resource allocation, workflow management, and scalable infrastructure as they evolve. Two key trends are driving this shift:
- Data Evolution: From spreadsheets to AI agents, data has progressed through relational databases, big data, predictive analytics, and generative AI.
- Computing Evolution: Starting from mainframes, the journey has moved through desktops, client-server, web/mobile, SaaS, and now agentic workflows.
Janakiram MSV, an analyst, notes on this episode of The New Stack Makers that SaaS depends on data—without it, platforms like Salesforce and SAP lack value. As data becomes more actionable and compute more agentic, a new paradigm emerges: Services as Software. AI agents will automate tasks previously requiring human intervention, like emails and sales follow-ups. However, orchestrating them will be complex, akin to Kubernetes managing containers. Unlike deterministic containers, AI agents depend on dynamic, trained data, posing new enterprise challenges in memory management and infrastructure.

Learn more from The New Stack about the evolution to AI agents:
- How AI Agents Are Starting To Automate the Enterprise
- Can You Trust AI To Be Your Data Analyst?
- Agentic AI is the New Web App, and Your AI Strategy Must Evolve

Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
Amazon Q Developer is streamlining the software development lifecycle by integrating AI-powered tools into AWS. In an interview at AWS in Seattle, Srini Iragavarapu, director of generative AI Applications and Developer Experiences at AWS, discussed how Amazon Q Developer enhances the developer experience. Initially focused on inline code completions, Amazon Q Developer evolved by incorporating generative AI models like Amazon Nova and Anthropic models, improving recommendations and accelerating development. British Telecom reported a 37% acceptance rate for AI-generated code. Beyond code completion, Amazon Q Developer enables developers to interact with Q for code reviews, test generation, and migrations. AWS also developed agentic frameworks to automate undifferentiated tasks, such as upgrading Java versions. Iragavarapu noted that internally, AWS used Q Developer to migrate 30,000 production applications, saving $260 million annually. The platform offers code generation, testing suites, RAG capabilities, and access to AWS custom chips, further flattening the SDLC by automating routine work. Listen to The New Stack Makers for the full discussion.

Learn more from The New Stack about Amazon Q Developer:
- Amazon Q Developer Now Handles Your Entire Code Pipeline
- Amazon Q Apps: AI-Powered Development for All
- Amazon Revamps Developer AI With Code Conversion, Security

Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
OAuth Works for AI Agents but Scaling is Another Question (25:36)
Maya Kaczorowski noticed that AI identity and AI agent identity concerns were emerging from outside the security industry, rather than from CISOs and security leaders. She concluded that OAuth, the open standard for authorization, already serves the purpose of granting access without exposing passwords. Kaczorowski, a respected technologist and founder of Oblique, a startup focused on self-serve access controls, recently wrote about OAuth and AI agents and shared her insights on this episode of The New Stack Makers. She noted that developers see AI agents as extensions of themselves, granting them limited access to data and capabilities—precisely what OAuth is designed to handle. The challenges with AI agent identity are vast, involving different approaches to authentication, such as those explored by companies like AuthZed. While existing authorization models like RBAC or ABAC may still apply, the real challenge lies in scale. The exponential growth of AI-related entities—from users to LLMs—could mean even small organizations manage hundreds of thousands of agents. Future solutions must accommodate this massive scale efficiently. For the full discussion, check out The New Stack Makers interview with Kaczorowski.

Learn more from The New Stack about OAuth requirements for AI agents:
- OAuth 2.0: A Standard in Name Only?
- AI Agents Are Redefining the Future of Identity and Access Management

Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
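The mechanics Kaczorowski describes are plain OAuth: the agent obtains a token scoped to a narrow capability and never handles a user’s password. A minimal sketch using the standard OAuth 2.0 client-credentials grant follows; the endpoints, client ID, and scope name are hypothetical.

```python
import requests

TOKEN_URL = "https://auth.example.com/oauth/token"  # hypothetical issuer

# Standard client-credentials grant: the agent is its own client,
# asking only for the narrow scope it needs.
resp = requests.post(TOKEN_URL, data={
    "grant_type": "client_credentials",
    "client_id": "calendar-agent",       # hypothetical agent identity
    "client_secret": "from-secret-store",  # never hard-coded in practice
    "scope": "calendar.read",            # limited capability, not full access
})
resp.raise_for_status()
token = resp.json()["access_token"]

# The agent calls the API with the scoped token; no password changes hands.
events = requests.get(
    "https://api.example.com/v1/events",  # hypothetical resource server
    headers={"Authorization": f"Bearer {token}"},
)
```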
LLMs and AI Agents Evolving Like Programming Languages (28:08)
The rise of the World Wide Web enabled developers to build tools and platforms on top of it. Similarly, the advent of large language models (LLMs) allows for creating new AI-driven tools, such as autonomous agents that interact with LLMs, execute tasks, and make decisions. However, verifying these decisions is crucial, and critical reasoning may be a solution, according to Yam Marcovitz, tech lead at Parlant.io and CEO of emcie.co. Marcovitz likens LLM development to the evolution of programming languages, from punch cards to modern languages like Python. Early LLMs started with small transformer models, leading to systems like BERT and GPT-3. Now, instead of mere text auto-completion, models are evolving to enable better reasoning and complex instructions. Parlant uses “attentive reasoning queries” (ARQs) to maintain consistency in AI responses, ensuring near-perfect accuracy. Their approach balances structure and flexibility, preventing models from operating entirely autonomously. Ultimately, Marcovitz argues that subjectivity in human interpretation extends to LLMs, making perfect objectivity unrealistic.

Learn more from The New Stack about the evolution of LLMs:
- AI Alignment in Practice: What It Means and How to Get It
- Agentic AI: The Next Frontier of AI Power
- Make the Most of AI Agents: Tips and Tricks for Developers

Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
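Parlant’s ARQ internals aren’t spelled out here, but the general idea—posing structured checks to the model before it commits to a response—can be illustrated with a short, hypothetical sketch. The `llm.ask` helper is assumed; this is not Parlant’s implementation.

```python
# Generic illustration of pre-response reasoning checks, in the spirit of
# attentive reasoning queries. Not Parlant's actual implementation.
CHECKS = [
    "Which of the configured guidelines applies to this request?",
    "Does the draft response contradict any earlier instruction?",
]

def guided_reply(llm, user_msg: str) -> str:
    # Ask each structured check first, keeping the model's attention on them.
    findings = [llm.ask(f"{q}\nUser message: {user_msg}") for q in CHECKS]
    final_prompt = "\n".join(
        ["Check results:", *findings, f"Now respond to: {user_msg}"]
    )
    return llm.ask(final_prompt)  # `llm.ask` is a hypothetical helper
```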
Writing Code About Your Infrastructure? That's a Losing Race (31:21)
Adam Jacob, CEO of System Initiative, discusses a shift in infrastructure automation—moving from writing code to building models that enable rapid simulations and collaboration. In The New Stack Makers, he compares this approach to Formula One racing, where teams use high-fidelity models to simulate race conditions, optimizing performance before hitting the track. System Initiative applies this concept to enterprise automation, creating a model that understands how infrastructure components interact. This enables fast, multiplayer feedback loops, simplifying complex tasks while enhancing collaboration. Engineers can extend the system by writing small, reactive JavaScript functions that automate processes, such as transforming existing architectures into new ones. The platform visually represents these transformations, making automation more intuitive and efficient. By leveraging models instead of traditional code-based infrastructure management, System Initiative enhances agility, reduces complexity, and improves DevOps collaboration. To explore how this ties into the concept of the digital twin, listen to the full New Stack Makers episode.

Learn more from The New Stack about System Initiative:
- Beyond Infrastructure as Code: System Initiative Goes Live
- How System Initiative Treats AWS Components as Digital Twins
- System Initiative Code Now Open Source

Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
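System Initiative’s real extension functions are JavaScript, and its API isn’t shown here; as a rough, hypothetical Python analogue, a “reactive” function derives one component’s model from another, so the simulated model stays consistent whenever an input changes.

```python
# Hypothetical analogue of a small reactive function over an infrastructure
# model: when the VPC component changes, the derived subnet model updates too.
def derive_subnet(vpc: dict) -> dict:
    return {
        "vpc_id": vpc["id"],
        "cidr": vpc["cidr"].replace("/16", "/24"),  # carve a smaller range
        "region": vpc["region"],
    }

vpc = {"id": "vpc-123", "cidr": "10.0.0.0/16", "region": "us-east-1"}
subnet = derive_subnet(vpc)  # re-run whenever the model reports a VPC change
```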
OpenTelemetry: What’s New with the 2nd Biggest CNCF Project? (30:14)
Morgan McLean, co-founder of OpenTelemetry and senior director of product management at Splunk, has long tackled the challenges of observability in large-scale systems. In a conversation with Alex Williams on The New Stack Makers, McLean reflected on his early frustrations debugging high-scale services and the need for better observability tools. OpenTelemetry, formed in 2019 from OpenTracing and OpenCensus, has since become a key part of modern observability strategies. As a Cloud Native Computing Foundation (CNCF) incubating project, it’s the second most active open source project after Kubernetes, with over 1,200 developers contributing monthly. McLean highlighted OpenTelemetry’s role in solving scaling challenges, particularly in Kubernetes environments, by standardizing distributed tracing, application metrics, and data extraction. Looking ahead, profiling is set to become the fourth major observability signal alongside logs, tracing, and metrics, with general availability expected in 2025. McLean emphasized ongoing improvements, including automation and ease of adoption, predicting even faster OpenTelemetry adoption as friction points are resolved.

Learn more from The New Stack about the latest trends in OpenTelemetry:
- What Is OpenTelemetry? The Ultimate Guide
- Observability in 2025: OpenTelemetry and AI to Fill In Gaps
- Honeycomb.io’s Austin Parker: OpenTelemetry In-Depth

Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
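Of the signals McLean lists, tracing and metrics are already usable through the OpenTelemetry API today. A minimal Python sketch, assuming the `opentelemetry-api` package and a separately configured SDK/exporter (the service and span names are illustrative):

```python
from opentelemetry import metrics, trace

tracer = trace.get_tracer("checkout-service")
meter = metrics.get_meter("checkout-service")
orders_counter = meter.create_counter("orders_processed")

def handle_order(order_id: str) -> None:
    # One span per request: this is the distributed-tracing signal.
    with tracer.start_as_current_span("handle_order") as span:
        span.set_attribute("order.id", order_id)
        orders_counter.add(1)  # and the metrics signal
```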
What’s Driving the Rising Cost of Observability? (24:55)
Observability is expensive because traditional tools weren’t designed for the complexity and scale of modern cloud-native systems, explains Christine Yen, CEO of Honeycomb.io. Logging tools, while flexible, were optimized for manual, human-scale data reading. This approach struggles with the massive scale of today’s software, making logging slow and resource-intensive. Monitoring tools, with their dashboards and metrics, prioritized speed over flexibility, which doesn’t align with the dynamic nature of containerized microservices. Similarly, traditional APM tools relied on “magical” setups tailored for consistent application environments like Rails, but they falter in modern polyglot infrastructures with diverse frameworks. Additionally, observability costs are rising due to evolving demands from DevOps, platform engineering, and site reliability engineering (SRE). Practices like service-level objectives (SLOs) emphasize end-user experience, pushing teams to track meaningful metrics. However, outdated observability tools often hinder this, forcing teams to cut back on crucial data. Yen highlights the potential of AI and innovations like OpenTelemetry to address these challenges.

Learn more from The New Stack about the latest trends in observability:
- Honeycomb.io’s Austin Parker: OpenTelemetry In-Depth
- Observability in 2025: OpenTelemetry and AI to Fill In Gaps
- Observability and AI: New Connections at KubeCon

Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
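The SLO practice Yen mentions comes down to simple arithmetic: a target implies an error budget, and tracking consumption against it tells a team when to slow releases. A small worked example with made-up numbers:

```python
# Hypothetical month of traffic against a 99.9% availability SLO.
slo_target = 0.999
total_requests = 10_000_000
failed_requests = 4_200

error_budget = (1 - slo_target) * total_requests  # 10,000 allowed failures
budget_consumed = failed_requests / error_budget  # 0.42 -> 42% of budget used

print(f"Error budget consumed: {budget_consumed:.0%}")  # -> 42%
```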
How Oracle Is Meeting the Infrastructure Needs of AI (27:28)
Generative AI is a data-driven story with significant infrastructure and operational implications, particularly around the rising demand for GPUs, which are better suited for AI workloads than CPUs. In an episode of The New Stack Makers recorded at KubeCon + CloudNativeCon North America, Sudha Raghavan, SVP for Developer Platform at Oracle Cloud Infrastructure, discussed how AI’s rapid adoption has reshaped infrastructure needs. The release of ChatGPT triggered a surge in GPU demand, with organizations requiring GPUs for tasks ranging from testing workloads to training large language models across massive GPU clusters. These workloads run continuously at peak power, posing challenges such as high hardware failure rates and energy consumption. Oracle is addressing these issues by building GPU superclusters and enhancing Kubernetes functionality. Tools like Oracle’s Node Manager simplify interactions between Kubernetes and GPUs, providing tailored observability while maintaining Kubernetes’ user-friendly experience. Raghavan emphasized the importance of stateful job management and infrastructure innovations to meet the demands of modern AI workloads.

Learn more from The New Stack about how Oracle is addressing the GPU demand for AI workloads with its GPU superclusters and enhanced Kubernetes functionality:
- Oracle Code Assist, Java-Optimized, Now in Beta
- Oracle’s Code Assist: Fashionably Late to the GenAI Party
- Oracle Unveils Java 23: Simplicity Meets Enterprise Power

Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
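The Kubernetes-GPU plumbing behind clusters like these starts with the standard device-plugin resource: a pod requests GPUs the same way it requests CPU or memory. A minimal sketch with the official Kubernetes Python client; the image name and GPU count are hypothetical, and this illustrates the general mechanism rather than Oracle’s Node Manager specifically.

```python
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() inside a cluster

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="gpu-training-job"),
    spec=client.V1PodSpec(
        restart_policy="Never",
        containers=[client.V1Container(
            name="trainer",
            image="registry.example.com/llm-train:latest",  # hypothetical image
            # The NVIDIA device plugin exposes GPUs as a schedulable resource.
            resources=client.V1ResourceRequirements(
                limits={"nvidia.com/gpu": "4"},
            ),
        )],
    ),
)
client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```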
Arm: See a Demo About Migrating an x86-Based App to ARM64 (21:28)
The hardware industry is surging, driven by AI’s demanding workloads, with Arm—a 35-year-old pioneer in processor IP—playing a pivotal role. In an episode of The New Stack Makers recorded at KubeCon + CloudNativeCon North America, Pranay Bakre, principal solutions engineer at Arm, discussed how Arm is helping organizations migrate and run applications on its technology. Bakre highlighted Arm’s partnership with hyperscalers like AWS, Google, Microsoft, and Oracle, showcasing processors such as AWS Graviton and Google Axion, built on Arm’s power-efficient, cost-effective Neoverse IP. This design ethos has spurred wide adoption, with 90-95% of CNCF projects supporting native Arm binaries. Attendees at Arm’s booth frequently inquired about its plans to support AI workloads. Bakre noted the performance advantages of Arm-based infrastructure, delivering up to 60% workload improvements over legacy architectures. The episode also features a demo on migrating x86 applications to ARM64 in both cloud and containerized environments, emphasizing Arm’s readiness for the AI era.

Learn more from The New Stack about Arm:
- Arm Eyes AI with Its Latest Neoverse Cores and Subsystem
- Big Three in Cloud Prompts ARM to Rethink Software

Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
Heroku Moved Twelve-Factor Apps to Open Source. What’s Next? (22:54)
Heroku has open-sourced its Twelve-Factor App methodology, initially created in 2011 to help developers build portable, resilient cloud applications. Heroku CTO Gail Frederick announced this shift at KubeCon + CloudNativeCon North America, explaining the move aims to involve the community in modernizing the framework. While the methodology inspired a generation of cloud developers, certain factors are now outdated, such as the focus on logs as event streams. Frederick highlighted the need for updates to address current practices like telemetry and metrics visualization, reflecting the rise of OpenTelemetry. The updated Twelve-Factor methodology will expand to accommodate modern cloud-native realities, such as deploying interconnected systems of apps with diverse backing services. Planned enhancements include supporting documents, reference architectures, and code examples illustrating the principles in action. Success will be measured by its applicability to use cases involving edge computing, IoT, serverless, and distributed systems. Heroku views this open-source effort as an opportunity to redefine best practices for the next era of cloud development.

Learn more from The New Stack about Heroku:
- How Heroku Is Positioned To Help Ops Engineers in the GenAI Era
- The Data Stack Journey: Lessons from Architecting Stacks at Heroku and Mattermost

Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
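One factor that has aged well is storing config in the environment (factor III), which is what keeps a build portable across the deploy targets Frederick mentions. A minimal sketch; the variable names are illustrative:

```python
import os

# Factor III: config lives in the environment, not in the codebase,
# so the same build runs unchanged in dev, staging, and production.
DATABASE_URL = os.environ["DATABASE_URL"]        # required; fail fast if absent
LOG_LEVEL = os.environ.get("LOG_LEVEL", "info")  # optional, with a default
```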
How Falco Brought Real-Time Observability to Infrastructure (19:27)
Falco, an open-source runtime observability and security tool, was created by Sysdig founder Loris Degioanni to collect real-time system events directly from the kernel. Leveraging eBPF technology for improved safety and performance, Falco gathers data like pod names and namespaces, correlating them with customizable rules. Unlike static analysis tools, it operates in real time, monitoring events as they occur. In this episode of The New Stack Makers, TNS Editor-in-Chief Heather Joslyn spoke with Thomas Labarussias, Senior Developer Advocate at Sysdig; Leonardo Grasso, Open Source Tech Lead Manager at Sysdig; and Luca Guerra, Sr. Open Source Engineer at Sysdig, to get the latest update on Falco. Graduating from the Cloud Native Computing Foundation (CNCF) in February 2024 after entering its sandbox six years prior, Falco’s maintainers have focused on technical maturity and broad usability. This includes simplifying installations across diverse environments, thanks in part to advancements from the Linux Foundation. Looking ahead, the team is enhancing core functionalities, including more customizable rules and alert formats. A key innovation is Falco Talon, introduced in September 2023, which provides a no-code response engine to link alerts with real-time remediation actions. Talon addresses a longstanding gap in automating responses within the Falco ecosystem, advancing its capabilities for runtime security.

Learn more from The New Stack about Falco:
- Falco Is a CNCF Graduate. Now What?
- Falco Plugins Bring New Data Sources to Real-Time Security
- eBPF Tools: An Overview of Falco, Inspektor Gadget, Hubble and Cilium

Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
How cert-manager Got to 500 Million Downloads a Month (23:18)
Jetstack’s cert-manager, a leading open-source project in Kubernetes certificate management, began as a job interview challenge. Co-founder Matt Barker recalls asking a prospective engineer to automate Let’s Encrypt within Kubernetes. By Monday, the candidate had created kube-lego, which evolved into cert-manager, now downloaded over 500 million times monthly. Cert-manager’s journey to CNCF graduation, achieved in September, began with its donation to the foundation four years ago. Relaunched as cert-manager, the project grew under engineer James Munnelly, becoming the de facto standard for certificate lifecycle management. The thriving community and ecosystem around cert-manager highlighted its suitability for CNCF stewardship. However, maintainers, including Ashley Davis, noted challenges in navigating differing opinions within its vast user base. With graduation achieved, cert-manager’s roadmap includes sub-projects like trust-manager, addressing TLS trust bundle management and Istio integration. Barker aims to streamline enterprise-scale deployments and educate security teams on cert-manager’s impact. Cert-manager has become integral to cloud-native workflows, promising to simplify hybrid, multicloud, and edge deployments.

Learn more from The New Stack about cert-manager:
- Jetstack’s cert-manager Joins the CNCF Sandbox of Cloud Native Technologies
- Jetstack Secure Promises to Ease Kubernetes TLS Security

Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
Why Are So Many Developers Out of Work in 2024? (21:10)
The tech industry faces a paradox: despite high demand for skills, many developers and engineers are unemployed. At KubeCon + CloudNativeCon North America in Salt Lake City, Utah, Andela and the Cloud Native Computing Foundation (CNCF) announced an initiative to train 20,000 technologists in cloud native computing over the next decade. Ross O'Neill, Senior Program Manager at Andela, and Chris Aniszczyk, CNCF’s CTO, highlighted the lack of Kubernetes-certified professionals in regions like Africa and emphasized the need for global inclusivity to make cloud native technology ubiquitous. Andela, operating in over 135 countries and founded in Nigeria, views this program as a continuation of its mission to upskill African talent, aligning with its partnerships with tech giants like Google, AWS, and Nvidia. This initiative also addresses the increasing employer demand for Kubernetes and modern cloud skills, reflecting a broader skills mismatch in the tech workforce. Aniszczyk noted that companies urgently seek expertise in cloud native infrastructure, observability, and platform engineering. The partnership aims to bridge these gaps, offering opportunities to meet evolving global tech needs.

Learn more from The New Stack about developer talent, skills and needs:
- Top Developer Skills for AI and Cloud Jobs
- 5 Software Development Skills AI Will Render Obsolete
- Cloud Native Skill Gaps are Killing Your Gains

Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
MapLibre: How a Fork Became a Thriving Open Source Project (25:50)
When open source projects shift to proprietary licensing, forks and new communities often emerge. Such was the case with MapLibre, born from Mapbox’s 2020 decision to make its map rendering engine proprietary. In conjunction with All Things Open 2024, Seth Fitzsimmons, a principal engineer at AWS, and Tarus Balog, principal technical strategist for open source at AWS, shared that this engine, popular for its WebGL-powered vector maps and dynamic customization features, was essential for organizations like BMW, The New York Times, and Instacart. However, Mapbox’s move disappointed its open-source user base by tying the upgraded Mapbox GL JS library to proprietary products. In response, three users forked the engine to create MapLibre, committing to modernizing and preserving its open-source ethos. Despite challenges—forking often struggles to sustain momentum—MapLibre has thrived, supported by contributors and corporate sponsors like AWS, Meta, and Microsoft. Notably, a community member transitioned the project from JavaScript to TypeScript over nine months, showcasing the dedication of unpaid contributors. Thanks to financial backing, MapLibre now employs maintainers, enabling it to reciprocate community efforts while fostering equality among participants. The project illustrates the resilience of open-source communities when proprietary shifts occur.

Learn more from The New Stack about forking open source projects:
- Why Do Open Source Projects Fork?
- OpenSearch: How the Project Went From Fork to Foundation

Join our community of newsletter subscribers to stay on top of the news and at the top of your game.