 
What if technology could understand people in the same way that people understand one another? Tune in as Affectiva, the pioneer of Emotion AI, endeavors to humanize technology as a Smart Eye company. The Human-Centric AI podcast dissects how we can put the human before the artificial as AI manifests in our daily lives, with insights from the world’s top thinkers in automotive, market research, aviation, robotics, education, academia and beyond.
 
The Know Thyself Podcast is a place to dive deep into the perennial questions of life, such as “Who am I?”, “Why am I here?” and “What is my purpose?” Each week the host André Duqum connects with various teachers, spiritual leaders, doctors, heart-led creators and storytellers on topics such as the true nature of ‘Self’, consciousness, philosophy, health optimization, and personal growth. These conversations are aimed at supporting individuals on their awakening journey, reducing human suffering ...
 
All about AI, startups, and the future: discussing topics that range from technology (AI, IoT, Big Data) to technology's impact on humans (work, play, culture) and the future of everything in any sector: retail, banking, technology, hiring, and more. We are always looking for innovators like you to interview for our weekly podcast. Let us know if you have any stories on AI, disruption, or the future that you would like to share (happy or unfortunate). Support this podcast: https://podcasters.spo ...
 
A weekly wrap of the “must-know” developments in Marketing, Media, Agency and Technology for leaders and emerging leaders in the industry. Veteran industry journalist and Mi3 Executive Editor Paul McIntyre talks each week with guest marketers who are in the know on what matters at the nexus of marketing, agencies, media and technology. Powered mostly by Human Intelligence (HI).
 
The podcast presents valuable insights from contact center leaders, tailor-made for their industry peers. We cover a diverse array of topics, including AI integration, agent turnover management, revenue impact assessment, and shifting perceptions of contact centers from cost centers to value centers.
 
Dive into “Compromising Positions”, the unique, new podcast designed to iron out the wrinkles in the relationship between cybersecurity teams and other tech professionals. We’re taking the ‘security as a blocker’ stereotype head-on, promoting a shared language and mutual understanding. We’ll turn those ‘compromising positions’ into ‘compromising solutions’, helping security pros and tech teams collaborate more effectively for a smoother, safer digital journey. Every week we will be joined by ...
 
The AI in Automotive podcast is a platform for conversations about the rapidly growing role of Artificial Intelligence and Machine Learning in the automotive and mobility industries. Host Jayesh Jagasia speaks to experts in the domain for free-wheeling conversations on how AI is shaping the future of the automotive industry.
 
Running out of time to catch up with new arXiv papers? We take the most impactful papers and present them as convenient podcasts. If you're a visual learner, we offer these papers in an engaging video format. Our service fills the gap between overly brief paper summaries and time-consuming full paper reads. You gain academic insights in a time-efficient, digestible format. Code behind this work: https://github.com/imelnyk/ArxivPapers Support this podcast: https://podcasters.spotify.com/pod/s ...
 
From Our Neurons to Yours
Wu Tsai Neurosciences Institute at Stanford University, Nicholas Weiler
 
From Our Neurons to Yours is a show that crisscrosses scientific disciplines to bring you to the frontiers of brain science, produced by the Wu Tsai Neurosciences Institute at Stanford University. Each week, we ask leading scientists to help us understand the three pounds of matter within our skulls and how new discoveries, treatments, and technologies are transforming our relationship with the brain.
 
80 Level Podcast is an episodic show for game developers, digital artists, animators, video game enthusiasts, CGI and VFX specialists. Join us to learn about new workflows, discuss new tools and share your work.
 
Interviews with activists, social scientists, entrepreneurs and change-makers about the most effective strategies to expand humanity’s moral circle, with an emphasis on expanding the circle to farmed animals. Host Jamie Harris, a researcher at moral expansion think tank Sentience Institute, takes a deep dive with guests into advocacy strategies from political initiatives to corporate campaigns to technological innovation to consumer interventions, and discusses advocacy lessons from history, ...
 
 
Today we are joined by top physicist and inventor of the microprocessor & touch screen, Federico Faggin, for an intriguing conversation into the nature of reality. Federico once had a materialistic scientific perspective on consciousness and reality until one day a spontaneous spiritual awakening changed his perspective forever. In this episode he …
 
In this episode, Chris dives into the controversial and thought-provoking topic of AI companions—AI boyfriends, girlfriends, and friends—discussing the growing trend and public reactions surrounding them. He questions why AI companions are often seen as a terrible thing, drawing parallels to past phenomena, such as people forming emotional bonds wi…
 
This paper explores the correlation between learning rate, batch size, and training tokens, proposing a new Power scheduler that optimizes performance across various model sizes and architectures. https://arxiv.org/abs//2408.13359 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcasts: https://podc…
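For intuition, a power-law learning-rate schedule can be sketched in a few lines (a toy illustration only: the warmup handling, exponent, and constants below are our own assumptions, not the paper's actual Power scheduler):

```python
def power_lr(step, base_lr=3e-4, warmup=1000, alpha=0.5):
    """Toy power-law learning-rate schedule: linear warmup to base_lr,
    then decay proportional to step**(-alpha). All constants are
    illustrative, not taken from the paper."""
    if step < warmup:
        return base_lr * (step + 1) / warmup
    return base_lr * (warmup / step) ** alpha
```

With alpha = 0.5, quadrupling the step count after warmup halves the learning rate, which is the kind of scale-free behavior a power schedule is meant to provide.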
 
This paper presents a quantitative law governing contextualized token embeddings in LLMs, revealing equal contributions from all layers to prediction accuracy, enhancing understanding and guiding LLM development practices. https://arxiv.org/abs//2408.13442 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Ap…
 
Michael Fulvio, Director of Customer Experience at SNIPES, advises against the usual focus on Customer Satisfaction (CSAT) as a KPI. He stresses using diverse data points like incident rates, first reply times, and fulfillment metrics to address customer issues, improve operations, and drive revenue, thus enhancing the customer experience. By broad…
 
The effectiveness “revolution” is colliding with the AI-spawned efficiency uprising and it’s leaping the early consensus AI use cases in marketing around automating personalised content and communications. So much so Mark Ritson choked on his Wellfleet oysters when Jon Lombardo and Peter Weinberg told him they were leaving top jobs at the LinkedIn-…
 
This paper presents a framework using a small language model for initial hallucination detection, followed by a large language model for detailed explanations, optimizing real-time interpretable detection. https://arxiv.org/abs//2408.12748 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcasts: htt…
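The two-stage pattern this summary describes can be sketched generically (the scoring and explaining callables below are hypothetical stand-ins, not the paper's models):

```python
def detect_hallucination(claim, small_score, large_explain, threshold=0.5):
    """Two-stage screening: a cheap scorer flags suspect claims, and only
    flagged claims are sent to an expensive explainer. Both callables are
    stand-ins for a small and a large language model."""
    score = small_score(claim)  # fast first pass over every claim
    if score < threshold:
        return {"claim": claim, "hallucinated": False, "explanation": None}
    # costly second pass, invoked only for flagged claims
    return {"claim": claim, "hallucinated": True,
            "explanation": large_explain(claim)}
```

The design point is latency: the large model runs only on the small model's positives, so typical inputs pay only the cheap scorer's cost.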
 
This study explores how diffusion models learn compositional representations through controlled experiments, revealing their ability to encode features but limited interpolation over unseen values, enhancing training efficiency. https://arxiv.org/abs//2408.13256 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_pap…
 
FERRET enhances adversarial prompt generation for large language models, improving attack success rates and efficiency over RAINBOW TEAMING while ensuring effective prompts across various model sizes. https://arxiv.org/abs//2408.10701 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcasts: https://…
 
AiM is an autoregressive image generative model using Mamba architecture, achieving superior quality and speed in image generation while maintaining efficient long-sequence modeling capabilities. https://arxiv.org/abs//2408.12245 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcasts: https://podca…
 
The paper investigates LLMs' challenges with real-world tabular data, proposing the TableBench benchmark and TABLELLM model, highlighting significant gaps between academic performance and industrial application. https://arxiv.org/abs//2408.09174 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcast…
 
FocusLLM enhances decoder-only LLMs by efficiently processing long contexts, improving performance on long-context tasks while reducing training costs and maintaining strong language modeling capabilities. https://arxiv.org/abs//2408.11745 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcasts: htt…
 
Sapiens is a versatile model family for human-centric vision tasks, achieving state-of-the-art performance through self-supervised pretraining and scalable design, excelling in pose estimation, segmentation, depth, and normal prediction. https://arxiv.org/abs//2408.12569 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@…
 
Show-o is a unified transformer model that integrates multimodal understanding and generation, outperforming existing models in various vision-language tasks while supporting diverse input-output modalities. https://arxiv.org/abs//2408.12528 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcasts: h…
 
Jamba-1.5 introduces instruction-tuned large language models with high throughput, low memory usage, and extensive context length, outperforming competitors while being publicly available under an open model license. https://arxiv.org/abs//2408.12570 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Po…
 
Hermes 3 is a neutrally-aligned instruct-tuned model with strong reasoning and creativity, achieving state-of-the-art performance on benchmarks, with weights available on Hugging Face. https://arxiv.org/abs//2408.11857 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcasts: https://podcasts.apple.c…
 
In this episode of thinkfuture, Chris broadcasts from Greece and dives into the fascinating idea of AI-generated comedy. With AI increasingly being used in creative fields, Chris discusses why it might be surprising to some that AI could also take on comedy—a field traditionally seen as distinctly human. He explains that since AI is essentially bui…
 
Marketing mix modelling (MMM) only works if brands grant their agencies access to critical business data – and many don’t, in a perplexing and decades-long challenge. But equally, agencies can be guilty of being slow to feed media pricing and audience data into their clients' MMM models, rounding out the two-way data conundrum. It’s ironic given all the talk of …
 
https://arxiv.org/abs//2408.11796 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcasts: https://podcasts.apple.com/us/podcast/arxiv-papers/id1692476016 Spotify: https://podcasters.spotify.com/pod/show/arxiv-papers --- Support this podcast: https://podcasters.spotify.com/pod/show/arxiv-papers/supp…
 
This paper explores spectral dynamics of weights in deep learning, revealing optimization biases, enhancing weight decay effects, and distinguishing between memorizing and generalizing networks across various tasks. https://arxiv.org/abs//2408.11804 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Pod…
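Measuring a weight matrix's singular-value spectrum, the basic quantity behind this kind of analysis, is straightforward (a minimal sketch; the `effective_rank` summary and its tolerance are our own illustrative choices, not the paper's metrics):

```python
import numpy as np

def singular_spectrum(weight):
    """Return the singular values of a weight matrix, largest first.
    Tracking how this spectrum evolves over training is the kind of
    measurement a spectral-dynamics analysis relies on."""
    return np.linalg.svd(np.asarray(weight, dtype=float), compute_uv=False)

def effective_rank(weight, tol=1e-6):
    """Count singular values above tol * largest: a crude summary of how
    concentrated (low-rank) the spectrum is."""
    s = singular_spectrum(weight)
    return int(np.sum(s > tol * s[0]))
```

Logging `effective_rank` per layer per epoch, for example, makes it easy to see whether training drives weights toward low-rank structure.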
 
This episode, we’re heading back into the vaults to bring you the unabridged version of our fantastic and extremely popular interview with Bec McKeown, a chartered psychologist with extensive experience in carrying out applied research for organisations including the UK Ministry of Defence and the founder and director of Mind Science, an independent…
 
The paper challenges the Linear Representation Hypothesis, showing that gated recurrent neural networks encode token sequences using magnitude rather than direction, suggesting broader interpretability in neural network research. https://arxiv.org/abs//2408.10920 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_pa…
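The magnitude-versus-direction distinction is easy to make concrete (a minimal sketch of the decomposition itself, not of the paper's experiments):

```python
import numpy as np

def magnitude_direction(h):
    """Decompose a hidden-state vector into its norm (magnitude) and unit
    direction. The summary's claim is roughly that, in gated RNNs, the
    norm can carry sequence information that the direction alone does not."""
    h = np.asarray(h, dtype=float)
    norm = np.linalg.norm(h)
    direction = h / norm if norm > 0 else h
    return norm, direction
```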
 
Transfusion is a multi-modal training method combining language modeling and diffusion, achieving superior performance in generating images and text with models up to 7B parameters. https://arxiv.org/abs//2408.11039 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcasts: https://podcasts.apple.com/…
 
In this episode, Chris interviews Michelle, the Chief Marketing Officer of MindTrip, a new AI-powered travel platform designed to transform the travel planning experience. Michelle shares her extensive background as a startup executive and advisor, as well as insights from her upcoming book, "Grow Up: Take Your Startup to the Next Level," which foc…
 
In this episode of thinkfuture, Chris broadcasts from the heart of Evia, Greece, and dives into the concept of JOMO—Joy of Missing Out—as a counter to the pervasive fear of missing out (FOMO) that has gripped modern society. Surrounded by the birthplace of democracy and drama, Chris reflects on how our constant connection to devices and social medi…
 
Grammy-nominated R&B artist Omarion opens up about the deeply personal experiences that have shaped his spiritual and creative growth. From navigating betrayal and redefining joy, to discovering the healing power of music and meditation, Omarion's story is one of unwavering resilience and self-discovery. Raised by a young single mother, Omarion sha…
 
The paper presents MOHAWK, a method for distilling Transformers into state space models, achieving strong performance with significantly less training data and computational resources. https://arxiv.org/abs//2408.10189 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcasts: https://podcasts.apple.c…
 
Neelam Sandhu, Former Chief Elite Customer Officer, Chief Marketing Officer, & SVP Sustainability, shares insights on transforming sales strategies to focus on customer success. She discusses aligning sales with customer-centric approaches, the impact of technology on relationships, and key strategies for long-term engagement. Prioritizing customer…
 
Part Two: After last week's instalment with S4 Capital's founder and former WPP boss, Sir Martin Sorrell – in which he explained why the market cap of his next generation marketing services firm had plummeted from £5 billion to £300 million in the past three years – he's back for part two. We cover the consolidation of the $700 billion global digit…
 
This paper proposes using canonical codecs for image and video generation in autoregressive models, demonstrating improved efficiency and effectiveness over traditional pixel-based and vector quantization methods. https://arxiv.org/abs//2408.08459 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podca…
 