What if technology could understand people in the same way that people understand one another? Tune in as Affectiva, the pioneer of Emotion AI, endeavors to humanize technology as a new Smart Eye Company. The Human-Centric AI podcast dissects how we can put the human before the artificial as AI manifests in our daily lives, with insights from the world’s top thinkers in automotive, market research, aviation, robotics, education, academia and beyond.
The Know Thyself Podcast is a place to dive deep into the perennial questions of life, such as “Who am I?”, “Why am I here?” and “What is my purpose?” Each week, host André Duqum connects with various teachers, spiritual leaders, doctors, heart-led creators and storytellers on topics such as the true nature of ‘Self’, consciousness, philosophy, health optimization, and personal growth. These conversations are aimed at supporting individuals on their awakening journey, reducing human suffe ...
All about AI, startups, and the future - discussing topics that range from technology (AI, IoT, Big Data) to technology’s impact on humans (Work, Play, Culture) and the future of everything in any sector - retail, banking, technology, hiring, and more. We are always looking for innovators like you to interview for our weekly podcast. Let us know if you have any stories on AI, disruption, or the future that you would like to share (happy or unfortunate). Support this podcast: https://podcasters.spo ...
A weekly wrap of the “must-know” developments in Marketing, Media, Agency and Technology for leaders and emerging leaders in the industry. Veteran industry journalist and Mi3 Executive Editor Paul McIntyre talks each week with guest marketers who are in the know on what matters at the nexus of marketing, agencies, media and technology. Powered mostly by Human Intelligence (HI).
The podcast presents valuable insights from contact center leaders, tailor-made for their industry peers. We cover a diverse array of topics such as AI integration, agent turnover management, revenue impact assessment, and shifting the perception of contact centers from cost centers to value centers, for starters.
Dive into “Compromising Positions”, the unique, new podcast designed to iron out the wrinkles in the relationship between cybersecurity teams and other tech professionals. We’re taking the ‘security as a blocker’ stereotype head-on, promoting a shared language and mutual understanding. We’ll turn those ‘compromising positions’ into ‘compromising solutions’, helping security pros and tech teams collaborate more effectively for a smoother, safer digital journey. Every week we will be joined by ...
The AI in Automotive podcast is a platform for conversations about the rapidly growing role of Artificial Intelligence and Machine Learning in the automotive and mobility industries. Host Jayesh Jagasia speaks to experts in the domain for free-wheeling conversations on how AI is shaping the future of the automotive industry.
Running out of time to catch up with new arXiv papers? We take the most impactful papers and present them as convenient podcasts. If you're a visual learner, we offer these papers in an engaging video format. Our service fills the gap between overly brief paper summaries and time-consuming full paper reads. You gain academic insights in a time-efficient, digestible format. Code behind this work: https://github.com/imelnyk/ArxivPapers Support this podcast: https://podcasters.spotify.com/pod/s ...
From Our Neurons to Yours is a show that crisscrosses scientific disciplines to bring you to the frontiers of brain science, produced by the Wu Tsai Neurosciences Institute at Stanford University. Each week, we ask leading scientists to help us understand the three pounds of matter within our skulls and how new discoveries, treatments, and technologies are transforming our relationship with the brain.
80 Level Podcast is an episodic show for game developers, digital artists, animators, video game enthusiasts, CGI and VFX specialists. Join us to learn about new workflows, discuss new tools and share your work.
Interviews with activists, social scientists, entrepreneurs and change-makers about the most effective strategies to expand humanity’s moral circle, with an emphasis on expanding the circle to farmed animals. Host Jamie Harris, a researcher at moral expansion think tank Sentience Institute, takes a deep dive with guests into advocacy strategies from political initiatives to corporate campaigns to technological innovation to consumer interventions, and discusses advocacy lessons from history, ...
E111 - Federico Faggin, Top Physicist: “Science & Spirituality Merge in this New Theory of Consciousness” (2:04:38)
Today we are joined by top physicist and inventor of the microprocessor & touch screen, Federico Faggin, for an intriguing conversation on the nature of reality. Federico once held a materialistic scientific perspective on consciousness and reality, until one day a spontaneous spiritual awakening changed his perspective forever. In this episode he …
In this episode, Chris dives into the controversial and thought-provoking topic of AI companions—AI boyfriends, girlfriends, and friends—discussing the growing trend and public reactions surrounding them. He questions why AI companions are often seen as a terrible thing, drawing parallels to past phenomena, such as people forming emotional bonds wi…
[QA] Power Scheduler: A Batch Size and Token Number Agnostic Learning Rate Scheduler (8:25)
This paper explores the correlation between learning rate, batch size, and training tokens, proposing a new Power scheduler that optimizes performance across various model sizes and architectures. https://arxiv.org/abs//2408.13359 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcasts: https://podc…
Power Scheduler: A Batch Size and Token Number Agnostic Learning Rate Scheduler (13:02)
This paper explores the correlation between learning rate, batch size, and training tokens, proposing a new Power scheduler that optimizes performance across various model sizes and architectures. https://arxiv.org/abs//2408.13359 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcasts: https://podc…
[QA] A Law of Next-Token Prediction in Large Language Models (7:40)
This paper presents a quantitative law governing contextualized token embeddings in LLMs, revealing equal contributions from all layers to prediction accuracy, enhancing understanding and guiding LLM development practices. https://arxiv.org/abs//2408.13442 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Ap…
A Law of Next-Token Prediction in Large Language Models (6:45)
This paper presents a quantitative law governing contextualized token embeddings in LLMs, revealing equal contributions from all layers to prediction accuracy, enhancing understanding and guiding LLM development practices. https://arxiv.org/abs//2408.13442 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Ap…
Episode 18 | Why You Shouldn’t Be Obsessed with CSAT (22:28)
Michael Fulvio, Director of Customer Experience at SNIPES, advises against the usual focus on Customer Satisfaction (CSAT) as a KPI. He stresses using diverse data points like incident rates, first reply times, and fulfillment metrics to address customer issues, improve operations, and drive revenue, thus enhancing the customer experience. By broad…
Synthetic customers meet synthetic CMOs (and CFOs): Evidenza clones Sharp, Ritson, Binet & Field to build annual marketing plans in minutes; Mars, EY sign up (47:24)
The effectiveness “revolution” is colliding with the AI-spawned efficiency uprising, and it’s leaping past the early consensus AI use cases in marketing around automating personalised content and communications. So much so that Mark Ritson choked on his Wellfleet oysters when Jon Lombardo and Peter Weinberg told him they were leaving top jobs at the LinkedIn-…
[QA] SLM Meets LLM: Balancing Latency, Interpretability and Consistency in Hallucination Detection (8:29)
This paper presents a framework using a small language model for initial hallucination detection, followed by a large language model for detailed explanations, optimizing real-time interpretable detection. https://arxiv.org/abs//2408.12748 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcasts: htt…
SLM Meets LLM: Balancing Latency, Interpretability and Consistency in Hallucination Detection (9:51)
This paper presents a framework using a small language model for initial hallucination detection, followed by a large language model for detailed explanations, optimizing real-time interpretable detection. https://arxiv.org/abs//2408.12748 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcasts: htt…
[QA] How Diffusion Models Learn to Factorize and Compose (8:14)
This study explores how diffusion models learn compositional representations through controlled experiments, revealing their ability to encode features but limited interpolation over unseen values, enhancing training efficiency. https://arxiv.org/abs//2408.13256 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_pap…
How Diffusion Models Learn to Factorize and Compose (20:36)
This study explores how diffusion models learn compositional representations through controlled experiments, revealing their ability to encode features but limited interpolation over unseen values, enhancing training efficiency. https://arxiv.org/abs//2408.13256 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_pap…
[QA] FERRET: Faster and Effective Automated Red Teaming with Reward-Based Scoring Technique (7:48)
FERRET enhances adversarial prompt generation for large language models, improving attack success rates and efficiency over RAINBOW TEAMING while ensuring effective prompts across various model sizes. https://arxiv.org/abs//2408.10701 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcasts: https://…
FERRET: Faster and Effective Automated Red Teaming with Reward-Based Scoring Technique (17:44)
FERRET enhances adversarial prompt generation for large language models, improving attack success rates and efficiency over RAINBOW TEAMING while ensuring effective prompts across various model sizes. https://arxiv.org/abs//2408.10701 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcasts: https://…
[QA] Scalable Autoregressive Image Generation with Mamba (7:11)
AiM is an autoregressive image generative model using Mamba architecture, achieving superior quality and speed in image generation while maintaining efficient long-sequence modeling capabilities. https://arxiv.org/abs//2408.12245 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcasts: https://podca…
Scalable Autoregressive Image Generation with Mamba (17:27)
AiM is an autoregressive image generative model using Mamba architecture, achieving superior quality and speed in image generation while maintaining efficient long-sequence modeling capabilities. https://arxiv.org/abs//2408.12245 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcasts: https://podca…
[QA] TableBench: A Comprehensive and Complex Benchmark for Table Question Answering (7:55)
The paper investigates LLMs' challenges with real-world tabular data, proposing the TableBench benchmark and TABLELLM model, highlighting significant gaps between academic performance and industrial application. https://arxiv.org/abs//2408.09174 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcast…
TableBench: A Comprehensive and Complex Benchmark for Table Question Answering (21:59)
The paper investigates LLMs' challenges with real-world tabular data, proposing the TableBench benchmark and TABLELLM model, highlighting significant gaps between academic performance and industrial application. https://arxiv.org/abs//2408.09174 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcast…
[QA] FocusLLM: Scaling LLM's Context by Parallel Decoding (7:35)
FocusLLM enhances decoder-only LLMs by efficiently processing long contexts, improving performance on long-context tasks while reducing training costs and maintaining strong language modeling capabilities. https://arxiv.org/abs//2408.11745 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcasts: htt…
FocusLLM: Scaling LLM's Context by Parallel Decoding (20:55)
FocusLLM enhances decoder-only LLMs by efficiently processing long contexts, improving performance on long-context tasks while reducing training costs and maintaining strong language modeling capabilities. https://arxiv.org/abs//2408.11745 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcasts: htt…
[QA] Sapiens: Foundation for Human Vision Models (7:49)
Sapiens is a versatile model family for human-centric vision tasks, achieving state-of-the-art performance through self-supervised pretraining and scalable design, excelling in pose estimation, segmentation, depth, and normal prediction. https://arxiv.org/abs//2408.12569 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@…
Sapiens: Foundation for Human Vision Models (22:52)
Sapiens is a versatile model family for human-centric vision tasks, achieving state-of-the-art performance through self-supervised pretraining and scalable design, excelling in pose estimation, segmentation, depth, and normal prediction. https://arxiv.org/abs//2408.12569 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@…
[QA] Show-o: One Single Transformer to Unify Multimodal Understanding and Generation (7:25)
Show-o is a unified transformer model that integrates multimodal understanding and generation, outperforming existing models in various vision-language tasks while supporting diverse input-output modalities. https://arxiv.org/abs//2408.12528 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcasts: h…
Show-o: One Single Transformer to Unify Multimodal Understanding and Generation (28:14)
Show-o is a unified transformer model that integrates multimodal understanding and generation, outperforming existing models in various vision-language tasks while supporting diverse input-output modalities. https://arxiv.org/abs//2408.12528 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcasts: h…
[QA] Jamba-1.5: Hybrid Transformer-Mamba Models at Scale (7:22)
Jamba-1.5 introduces instruction-tuned large language models with high throughput, low memory usage, and extensive context length, outperforming competitors while being publicly available under an open model license. https://arxiv.org/abs//2408.12570 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Po…
Jamba-1.5: Hybrid Transformer-Mamba Models at Scale (16:53)
Jamba-1.5 introduces instruction-tuned large language models with high throughput, low memory usage, and extensive context length, outperforming competitors while being publicly available under an open model license. https://arxiv.org/abs//2408.12570 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Po…
Hermes 3 is a neutrally-aligned instruct-tuned model with strong reasoning and creativity, achieving state-of-the-art performance on benchmarks, with weights available on Hugging Face. https://arxiv.org/abs//2408.11857 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcasts: https://podcasts.apple.c…
1010 EXPLORING AI IN COMEDY: THE NEXT FRONTIER (6:34)
In this episode of thinkfuture, Chris broadcasts from Greece and dives into the fascinating idea of AI-generated comedy. With AI increasingly being used in creative fields, Chris discusses why it might be surprising to some that AI could also take on comedy—a field traditionally seen as distinctly human. He explains that since AI is essentially bui…
MMM masterclass: Bupa’s open book on business data feeding Atomic 212° a benchmark for agency-client transparency and trust (40:00)
Marketing mix modelling (MMM) only works if brands grant their agencies access to critical business data – and many don’t, a perplexing and decades-long challenge. But equally, agencies can be guilty of slow-walking media pricing and audience data into their client MMM models, rounding out the two-way data conundrum. It’s ironic given all the talk of …
[QA] LLM Pruning and Distillation in Practice: The Minitron Approach (7:23)
https://arxiv.org/abs//2408.11796 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcasts: https://podcasts.apple.com/us/podcast/arxiv-papers/id1692476016 Spotify: https://podcasters.spotify.com/pod/show/arxiv-papers --- Support this podcast: https://podcasters.spotify.com/pod/show/arxiv-papers/supp…
LLM Pruning and Distillation in Practice: The Minitron Approach (12:36)
https://arxiv.org/abs//2408.11796 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcasts: https://podcasts.apple.com/us/podcast/arxiv-papers/id1692476016 Spotify: https://podcasters.spotify.com/pod/show/arxiv-papers --- Support this podcast: https://podcasters.spotify.com/pod/show/arxiv-papers/supp…
[QA] Approaching Deep Learning through the Spectral Dynamics of Weights (7:26)
This paper explores spectral dynamics of weights in deep learning, revealing optimization biases, enhancing weight decay effects, and distinguishing between memorizing and generalizing networks across various tasks. https://arxiv.org/abs//2408.11804 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Pod…
Approaching Deep Learning through the Spectral Dynamics of Weights (26:45)
This paper explores spectral dynamics of weights in deep learning, revealing optimization biases, enhancing weight decay effects, and distinguishing between memorizing and generalizing networks across various tasks. https://arxiv.org/abs//2408.11804 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Pod…
EPISODE 42: Mind Science - Cyber Psychology 101 (1:03:05)
This episode, we’re heading back into the vaults to bring you the unabridged version of our fantastic and extremely popular interview with Bec McKeown, a chartered psychologist with extensive experience in carrying out applied research for organisations including the UK Ministry of Defence, and the founder and director of Mind Science, an independent…
[QA] Recurrent Neural Networks Learn to Store and Generate Sequences using Non-Linear Representations (8:04)
The paper challenges the Linear Representation Hypothesis, showing that gated recurrent neural networks encode token sequences using magnitude rather than direction, suggesting broader interpretability in neural network research. https://arxiv.org/abs//2408.10920 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_pa…
Recurrent Neural Networks Learn to Store and Generate Sequences using Non-Linear Representations (21:03)
The paper challenges the Linear Representation Hypothesis, showing that gated recurrent neural networks encode token sequences using magnitude rather than direction, suggesting broader interpretability in neural network research. https://arxiv.org/abs//2408.10920 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_pa…
[QA] Transfusion: Predict the Next Token and Diffuse Images with One Multi-Modal Model (7:53)
Transfusion is a multi-modal training method combining language modeling and diffusion, achieving superior performance in generating images and text with models up to 7B parameters. https://arxiv.org/abs//2408.11039 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcasts: https://podcasts.apple.com/…
Transfusion: Predict the Next Token and Diffuse Images with One Multi-Modal Model (24:23)
Transfusion is a multi-modal training method combining language modeling and diffusion, achieving superior performance in generating images and text with models up to 7B parameters. https://arxiv.org/abs//2408.11039 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcasts: https://podcasts.apple.com/…
In this episode, Chris interviews Michelle, the Chief Marketing Officer of MindTrip, a new AI-powered travel platform designed to transform the travel planning experience. Michelle shares her extensive background as a startup executive and advisor, as well as insights from her upcoming book, "Grow Up: Take Your Startup to the Next Level," which foc…
In this episode of thinkfuture, Chris broadcasts from the heart of Evia, Greece, and dives into the concept of JOMO—Joy of Missing Out—as a counter to the pervasive fear of missing out (FOMO) that has gripped modern society. Surrounded by the birthplace of democracy and drama, Chris reflects on how our constant connection to devices and social medi…
E110 - Omarion: Redefining Spiritual Growth, Betrayal, Celibacy & Becoming Unbothered (1:31:08)
Grammy-nominated R&B artist Omarion opens up about the deeply personal experiences that have shaped his spiritual and creative growth. From navigating betrayal and redefining joy, to discovering the healing power of music and meditation, Omarion's story is one of unwavering resilience and self-discovery. Raised by a young single mother, Omarion sha…
[QA] Transformers to SSMs: Distilling Quadratic Knowledge to Subquadratic Models (8:37)
The paper presents MOHAWK, a method for distilling Transformers into state space models, achieving strong performance with significantly less training data and computational resources. https://arxiv.org/abs//2408.10189 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcasts: https://podcasts.apple.c…
Transformers to SSMs: Distilling Quadratic Knowledge to Subquadratic Models (31:52)
The paper presents MOHAWK, a method for distilling Transformers into state space models, achieving strong performance with significantly less training data and computational resources. https://arxiv.org/abs//2408.10189 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcasts: https://podcasts.apple.c…
Episode 17 | CX Leaders: How to Help Sales Close Larger Deals? (28:34)
Neelam Sandhu, Former Chief Elite Customer Officer, Chief Marketing Officer, & SVP Sustainability, shares insights on transforming sales strategies to focus on customer success. She discusses aligning sales with customer-centric approaches, the impact of technology on relationships, and key strategies for long-term engagement. Prioritizing customer…
Sir Martin Sorrell: UM’s ex-privacy boss Arielle Garcia ‘is right’ (partly) on $700bn online data ‘garbage’; Personalisation Netflix-style the future; AI, big tech will crunch intermediaries in three years ... (40:39)
Part Two: After last week's instalment with S4 Capital's founder and former WPP boss, Sir Martin Sorrell – in which he explained why the market cap of his next generation marketing services firm had plummeted from £5 billion to £300 million in the past three years – he's back for part two. We cover the consolidation of the $700 billion global digit…
[QA] JPEG-LM: LLMs as Image Generators with Canonical Codec Representations (7:47)
This paper proposes using canonical codecs for image and video generation in autoregressive models, demonstrating improved efficiency and effectiveness over traditional pixel-based and vector quantization methods. https://arxiv.org/abs//2408.08459 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podca…
JPEG-LM: LLMs as Image Generators with Canonical Codec Representations (20:04)
This paper proposes using canonical codecs for image and video generation in autoregressive models, demonstrating improved efficiency and effectiveness over traditional pixel-based and vector quantization methods. https://arxiv.org/abs//2408.08459 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podca…