Lonely Wrist dives deep into the intricate world of watches, unearthing the stories, craft, and passion behind every ticking piece. From timeless classics to modern marvels, this podcast winds through the history, mechanics, and cultural significance of timepieces. Whether you're an avid horologist or just someone who admires the beauty of a well-crafted watch, Lonely Wrist offers a unique perspective, uniting enthusiasts and curious minds. Join us every episode as we explore the art of watc ...
Be you. Be authentic. Live your life 'unafraid'. Faith, family, fatherhood, food, and football — and anything else we feel like talking about. We'll have guests, special-edition segments, cooking, eating, chatting, and some debating. We are here to build a place where authenticity lives: where you feel able to be yourself, be true to yourself, and always live life 'unafraid'. Watch on YouTube: youtube.com/@unafraidshow
Three guys from Southern Cal sit down, watch a film, and discuss it. The films span a wide range that includes most genres.
Running out of time to catch up with new arXiv papers? We take the most impactful papers and present them as convenient podcasts. If you're a visual learner, we offer these papers in an engaging video format. Our service fills the gap between overly brief paper summaries and time-consuming full paper reads. You gain academic insights in a time-efficient, digestible format. Code behind this work: https://github.com/imelnyk/ArxivPapers Support this podcast: https://podcasters.spotify.com/pod/s ...
[QA] Emu3: Next-Token Prediction is All You Need (7:43)
Emu3 introduces a next-token prediction model for multimodal tasks, outperforming existing models and simplifying design by focusing on tokenization of images, text, and videos. https://arxiv.org/abs//2409.18869 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcasts: https://podcasts.apple.com/us/p…
Emu3: Next-Token Prediction is All You Need (17:28)
Emu3 introduces a next-token prediction model for multimodal tasks, outperforming existing models and simplifying design by focusing on tokenization of images, text, and videos. https://arxiv.org/abs//2409.18869 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcasts: https://podcasts.apple.com/us/p…
[QA] MIO: A Foundation Model on Multimodal Tokens (8:38)
MIO is a novel multimodal foundation model that excels in understanding and generating speech, text, images, and videos, outperforming existing models in any-to-any capabilities and diverse tasks. https://arxiv.org/abs//2409.17692 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcasts: https://podc…
MIO: A Foundation Model on Multimodal Tokens (19:09)
MIO is a novel multimodal foundation model that excels in understanding and generating speech, text, images, and videos, outperforming existing models in any-to-any capabilities and diverse tasks. https://arxiv.org/abs//2409.17692 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcasts: https://podc…
[QA] A Preliminary Study of o1 in Medicine: Are We Closer to an AI Doctor? (7:52)
The paper evaluates OpenAI's o1 model in medical scenarios, highlighting its enhanced reasoning and accuracy over GPT-4, while also identifying weaknesses and releasing data for further research. https://arxiv.org/abs//2409.15277 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcasts: https://podca…
A Preliminary Study of o1 in Medicine: Are We Closer to an AI Doctor? (8:52)
The paper evaluates OpenAI's o1 model in medical scenarios, highlighting its enhanced reasoning and accuracy over GPT-4, while also identifying weaknesses and releasing data for further research. https://arxiv.org/abs//2409.15277 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcasts: https://podca…
[QA] Logic-of-Thought: Injecting Logic into Contexts for Full Reasoning in Large Language Models (8:44)
The Logic-of-Thought (LoT) prompting method enhances logical reasoning in Large Language Models by integrating propositional logic, significantly improving performance across various reasoning tasks. https://arxiv.org/abs//2409.17539 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcasts: https://p…
Logic-of-Thought: Injecting Logic into Contexts for Full Reasoning in Large Language Models (16:05)
The Logic-of-Thought (LoT) prompting method enhances logical reasoning in Large Language Models by integrating propositional logic, significantly improving performance across various reasoning tasks. https://arxiv.org/abs//2409.17539 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcasts: https://p…
[QA] Making Text Embedders Few-Shot Learners (7:45)
We propose bge-en-icl, a model leveraging in-context learning in LLMs for high-quality text embeddings, achieving state-of-the-art performance on MTEB and AIR-Bench benchmarks. https://arxiv.org/abs//2409.15700 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcasts: https://podcasts.apple.com/us/po…
Making Text Embedders Few-Shot Learners
We propose bge-en-icl, a model leveraging in-context learning in LLMs for high-quality text embeddings, achieving state-of-the-art performance on MTEB and AIR-Bench benchmarks. https://arxiv.org/abs//2409.15700 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcasts: https://podcasts.apple.com/us/po…
[QA] Programming Every Example: Lifting Pre-training Data Quality Like Experts at Scale (6:45)
The paper introduces PROX, a framework enabling small language models to refine data effectively, outperforming human-crafted methods and enhancing efficiency in LLM pre-training across various benchmarks. https://arxiv.org/abs//2409.17115 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcasts: htt…
Programming Every Example: Lifting Pre-training Data Quality Like Experts at Scale (8:57)
The paper introduces PROX, a framework enabling small language models to refine data effectively, outperforming human-crafted methods and enhancing efficiency in LLM pre-training across various benchmarks. https://arxiv.org/abs//2409.17115 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcasts: htt…
[QA] Infer Human's Intentions Before Following Natural Language Instruction (8:18)
The FISER framework enhances AI's ability to follow ambiguous human instructions by inferring intentions, outperforming traditional methods in collaborative tasks, particularly on the HandMeThat benchmark. https://arxiv.org/abs//2409.18073 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcasts: htt…
Infer Human's Intentions Before Following Natural Language Instruction (27:36)
The FISER framework enhances AI's ability to follow ambiguous human instructions by inferring intentions, outperforming traditional methods in collaborative tasks, particularly on the HandMeThat benchmark. https://arxiv.org/abs//2409.18073 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcasts: htt…
[QA] MaskLLM: Learnable Semi-Structured Sparsity for Large Language Models (7:05)
This paper presents a learnable pruning method for Large Language Models, achieving efficient N:M sparsity, improved mask quality, and transferability across tasks, outperforming existing techniques in empirical evaluations. https://arxiv.org/abs//2409.17481 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers …
MaskLLM: Learnable Semi-Structured Sparsity for Large Language Models (15:10)
This paper presents a learnable pruning method for Large Language Models, achieving efficient N:M sparsity, improved mask quality, and transferability across tasks, outperforming existing techniques in empirical evaluations. https://arxiv.org/abs//2409.17481 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers …
[QA] Counterfactual Token Generation in Large Language Models (7:53)
This paper presents a method to enable large language models to perform counterfactual token generation, enhancing their capabilities without fine-tuning, and applying it for bias detection. https://arxiv.org/abs//2409.17027 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcasts: https://podcasts.a…
Counterfactual Token Generation in Large Language Models (14:52)
This paper presents a method to enable large language models to perform counterfactual token generation, enhancing their capabilities without fine-tuning, and applying it for bias detection. https://arxiv.org/abs//2409.17027 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcasts: https://podcasts.a…
[QA] Characterizing stable regions in the residual stream of LLMs (7:45)
The paper identifies stable regions in Transformers' residual streams, showing insensitivity to small changes but high sensitivity at boundaries, aligning with semantic distinctions and clustering similar prompts. https://arxiv.org/abs//2409.17113 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podca…
Characterizing stable regions in the residual stream of LLMs (5:26)
The paper identifies stable regions in Transformers' residual streams, showing insensitivity to small changes but high sensitivity at boundaries, aligning with semantic distinctions and clustering similar prompts. https://arxiv.org/abs//2409.17113 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podca…
[QA] Watch Your Steps: Observable and Modular Chains of Thought (7:30)
We introduce Program Trace Prompting, enhancing chain of thought explanations with formal syntax, improving observability, and enabling analysis of reasoning errors across diverse tasks in the BIG-Bench Hard benchmark. https://arxiv.org/abs//2409.15359 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple …
Watch Your Steps: Observable and Modular Chains of Thought (29:35)
We introduce Program Trace Prompting, enhancing chain of thought explanations with formal syntax, improving observability, and enabling analysis of reasoning errors across diverse tasks in the BIG-Bench Hard benchmark. https://arxiv.org/abs//2409.15359 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple …
[QA] Seeing Faces in Things: A Model and Dataset for Pareidolia (7:38)
This paper explores face pareidolia in computer vision, presenting a dataset of annotated images and analyzing the differences in face detection between humans and machines. https://arxiv.org/abs//2409.16143 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcasts: https://podcasts.apple.com/us/podca…
Seeing Faces in Things: A Model and Dataset for Pareidolia (10:54)
This paper explores face pareidolia in computer vision, presenting a dataset of annotated images and analyzing the differences in face detection between humans and machines. https://arxiv.org/abs//2409.16143 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcasts: https://podcasts.apple.com/us/podca…
[QA] Rule Extrapolation in Language Models: A Study of Compositional Generalization on OOD Prompts (8:20)
The paper investigates out-of-distribution behavior in autoregressive LLMs through rule extrapolation in formal languages, analyzing various architectures and proposing a normative theory inspired by algorithmic information theory. https://arxiv.org/abs//2409.13728 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_…
Rule Extrapolation in Language Models: A Study of Compositional Generalization on OOD Prompts (29:04)
The paper investigates out-of-distribution behavior in autoregressive LLMs through rule extrapolation in formal languages, analyzing various architectures and proposing a normative theory inspired by algorithmic information theory. https://arxiv.org/abs//2409.13728 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_…
[QA] Style over Substance: Failure Modes of LLM Judges in Alignment Benchmarking (7:40)
This study evaluates the effectiveness of LLM-judge preferences in improving alignment, finding no correlation with concrete metrics and highlighting biases in LLM judgments. https://arxiv.org/abs//2409.15268 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcasts: https://podcasts.apple.com/us/podc…
Style over Substance: Failure Modes of LLM Judges in Alignment Benchmarking (11:39)
This study evaluates the effectiveness of LLM-judge preferences in improving alignment, finding no correlation with concrete metrics and highlighting biases in LLM judgments. https://arxiv.org/abs//2409.15268 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcasts: https://podcasts.apple.com/us/podc…
[QA] LLM Surgery: Efficient Knowledge Unlearning and Editing in Large Language Models (7:46)
This paper introduces LLM Surgery, a framework for efficiently modifying large language models to unlearn outdated information and integrate new knowledge without complete retraining, demonstrating significant performance improvements. https://arxiv.org/abs//2409.13054 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@ar…
LLM Surgery: Efficient Knowledge Unlearning and Editing in Large Language Models (13:56)
This paper introduces LLM Surgery, a framework for efficiently modifying large language models to unlearn outdated information and integrate new knowledge without complete retraining, demonstrating significant performance improvements. https://arxiv.org/abs//2409.13054 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@ar…
[QA] Embedding Geometries of Contrastive Language-Image Pre-Training (7:35)
This paper explores alternative geometries and softmax logits for language-image pre-training, finding that Euclidean CLIP (EuCLIP) performs as well as or better than the original CLIP. https://arxiv.org/abs//2409.13079 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcasts: https://podcasts.apple.…
Embedding Geometries of Contrastive Language-Image Pre-Training (15:25)
This paper explores alternative geometries and softmax logits for language-image pre-training, finding that Euclidean CLIP (EuCLIP) performs as well as or better than the original CLIP. https://arxiv.org/abs//2409.13079 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcasts: https://podcasts.apple.…
The Kolmogorov–Arnold Transformer (KAT) enhances transformer performance by replacing MLP layers with Kolmogorov-Arnold Network layers, addressing key challenges and demonstrating superior results in various tasks. https://arxiv.org/abs//2409.10594 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podc…
Fine-Tuning Image-Conditional Diffusion Models is Easier than You Think (11:52)
This paper reveals a flaw in the inference pipeline of diffusion models for depth estimation, leading to a 200× speed improvement and superior performance through end-to-end fine-tuning. https://arxiv.org/abs//2409.11355 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcasts: https://podcasts.app…
[QA] Fine-Tuning Image-Conditional Diffusion Models is Easier than You Think (6:42)
This paper reveals a flaw in the inference pipeline of diffusion models for depth estimation, leading to a 200× speed improvement and superior performance through end-to-end fine-tuning. https://arxiv.org/abs//2409.11355 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcasts: https://podcasts.app…
[QA] Re-Introducing LayerNorm: Geometric Meaning, Irreversibility and a Comparative Study with RMSNorm (7:03)
This paper explores the geometric implications of LayerNorm in transformers, revealing its irreversibility and redundancy, and advocates for RMSNorm as a more efficient alternative with similar performance. https://arxiv.org/abs//2409.12951 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcasts: ht…
Re-Introducing LayerNorm: Geometric Meaning, Irreversibility and a Comparative Study with RMSNorm (12:28)
This paper explores the geometric implications of LayerNorm in transformers, revealing its irreversibility and redundancy, and advocates for RMSNorm as a more efficient alternative with similar performance. https://arxiv.org/abs//2409.12951 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcasts: ht…
[QA] Is Tokenization Needed for Masked Particle Modelling? (7:52)
This paper enhances masked particle modeling (MPM) for high-energy physics, improving performance through better implementation and a powerful decoder, outperforming previous methods in various jet physics tasks. https://arxiv.org/abs//2409.12589 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcas…
Is Tokenization Needed for Masked Particle Modelling? (20:39)
This paper enhances masked particle modeling (MPM) for high-energy physics, improving performance through better implementation and a powerful decoder, outperforming previous methods in various jet physics tasks. https://arxiv.org/abs//2409.12589 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcas…
[QA] Finetuning Language Models to Emit Linguistic Expressions of Uncertainty (6:49)
https://arxiv.org/abs//2409.12180 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcasts: https://podcasts.apple.com/us/podcast/arxiv-papers/id1692476016 Spotify: https://podcasters.spotify.com/pod/show/arxiv-papers --- Support this podcast: https://podcasters.spotify.com/pod/show/arxiv-papers/supp…
Finetuning Language Models to Emit Linguistic Expressions of Uncertainty (12:41)
https://arxiv.org/abs//2409.12180 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcasts: https://podcasts.apple.com/us/podcast/arxiv-papers/id1692476016 Spotify: https://podcasters.spotify.com/pod/show/arxiv-papers --- Support this podcast: https://podcasters.spotify.com/pod/show/arxiv-papers/supp…
[QA] To CoT or not to CoT? Chain-of-thought helps mainly on math and symbolic reasoning (7:23)
Chain-of-thought prompting enhances reasoning in large language models, particularly for math and logic tasks, but shows limited benefits for other tasks, suggesting a need for new computational paradigms. https://arxiv.org/abs//2409.12183 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcasts: htt…
To CoT or not to CoT? Chain-of-thought helps mainly on math and symbolic reasoning (26:23)
Chain-of-thought prompting enhances reasoning in large language models, particularly for math and logic tasks, but shows limited benefits for other tasks, suggesting a need for new computational paradigms. https://arxiv.org/abs//2409.12183 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcasts: htt…
[QA] On the limits of agency in agent-based models (8:12)
AgentTorch is a framework that enhances agent-based modeling by using large language models to simulate millions of agents, demonstrating its utility in analyzing complex systems like the COVID-19 pandemic. https://arxiv.org/abs//2409.10568 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcasts: ht…
On the limits of agency in agent-based models (19:50)
AgentTorch is a framework that enhances agent-based modeling by using large language models to simulate millions of agents, demonstrating its utility in analyzing complex systems like the COVID-19 pandemic. https://arxiv.org/abs//2409.10568 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcasts: ht…
[QA] Promptriever: Instruction-Trained Retrievers Can Be Prompted Like Language Models (7:14)
Promptriever is a novel retrieval model that follows instructions, achieving state-of-the-art performance and improved robustness, demonstrating the potential of prompting in information retrieval. https://arxiv.org/abs//2409.11136 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcasts: https://pod…
Promptriever: Instruction-Trained Retrievers Can Be Prompted Like Language Models (15:25)
Promptriever is a novel retrieval model that follows instructions, achieving state-of-the-art performance and improved robustness, demonstrating the potential of prompting in information retrieval. https://arxiv.org/abs//2409.11136 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcasts: https://pod…
Building Trust and Innovation in Watchmaking with Mike Pearson of Christopher Ward (1:09:46)
Join us as we welcome back the passionate and knowledgeable Mike Pearson, this time representing Christopher Ward. Discover Mike's transition from Zodiac to Christopher Ward and how his passion for the brand's community-focused approach influenced his career move. Hear firsthand about the brand's journey over the past two decades, their design evol…
[QA] Finetuning CLIP to Reason about Pairwise Differences (7:34)
This paper enhances CLIP's contrastive learning by aligning image embeddings with text descriptions, improving image ranking, zero-shot classification, and introducing comparative prompting for better performance and geometric properties. https://arxiv.org/abs//2409.09721 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/…