USC Territory

USC • United Studios Corporation

Monthly
 
USC Territory (Russian: Территория USC) is a themed podcast from the USC label at Sector, the first round-the-clock lossless internet radio station. Listen to new episodes on the Progressive channel on Saturdays at 21:00 MSK (UTC+3). The show is published under the Creative Commons Attribution-ShareAlike 4.0 International license. • usct.bandcamp.com • www.sectorradio.com • www.unitedstudios.ru
 
greytFM ‒ a podcast series by greytHR


Monthly
 
Hello and welcome. In this podcast series, we'll spotlight interesting dialogues and conversations around trending themes from the world of human resources. Tune in to hear our speakers' perspectives on varied topics, from workplace diversity and HR tech trends to people analytics and the Great Resignation, to name just a few. What's more, you'll take away plenty of practical pointers for dealing with your most pressing HR challenges. Don't forget to b ...
 
Running out of time to catch up with new arXiv papers? We take the most impactful papers and present them as convenient podcasts. If you're a visual learner, we offer these papers in an engaging video format. Our service fills the gap between overly brief paper summaries and time-consuming full paper reads. You gain academic insights in a time-efficient, digestible format. Code behind this work: https://github.com/imelnyk/ArxivPapers Support this podcast: https://podcasters.spotify.com/pod/s ...
 
Podcast Series

100% Quality Artists, Music & Interviews

Monthly
 
Welcome to Decisive, a passion project by Roberto Ingram that celebrates open-minded music enthusiasts. With an emphasis on authenticity and unwavering quality, the show invites your active participation. Roberto is dedicated to crafting captivating podcast experiences in the realm of independent music and interviews, and his commitment to the satisfaction of this vibrant community is paramount. If you resonate with the carefully curated content of Roberto's podcast ser ...
 
 
Kangaroo introduces a self-speculative decoding framework for accelerating large language model inference, using a shallow sub-network and early exiting mechanisms to improve efficiency. https://arxiv.org/abs//2404.18911 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcasts: https://podcasts.apple…
 
The paper introduces Consistent Self-Attention and Semantic Motion Predictor to enhance content consistency in diffusion-based generative models for text-to-image and video generation, enabling rich visual story creation. https://arxiv.org/abs//2405.01434 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers App…
 
The paper explores in-context learning (ICL) at extreme scales, showing performance improvements with hundreds or thousands of demonstrations, contrasting with example retrieval and finetuning. https://arxiv.org/abs//2405.00200 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcasts: https://podcast…
 
WILDCHAT is a diverse dataset of 1 million user-ChatGPT conversations, offering rich insights into chatbot interactions and potential toxic use-cases for researchers. https://arxiv.org/abs//2405.01470 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcasts: https://podcasts.apple.com/us/podcast/arxi…
 
NeMo-Aligner is a scalable toolkit for aligning Large Language Models with human values, supporting various alignment paradigms and designed for extensibility. https://arxiv.org/abs//2405.01481 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcasts: https://podcasts.apple.com/us/podcast/arxiv-paper…
 
Prometheus 2 is an open-source LM designed for evaluating responses, outperforming existing models in correlation with human and proprietary LM judgments. https://arxiv.org/abs//2405.01535 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcasts: https://podcasts.apple.com/us/podcast/arxiv-papers/id1…
 
Study investigates dataset contamination in large language models for mathematical reasoning using Grade School Math 1000 benchmark, finding evidence of overfitting and potential memorization of benchmark questions. https://arxiv.org/abs//2405.00332 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Pod…
 
The paper introduces SPPO, a self-play method for language model alignment, achieving state-of-the-art results without external supervision, outperforming DPO and IPO on various benchmarks. https://arxiv.org/abs//2405.00675 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcasts: https://podcasts.ap…
 
Study evaluates model editing techniques on Llama-3, finding sequential editing more effective than batch editing. Suggests combining both methods for optimal performance. https://arxiv.org/abs//2405.00664 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcasts: https://podcasts.apple.com/us/podcast…
 
Iterative preference optimization method enhances reasoning tasks by optimizing preference between generated Chain-of-Thought candidates, leading to improved accuracy on various datasets without additional sourcing. https://arxiv.org/abs//2404.19733 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Pod…
 
https://arxiv.org/abs//2404.19708 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcasts: https://podcasts.apple.com/us/podcast/arxiv-papers/id1692476016 Spotify: https://podcasters.spotify.com/pod/show/arxiv-papers --- Support this podcast: https://podcasters.spotify.com/pod/show/arxiv-papers/supp…
 
Training language models to predict multiple future tokens at once improves sample efficiency, downstream capabilities, and inference speed without increasing training time, especially beneficial for larger models and generative tasks. https://arxiv.org/abs//2404.19737 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@ar…
 
https://arxiv.org/abs//2404.18796 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcasts: https://podcasts.apple.com/us/podcast/arxiv-papers/id1692476016 Spotify: https://podcasters.spotify.com/pod/show/arxiv-papers --- Support this podcast: https://podcasters.spotify.com/pod/show/arxiv-papers/supp…
 
The paper introduces Stylus, a method for efficiently selecting and composing task-specific adapters based on prompts' keywords, achieving high-quality image generation with improved efficiency and performance gains. https://arxiv.org/abs//2404.18928 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Po…
 
Introducing Reinforced Token Optimization (RTO) framework for Reinforcement Learning from Human Feedback (RLHF) using Markov decision process (MDP) to improve token-wise reward learning and policy optimization. https://arxiv.org/abs//2404.18922 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcasts…
 
https://arxiv.org/abs//2404.16873 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcasts: https://podcasts.apple.com/us/podcast/arxiv-papers/id1692476016 Spotify: https://podcasters.spotify.com/pod/show/arxiv-papers --- Support this podcast: https://podcasters.spotify.com/pod/show/arxiv-papers/supp…
 
Study explores if large language models understand their own language. Greedy Coordinate Gradient optimizer crafts prompts to compel coherent responses from nonsensical inputs, revealing efficiency and robustness differences. https://arxiv.org/abs//2404.17120 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers…
 
Transformers can use meaningless filler tokens to solve tasks, but learning to use them is challenging. Additional tokens can provide computational benefits independently of token choice. https://arxiv.org/abs//2404.15758 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcasts: https://podcasts.appl…
 
This paper investigates how transformer-based language models retrieve information from long contexts, identifying special attention heads called retrieval heads as crucial for this task. https://arxiv.org/abs//2404.15574 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcasts: https://podcasts.appl…
 
AUTOCRAWLER combines large language models with crawlers to efficiently handle diverse web environments, improving adaptability and scalability compared to traditional methods. https://arxiv.org/abs//2404.12753 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcasts: https://podcasts.apple.com/us/po…
 
https://arxiv.org/abs//2404.16820 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcasts: https://podcasts.apple.com/us/podcast/arxiv-papers/id1692476016 Spotify: https://podcasters.spotify.com/pod/show/arxiv-papers --- Support this podcast: https://podcasters.spotify.com/pod/show/arxiv-papers/supp…
 
https://arxiv.org/abs//2404.16710 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcasts: https://podcasts.apple.com/us/podcast/arxiv-papers/id1692476016 Spotify: https://podcasters.spotify.com/pod/show/arxiv-papers --- Support this podcast: https://podcasters.spotify.com/pod/show/arxiv-papers/supp…
 
IN2 training addresses lost-in-the-middle challenge in large language models by emphasizing information utilization in long contexts, leading to improved performance on various tasks. https://arxiv.org/abs//2404.16811 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcasts: https://podcasts.apple.co…
 
The paper explores inductive bias in transformer models, showing language modeling training leads to hierarchical generalization, supported by pruning experiments and Bayesian analysis. https://arxiv.org/abs//2404.16367 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcasts: https://podcasts.apple.…
 
AutoGluon-Multimodal (AutoMM) is an open-source AutoML library for multimodal learning, offering easy fine-tuning with three lines of code. It supports various modalities and excels in basic and advanced tasks. https://arxiv.org/abs//2404.16233 YouTube: https://www.youtube.com/@ArxivPapers TikTok: https://www.tiktok.com/@arxiv_papers Apple Podcasts…
 