This is an alternate universe story, where Petunia married a scientist. Harry enters the wizarding world armed with Enlightenment ideals and the experimental spirit.
Audio narrations of LessWrong posts. Includes all curated posts and all posts with 125+ karma. If you'd like more, subscribe to the “Lesswrong (30+ karma)” feed.
AI safety, philosophy and other things.
Join Steven and Brian as we dive into the world of Harry Potter and the Methods of Rationality! Steven will play the role of the tour guide while doing his best to not spoil any of the surprises and Brian will play the seasoned adventurer who is new to this particular work.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
“Universal Basic Income and Poverty” by Eliezer Yudkowsky (15:54)
(Crossposted from Twitter) I'm skeptical that Universal Basic Income can get rid of grinding poverty, since somehow humanity's 100-fold productivity increase (since the days of agriculture) didn't eliminate poverty. Some of my friends reply, "What do you mean, poverty is still around? 'Poor' people today, in Western countries, have a lot to legitim…
“‘AI achieves silver-medal standard solving International Mathematical Olympiad problems’” by gjm (4:00)
This is a link post. Google DeepMind reports on a system for solving mathematical problems that allegedly is able to give complete solutions to four of the six problems on the 2024 IMO, putting it near the top of the silver-medal category. Well, actually, two systems for solving mathematical problems: AlphaProof, which is more general-purpose, and A…
“Decomposing Agency — capabilities without desires” by owencb, Raymond D (24:12)
This is a link post.What is an agent? It's a slippery concept with no commonly accepted formal definition, but informally the concept seems to be useful. One angle on it is Dennett's Intentional Stance: we think of an entity as being an agent if we can more easily predict it by treating it as having some beliefs and desires which guide its actions.…
“Optimistic Assumptions, Longterm Planning, and ‘Cope’” by Raemon (12:49)
Eliezer Yudkowsky periodically complains about people coming up with questionable plans with questionable assumptions to deal with AI, and then either: Saying "well, if this assumption doesn't hold, we're doomed, so we might as well assume it's true." Worse: coming up with cope-y reasons to assume that the assumption isn't even questionable at all.…
Fallout 09 – Uranium Fever Aftermath (1:22:01)
Buckle up as Eneasz and Steven discuss some of the feedback about this short series and then have an untamed discussion about how capitalism/capitalists are portrayed in popular culture, full of all kinds of digressions and random thoughts. We hope you enjoy it! :) By The Methods of Rationality Podcast
“Superbabies: Putting The Pieces Together” by sarahconstantin (19:09)
This post was inspired by some talks at the recent LessOnline conference including one by LessWrong user “Gene Smith”. Let's say you want to have a “designer baby”. Genetically extraordinary in some way — super athletic, super beautiful, whatever. 6’5”, blue eyes, with a trust fund. Ethics aside[1], what would be necessary to actually do this? Fund…
“Poker is a bad game for teaching epistemics. Figgie is a better one.” by rossry (18:03)
This is a link post. Editor's note: Somewhat after I posted this on my own blog, Max Chiswick cornered me at LessOnline / Manifest and gave me a whole new perspective on this topic. I now believe that there is a way to use poker to sharpen epistemics that works dramatically better than anything I had been considering. I hope to write it up—together …
“Reliable Sources: The Story of David Gerard” by TracingWoodgrains (1:22:25)
This is a linkpost for https://www.tracingwoodgrains.com/p/reliable-sources-how-wikipedia-admin, posted in full here given its relevance to this community. Gerard has been one of the longest-standing malicious critics of the rationalist and EA communities and has done remarkable amounts of work to shape their public images behind the scenes. Note: …
xlr8harder writes: In general I don’t think an uploaded mind is you, but rather a copy. But one thought experiment makes me question this. A Ship of Theseus concept where individual neurons are replaced one at a time with a nanotechnological functional equivalent. Are you still you? Presumably the question xlr8harder cares about here isn't semantic…
“80,000 hours should remove OpenAI from the Job Board (and similar orgs should do similarly)” by Raemon (12:42)
I haven't shared this post with other relevant parties – my experience has been that private discussion of this sort of thing is more paralyzing than helpful. I might change my mind in the resulting discussion, but I prefer that discussion to be public. I think 80,000 hours should remove OpenAI from its job board, and similar EA job placement serv…
[Linkpost] “introduction to cancer vaccines” by bhauth (12:01)
This is a linkpost for https://www.bhauth.com/blog/biology/cancer%20vaccines.html. Cancer neoantigens: for cells to become cancerous, they must have mutations that cause uncontrolled replication and mutations that prevent that uncontrolled replication from causing apoptosis. Because cancer requires several mutations, it often begins with damage to mu…
Imagine an alternate version of the Effective Altruism movement, whose early influences came from socialist intellectual communities such as the Fabian Society, as opposed to the rationalist diaspora. Let's name this hypothetical movement the Effective Samaritans. Like the EA movement of today, they believe in doing as much good as possible, what…
“My experience using financial commitments to overcome akrasia” by William Howard (29:40)
About a year ago I decided to try using one of those apps where you tie your goals to some kind of financial penalty. The specific one I tried is Forfeit, which I liked the look of because it's relatively simple: you set single tasks which you have to verify you have completed with a photo. I’m generally pretty sceptical of productivity systems, to…
“The Incredible Fentanyl-Detecting Machine” by sarahconstantin (14:21)
An NII machine in Nogales, AZ. (Image source) There's bound to be a lot of discussion of the Biden-Trump presidential debates last night, but I want to skip all the political prognostication and talk about the real issue: fentanyl-detecting machines. Joe Biden says: And I wanted to make sure we use the machinery that can detect fentanyl, these big m…
“AI catastrophes and rogue deployments” by Buck (14:46)
Crossposted from the AI Alignment Forum. May contain more technical jargon than usual.[Thanks to Aryan Bhatt, Ansh Radhakrishnan, Adam Kaufman, Vivek Hebbar, Hanna Gabor, Justis Mills, Aaron Scher, Max Nadeau, Ryan Greenblatt, Peter Barnett, Fabien Roger, and various people at a presentation of these arguments for comments. These ideas aren’t very …
“Loving a world you don’t trust” by Joe Carlsmith (1:03:54)
(Cross-posted from my website. Audio version here, or search for "Joe Carlsmith Audio" on your podcast app.) This is the final essay in a series that I'm calling "Otherness and control in the age of AGI." I'm hoping that the individual essays can be read fairly well on their own, but see here for a brief summary of the series as a whole. There's also a…
Steven and Eneasz wrap up their discussion on the Fallout TV show and plan out future content. By The Methods of Rationality Podcast
“Formal verification, heuristic explanations and surprise accounting” by paulfchristiano (17:07)
ARC's current research focus can be thought of as trying to combine mechanistic interpretability and formal verification. If we had a deep understanding of what was going on inside a neural network, we would hope to be able to use that understanding to verify that the network was not going to behave dangerously in unforeseen situations. ARC is atte…
“LLM Generality is a Timeline Crux” by eggsyntax (13:08)
Summary: LLMs may be fundamentally incapable of fully general reasoning, and if so, short timelines are less plausible. Longer summary: There is ML research suggesting that LLMs fail badly on attempts at general reasoning, such as planning problems, scheduling, and attempts to solve novel visual puzzles. This post provides a brief introduct…
“SAE feature geometry is outside the superposition hypothesis” by jake_mendel (18:29)
Summary: Superposition-based interpretations of neural network activation spaces are incomplete. The specific locations of feature vectors contain crucial structural information beyond superposition, as seen in circular arrangements of day-of-the-week features and in the rich structures. We don’t currently have good concepts for talking about this …
Enjoy the 7th episode of Uranium Fever! Everyone’s always trying to get some head. In this case it’s more literal. By The Methods of Rationality Podcast
“Connecting the Dots: LLMs can Infer & Verbalize Latent Structure from Training Data” by Johannes Treutlein, Owain_Evans (17:56)
Crossposted from the AI Alignment Forum. May contain more technical jargon than usual. This is a link post. TL;DR: We published a new paper on out-of-context reasoning in LLMs. We show that LLMs can infer latent information from training data and use this information for downstream tasks, without any in-context learning or CoT. For instance, we finet…
This is a link post. I have canceled my OpenAI subscription in protest over OpenAI's lack of ethics. In particular, I object to: threats to confiscate departing employees' equity unless those employees signed a life-long non-disparagement contract; Sam Altman's pattern of lying about important topics. I'm trying to hold AI companies to higher standards …
#11 Katja Grace on the AI Impacts survey, the case for slowing down AI & arguments for and against x-risk (1:16:34)
Katja Grace is the co-founder of AI Impacts, a non-profit focused on answering key questions about the future trajectory of AI development, which is best known for conducting the world's largest survey of machine learning researchers. We talked about the most interesting results from the survey, Katja's views on whether we should slow down AI progr…
“Sycophancy to subterfuge: Investigating reward tampering in large language models” by evhub, Carson Denison (15:37)
Crossposted from the AI Alignment Forum. May contain more technical jargon than usual. This is a link post. New Anthropic model organisms research paper led by Carson Denison from the Alignment Stress-Testing Team demonstrating that large language models can generalize zero-shot from simple reward-hacks (sycophancy) to more complex reward tampering (…
“I would have shit in that alley, too” by Declan Molony (7:28)
After living in a suburb for most of my life, when I moved to a major U.S. city the first thing I noticed was the feces. At first I assumed it was dog poop, but my naivety didn’t last long. One day I saw a homeless man waddling towards me at a fast speed while holding his ass cheeks. He turned into an alley and took a shit. As I passed him, there w…
“Getting 50% (SoTA) on ARC-AGI with GPT-4o” by ryan_greenblatt (35:25)
I recently got to 50%[1] accuracy on the public test set for ARC-AGI by having GPT-4o generate a huge number of Python implementations of the transformation rule (around 8,000 per problem) and then selecting among these implementations based on correctness of the Python programs on the examples…
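The selection step the episode describes can be sketched as a simple generate-and-filter loop. This is a minimal illustration, not the author's actual pipeline; `solve` and `generate_candidates` are hypothetical names, and the real system samples thousands of GPT-4o-written programs per problem.

```python
# Hypothetical sketch of generate-and-select for ARC-style tasks:
# sample many candidate transformation programs, keep one that
# reproduces every training example, and apply it to the test input.

def solve(train_pairs, test_input, generate_candidates):
    """train_pairs: list of (input_grid, output_grid) examples.
    generate_candidates: callable returning an iterable of candidate
    programs, each a callable grid -> grid (e.g. LLM-written Python)."""
    for program in generate_candidates():
        try:
            # A candidate survives only if it is correct on every example.
            if all(program(x) == y for x, y in train_pairs):
                return program(test_input)
        except Exception:
            continue  # buggy or crashing candidates are simply discarded
    return None  # no candidate matched all training examples
```

In the post's setting the candidates come from an LLM; here any iterable of callables works, so the loop can be exercised with hand-written toy transforms.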