<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0"><channel><description>AI safety at Anthropic, on leave from a faculty job at NYU.&#xA;Views not employers&#39;.&#xA;I think you should join Giving What We Can.&#xA;cims.nyu.edu/~sbowman</description><link>https://bsky.app/profile/sleepinyourhat.bsky.social</link><title>@sleepinyourhat.bsky.social - Sam Bowman</title><item><link>https://bsky.app/profile/sleepinyourhat.bsky.social/post/3ldlw22eto22r</link><description>New work from my team at Anthropic in collaboration with Redwood Research. I think this is plausibly the most important AGI safety result of the year. Cross-posting the thread below:</description><pubDate>18 Dec 2024 17:46 +0000</pubDate><guid isPermaLink="false">at://did:plc:dsxewietk5tigqvn6daod2l6/app.bsky.feed.post/3ldlw22eto22r</guid></item><item><link>https://bsky.app/profile/sleepinyourhat.bsky.social/post/3lcdxpah6ak2l</link><description>If you&#39;re potentially interested in transitioning into AI safety research, come collaborate with my team at Anthropic!&#xA;&#xA;Funded fellows program for researchers new to the field here: https://alignment.anthropic.com/2024/anthropic-fellows-program/</description><pubDate>02 Dec 2024 20:30 +0000</pubDate><guid isPermaLink="false">at://did:plc:dsxewietk5tigqvn6daod2l6/app.bsky.feed.post/3lcdxpah6ak2l</guid></item></channel></rss>