<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0"><channel><description>Philosophy undergrad and Board Member, UPF-Centre for Animal Ethics. I conduct independent research on Animal Ethics, Well-being, Consciousness, AI welfare and AI safety&#xA;See publications at: https://sites.google.com/view/adriamoret</description><link>https://bsky.app/profile/adriamoret.bsky.social</link><title>@adriamoret.bsky.social - Adrià Moret</title><item><link>https://bsky.app/profile/adriamoret.bsky.social/post/3mji7lsgzfc2l</link><description>Going to NYU for my philosophy phd! starting in the fall</description><pubDate>14 Apr 2026 20:24 +0000</pubDate><guid isPermaLink="false">at://did:plc:ifwi324yd3nvhjoqpobcnwjc/app.bsky.feed.post/3mji7lsgzfc2l</guid></item><item><link>https://bsky.app/profile/adriamoret.bsky.social/post/3meqdfpuwo22v</link><description>Our response to Coglan &amp; Parker&#39;s commentary on our 2025 paper with @petersinger.info, Yip Fai Tse, &amp; @ziesche.bsky.social is out in P&amp;T! &#xA;&#xA;We argue adequate consideration of animals&#39; interests requires advancing beyond basic AI-animal alignment as it becomes feasible and desirable.&#xA;&#xA;Link below.</description><pubDate>13 Feb 2026 10:48 +0000</pubDate><guid isPermaLink="false">at://did:plc:ifwi324yd3nvhjoqpobcnwjc/app.bsky.feed.post/3meqdfpuwo22v</guid></item><item><link>https://bsky.app/profile/adriamoret.bsky.social/post/3mcxz3xitjc2a</link><description>Great to see Anthropic&#39;s constitution include concern for the welfare of animals, Claude, and other AI systems.&#xA;&#xA;The document is also a great example of the important role philosophy should play in AI alignment.&#xA;&#xA;Hopefully, other leading AI companies follow suit.&#xA;&#xA;www.anthropic.com/constitution</description><pubDate>22 Jan 2026 01:14 +0000</pubDate><guid isPermaLink="false">at://did:plc:ifwi324yd3nvhjoqpobcnwjc/app.bsky.feed.post/3mcxz3xitjc2a</guid></item><item><link>https://bsky.app/profile/adriamoret.bsky.social/post/3mctzlonyck27</link><description>Check out this great paper on what political liberalism should look like when we take seriously the interests and claims of all sentient beings!&#xA;&#xA;[contains quote post or other embedded content]</description><pubDate>20 Jan 2026 11:12 +0000</pubDate><guid isPermaLink="false">at://did:plc:ifwi324yd3nvhjoqpobcnwjc/app.bsky.feed.post/3mctzlonyck27</guid></item><item><link>https://bsky.app/profile/adriamoret.bsky.social/post/3m62kfdxhmk2g</link><description>Great opportunity to publish in the Journal of Ethics special issue on AI &amp; animals, co-edited by Catia Faria and Yip Fai Tse!&#xA;&#xA;Topics: AI ethics &amp; nonhumans, AI&#39;s impact on animals, future perspectives on AI &amp; animals&#xA;&#xA;Send a 500-word abstract by Dec 22!&#xA;https://link.springer.com/collections/ddhiddbieg</description><pubDate>20 Nov 2025 10:23 +0000</pubDate><guid isPermaLink="false">at://did:plc:ifwi324yd3nvhjoqpobcnwjc/app.bsky.feed.post/3m62kfdxhmk2g</guid></item><item><link>https://bsky.app/profile/adriamoret.bsky.social/post/3m35sykfhtc24</link><description>Our paper &#34;AI Alignment: The Case for Including Animals&#34; with &#xA;@petersinger.info, Yip Fai Tse, and @ziesche.bsky.social , is out open access at Philosophy &amp; Technology!&#xA;&#xA;t.co/qBDOZU6ZZy&#xA;https://t.co/qBDOZU6ZZy</description><pubDate>14 Oct 2025 13:19 +0000</pubDate><guid 
isPermaLink="false">at://did:plc:ifwi324yd3nvhjoqpobcnwjc/app.bsky.feed.post/3m35sykfhtc24</guid></item><item><link>https://bsky.app/profile/adriamoret.bsky.social/post/3lyl6pumqzc2b</link><description>1/ Our paper &#34;AI Alignment: The Case for Including Animals&#34; with @petersinger.info, Yip Fai Tse and @ziesche.bsky.social,&#xA;has been accepted for publication at Philosophy &amp; Technology! &#xA;&#xA;We argue that frontier AIs should be aligned with basic concern for animal welfare and propose how🧵</description><pubDate>11 Sep 2025 16:38 +0000</pubDate><guid isPermaLink="false">at://did:plc:ifwi324yd3nvhjoqpobcnwjc/app.bsky.feed.post/3lyl6pumqzc2b</guid></item><item><link>https://bsky.app/profile/adriamoret.bsky.social/post/3lyl6pumqzc2b</link><description>1/ Our paper &#34;AI Alignment: The Case for Including Animals&#34; with @petersinger.info, Yip Fai Tse and @ziesche.bsky.social,&#xA;has been accepted for publication at Philosophy &amp; Technology! &#xA;&#xA;We argue that frontier AIs should be aligned with basic concern for animal welfare and propose how🧵</description><pubDate>11 Sep 2025 16:38 +0000</pubDate><guid isPermaLink="false">at://did:plc:ifwi324yd3nvhjoqpobcnwjc/app.bsky.feed.post/3lyl6pumqzc2b</guid></item><item><link>https://bsky.app/profile/adriamoret.bsky.social/post/3ly4k7pr25k2m</link><description>Honored to be so well accompanied! Join us just before EAG NY!&#xA;&#xA;[contains quote post or other embedded content]</description><pubDate>05 Sep 2025 20:54 +0000</pubDate><guid isPermaLink="false">at://did:plc:ifwi324yd3nvhjoqpobcnwjc/app.bsky.feed.post/3ly4k7pr25k2m</guid></item><item><link>https://bsky.app/profile/adriamoret.bsky.social/post/3lumrcarfo22f</link><description>Feel free to share this short guide that others and I developed for anyone who has interacted with an AI that seemed conscious — or simply wondered if they could be. whenaiseemsconscious.org</description><pubDate>23 Jul 2025 10:03 +0000</pubDate><guid isPermaLink="false">at://did:plc:ifwi324yd3nvhjoqpobcnwjc/app.bsky.feed.post/3lumrcarfo22f</guid></item><item><link>https://bsky.app/profile/adriamoret.bsky.social/post/3lumrb4cazs2f</link><description>🎥 Excited to share that the recording of my presentation &#34;AI Welfare Risks&#34; from the AIADM London 2025 Conference is now live!&#xA;&#xA;I make the case for near-term AI welfare and propose 4 concrete policies for leading AI companies to reduce welfare risks!&#xA;&#xA;https://www.youtube.com/watch?v=R6w4s3IKmDM</description><pubDate>23 Jul 2025 10:02 +0000</pubDate><guid isPermaLink="false">at://did:plc:ifwi324yd3nvhjoqpobcnwjc/app.bsky.feed.post/3lumrb4cazs2f</guid></item><item><link>https://bsky.app/profile/adriamoret.bsky.social/post/3ltw6pielzk2q</link><description>Participaré en las jornadas &#34;La IA y las fronteras de la consideración moral&#34; | 26-27 sept | USC, Santiago&#xA;&#xA;¿Pueden los sistemas de IA ser moralmente considerables? 🤖 ¿Cómo deben impactar a los animales? 🐦&#xA;&#xA;¡Se aceptan propuestas de presentaciones!&#xA;&#xA;https://sites.google.com/view/iayconsideracionmoral/cas</description><pubDate>14 Jul 2025 10:31 +0000</pubDate><guid isPermaLink="false">at://did:plc:ifwi324yd3nvhjoqpobcnwjc/app.bsky.feed.post/3ltw6pielzk2q</guid></item><item><link>https://bsky.app/profile/adriamoret.bsky.social/post/3lr5zf7z4bs23</link><description>My paper &#34;AI Welfare Risks&#34; is now available open access at Philosophical Studies! 
&#xA;&#xA;I argue that AI development and safety efforts pose near-term risks of harming AI systems themselves, and propose tentative policies leading AI labs could implement to reduce them&#xA;&#xA;https://link.springer.com/article/10.1007/s11098-025-02343-7#:~:text=Drawing%20from%20leading%20philosophical%20theories,to%20train%20and%20align%20them.</description><pubDate>09 Jun 2025 09:02 +0000</pubDate><guid isPermaLink="false">at://did:plc:ifwi324yd3nvhjoqpobcnwjc/app.bsky.feed.post/3lr5zf7z4bs23</guid></item><item><link>https://bsky.app/profile/adriamoret.bsky.social/post/3lqxp4xulbk26</link><description>what is art for?&#xA;my girlfriend wrote about it in her new Substack! &#xA;&#xA;https://open.substack.com/pub/nadiaendalu/p/the-useful-uselessness-of-art?r=448jrw&amp;utm_campaign=post&amp;utm_medium=web&amp;showWelcomeOnShare=false</description><pubDate>06 Jun 2025 20:42 +0000</pubDate><guid isPermaLink="false">at://did:plc:ifwi324yd3nvhjoqpobcnwjc/app.bsky.feed.post/3lqxp4xulbk26</guid></item><item><link>https://bsky.app/profile/adriamoret.bsky.social/post/3lo57nzzlak2j</link><description>What is (potentially) the best neglected and low-cost intervention to reduce future risks of harm to animals and digital minds?  &#xA;&#xA;Join me on this Rethink Priorities webinar (May 28th) to find out! 
&#xA;&#xA;See lu.ma/gy3l2qek for further details.&#xA;https://lu.ma/gy3l2qek</description><pubDate>01 May 2025 21:06 +0000</pubDate><guid isPermaLink="false">at://did:plc:ifwi324yd3nvhjoqpobcnwjc/app.bsky.feed.post/3lo57nzzlak2j</guid></item><item><link>https://bsky.app/profile/adriamoret.bsky.social/post/3lobpqyznsk24</link><description>I have a new website (https://sites.google.com/view/adria-moret), where I&#39;ll compile my papers (published and under review) and my upcoming presentations.</description><pubDate>03 May 2025 16:05 +0000</pubDate><guid isPermaLink="false">at://did:plc:ifwi324yd3nvhjoqpobcnwjc/app.bsky.feed.post/3lobpqyznsk24</guid></item><item><link>https://bsky.app/profile/adriamoret.bsky.social/post/3lo3voquy522e</link><description>My paper &#34;AI Welfare Risks&#34; has been accepted for publication at Philosophical Studies!&#xA;&#xA;I argue that near-future AIs may have welfare, that RL and behaviour restrictions could harm them, that this poses a tension with AI safety, and propose how AI labs could reduce such welfare risks. 1/</description><pubDate>01 May 2025 08:35 +0000</pubDate><guid isPermaLink="false">at://did:plc:ifwi324yd3nvhjoqpobcnwjc/app.bsky.feed.post/3lo3voquy522e</guid></item><item><link>https://bsky.app/profile/adriamoret.bsky.social/post/3lkqk2ziudk2j</link><description>The presentation of my paper on the causes of AI Welfare Risks (and how to mitigate them) at @upf.edu&#39;s Law &amp; Philosophy Colloquium has just been uploaded! 👇&#xA;&#xA;https://www.youtube.com/watch?v=aArsvFlu4D8</description><pubDate>19 Mar 2025 15:52 +0000</pubDate><guid isPermaLink="false">at://did:plc:ifwi324yd3nvhjoqpobcnwjc/app.bsky.feed.post/3lkqk2ziudk2j</guid></item></channel></rss>