AI Safety Events Tracker November 2023
Welcome to the AI Safety Events Tracker newsletter. It lists AI safety events taking place over the next six months. Consult aisafety.events for more information, or to add or update events.
Developmental Interpretability Conference 2023, November 5-12, Oxford, England.
A conference on developmental interpretability and singular learning theory.
AI Safety Hackathon - Entrepreneur First x Apart x TU Delft, November 11-12, TU Delft, Netherlands.
AI Safety Hackathon in the Netherlands, featuring experts from DeepMind and Entrepreneur First, for a chance to innovate in AI/ML and win mentorship opportunities at Apart Lab.
EAGx Virtual, November 17-19, Online.
Bringing together individuals who share the aim of effective altruism: to identify the world’s most pressing problems and the best solutions to them, and put them into practice.
Social Choice for AI Ethics and Safety, December 6-8, Bay Area, USA.
This invitation-only workshop will identify ways in which social choice theory can help make AI systems more ethical, safe, and aligned. It aims to formulate a research agenda and set up individual projects.
AI Alignment Workshop, December 10-11, New Orleans, USA.
The workshop will facilitate discussion and debate amongst ML researchers on topics related to AI alignment so that we can better understand potential risks from AGI and strategies for solving them.
ML Safety Social at NeurIPS, December 13, New Orleans, USA.
A social event on machine learning safety during the NeurIPS week.
AI Meets Moral Philosophy And Moral Psychology Workshop, December 15, New Orleans, USA.
NeurIPS workshop integrating moral philosophy and psychology with AI to advance computational ethics through expert talks, poster sessions, and diverse scholarly interactions.
Multi-Agent Security Workshop, December 16, New Orleans, USA.
NeurIPS workshop creating an AI security blueprint and providing a platform to share multi-agent security, AI safety, and policy research.
Socially Responsible Language Modelling Research Workshop, December 16, New Orleans, USA.
NeurIPS workshop focusing on ethical and responsible language modelling research, addressing challenges in security, bias, safety, and societal impacts, while promoting interdisciplinary collaboration.
Technical AI Safety Conference 2024, April 5, Tokyo, Japan.
Conference bringing together specialists in the field of AI and technical safety to share their research and benefit from each other's expertise.
Open calls for participation
AI Safety Camp Virtual, a 3-month long online research program from January to April 2024. It is looking for Research Leads to head projects.
MATS Winter 2023-24, a scientific and educational seminar and independent research program, intended to serve as an introduction to the field of AI alignment. January 8 to March 15 in Berkeley, California. Deadline is November 17.
AI Safety Fundamentals Germany, an 8-week multi-track program to learn more about AI safety. Deadline is November 19.
Constellation Visiting Researcher Program, an opportunity for visitors motivated by reducing catastrophic risks from AI to connect with leading AI safety researchers, exchange ideas, and find collaborators while continuing their research from Constellation offices in Berkeley, California. Deadline is November 17.
Astra Fellowship, pairing fellows with experienced advisors to collaborate on a two- or three-month AI safety research project, from January 4 to March 15, 2024. Deadline is November 17.
S-risk Intro Fellowship, a six-week program focused on s-risks, running in January and February 2024. Application deadline is December 7.
The AI Safety Events Tracker is curated collaboratively by the community. We invite you to improve it by adding or updating events with the website forms.
Thanks for your attention! For any questions or feedback, reach out at firstname.lastname@example.org.