I'm a member of technical staff at Anthropic, an AI safety and research company in San Francisco, building reliable, interpretable, and steerable AI systems.
At the Autonomy, Agency, and Assurance Institute, I helped to reimagine cybernetics for the 21st century. How can we take cyber-physical systems safely to scale when they embed sensing, learning, and communications in everything from cars to hospitals to elections to intimate relationships?
I also volunteer for community organisations. Before moving to SF, I served on the board of the ACT Conflict Resolution Service and the executive of Effective Altruism ANU; and I continue to organise less formal events like mentored sprints at many conferences.
I regularly contribute to open source software, which empowers users and invites them to become creators, and have been recognised as a PSF Fellow. Lately I've focussed on making software testing easier and much more effective. I'm the lead developer of Hypothesis (and several extensions; here's a live demo!), and I co-maintain Pytest. I started hypofuzz.com to support open source development and sell next-generation testing and verification tools.
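Property-based testing, the approach Hypothesis automates, checks an invariant against many generated inputs rather than a few hand-picked examples. A minimal hand-rolled sketch of the idea (using only the standard library, not Hypothesis's actual API; the encoder and its round-trip property are illustrative):

```python
import random

def run_length_encode(s):
    """Encode a string as a list of (character, count) pairs."""
    out = []
    for ch in s:
        if out and out[-1][0] == ch:
            out[-1] = (ch, out[-1][1] + 1)
        else:
            out.append((ch, 1))
    return out

def run_length_decode(pairs):
    """Invert run_length_encode."""
    return "".join(ch * n for ch, n in pairs)

def check_roundtrip(trials=200):
    # Property: decoding any encoding recovers the original string.
    # A library like Hypothesis also shrinks failing inputs to a
    # minimal counterexample; this sketch only generates and checks.
    rng = random.Random(0)
    for _ in range(trials):
        s = "".join(rng.choice("ab") for _ in range(rng.randrange(20)))
        assert run_length_decode(run_length_encode(s)) == s

check_roundtrip()
```

The point of the technique is that the test states *what* must hold for all inputs, and the machine searches for a counterexample.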
I often speak at conferences, and most of my public presentations are listed here. Highlights include exploring Anthropic's alignment and interpretability research in an invited talk (video) at StrangeLoop 2023, talks about property-based testing and structured concurrency at PyCon US, an expert deep-dive at PyCon Australia (transcript), and winning an AMOS presentation prize for my Honours research characterising Indigenous seasons (poster, thesis).
Open source on GitHub, publications via Google Scholar. I don't have any social media accounts, but you are welcome to send me an email instead!