AI Meetup
Monthly online and in-person meetups with talks, demos, and peer discussion for AI builders.
Overview
AI Meetup runs a recurring calendar of online and in-person gatherings for people who build with AI. A typical event combines a short talk, a live demo, and open discussion; the format deliberately favors working code and practitioner questions over marketing-deck walk-throughs. Sessions are accessible to newcomers but aim to give working builders something they did not already know.
The project is positioned as a venue rather than a media outlet: it hosts, records (when speakers consent), and publishes supporting materials, but it does not brand itself as a research organization or a certifying body. Attendees show up with different goals, from landing a first AI project to stress-testing a production deployment, and the meetup format tries to make those conversations possible in the same room. The tooling around the calendar, RSVPs, and recordings is standard enough to be forgettable, which appears to be the design intent.
Visit the project
aimeetup.co
- Launched: 2023
- Last editorial review: Apr 19, 2026
How they use AI responsibly
The first area is RSVP data and attendee privacy. Any event site ends up holding names, email addresses, and sometimes employer details from the RSVP flow. AI Meetup appears to limit that collection to what is needed to confirm attendance and to send operational updates such as venue changes or calendar links. Attendee lists are not published, and speakers are given the choice of being listed publicly or privately before the event. When an event is recorded, recording consent is requested from the room at the start rather than assumed, and attendees who do not want to appear on camera are given a seat out of frame. These are ordinary privacy practices, but they matter: a meetup that treated its RSVP list as a marketing asset would quickly lose the community it serves, and in a field where many attendees work under NDAs, the signal of restraint is itself part of the value.
The second area is speaker vetting. AI Meetup appears to select speakers on the basis of working practice rather than vendor sponsorship. Demos are expected to work end to end and to use real inputs rather than curated ones; when a demo falls back to a scripted flow, speakers are asked to say so. This distinction matters because AI demos are unusually easy to cherry-pick, and a well-chosen example can create an impression of reliability that does not match production behavior. The format encourages honesty about failure modes rather than polished highlight reels, and speakers who describe what went wrong tend to get more follow-up interest than those who present a flawless run. Editor's note: we think the failure-mode norm is the single most valuable feature of the series, and we wish more conference circuits borrowed it.
The third area is handling the gap between live demo and production expectation. A meetup demo is, by nature, an optimistic artifact: the network is good, the prompt was tuned for the occasion, and the speaker is on stage rather than on call at 3 a.m. AI Meetup appears to frame demos as illustrations of a technique rather than as proof of production readiness. Q&A sessions routinely return to deployment concerns such as cost, latency, failure handling, observability, and on-call burden. That framing helps attendees leave with useful questions rather than with a shopping list of tools to adopt uncritically. When a speaker cannot answer a production question, the moderator appears to record the question for the recap rather than let it disappear.
The fourth area is accessibility. Running both online and in-person strands means the project has to make decisions about captioning, recording quality, and the post-event archive. AI Meetup appears to publish session recordings when speakers consent and to include captions where feasible. Slides and code references are linked in the event write-up so that asynchronous attendees get a meaningful artifact rather than a promotional blurb. This matters for responsible AI coverage specifically, because many attendees cannot take time off for a weekday evening event and rely on the asynchronous archive for continued learning. An archive that only works if you were in the room is an exclusion mechanism, not an accessibility feature.
The fifth area is community norms around contested claims. AI is a field where a single impressive demo can reshape how a roomful of builders talks about a problem for months. AI Meetup appears to encourage speakers to distinguish measured behavior from anecdotal impressions, and to flag when a claim is based on a single run, a benchmark, or a deployed system. The moderator's role includes surfacing those distinctions in Q&A when a speaker has glossed over them. This is a small intervention, but it keeps the room's discourse closer to evidence than to vibes, and it sets an expectation that repeat speakers carry forward into later sessions.
The sixth area is vendor neutrality. Sponsorship is a legitimate way to keep an event series running, but it can quietly shape the speaker roster. AI Meetup appears to keep sponsor content clearly labeled as sponsor content, not woven into talks that are presented as independent. Sponsor slots are announced as sponsor slots at the top of the evening and again in the recap. That separation preserves the trust that attendees place in the speaker-selection process, and it gives sponsors a clean way to participate without having to pretend they are not sponsors.
The seventh area is post-event content safety. Recordings are reviewed before publication for sensitive disclosures — a customer name mentioned in passing, a slide that accidentally reveals an internal endpoint — and speakers get a chance to approve the cut. Editor's note: because this site is part of the James Henderson ecosystem, we are not a neutral observer of these practices; we describe what the project states publicly and encourage readers to attend an event directly to form their own view.
Principles others can apply
Practices this project demonstrates that other teams can borrow.
1. Collect only the RSVP data you actually use. Confirm attendance and send venue updates. Do not treat the attendee list as a marketing asset, and ask consent before recording the room.
2. Select speakers on working practice, not sponsorship. Expect demos to use real inputs and to disclose when a flow is scripted. Sponsor content stays labeled as sponsor content, not woven into talks.
3. Frame demos as illustrations, not production proof. A stage demo has perfect conditions. Keep Q&A focused on cost, latency, failure handling, and observability so attendees leave with real questions.
4. Publish an asynchronous archive that works. Captioned recordings, linked slides, and code references let people who cannot attend still get the talk. That is the difference between a meetup and a party.
5. Moderate claims toward evidence. A single impressive run is not a benchmark. Moderators who ask "was that one run or many?" pull a room back from vibes toward measured behavior.
6. Keep sponsor labels visible. Sponsorship can keep the lights on without distorting the speaker roster, if it is clearly labeled and sits outside the independent-talk track.
Related topics
Other ecosystem projects
Other sibling projects worth reading alongside this one.
- AI Coalition (launched 2024): A community hub for practitioners, educators, and advocates aligned on AI safety and literacy.
- BizBase (launched 2023): A small-business operating system that makes AI-assisted automation approachable for local companies.
- Misa Solutions (launched 2022): A consulting studio applying LLM workflows to concrete business problems and reporting.
Ready to see the project itself? Visit aimeetup.co →