
James Henderson

Principal author and maintainer behind AI Resource Zone and its sibling ecosystem projects.


Overview

jameshenderson.online is the public profile of James Henderson, the principal author and maintainer behind AI Resource Zone and the set of sibling projects described across this ecosystem page. It serves as a founder hub: a place to see what he is currently building, which projects are actively maintained, and how the different properties relate to each other.

The site is intentionally lightweight. It links out to the working projects rather than duplicating their content, and it keeps its own posts focused on methodology, tooling, and ecosystem-level decisions. Readers arriving from one of the sibling sites can use it to orient themselves: which projects share code, which share editorial standards, and which are experimental sandboxes rather than standing products. That orientation matters because the ecosystem sites vary in audience and purpose, and a founder hub is the most useful place to publish the conventions that tie them together.

Visit the project

jameshenderson.online

Launched
2021
Last editorial review
Apr 19, 2026
Visit jameshenderson.online →


How they use AI responsibly

As the founder hub for the ecosystem, jameshenderson.online carries a slightly different responsible-AI burden from the projects it links to. It is not itself a product with customers or a coalition with members; it is an editorial surface that describes how the connected projects are built, reviewed, and maintained. The practices below reflect that framing, and they apply with unusual force because the maintainer and the person writing about the maintainer are the same person.

The first practice is ecosystem disclosure. Every essay, project page, and cross-link on the site makes it plain when a referenced property is part of the same ecosystem rather than a third party. This matters because an uncritical-looking endorsement of a sibling project is not actually uncritical: it is the maintainer endorsing his own work. The site's pattern appears to be to state the relationship up front, then describe the project, then invite readers to evaluate it directly. Editor's note: this is the same disclosure pattern we have used on every ecosystem page on AI Resource Zone, and it exists because readers deserve to know when they are reading a neighbor's review rather than an outsider's. When the same person publishes a review and runs the reviewed site, the only honest move is to say so loudly and let the reader discount accordingly.

The second practice is an open-source posture. Where code is involved, the site appears to prefer linking to the working repository over describing it in prose. That lets readers inspect the current state rather than rely on a written summary that may already be out of date. Licensing, change history, and outstanding issues are visible at the source. Where tooling is used to generate or edit content on the site, the site appears to say so and to keep the underlying changes reviewable. For a founder hub, this matters because the site's credibility is tied to whether its claims about its own practices can be checked, not merely read.

The third practice is a posture on AI-assisted writing. Modern editorial work, including the work on this very ecosystem page, routinely uses AI tools to draft, rephrase, and expand material. The site appears to treat those tools as drafting aids rather than authors of record: a generated paragraph is a candidate for review, not a finished artifact, and the human reviewer is the person whose name sits on the piece. That review includes checking that the piece actually says what the reviewer believes, that no citations are hallucinated, and that any claims about third parties are ones the reviewer is willing to defend. Readers who want to know whether a given essay used AI assistance are told that many did; what the site commits to is that the human is accountable for the final text. In practice this means an AI-assisted essay that the reviewer cannot defend paragraph by paragraph does not ship, even when the deadline is pressing and the draft looks plausible.

The fourth practice is a transparent changelog on the ecosystem itself. Projects come and go, priorities shift, and sometimes a property gets deprecated. The site appears to maintain a visible record of which projects are active, which are paused, and which have been retired, so that a reader arriving on an old link can understand what they are looking at. This is a small discipline, but it is the difference between a living ecosystem and a graveyard that the reader has to discover on their own. When a retired project is referenced from an old post, the post is updated with a short note rather than quietly left to decay.
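The status record described above can be kept machine-readable. A minimal sketch, assuming a simple list-of-dicts manifest: the schema, the `by_status` helper, and the "Example Sandbox" entry are hypothetical illustrations (the site's actual changelog format is not published here); the launch years for AI Resource Zone and AI Coalition come from this page.

```python
# Hypothetical project-status manifest for an ecosystem changelog.
# Schema and the "Example Sandbox" entry are illustrative only.
PROJECTS = [
    {"name": "AI Resource Zone", "status": "active", "launched": 2021},
    {"name": "AI Coalition", "status": "active", "launched": 2024},
    {"name": "Example Sandbox", "status": "retired", "launched": 2022},
]

VALID_STATUSES = {"active", "paused", "retired"}


def by_status(projects, status):
    """Return the names of projects with the given lifecycle status."""
    if status not in VALID_STATUSES:
        raise ValueError(f"unknown status: {status!r}")
    return [p["name"] for p in projects if p["status"] == status]


print(by_status(PROJECTS, "active"))
```

A reader landing on an old link can then be shown the project's current status instead of discovering a dead property on their own.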

The fifth practice is caution with personal content. A founder site is also a personal site. It inevitably contains photos, anecdotes, and sometimes views that are not shared by every sibling project. The site appears to keep personal material clearly separated from ecosystem coverage, so that a reader interested in one axis does not end up making assumptions about the other. Opinions are labeled as opinions; project descriptions are labeled as project descriptions. A reader who only cares about the technical work can read the technical section without wading through a personal essay, and a reader who only cares about the personal voice is not subjected to a product overview.

The sixth practice is conservative data handling. A founder site has no business reason to build a detailed profile of its visitors, and the site appears to reflect that by keeping analytics minimal and third-party scripts few. The less data it collects, the less data there is to mishandle later, and the less compliance surface it has to worry about. For a founder hub where many visitors arrive via a sibling project, minimal analytics also prevents the accidental cross-project profiling that a careless setup would otherwise produce.

The seventh practice is honesty about mistakes. When something ships with an error — a wrong attribution, a stale link, a claim that turned out to be overstated — the site appears to correct in place and to note the correction rather than silently rewrite history. A site that never publishes corrections is either lucky or hiding them. Editor's note: because this site belongs to the same person who publishes AI Resource Zone, we are the least neutral possible voice on this page; we describe public practices and invite readers to verify them by visiting the site directly before drawing any conclusions about the ecosystem as a whole.

Principles others can apply

Practices this project demonstrates that other teams can borrow.

  1. Disclose your ecosystem in every review

    When you write about a sibling project, state the relationship up front. A neighbor's review is not an outsider's review, and readers should know that.

  2. Link to live code instead of describing it

    Repositories have the current state; prose descriptions drift. Licensing, issues, and change history belong where they can be inspected.

  3. Treat AI writing tools as drafting aids

    Generated paragraphs are candidates for review. The human whose name is on the piece is accountable for every claim it makes.

  4. Publish a visible ecosystem changelog

    State which projects are active, paused, or retired. A reader on an old link deserves to know what they are looking at.

  5. Separate personal content from project coverage

    Founder sites mix personal and professional material. Label opinions as opinions so readers do not extrapolate across axes.

  6. Collect as little visitor data as the site needs

    There is no business reason to profile readers of a founder page. Minimal analytics and few third-party scripts reduce later risk by design.

Related topics

Other sibling projects worth reading alongside this one.

  • AI Coalition (launched 2024)

    A community hub for practitioners, educators, and advocates aligned on AI safety and literacy.

  • AI Meetup (launched 2023)

    Monthly online and in-person meetups with talks, demos, and peer discussion for AI builders.

  • BizBase (launched 2023)

    A small-business operating system that makes AI-assisted automation approachable for local companies.

Ready to see the project itself? Visit jameshenderson.online →