Editorial Policy

How AI Resource Zone writes articles, moderates statements, labels AI-assisted content, and corrects mistakes. Public-information work needs a public policy.

Last updated:

Why this matters

AI Resource Zone presents itself as a public-information resource. That claim only holds if there is a clear policy behind it. A reader who arrives at a page about a government guidance document or an AI safety topic should be able to tell, quickly, whether what they are reading is reported fact, editorial opinion, or a machine-generated summary. They should also be able to tell who wrote it and when it was last reviewed.

This page is the policy. It covers how articles are written, how statements are moderated, how AI-assisted content is labeled, how corrections are handled, and what we will not publish. It is updated when our process changes, and the previous wording is preserved in the version history of the site so anyone can see what changed and when.

How articles are written

Research comes first. Before an article is drafted, the editor identifies the primary sources — agency publications, academic papers, official press releases, original consumer-protection guidance — and notes the publication date of each. Articles that depend on a moving target (for example, a piece of legislation in progress) carry a last-reviewed date alongside the published date.

AI-assisted drafting is allowed and disclosed. An editor may ask a language model to draft a section, summarize a long source, or suggest a structure. The editor is then responsible for checking every claim against the original source and rewriting where the model has filled gaps with plausible-sounding but unverified content. No statistic, date, framework name, or quotation is permitted unless it can be traced back to a real source. Fabricated citations are a publication-blocking issue.

Opinion paragraphs are clearly framed. When a section reflects the editor's view rather than reported material, it is prefixed with "Editor's note:" and the framing makes the speaker explicit. Where an article describes a contested area we summarize the strongest version of each side rather than collapsing to a single conclusion. The reader is owed the disagreement, not a tidied-up consensus that hides it.

How statements are moderated

Reader-submitted statements enter a pending queue. A moderator reviews each entry and either approves it, rejects it with a recorded reason, or flags it for a second opinion. The moderator audit trail is stored on a StatementReview record so we can see who reviewed what and when.

Approved statements are labeled "Community opinion, not fact" everywhere they appear: on the statements index, on a topic page, in a country view, in the metadata served to crawlers. The opinion label is structural — it is a property of the statements model, not a footer disclaimer.

Rejection reasons are recorded against the statement itself, not in a moderator's private notes. A submitter who asks why their statement did not appear can be told the recorded reason. We do not silently bin contributions, and we do not publish a contribution that has been rejected on the assumption that the moderator would change their mind.
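In concrete terms, the audit trail described above might look like the following minimal Python sketch. The field names and decision values are illustrative assumptions, not the site's actual schema; the point is that the reviewer, the decision, the reason, and the timestamp live on one record.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum
from typing import Optional

class Decision(Enum):
    PENDING = "pending"
    APPROVED = "approved"
    REJECTED = "rejected"
    FLAGGED = "flagged"  # escalated for a second opinion

@dataclass
class StatementReview:
    """One audit-trail entry: who reviewed a statement, when, and why."""
    statement_id: int
    moderator: str
    decision: Decision
    reason: Optional[str] = None  # required for rejections
    reviewed_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

def reject(statement_id: int, moderator: str, reason: str) -> StatementReview:
    # A rejection without a recorded reason is not allowed: the submitter
    # must be able to ask why their statement did not appear.
    if not reason.strip():
        raise ValueError("rejections must record a reason")
    return StatementReview(statement_id, moderator, Decision.REJECTED, reason)
```

Because the reason is part of the record rather than a side channel, replying to a submitter is a lookup, not an act of memory.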

How AI content is labeled

The Chat Center carries an inline disclaimer on every session reminding the reader that responses are generated by a small open-source model and are not legal, medical, or financial advice. Replies that fall outside the persona's defined scope are blocked rather than answered with a guess. Each persona has an explicit list of topics it will and will not handle, and each chat reply is recorded so the moderator can audit how the model behaved over time.
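The scope check above can be sketched as follows. The names (`Persona`, `allowed_topics`, the refusal text) are hypothetical illustrations of the policy, not the Chat Center's actual implementation.

```python
from dataclasses import dataclass, field

DISCLAIMER = (
    "Responses are generated by a small open-source model and are not "
    "legal, medical, or financial advice."
)

@dataclass
class Persona:
    """A chat persona with an explicit allow-list of topics."""
    name: str
    allowed_topics: set = field(default_factory=set)

def reply(persona: Persona, topic: str, draft: str) -> str:
    # Out-of-scope questions are blocked, never answered with a guess.
    if topic not in persona.allowed_topics:
        return "This topic is outside this persona's scope."
    # Every in-scope reply carries the inline disclaimer.
    return f"{DISCLAIMER}\n\n{draft}"
```

An allow-list inverts the usual moderation default: anything not explicitly in scope is refused, which is what makes "blocked rather than answered with a guess" enforceable.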

Elsewhere on the site, any sentence that originated from a language model and was not rewritten by a human is labeled as a generated summary. Machine-translated text is flagged inline with a note identifying the original language and the translator (model name and version) so a reader can take the translation with appropriate uncertainty. A reader should never have to guess whether a sentence is the editor speaking or a model speaking.
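The labeling rule above amounts to a small provenance record per passage. This is a sketch under assumed names (`Provenance`, `origin`, `translator` are illustrative, not the site's schema):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Provenance:
    """Per-passage label: who produced the text the reader is seeing."""
    origin: str                            # "human", "generated", or "translated"
    source_language: Optional[str] = None  # e.g. "de" for translated text
    translator: Optional[str] = None       # model name and version

def label(p: Provenance) -> str:
    if p.origin == "translated":
        return f"Machine-translated from {p.source_language} by {p.translator}"
    if p.origin == "generated":
        return "Generated summary"
    return ""  # human-written and human-rewritten text carries no label
```

Storing provenance as data rather than as prose in the article body is what lets the same label appear consistently in the page, the index, and the metadata served to crawlers.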

Corrections policy

Errors are flagged through the contact form using the Correction category. Every correction is acknowledged and assessed. Minor corrections — a typo, a broken link, a wrong date — are quietly fixed and noted in the page history. Substantive corrections — a factual error, a misattributed quote, a misrepresented position — are appended to the article with a dated correction note at the foot, and the correction is linked from the contact reply.

Major corrections that change the conclusion of an article are republished with a changelog entry and, where the article was widely shared, a short note at the top of the article so a returning reader sees the change immediately. We do not silently rewrite history.

When a correction is requested but rejected, we record the disagreement in the contact-form audit trail and reply to the requester with the reasoning. Disagreement is not treated as an attack; an editor's responsibility is to look at the evidence again, decide, and document the decision.

What we won't publish

We do not publish unverifiable claims about named individuals. If a statement makes a serious accusation about a person and we cannot trace it to a public source, the statement is rejected. We do not publish personal attacks. Critique of an organization, a policy, or a public figure's published positions is welcome; abuse of an individual is not.

We do not publish material that purports to be medical, legal, or financial advice for specific situations — the chatbot is explicit about this and so are we. We do not publish content scraped from gated sources where the gate exists for a legitimate reason. Duplicate statements — the same body submitted multiple times under different display names — are merged or rejected.

We do not publish content generated end-to-end by a language model and presented as if a human wrote it. Where machine assistance is used, the assistance is disclosed and a human editor signs the work. Plagiarism, AI-laundered or otherwise, is treated as a publication-blocking issue and the article is withdrawn for rewrite if it is found after the fact.