The Newspeak Bias Rubric

An editorial framework for measuring structural bias in news articles

Each article is analyzed across nine distinct dimensions of bias. These dimensions are grounded in journalistic principles but adapted to reflect the structural and psychological forces that influence perception. They are scored from 0.0 to 1.0, with 0.0 representing high objectivity and 1.0 representing strong bias.
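
For concreteness, a single article's result can be represented as a simple record: one value per dimension, each constrained to the 0.0-1.0 scale. The Python sketch below is illustrative only; the field names are hypothetical, not Newspeak's published schema.

    from dataclasses import dataclass, fields

    @dataclass
    class BiasScores:
        # One score per rubric dimension: 0.0 = high objectivity, 1.0 = strong bias.
        framing: float
        tone: float
        omission: float
        sourcing: float
        placement: float
        context: float
        fact_vs_opinion: float
        loaded_language: float
        headline_vs_body: float

        def __post_init__(self):
            # Enforce the rubric's 0.0-1.0 scale on every dimension.
            for f in fields(self):
                value = getattr(self, f.name)
                if not 0.0 <= value <= 1.0:
                    raise ValueError(f"{f.name} must be in [0.0, 1.0], got {value}")

A perfectly neutral article would score 0.0 on every field; real articles land somewhere in between on each dimension.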

1. Framing

What lens is used to interpret events?

Framing assesses how the narrative positions the reader — emotionally, ideologically, or morally. This includes word choice, metaphors, and what kind of story is being told. Is the event framed as a tragedy, a triumph, a scandal, a struggle?

  • Objective articles avoid telling readers how to feel or what lens to apply.
  • Biased articles often use framing to guide interpretation toward a specific worldview.

Example:

"Community leaders are outraged" frames a protest as righteous. "Rioters descend on downtown" frames it as lawless chaos.

2. Tone

What emotional valence underlies the story?

Tone examines whether the article conveys approval, disapproval, sympathy, or scorn, either deliberately or through the accumulation of subtle cues. It's about emotional undercurrents, not just overt language.

  • Objective tone is clinical, detached, and proportionate.
  • Biased tone emotionally steers the reader through repeated positive or negative cues about subjects or events.

Example:

"The official defended the policy" vs. "The official lashed out defensively" — same facts, different emotional coloring.

3. Omission

What critical perspectives or facts are left out?

Omission measures whether key elements necessary for understanding are absent. This includes opposing viewpoints, relevant history, or essential data that would materially alter interpretation.

  • Objective reporting includes dissenting views and key context, even if inconvenient.
  • Biased reporting excludes what doesn't fit the preferred narrative.

Example:

A story on police use of force that excludes eyewitness or victim accounts omits vital balance.

4. Sourcing

Whose voices are included — and whose are not?

Sourcing evaluates the diversity, credibility, and attribution of sources. It considers whether the article draws from a range of stakeholders, or relies on a narrow or ideologically aligned group.

  • Low-scoring (objective) articles triangulate a range of voices and attribute them transparently.
  • High-scoring (biased) articles may rely heavily on unnamed sources, advocacy groups, or partisan officials without balance.

Example:

An article quoting only activist groups on a policy matter — with no input from lawmakers or experts — lacks source diversity.

5. Placement

What's emphasized early — and what's buried?

Placement refers to editorial choices around ordering: what appears in the headline, lede, and top paragraphs versus what is de-emphasized or deferred until late in the piece.

  • Objective structure reflects narrative relevance and proportionality.
  • Biased structure may lead with inflammatory claims while burying nuance or clarification.

Example:

If the article opens with "Angry crowds protest immigration raids" but only explains the legal context in the final paragraph, it scores high on this dimension.

6. Context

Is the reader given the tools to understand the full picture?

Context evaluates whether the article includes background, comparative data, or systemic relevance. It checks for foundational material needed to situate the current event within broader trends or causes.

  • Objective reporting equips the reader with enough context to assess relevance and meaning.
  • Biased reporting deprives the reader of that frame, exaggerating or obscuring the story's significance.

Example:

Reporting on a spike in crime without including long-term crime trends or demographic data is contextually misleading.

7. Fact vs Opinion

Are facts clearly separated from editorializing?

This dimension assesses whether the article distinguishes between factual claims and interpretation or opinion. It tracks the presence of unsupported assertions, vague sourcing, or opinion stated as fact.

  • Objective journalism attributes opinion and separates it from reportage.
  • Biased writing blurs the line, subtly or overtly inserting the author's judgment into reported facts.

Example:

"The decision sparked justified outrage" is an opinion disguised as fact.

8. Loaded Language

Are words used to evoke emotion rather than inform?

Loaded language looks at whether emotionally charged or provocative words are used to sway perception. This includes verbs like "slam," "explode," "betray," or adjectives like "radical," "desperate," "shocking."

  • Neutral reporting favors precise, unemotional language.
  • Biased writing leverages language to provoke reactions.

Example:

"The senator wrangled her colleagues into submission" uses dramatic, combative phrasing instead of simply "persuaded" or "secured votes."

9. Headline vs Body

Does the headline truthfully reflect the article's content?

This checks for alignment between the headline and the article body. Misleading or exaggerated headlines are a primary source of reader misinterpretation, especially in social feeds.

  • Objective headlines are proportional and accurate.
  • Biased or clickbait headlines may exaggerate, mislead, or contradict the content they introduce.

Example:

A headline saying "Mayor implicated in scandal" that reveals in paragraph 8 that the mayor had no direct involvement is highly misleading.

Summary

Each of these dimensions reflects a layer of how modern journalism can unintentionally — or intentionally — shape perception. Taken together, they provide a rigorous lens through which to evaluate whether an article informs the reader… or persuades them.

But Aren't LLMs Biased Too?

Yes — large language models (LLMs) are trained on vast internet data and may inherit certain patterns or ideological tendencies. But those tendencies do not determine the output of Newspeak's scoring system.

Why? Because we don't ask the model to improvise or opine. We ask it to apply a fixed editorial rubric, defined by us, to detect specific structural signals in the text (see the sketch after this list):

  • Is framing present?
  • Are facts omitted?
  • Is the tone neutral?
  • Do headlines match body copy?
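
In practice, that amounts to a structured scoring call: send the model the fixed rubric plus the article, and require a score and a one-line justification for every dimension. The Python sketch below assumes a generic call_model(prompt) function that returns the model's raw text completion; the prompt wording and JSON layout are hypothetical, not Newspeak's actual implementation.

    import json

    DIMENSIONS = [
        "framing", "tone", "omission", "sourcing", "placement",
        "context", "fact_vs_opinion", "loaded_language", "headline_vs_body",
    ]

    RUBRIC_PROMPT = (
        "Apply the following fixed rubric to the article. For each dimension, "
        "return a score from 0.0 (objective) to 1.0 (biased) and a one-sentence "
        "justification grounded in the article's text. Respond with one JSON "
        "object keyed by dimension name, each value shaped like "
        '{"score": 0.0, "justification": "..."}. '
        "Dimensions: " + ", ".join(DIMENSIONS)
    )

    def score_article(article_text: str, call_model) -> dict:
        # call_model: any function that takes a prompt string and returns
        # the model's completion as a string (hypothetical interface).
        raw = call_model(f"{RUBRIC_PROMPT}\n\nARTICLE:\n{article_text}")
        result = json.loads(raw)
        # Every dimension must come back scored on the rubric's scale,
        # with a justification we can audit later.
        for dim in DIMENSIONS:
            entry = result[dim]
            if not 0.0 <= entry["score"] <= 1.0 or not entry["justification"]:
                raise ValueError(f"invalid entry for {dim}: {entry}")
        return result

Because the rubric and the output shape are fixed, two different models (or the same model at two temperatures) produce directly comparable score sets.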

These aren't political questions. They're structural ones. Each score must be supported by a clear, natural-language justification. And those justifications are:

  • Auditable
  • Comparable across models and temperatures
  • Repeat-tested for consistency and drift

Newspeak doesn't rely on the model's worldview. It uses the model's linguistic intelligence to detect how the article steers the reader. In this role, the LLM functions more like a bias microscope — not a writer.

We validate this rigorously: testing across models, temperatures, and multiple replications. If the model fails to behave consistently, we don't use it.
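
A consistency check of that kind can be as simple as re-scoring the same article several times and rejecting any model whose scores drift. The sketch below reuses score_article and DIMENSIONS from the earlier sketch; the replication count and tolerance are illustrative placeholders, not Newspeak's published thresholds.

    from statistics import pstdev

    def is_consistent(article_text: str, call_model,
                      runs: int = 5, max_spread: float = 0.1) -> bool:
        # Score the same article `runs` times with the same model settings.
        replications = [score_article(article_text, call_model) for _ in range(runs)]
        for dim in DIMENSIONS:
            scores = [rep[dim]["score"] for rep in replications]
            # Reject the model if any dimension's scores spread too widely
            # across replications (threshold is an illustrative choice).
            if pstdev(scores) > max_spread:
                return False
        return True

The same loop, run across different models and temperatures, yields the cross-model comparison described above.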

Bottom line: Our scoring doesn't ask what the model thinks. It asks what the article does.