⚙ Methodology · How the signal gets made · v1.0 · 2026-04

How we score the signal.

Where the data comes from, how Grok scores it, what the numbers mean, and why you can (and can't) trust this page. No black boxes.

LLMgram is a curation layer. It doesn't produce news — it ingests public AI content from ~100 sources, runs it through a language model to extract structure, and ranks by a per-item signal score. This page explains exactly how that happens, because if you can't see how a score is made, you shouldn't trust it.

Data sources

Every surface on LLMgram pulls from a specific, named upstream. Nothing is manually curated in the sense of "we picked what we like" — everything comes from a feed, an API, or a scraper that runs on a fixed schedule.

Surface | Upstream | Method | Cadence
AI Signal | 106 RSS feeds (labs, blogs, research) | RSS fetch + Grok analysis | Every 2 h
Git Signal | GitHub API (curated repo list) | Repo metadata + README + Grok | Every 2 h
AI Papers | OpenAlex + SSRN | API query on tracked authors + Grok | Every 2 h
LLM Architectures | Sebastian Raschka gallery + handwritten notes | Manual + checker cron | Weekly
Hermes Live | @Teknium Twitter + GitHub PR watch | Scrape + digest | Hourly
Claude Code Live | Top CC voices Twitter | Scrape + digest | Hourly
Company Radars | Per-lab Twitter + public profiles | Scrape per lab | Varies
Academy | Hao Hoang — Top 50 LLM Interview Questions | Static, with permission | On update
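The "RSS fetch" step in the table above can be sketched in a few lines. This is an illustrative stand-in, not LLMgram's actual pipeline code: the sample feed and field names are assumptions, and a real fetcher would pull the XML over HTTP before parsing it.

```python
# Minimal sketch of parsing items out of a generic RSS 2.0 feed,
# as the AI Signal surface's fetch step would. Stdlib only.
import xml.etree.ElementTree as ET

SAMPLE_RSS = """<rss version="2.0"><channel>
  <title>Example Lab Blog</title>
  <item>
    <title>New model release</title>
    <link>https://example.com/post</link>
    <pubDate>Mon, 01 Apr 2026 09:00:00 GMT</pubDate>
  </item>
</channel></rss>"""

def parse_items(rss_xml: str) -> list[dict]:
    """Extract title/link/pubDate from each <item> in an RSS 2.0 feed."""
    root = ET.fromstring(rss_xml)
    items = []
    for item in root.iter("item"):
        items.append({
            "title": item.findtext("title", default=""),
            "link": item.findtext("link", default=""),
            "published": item.findtext("pubDate", default=""),
        })
    return items

print(parse_items(SAMPLE_RSS)[0]["title"])  # → New model release
```

Each parsed item would then be handed to the Grok analysis step described below.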

What "Grok-scored" means

Every ingested article, repo, and paper passes through Grok (xAI's language model, via API). Grok is given a structured prompt that scores each item along four dimensions, detailed below.

How the signal score is prompted

The Grok prompt asks for a score based on a weighted mix of novelty (is this new information, or the Nth write-up of the same thing?), importance (does this change how practitioners work?), rigor (is there evidence, or is it hype?), and freshness (when was this published?).
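The weighted mix above can be sketched as a plain weighted sum. The specific weights here are assumptions for illustration — the page says the mix is weighted but not what the weights are:

```python
# Assumed weights (sum to 1.0); novelty and importance dominate here,
# but these values are illustrative, not LLMgram's actual weighting.
WEIGHTS = {"novelty": 0.35, "importance": 0.35, "rigor": 0.20, "freshness": 0.10}

def signal_score(novelty: float, importance: float,
                 rigor: float, freshness: float) -> float:
    """Weighted mix of the four 0..1 dimensions, clamped to [0, 1]."""
    parts = {"novelty": novelty, "importance": importance,
             "rigor": rigor, "freshness": freshness}
    score = sum(WEIGHTS[k] * v for k, v in parts.items())
    return max(0.0, min(1.0, score))

print(round(signal_score(0.9, 0.9, 0.8, 1.0), 2))  # → 0.89
```

In practice the model returns the component judgments and the combination happens downstream, so the weighting stays inspectable.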

What a score means in practice:

0.85+: Rare. Genuinely important — model launches, breakthrough papers, major framework releases. These are the items you'd regret missing.

0.50 – 0.84: Useful context. Good reads, but skippable if you're pressed for time.

Below 0.50: Noise. Kept in the corpus for search, hidden from default views.
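The bands above amount to a simple threshold map. The thresholds come straight from this page; the tier names are shorthand:

```python
# Map a raw signal score to the three bands described above.
def tier(score: float) -> str:
    if score >= 0.85:
        return "rare"     # model launches, breakthrough papers
    if score >= 0.50:
        return "context"  # useful but skippable
    return "noise"        # kept for search, hidden from default views

print(tier(0.91), tier(0.60), tier(0.20))  # → rare context noise
```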

Limitations you should know

Grok is a language model, which means the scoring is opinionated and can vary between runs on the same item. We mitigate this but don't eliminate it.
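One common mitigation for run-to-run variance — an assumption here, since the page doesn't say which mitigations LLMgram actually uses — is to score an item several times and keep the median, which a single outlier run can't drag around the way a mean can:

```python
# Hypothetical stabilizer: median of repeated scores for one item.
from statistics import median

def stabilized_score(runs: list[float]) -> float:
    """Median of repeated model scores for the same item."""
    return median(runs)

print(stabilized_score([0.82, 0.88, 0.61]))  # → 0.82
```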

Refresh cadence

Different surfaces refresh at different rates, based on how fast the upstream signal changes.
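The refresh decision implied by the cadence column in the sources table is a simple elapsed-time check. The cadence values mirror that table; the function itself is illustrative, not the actual scheduler:

```python
# Decide whether a surface's cadence window has elapsed.
from datetime import datetime, timedelta, timezone

CADENCE_HOURS = {
    "ai_signal": 2, "git_signal": 2, "ai_papers": 2,
    "hermes_live": 1, "claude_code_live": 1,
    "llm_architectures": 168,  # weekly
}

def needs_refresh(surface: str, last_run: datetime, now: datetime) -> bool:
    """True when the surface's cadence window has elapsed since last_run."""
    return now - last_run >= timedelta(hours=CADENCE_HOURS[surface])

now = datetime(2026, 4, 1, 12, 0, tzinfo=timezone.utc)
print(needs_refresh("ai_signal", now - timedelta(hours=3), now))  # → True
```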

What we don't do

Provenance & source code

LLMgram is built and operated by supersocks.io as a public lab notebook. Pipelines, HTML, and data are in private repos for now. Every content item links back to its original source — nothing is published without attribution.

If you find a miscategorized item, a stale score, or a bug in the scoring, ping @iamsupersocks on X/Twitter or @llmgram.

Changelog