The Magic of Average: Why LLMs Make Simple the New Powerful
This article is my take on the Average Is All You Need essay from Rawquery, connected to my thoughts on the Enterprise Context Layer.
The big idea
In the age of LLMs, “average” has been redefined. It no longer means “mediocre” or “not good enough.” It means “sufficient for most real-world needs, available instantly, without needing expensive expertise.”
That shift is actually magical.
The Average Is All You Need essay makes a compelling case. Before LLMs, if you wanted to produce anything useful, you either needed to be good at it yourself or pay someone who was. Average writing took time. Average code took skill. Average data analysis meant hiring someone who understood SQL, joins, attribution models, and chart generation.
Now average is cheap, incredibly fast, and accessible to anyone with a prompt.
What changed
The essay breaks this down by field:
Writing: Anyone can now publish average text with average ideas for an average audience. Before, even sub-par output took real effort. Now you settle for average, or do better if you want to put in the effort.
Software: The same thing is happening. LLMs write average code, average tests, average documentation. The baseline has shifted.
Data analysis: This is where the article focuses, and it is the most relevant to my own thinking.
The problem with data has never been that people do not understand their data. Most people intuitively know what is in their organization’s data. They can “feel” what is hidden in it. The problem is the technical barrier: writing SQL, joining tables, understanding syncing strategies, building charts.
You know who does all of those things incredibly well at an average level? Any LLM.
The essay describes a concrete scenario. You have Stripe transaction data and HubSpot email campaign data. You want to know if your “spring-sale-2026” campaign actually affected revenue. In the old world, this required a data engineer, an attribution model (which is “some sort of wankery,” as the essay puts it), and probably a meeting to argue about whether it should be last-click or first-click attribution.
In the new world, you just ask in plain English: “Did the email campaign increase average basket size compared to people who did not receive the email?”
The LLM writes the SQL. It joins Stripe customers to HubSpot email events. It generates the analysis. It shows that the email cohort had a 46% higher average basket. You did not write a single line of code. You did not set up an attribution model. You asked a question and got an answer.
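To make the scenario concrete, here is a minimal sketch of the kind of query the LLM writes, using an in-memory SQLite database in place of real Stripe and HubSpot exports. The table names, column names, emails, and amounts are all invented for illustration, chosen so the email cohort comes out 46% higher, matching the essay's example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Hypothetical Stripe charges: (customer email, basket amount in dollars)
cur.execute("CREATE TABLE stripe_charges (email TEXT, amount REAL)")
cur.executemany("INSERT INTO stripe_charges VALUES (?, ?)", [
    ("a@x.com", 60.0), ("a@x.com", 86.0),   # received the email
    ("b@x.com", 73.0),                       # received the email
    ("c@x.com", 50.0), ("d@x.com", 50.0),   # did not receive it
])

# Hypothetical HubSpot events: who was sent "spring-sale-2026"
cur.execute("CREATE TABLE hubspot_email_events (email TEXT, campaign TEXT)")
cur.executemany("INSERT INTO hubspot_email_events VALUES (?, ?)", [
    ("a@x.com", "spring-sale-2026"),
    ("b@x.com", "spring-sale-2026"),
])

# The kind of average SQL an LLM might write: average basket per cohort
cur.execute("""
    SELECT
        CASE WHEN h.email IS NULL THEN 'no_email' ELSE 'email' END AS cohort,
        AVG(s.amount) AS avg_basket
    FROM stripe_charges s
    LEFT JOIN hubspot_email_events h
        ON s.email = h.email AND h.campaign = 'spring-sale-2026'
    GROUP BY cohort
""")
baskets = dict(cur.fetchall())
lift = (baskets["email"] / baskets["no_email"] - 1) * 100
print(f"email cohort lift: {lift:.0f}%")  # → email cohort lift: 46%
```

Nothing here is sophisticated: one left join, one conditional, one aggregate. That is exactly the point.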
Then you ask a follow-up: “Break it down by week to see if the effect wore off.” The LLM refines the query. It shows the effect decaying over three weeks. Week one: plus 50% average basket. Week four: plus 15%. You now know something useful about your campaign. Still no data team involved.
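The follow-up refinement can be sketched the same way. Again the rows are fabricated to reproduce the decay described above (plus 50% in week one, plus 15% in week four); a real query would bucket charge timestamps into weeks instead of storing a precomputed week column:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Hypothetical per-charge rows: (week after the send, cohort, amount)
cur.execute("CREATE TABLE charges (week INTEGER, cohort TEXT, amount REAL)")
cur.executemany("INSERT INTO charges VALUES (?, ?, ?)", [
    (1, "email", 75.0), (1, "no_email", 50.0),
    (2, "email", 65.0), (2, "no_email", 50.0),
    (3, "email", 60.0), (3, "no_email", 50.0),
    (4, "email", 57.5), (4, "no_email", 50.0),
])

# The refined query: lift in average basket, week by week
cur.execute("""
    SELECT week,
           AVG(CASE WHEN cohort = 'email' THEN amount END) /
           AVG(CASE WHEN cohort = 'no_email' THEN amount END) - 1 AS lift
    FROM charges
    GROUP BY week
    ORDER BY week
""")
rows = cur.fetchall()
for week, lift in rows:
    print(f"week {week}: +{lift * 100:.0f}% average basket")
```

Same mechanics as before, one `GROUP BY` further. The insight (the effect wears off) comes from the question, not the query.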
Then you say: “Save that as a chart and give me a link I can share with my manager.” The LLM creates the visualization and gives you a public URL. Your manager clicks it, sees the chart, sees the data. No login required. No Looker access. No dashboard permissions.
All of this takes less than five minutes. The SQL is average. The joins are average. The chart is average. And the experience is amazing.
The core philosophy
The essay sums this up in two phrases:
“It deals with the average. You deal with the thinking.”
This is the key insight. The LLM handles the mechanics: writing the query, running the analysis, creating the chart. You handle the thinking: asking the right question, interpreting the result, deciding what to do next.
And:
“Average is clearly magic; prove me wrong.”
The magic is not that average has become good. The magic is that average has become instant, cheap, and accessible. You no longer need to hire an expert to get a functional answer. You no longer need to learn a technical skill to understand your own data. You no longer need to wait for a meeting with the data team to get a simple question answered.
Connecting to the Enterprise Context Layer
This brings me to something I wrote about earlier: the Enterprise Context Layer (ECL) pattern from Andy Chen.
The ECL is a system where twenty parallel LLM agents work together to encode how a company actually works. They read from primary sources (Slack, Jira, code, Gong transcripts, policy docs) and write synthesized Markdown files with inline citations. Every claim is backed by a source. Conflicts between sources are documented explicitly rather than resolved silently. The system maintains itself: agents claim tasks through Git-based locking, execute, and push their updates.
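The task-claiming step can be sketched as follows. Chen's system uses Git itself as the lock (an agent commits a claim file and pushes; a rejected push means another agent got there first). This simulation approximates the same first-writer-wins semantics with an atomic exclusive file create; the agent and task names are invented:

```python
import os
import tempfile

# Scratch directory standing in for the shared repo's locks folder
workdir = tempfile.mkdtemp()

def claim_task(agent: str, task: str) -> bool:
    """Return True if this agent claimed the task, False if it was taken."""
    lock_path = os.path.join(workdir, f"{task}.lock")
    try:
        # O_CREAT | O_EXCL fails atomically if the lock file already exists,
        # just as a Git push is rejected if someone else pushed the lock first
        fd = os.open(lock_path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
    except FileExistsError:
        return False
    with os.fdopen(fd, "w") as f:
        f.write(agent)
    return True

print(claim_task("agent-07", "sync-jira-notes"))  # True: first claim wins
print(claim_task("agent-12", "sync-jira-notes"))  # False: already claimed
```

Twenty agents coordinating through nothing fancier than "whoever writes the file first owns the task" is, again, average machinery.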
This sounds like a complex, elite system. But looking at it through the lens of the “average is all you need” philosophy, it is actually quite simple.
The ECL does not require a knowledge engineer to build an ontology. It does not require a semantic layer expert to create a taxonomy. It does not require a data scientist to build a retrieval pipeline. It requires a Git repo, some Markdown folders, and an LLM that can read sources and write with citations.
That is average. And it works.
Think about what the ECL actually produces. It produces answers to questions like “how long do we keep data after a customer churns?” The correct answer, in Chen’s system, is often “do not answer this yourself; route it to the security team.” That is not a sophisticated AI answer. It is simple routing based on synthesized context.
The system encodes institutional knowledge that would normally live in Slack threads, Gong calls, and engineers’ heads. It surfaces the judgment calls that retrieval systems miss. But it does not do this through some advanced AI technique. It does it through a simple pattern: read sources, write synthesis, cite everything, document conflicts.
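Here is a hedged sketch of what that routing might look like. The topics, team names, and citation strings are invented; in the real ECL they would be read out of the synthesized Markdown files rather than hard-coded:

```python
# Hypothetical routing table distilled from synthesized context files.
# Each entry: topic -> (owning team, inline citation for the claim)
ROUTES = {
    "data retention": ("security team", "policy-docs/retention.md"),
    "pricing exceptions": ("deal desk", "slack/#sales-ops thread"),
}

def answer(question: str) -> str:
    """Route a question to its owning team, citing the synthesized source."""
    q = question.lower()
    for topic, (team, source) in ROUTES.items():
        if topic in q:
            # The "correct" answer is often a routed one, not a direct one
            return f"Route to the {team} (see {source})."
    return "No synthesized context found; escalate for human review."

print(answer("What is our data retention policy after a customer churns?"))
```

The lookup is trivial on purpose: the value is in the synthesized context behind each entry, not in the routing logic itself.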
Average complexity. Extraordinary output.
The pattern across both
What connects these two examples is a shared philosophy: stop building elite systems for routine needs.
The author of Rawquery did not build a sophisticated analytics platform with a custom query language and a team of data engineers. They built a system where LLM agents can read your data sources and answer questions in plain English. The mechanics are average. The access is magical.
The Enterprise Context Layer did not build a knowledge graph with ontological engineering and semantic layers. They built a Git repo where agents write Markdown with citations. The infrastructure is minimal. The synthesis is profound.
Both systems work because the hard part is no longer the mechanics. The mechanics are what LLMs do at an average level, which is now more than sufficient. The hard part is the thinking: knowing what questions to ask, knowing which answers require routing versus answering, knowing what to do with the insight once you have it.
What this means for organizations
If you are building internal tools, ask yourself whether you are solving an elite problem or an average one.
If the problem requires elite expertise (building a model that predicts customer churn with 95% accuracy, designing a system that understands all of your company’s knowledge), then you probably still need specialists.
But if the problem is “average” (answering simple questions about your data, keeping documentation up to date, synthesizing how the company works, routing questions to the right team), then LLMs have already solved the mechanics. You just need to build the interface that lets people ask questions and the synthesis layer that makes answers trustworthy.
Rawquery's approach to data and the ECL's approach to knowledge are both examples of this. They do not try to replace human thinking. They offload the mechanical part to LLMs and keep the thinking part for humans.
Why this matters
The essay ends with a challenge: “Average is clearly magic; prove me wrong.”
I think the magic is real, and here is why.
For decades, we have been told that we need experts for everything. Need to analyze data? Hire a data analyst. Need to understand your company? Hire a knowledge manager. Need to write well? Hire a communications team.
But most of what people need is not elite expertise. It is an answer to a simple question. It is context about how something works. It is a quick analysis that informs a decision.
LLMs now handle the average version of all of these tasks well enough that you can act on them. Not perfectly. Not at the level of a senior data scientist or a chief knowledge officer. But well enough to make decisions, ship products, and serve customers.
The bottleneck has shifted from execution to thinking. And that is where humans add value.
The ECL pattern and the Rawquery pattern are both examples of this shift. They are not AI magic. They are "average" mechanics that enable extraordinary thinking.
Related articles:
- The Enterprise Context Layer: Synthesis Over Retrieval: my deep dive on Andy Chen’s pattern
- The Rise of Computer Use and Agentic Coworkers: a16z thesis on computer-using agents
Crepi il lupo! 🐺