Understanding Order: The Category Theory Illustrated Guide
This article is adapted from “Orders” by Jencel Panic, part of the Category Theory Illustrated series. Licensed under CC BY-NC 4.0. Changes have been made to adapt the content.
Most people think they know what an order is. You line things up, smallest to biggest, first to last. Simple.
But mathematicians went and made it complicated, as they do. And honestly, it got a lot more interesting than just sorting your sock drawer.
The basic idea
An order is built from two things: a set of elements and a binary relation between them. The relation tells you which elements come before which. We usually write it as an arrow or the symbol ≤.
So far so good. But here is where it gets weird. Not all “order-like” relationships actually qualify as orders in the mathematical sense. To be an order, your relation has to play by certain rules.
The article breaks this down by showing that any order can be represented as a set of pairs, like in programming when you write a sort function that returns true or false for each pair. That function is basically defining which pairs belong in the order and which do not.
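This pairs-as-data view is easy to make concrete. Here is a minimal Python sketch (the element set and names are mine, not the article's): the order is literally a set of pairs, and the comparison function is just a membership test on that set.

```python
# A finite order is just a set of (a, b) pairs meaning "a comes before b".
elements = {1, 2, 3}
pairs = {(a, b) for a in elements for b in elements if a <= b}

# The comparison function a sort routine would use is a membership test.
def before(a, b):
    """True if a comes before (or equals) b in this order."""
    return (a, b) in pairs

print(before(1, 3))  # True
print(before(3, 1))  # False
```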
Linear order: the obvious one
When most people say “order,” they mean linear order. Every element is comparable to every other element. Given any two, one comes before the other. No ambiguity.
Think numbers on a number line. Or colors sorted by wavelength. Or the alphabet. Pick any two elements, you know exactly where they stand.
A linear order has to obey four laws:
Reflexivity means every element is related to itself. That’s what the ≤ symbol actually means here: “less than or equal to.” Not strictly less. You can be equal to yourself, which sounds trivial but matters.
The article notes that this law only exists to cover the base case. You could define it the opposite way and get a different type of order, sometimes called a “strict” order.
Transitivity is the real backbone. If a is before b, and b is before c, then a is before c. This is how rankings actually work: if I am better at soccer than my grandmother, and she is better than her friend, then I am automatically better than her friend. Otherwise the whole concept of “better at” falls apart.
Antisymmetry prevents ties. If a ≤ b and b ≤ a, then a must equal b. You cannot have two distinct elements where each is “less than or equal to” the other. In a fair ranking, no ties are permitted: either you are better or I am better.
Totality requires that any two elements are comparable. For any a and b, either a ≤ b or b ≤ a. No “I don’t know” answers.
The article makes an interesting point: totality actually makes reflexivity redundant. If everything is comparable to everything, then any element is comparable to itself by default. But they keep reflexivity in the definition for reasons that become apparent when you remove totality.
The first three laws seem reasonable. The fourth is where things break down in real life.
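The four laws translate directly into checks on a relation given as a set of pairs. A minimal sketch (the function name and test data are mine, not the article's):

```python
from itertools import product

def is_linear_order(elements, rel):
    """Check the four laws on a relation given as a set of (a, b) pairs."""
    reflexive     = all((a, a) in rel for a in elements)
    transitive    = all((a, c) in rel
                        for (a, b) in rel for (b2, c) in rel if b == b2)
    antisymmetric = all(a == b for (a, b) in rel if (b, a) in rel)
    total         = all((a, b) in rel or (b, a) in rel
                        for a, b in product(elements, repeat=2))
    return reflexive and transitive and antisymmetric and total

nums = {1, 2, 3}
leq = {(a, b) for a in nums for b in nums if a <= b}
print(is_linear_order(nums, leq))             # True
print(is_linear_order(nums, leq - {(2, 3)}))  # False: totality fails
```

Dropping the pair (2, 3) leaves the first three laws intact but makes 2 and 3 incomparable, which is exactly the move the next sections make deliberately.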
The order of natural numbers
The article points out that natural numbers are the quintessential linear order. In fact, every finite linear order is isomorphic to a subset of the natural numbers: map the first element to 1, the second to 2, and so on.
This means all finite linear orders of the same size are essentially the same. Strip away what the elements actually are and they look identical. The article calls this “the most boring order ever” from a category-theoretic viewpoint: every linear order diagram looks the same.
Partial order: when things get interesting
Totality is the law that asks too much. In the real world, not everything can be compared.
Imagine ranking soccer players. You can rank everyone in your friend group because they all played each other. But throw in some random person who never played any of you, and the ranking falls apart. You cannot compare them to the group.
Remove the law of totality from linear order and you get a partial order (or partially-ordered set, called a “poset”).
Every linear order is also a partial order. But partial orders show up everywhere. The folder structure on your computer. Task dependencies in a project. The way scientific papers cite each other.
The interesting thing about partial orders is that they cannot always give a definite answer to “which is bigger?” But they can still tell you which element is the greatest, if one exists.
The greatest element is bigger than everything. The least element is smaller than everything. In a partial order, these might not exist, and that is fine. Some partial orders have neither. Some have one but not the other.
Chains
The article introduces an important concept: chains. A chain is a linearly-ordered subset of a partial order.
In the soccer example, you might have separate groups of friends who have all played each other, forming chains. But between chains, there is no comparison.
The chains do not have to be completely disconnected. They can connect, as long as the connection is not one-to-one: if the last element of one chain connects directly to the first element of another, the two simply merge into a single chain.
Joins and meets
Here is where partial orders get really useful.
The join of two elements is the smallest element that is bigger than both of them. Think of it as the least common ancestor in a hierarchy.
The formal definition: the join of A and B is an object G such that A ≤ G and B ≤ G, and for any other object P where A ≤ P and B ≤ P, we also have G ≤ P. So G is bigger than both, but smaller than anything else that is bigger than both.
The meet is the opposite. It is the biggest element that is smaller than both.
In a partial order of numbers sorted by division, the join is the least common multiple. The meet is the greatest common divisor. In a partial order of sets by inclusion, the join is the union and the meet is the intersection.
The article points out something nice: in the color-mixing partial order, the join of any two colors is literally the color you get when you mix them.
If two elements have several upper bounds that are incomparable with one another, none of them is a join: a join must be unique. But if one of them is established as smaller than all the rest, it immediately qualifies.
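The definition of a join can be turned directly into a search over a finite partial order. A hedged sketch, using the division order on the divisors of 60 (my choice of example): the join should come out as the least common multiple, and the meet, being the dual notion, as the greatest common divisor.

```python
def join(a, b, elements, leq):
    """The unique least upper bound of a and b, or None if there isn't one."""
    uppers = [x for x in elements if leq(a, x) and leq(b, x)]
    least = [g for g in uppers if all(leq(g, p) for p in uppers)]
    return least[0] if len(least) == 1 else None

def meet(a, b, elements, leq):
    """The meet is just the join taken in the reversed order."""
    return join(a, b, elements, lambda x, y: leq(y, x))

# The division partial order on the divisors of 60.
divides = lambda a, b: b % a == 0
divisors = [n for n in range(1, 61) if 60 % n == 0]

print(join(4, 6, divisors, divides))  # 12 (the least common multiple)
print(meet(4, 6, divisors, divides))  # 2  (the greatest common divisor)
```

Defining `meet` as `join` in the flipped order is the duality the article keeps returning to.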
Hasse diagrams
The article uses these diagrams called Hasse diagrams to visualize orders. The rule is simple: bigger elements are always positioned above smaller ones.
This means that whenever an arrow connects two points, the point it points to sits above the point it comes from.
This arrangement lets you compare any two connected points just by seeing which one is above the other. You can find the join of two elements by following the arrows upward from both and taking the lowest element they reach in common.
The partial order of numbers by division
Here is a cool example from the article. Numbers form a partial order if we order them by which divides which. If a divides b, then a comes before b. So 2 and 5 come before 10, but 3 does not.
The join of two numbers in this partial order is their least common multiple. The meet is their greatest common divisor. The same operations that show up in elementary school math have this deep categorical meaning.
The inclusion partial order
Another example: given a collection of sets that share some elements, you can order them by inclusion. Set A comes before set B if B includes A (that is, if A is a subset of B).
In this case, the join of two sets is their union, and the meet is their intersection.
The article points out something clever: the color-mixing partial order is isomorphic to the inclusion order of sets of primary colors. Red, blue, and yellow are the “prime elements.” Every color is a combination of these.
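Python's built-in sets make the inclusion order tangible: the subset operator is the order, union is the join, and intersection is the meet. A small sketch (the color names here are just labels of my choosing):

```python
# Inclusion order on sets: A comes before B when A is a subset of B.
red    = frozenset({"red"})
yellow = frozenset({"yellow"})

orange = red | yellow   # join: the union, i.e. "mixing" the two colors
none   = red & yellow   # meet: the intersection, here the empty set

print(red <= orange, yellow <= orange)  # True True: both sit below the join
print(sorted(orange))                   # ['red', 'yellow']
```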
Birkhoff’s representation theorem
This is the payoff.
The article explains that any finite distributive lattice can be represented as an inclusion order of its join-irreducible elements. These are the elements that cannot be expressed as the join of other, smaller elements: essentially the “prime” building blocks.
The partial orders that are not distributive lattices are still isomorphic to inclusion orders, but they do not contain all possible combinations of elements.
The theorem connects abstract order theory to concrete set structures. It tells you that many partial orders you encounter are just different ways of representing which combinations of basic elements are possible.
Lattices
A lattice is a partial order where every pair of elements has both a join and a meet. Most partial orders created from real rules are lattices.
The article notes that to make the color-mixing partial order a full lattice, you need to add a greatest element (black, containing all colors) and a least element (white, containing no colors). Otherwise the top elements would not have a join and the bottom elements would not have a meet.
A bounded lattice is a lattice that has both a greatest and a least element. All finite lattices are bounded.
Preorder: the loosest version
Now let us break another law. Drop antisymmetry, and you get a preorder.
A preorder lets two elements point to each other. If a ≤ b and b ≤ a, they do not have to be equal. They could just both be “connected” somehow.
In a preorder, you can have cycles. Think about “has beaten at soccer” as a relation. You can lose to someone who beats someone who loses to you. The preorder captures indirect wins.
Preorders have only two laws: reflexivity and transitivity. That is it.
The article shows a great diagram here: you start with direct wins, then transitivity adds indirect wins automatically. A weaker player who beats a stronger one creates a “cycle” where everyone is connected to everyone else.
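Transitivity can be applied mechanically: keep adding the indirect wins until nothing new appears. A transitive-closure sketch (the player names are made up):

```python
def transitive_closure(rel):
    """Add (a, c) whenever (a, b) and (b, c) are present, until stable."""
    closure = set(rel)
    while True:
        derived = {(a, c) for (a, b) in closure
                          for (b2, c) in closure if b == b2}
        if derived <= closure:
            return closure
        closure |= derived

# Direct wins form a cycle: a weaker player has beaten a stronger one.
beats = {("alice", "bob"), ("bob", "carol"), ("carol", "alice")}
wins = transitive_closure(beats)

print(("alice", "carol") in wins)  # True: an indirect win
print(("alice", "alice") in wins)  # True: the cycle connects everyone
print(len(wins))                   # 9: every pair, in both directions
```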
Preorders and equivalence relations
This is where it gets really interesting.
The article draws a table showing the relationship between equivalence relations, preorders, and partial orders. They all have reflexivity and transitivity. Equivalence relations add symmetry. Partial orders add antisymmetry. Preorders add neither.
So preorders sit in the middle.
Mutual connection in a preorder (a ≤ b and b ≤ a at the same time) is an equivalence relation, so you can group all mutually connected elements into equivalence classes.
And here is the cool part: if you transfer the connections between elements to connections between their equivalence classes, those connections obey antisymmetry. So the equivalence classes themselves form a partial order.
In short, for every preorder, you can define the partial order of its equivalence classes.
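This construction is easy to sketch: group the mutually connected elements, then observe that the classes line up antisymmetrically. The names and data below are illustrative, not from the article:

```python
def equivalence_classes(elements, pre):
    """Group elements of a preorder that are connected both ways."""
    classes = []
    for x in elements:
        cls = frozenset(y for y in elements
                        if (x, y) in pre and (y, x) in pre)
        if cls not in classes:
            classes.append(cls)
    return classes

# A preorder (already reflexive and transitive): alice, bob and carol
# have all beaten each other in a cycle, and everyone has beaten dave.
people = ["alice", "bob", "carol", "dave"]
cycle = ["alice", "bob", "carol"]
pre = {(a, b) for a in cycle for b in cycle} | {(a, "dave") for a in people}

print([sorted(c) for c in equivalence_classes(people, pre)])
# [['alice', 'bob', 'carol'], ['dave']]
```

Between the two classes the connection runs only one way, from the cycle to dave, so the classes themselves form a partial order.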
Orders as categories
This is where it gets weird and wonderful.
Remember how categories work: objects and morphisms between them. Each object has an identity morphism. You can compose morphisms, and composition is associative.
Preorders fit this definition perfectly.
- Reflexivity gives you the identity morphism. Every element has a relationship to itself.
- Transitivity gives you composition. If a ≤ b and b ≤ c, then you get a ≤ c automatically.
A preorder is a category where there is at most one morphism between any two objects. Either a is before b, or there is no arrow.
Linear orders and partial orders are both preorders, so they are all categories too.
This means all the categorical machinery we usually apply to sets and functions applies to orders. Products and coproducts correspond to meets and joins. The diagrams commute automatically because there is only one way to get from here to there.
The article points out something important: since there is at most one morphism between any two objects, all diagrams in preorders commute. This is a special property.
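A hedged sketch of how the category laws fall out of the preorder laws (the helper names are mine): a morphism a → b is nothing but the pair (a, b), reflexivity supplies the identity, and transitivity guarantees that composites exist.

```python
# The division preorder on the divisors of 8.
divisors = [1, 2, 4, 8]
rel = {(a, b) for a in divisors for b in divisors if b % a == 0}

def identity(a):
    """The identity morphism on a, guaranteed by reflexivity."""
    assert (a, a) in rel
    return (a, a)

def compose(f, g):
    """Compose f: a -> b with g: b -> c; transitivity guarantees a -> c."""
    (a, b), (b2, c) = f, g
    assert b == b2 and (a, c) in rel
    return (a, c)

print(compose((2, 4), (4, 8)))       # (2, 8)
print(compose(identity(2), (2, 8)))  # (2, 8): the identity law holds
```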
Products and coproducts
The article connects this to categorical products and coproducts.
The categorical coproduct of two objects corresponds exactly to the join in orders. The definitions and diagrams are basically the same, just with “bigger than” replaced by “has a unique morphism to.”
And by duality, products correspond to meets.
So understanding joins and meets actually helps you understand the more general categorical concepts of products and coproducts.
Why this matters
Category theory loves abstraction. It looks for patterns that repeat across different domains.
Orders are one of those patterns. They show up in databases, in scheduling, in type systems, in logic, in physics. Understanding the abstract structure helps you see connections you would otherwise miss.
The article sums it up nicely: thin categories (categories with at most one morphism between objects) are a great way to explore categorical concepts in a simpler context before moving to more general categories.
The next time you encounter a ranking, a dependency graph, or a classification system, ask yourself: is this a linear order, a partial order, or a preorder? The answer tells you a lot about what you can and cannot do with it.