I’ve been running fotbollsfeber.se for years. Swedish football stats — Allsvenskan, Superettan, Damallsvenskan. Match results, tables, top scorers, that kind of thing. It is one of those side projects that just keeps growing.

For a long time it was honestly pretty boring from an engineering standpoint. A web app, a database, some scrapers pulling data from a few sources. Useful enough. Nothing interesting.

The last few months I’ve actually been pushing it somewhere more interesting, so I figured I’d write it down.


The boring but necessary part: tearing out the routing

The codebase had accumulated the classic side-project sin of “I’ll just add this one exception here.”

Every league had its own route structure. Allsvenskan had its own page templates. European leagues had theirs. Some things were shared, some weren’t, and nobody (me) remembered which was which.

Adding a new league meant copying files, hunting down hardcoded assumptions, and praying nothing broke. It was the kind of codebase where you could add Premier League support in an afternoon if you were lucky, or spend a week chasing weird edge cases if you weren’t.

So I bit the bullet and unified it. One routing pattern, one data model, one set of components. Every league — Swedish, European, whatever comes next — goes through the same path.

app/[league]/[year]/[team]/page.tsx

The kind of thing that is genuinely satisfying to ship, and genuinely hard to justify to anyone who hasn’t lived with the mess. But it unblocked everything else, which is what boring architecture work is supposed to do.

The most concrete example of what that unlocks: season simulation. For any league, you can now run a Monte Carlo simulation of how the rest of the season plays out — 10,000 iterations, based on current standings, team form, and remaining fixtures. The output is a probability distribution: what are the realistic chances each team finishes in each position? Who’s actually in a title race versus just mathematically still in one?
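The core of that simulation loop is simple. Here is a minimal sketch (the function name, data shapes, and tie-breaking are my assumptions for illustration, not the site's actual code — real code would also track goal difference for ties):

```python
import random
from collections import defaultdict

def simulate_season(standings, fixtures, outcome_probs, iterations=10_000):
    """Monte Carlo sketch: play out the remaining fixtures many times
    and count how often each team lands in each final position.

    standings:     {team: current_points}
    fixtures:      list of (home, away) remaining matches
    outcome_probs: {(home, away): (p_home_win, p_draw, p_away_win)}
    """
    finish_counts = defaultdict(lambda: defaultdict(int))
    for _ in range(iterations):
        points = dict(standings)
        for home, away in fixtures:
            p1, px, _ = outcome_probs[(home, away)]
            r = random.random()
            if r < p1:
                points[home] += 3          # home win
            elif r < p1 + px:
                points[home] += 1          # draw
                points[away] += 1
            else:
                points[away] += 3          # away win
        # Simplification: ties on points are broken arbitrarily here;
        # a real table would use goal difference.
        table = sorted(points, key=points.get, reverse=True)
        for pos, team in enumerate(table, start=1):
            finish_counts[team][pos] += 1
    return {team: {pos: n / iterations for pos, n in counts.items()}
            for team, counts in finish_counts.items()}
```

The output maps each team to a distribution over final positions, which is exactly the "realistic chances each team finishes in each position" view.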

Before the unification, this would have had to be built and maintained separately per league. Now it’s available everywhere — Allsvenskan, Bundesliga, Premier League, whatever’s in the system. One implementation, all leagues.


The actually interesting part: Stryktipset

Stryktipset is a Swedish football coupon that has been running since 1934. Every week, 13 matches. You pick 1 (home win), X (draw), or 2 (away win) for each. If you get all 13 right, you win the jackpot.

It sounds trivial. It is not.

Draws happen a lot more than people expect. Favorites lose. And the bookmakers’ odds are usually already baked into what the crowd picks, which means blindly following the consensus is a loser’s game in the long run.

I built a probability engine for it.

The core idea: for each fixture, generate the best possible probability estimate for each outcome (1, X, 2) given everything I know about the two teams. Then compare those probabilities against what the crowd thinks and what the bookmakers are implying, and find where the discrepancies are.

That discrepancy is expected value. If I think home win has a 55% chance and the market is pricing it at 40%, that’s where the edge is.
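In code, that edge calculation is tiny (a sketch; the 55%/40% numbers are the example from the text):

```python
def expected_value(p_model: float, p_market: float) -> float:
    """Expected profit per unit staked: our probability times the
    market's fair decimal odds (1 / implied probability), minus the stake."""
    fair_odds = 1.0 / p_market
    return p_model * fair_odds - 1.0

# The example above: model says 55%, market implies 40%.
edge = expected_value(0.55, 0.40)  # ~ +0.375 EV per unit staked
```

A positive number means the market is underpricing the outcome relative to the model; that is where the chain of everything below pays off.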

How the probability chain works

The model runs a priority chain. For each fixture, it tries methods in order until it finds one it trusts:

  1. Elo ratings — if both teams have enough match history, use their Elo scores to model the match. Works well for well-tracked leagues.
  2. Poisson/goal rates — model expected goals for each team, then derive outcome probabilities from that distribution.
  3. Elo with Stryktipset context — a variant tuned for the specific mix of leagues Stryktipset tends to use.
  4. Opening odds — if nothing else works, infer probabilities from the bookmaker’s opening line, correcting for the margin.
  5. Baseline/prior — the last resort: historic 1X2 distribution for the fixture type.
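The chain itself is a plain fall-through. A sketch (method names, the trust threshold, and the fixture shape are all made up for illustration):

```python
from typing import Callable, Optional

Triple = tuple[float, float, float]  # (p1, pX, p2)

def estimate(fixture: dict,
             methods: list[Callable[[dict], Optional[Triple]]]) -> Triple:
    """Try each method in priority order; a method returns None when
    it doesn't trust its inputs for this fixture."""
    for method in methods:
        probs = method(fixture)
        if probs is not None:
            return probs
    raise ValueError("no method could price this fixture")

def elo_method(fixture: dict) -> Optional[Triple]:
    # Stub: would convert Elo ratings into a triple. Bails out when
    # match history is thin (the threshold here is an assumption).
    if fixture.get("matches_tracked", 0) < 30:
        return None
    ...  # Elo-to-probability conversion would go here

def opening_odds_method(fixture: dict) -> Optional[Triple]:
    # Fallback: invert decimal opening odds and strip the bookmaker
    # margin by renormalizing the implied probabilities to sum to 1.
    raw = [1.0 / o for o in fixture["opening_odds"]]
    overround = sum(raw)  # e.g. ~1.06 for a 6% margin
    return tuple(r / overround for r in raw)
```

The renormalization in the odds fallback is the "correcting for the margin" step: raw inverted odds sum to more than 1, and dividing by that sum recovers a proper distribution.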

Each fixture gets the best method available. The output is a probability triple (p1, pX, p2) that sums to 1.
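The Poisson step is worth spelling out, because it shows how expected goals turn into a 1X2 triple. A sketch assuming each side's goals follow an independent Poisson distribution, which is the standard simplification:

```python
from math import exp, factorial

def poisson_1x2(home_xg: float, away_xg: float, max_goals: int = 10):
    """Derive (p1, pX, p2) from expected goals by summing a grid of
    Poisson scoreline probabilities, truncated at max_goals."""
    def pois(lam: float, k: int) -> float:
        return exp(-lam) * lam**k / factorial(k)

    p1 = px = p2 = 0.0
    for h in range(max_goals + 1):
        for a in range(max_goals + 1):
            p = pois(home_xg, h) * pois(away_xg, a)
            if h > a:
                p1 += p       # home win
            elif h == a:
                px += p       # draw
            else:
                p2 += p       # away win
    total = p1 + px + p2      # renormalize the truncated grid
    return p1 / total, px / total, p2 / total
```

With realistic expected-goal numbers the truncation error at 10 goals is negligible, and the renormalization guarantees the triple sums to 1.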

Then there’s a blending step that mixes in crowd consensus (the “tio tidningar” newspaper picks) because the crowd is often more calibrated than you’d expect, especially for matches where data is thin.
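The blend is a convex mix of the two triples, renormalized. A sketch (the 0.3 default weight is an assumption, not the model's actual tuning):

```python
def blend(model, crowd, crowd_weight=0.3):
    """Mix the model's (p1, pX, p2) with the crowd consensus triple,
    then renormalize so the result still sums to 1."""
    mixed = [(1 - crowd_weight) * m + crowd_weight * c
             for m, c in zip(model, crowd)]
    total = sum(mixed)
    return tuple(x / total for x in mixed)
```

Raising the weight for thin-data fixtures is one way to act on the observation that the crowd is better calibrated exactly where the model's inputs are weakest.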

Backtesting

The part I found most satisfying: I ran backtests against a few years of historical Stryktipset rounds, comparing what the model would have picked against actual results.

Against a naive baseline that just picks the most common outcome for each fixture type, the model lands about 10-11% better on forecast accuracy — both standalone and when blending in crowd consensus.
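The backtest loop itself is unglamorous. Roughly (data shapes assumed; the real harness would compare this hit rate for the model against the same metric for the naive baseline):

```python
def hit_rate(rounds, predict):
    """Share of fixtures where the predictor's most likely outcome
    matched the actual result, across historical rounds.

    rounds:  iterable of rounds, each a list of (fixture, result)
             pairs where result is "1", "X" or "2"
    predict: fixture -> (p1, pX, p2)
    """
    hits = total = 0
    for rnd in rounds:
        for fixture, result in rnd:
            probs = predict(fixture)
            pick = "1X2"[probs.index(max(probs))]  # argmax outcome
            hits += pick == result
            total += 1
    return hits / total
```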

Not world-beating. But measurably better than guessing. And that gap is what makes the system worth running.


The data is the product

The thing that makes the Stryktipset model possible isn’t the code — it’s years of accumulated match data, historical rounds with results, opening odds, newspaper predictions. That’s what you can actually backtest against. The code is almost secondary.

This is the underrated thing about long-running side projects: they accrete data. Fotbollsfeber has been collecting match results since before I cared about any of this analysis stuff. Now that data is the foundation for something genuinely more interesting than a stats table.


What’s next

The model will get better as more rounds play out and I can tune the weights. I want to add more signal — team form curves, home advantage by league, how teams behave in certain weather conditions (genuinely useful for some leagues).

On the site side: more European coverage, better match detail pages, and maybe publishing the weekly model output publicly so people can see what it’s suggesting before each round.

If you’re into Swedish football or just curious what the site looks like: fotbollsfeber.se. The Stryktipset analysis is behind a login for now but the general stats are public. I also recently built a Mantine Theme builder — another side project, different domain, same itch.