Nordisk Kompagni
March 2026

We've Been Building for This Since 2017

By Paul Ostergaard

In 2017, we started a human-AI executive assistance service called MySigrid. The premise was simple: busy founders shouldn't spend their time on admin. We'd handle it — with a combination of skilled human assistants and whatever AI tools we could get our hands on.

This was before ChatGPT. Before "human in the loop" was a pitch deck slide. The AI tools available in 2017 were crude by today's standards, but they were enough for us to start seeing something interesting.

Every time one of our assistants worked with a client, two things happened. First, they completed the task — booked the flight, organized the inbox, prepared the briefing. Second, they learned something about how that client thinks. Their preferences. Their priorities. The way they make decisions when the options aren't clear.

Over time, this accumulated into something valuable. Not a dataset in the traditional sense — something more like institutional knowledge. The kind of understanding that means a good assistant doesn't ask you which airline you prefer, because they already know. They don't ask how you like your meeting briefings structured, because they've learned from a hundred previous corrections.

We recognized early that this knowledge was too valuable to leave scattered across inboxes and chat logs. So we built systems to capture it — every correction, every preference, every "no, not like that — like this." Our staff didn't always love the overhead. But we insisted, because we could see that the learning was the real product. Every correction was a signal. Every preference was a data point. The most valuable thing in the entire interaction wasn't the task completion — it was the judgment embedded in the correction.

We started calling this gap, between what the AI suggests and what the human actually does, the Action Delta. We believe it's the most undervalued signal in the entire AI industry.
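To make that concrete, here is a minimal sketch of what capturing one Action Delta could look like as a data record. It is purely illustrative: the names (ActionDelta, the diff property) and the fields are assumptions made for this post, not CognOS internals.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from difflib import unified_diff

@dataclass
class ActionDelta:
    """One captured Action Delta: the gap between what the AI
    suggested and what the human actually did. Hypothetical schema."""
    task: str          # e.g. "prepare meeting briefing"
    suggested: str     # the AI's proposed output
    final: str         # what the human actually sent or did
    captured_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

    @property
    def diff(self) -> str:
        """The correction itself, as a unified diff. In most AI
        tools this is exactly the signal that gets thrown away."""
        return "\n".join(unified_diff(
            self.suggested.splitlines(),
            self.final.splitlines(),
            fromfile="ai_suggested",
            tofile="human_final",
            lineterm="",
        ))

# A correction worth keeping: the client reordered the agenda and
# cut a section. That judgment is the data point.
delta = ActionDelta(
    task="meeting briefing",
    suggested="Agenda:\n1. Q1 numbers\n2. Hiring\n3. AOB",
    final="Agenda:\n1. Hiring\n2. Q1 numbers",
)
print(delta.diff)
```

The point of the sketch is the diff: in most tools the human makes that correction implicitly and the tool discards it, while here it is stored as a first-class object.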

Here's the thing nobody was talking about: in most AI tools, that signal vanishes. The user corrects the output, gets their result, and moves on. The correction — the moment where human judgment is most visible — goes nowhere. Or worse, it goes to the vendor.

Every prompt teaches the model something. Every correction refines it. Every preference, every workflow, every judgment call — it all flows into systems the user doesn't own, can't inspect, and can't take with them. The vendor gets smarter. The user starts over every time.

We saw this because we were on both sides of it. We were the ones accumulating knowledge about how our clients think. And we could see how much value that knowledge had — and how completely absent it was from the tools our clients used on their own.

The AI industry talks about "personalization" as a feature. Memory, preferences, custom instructions. But personalization without ownership is just a more sophisticated form of dependency. The tool remembers you so you can't leave. The knowledge that makes it useful is the same knowledge that locks you in.

We started thinking about what it would look like if the learning stayed with the client instead of the vendor. If every correction made their system smarter — not someone else's. If the knowledge accumulated in infrastructure the client controlled, on their terms, in their jurisdiction.

That's what we're building now. We call it CognOS.

CognOS is a learning execution system for recurring knowledge work. It captures the Action Delta — the gap between what AI suggests and what humans do — and turns it into persistent, portable knowledge. Knowledge that belongs to the client. Knowledge that stays in infrastructure they control. Knowledge that compounds over time, making the system better and the cost of serving each client lower the longer the relationship lasts.
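To illustrate what "persistent, portable knowledge" might mean at the data level, here is a hypothetical sketch of a client-owned store: corrections accumulate under topics, the system consults them before asking the client anything, and the whole store can be exported. KnowledgeStore, learn, hints_for, and export are illustrative names, not the CognOS API.

```python
import json
from collections import defaultdict

class KnowledgeStore:
    """Hypothetical client-owned knowledge store: every correction
    adds to it, and the whole thing can be exported and moved."""

    def __init__(self) -> None:
        # topic -> list of corrections observed under that topic
        self.preferences: defaultdict[str, list[str]] = defaultdict(list)

    def learn(self, topic: str, correction: str) -> None:
        """Record one observed correction (one Action Delta)."""
        self.preferences[topic].append(correction)

    def hints_for(self, topic: str) -> list[str]:
        """What the system already knows, so it stops re-asking."""
        return list(self.preferences.get(topic, []))

    def export(self) -> str:
        """Portability: serialize everything the client owns."""
        return json.dumps(self.preferences, indent=2)

store = KnowledgeStore()
store.learn("flights", "prefers direct routes over cheaper connections")
store.learn("briefings", "one page, decisions first, background last")

# Next time the same task comes up, consult the store instead of
# asking the client to repeat themselves.
print(store.hints_for("briefings"))
print(store.export())  # the client's knowledge, in the client's hands
```

The export method is the point: if the client can serialize everything the system has learned and take it elsewhere, the personalization no longer doubles as lock-in.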

It's EU-hosted, privacy-first, and built in Copenhagen — because we've spent nearly a decade seeing what happens when organizations don't control their own knowledge infrastructure.

Nearly a decade of running a human-AI operation taught us something we keep coming back to: the output isn't where the real value is. The correction is. The moment where a human says "no, not like that" — that's the signal that captures judgment, preference, and expertise. And right now, almost nobody is keeping it.

Paul Ostergaard is the founder of Nordisk Kompagni. He serves as a Reserve Officer and Defence AI Adviser at Danish Defence Command, and sits on the board of Herlufsholm.