Morten and I have spent over a decade learning that building successful startups isn’t just about cutting-edge tech or unlimited resources. In this blog, we’re going to share how asking the right questions, embracing constraints, and making incremental progress helped us scale two successful companies. It’s a story about frugal innovation and pragmatic engineering, where technical constraints pushed us to build better, more sustainable solutions.


I’ve spent over a decade building and scaling startups alongside my colleague and friend Morten. We first worked together at Endomondo, a sports tracking app where he was CTO and I joined as a backend engineer in 2012. After Endomondo’s acquisition by Under Armour, our paths crossed again at Too Good To Go, where I signed on in early 2018 and Morten joined as CTO a few months later. Through these ventures, we’ve learned valuable lessons about growing technology platforms under tight budget constraints. This is our story of building startups with a focus on frugality and practical solutions.

[S01E05] The Frugal Architect Podcast: Too Good To Go

Our journeys into technology started decades ago, albeit on slightly different paths. For Morten, it began in the 80s with Danish computers like the Picolo and the Comal 80 language – tools used back then primarily to teach kids, and to play Pong. After studying engineering, he spent years at Nokia, deep in the world of embedded software for mobile phones.

My fascination also started young, thanks to a dad who worked with IBM mainframes and sometimes brought home huge portable PCs – the suitcase kind! That led to me typing in BASIC programs from magazines, modifying them through trial and error (at age eight), and poking around in games to find things like unlimited lives. Later, armed with a Turbo Pascal book from the library and a compiler that came with the family PC, I let my dream of making games spur further exploration into the Demoscene world – being creative with limited hardware, low-level assembly, graphics before GPUs – all before the internet really blew things open and made learning accessible like never before. It’s truly great how easily we can get started on something new today.

Lessons in Frugality at Endomondo

Our professional paths eventually converged at a small Danish company called Endomondo. Morten joined in 2009 after leaving Nokia, drawn to the idea of using the emerging GPS technology in phones to track running. Through a chance connection with the founders, he became the second engineer at a company making almost no money. The first task? Building a Symbian sports tracker while the only other engineer built the server – quite literally – and put it in a colocation space here in Copenhagen. It was just the two of them, figuring out APIs, making it work, and watching the first few thousand users trickle onto the platform. As it grew and gained some investment, they could finally hire more people. That’s when I came aboard, around 2012.

Joining Endomondo was a welcome change after starting my career at an insurance company. Some of the discipline learned there was valuable – beyond following strict processes to comply with regulations, it was about thinking beyond just your own code and considering very long-term maintainability – but the startup world offered a different kind of challenge. I stumbled upon the job post by chance, and the mission resonated, as I had started running a few years prior. I joined as a backend engineer on their Java Apache Wicket system just as user numbers were hitting an exponential growth spurt.

The infrastructure, however, wasn’t quite keeping pace. In those early days, we were still dealing with physical servers, even making the move from a local Danish hosting company to AWS EC2 instances. But even on AWS, we weren’t “herding cattle” yet; we had “pets” – named servers like mobile-1, web-1, etc. That works for a start, but it doesn’t scale well, a recurring theme in our professional lives!

Thankfully, the underlying architecture was solid due to the previous CTO’s strict adherence to classic monolith application patterns. It was a very good start, but there was room for improvement, like implementing proper CI/CD instead of releasing from laptops – not a given back in 2012.

It was a fun, intense period. We decided to move fully cloud-based, shifting to AWS. A few others and I did most of the heavy lifting on the technical side. Suddenly you have to consider which services to use, how to get the most out of them, how to plan the migration, and how to set up auto scaling, because a plain lift-and-shift gets expensive when you pay per hour of use. Morten’s main contribution to that specific migration might have been physically driving out to the old hosting center, collecting four very dusty physical machines, and bringing them back to the office for scrapping!

Endomondo grew successfully and was eventually acquired by Under Armour. But a crucial aspect of its early days, back in 2012-2013, was our mindset regarding revenue. We thought charging customers was for losers, and advertising was even worse! Our only source of income was a few small partnerships. We had to be incredibly careful with spending, especially on AWS costs. Necessity bred a deep sense of frugality.

Cash for Growth, Not Upkeep

By now, Morten was CTO, and I was running much of the backend architecture alongside our single, very productive DevOps engineer. As a sports tracker, Endomondo generated vast amounts of data – GPS points, speed, heart rate, cadence. Today, time-series databases are common, but back then, we relied on self-managed MySQL running on raw EC2 instances (not even RDS initially). The application was architected well but traditionally: fully normalized, meaning lots of joins, strong consistency – sound computer science, but a nightmare for scaling with millions of users’ workout data in a single monolithic database (a normal approach before “microservices” became mainstream). We had to “undo” some of the strong consistency to allow referencing between different databases and avoid lock escalation on inserts. Additionally, we denormalized tables to allow for faster queries without joins. These were patterns we would use much later at Too Good To Go when going global.

That workout data clogged the core database and consumed memory. One of our first scaling moves was to split this data out into separate databases, adding horizontal sharding support at the same time, which bought us some time. We were still running databases on EC2, often using fast (but risky) ephemeral storage for the primary node and slower EBS volumes for backups, relying heavily on AWS stability as automatic failover wasn’t really baked in.
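For illustration, here is a minimal Java sketch of what routing workout data to a shard can look like – not our original code, and the shard URLs are invented – but it captures the core idea: pick the shard deterministically from the user ID so one user’s workouts always land in the same place.

```java
// Illustrative sketch only, not Endomondo's actual code: route a user's workout
// data to one of several database shards split out of the core database.
import java.util.List;

public class WorkoutShardRouter {

    // Hypothetical JDBC URLs for the workout databases.
    private final List<String> shardJdbcUrls;

    public WorkoutShardRouter(List<String> shardJdbcUrls) {
        this.shardJdbcUrls = shardJdbcUrls;
    }

    /** Pick a shard deterministically from the user id, so a user's workouts stay together. */
    public String shardFor(long userId) {
        int index = (int) Math.floorMod(userId, (long) shardJdbcUrls.size());
        return shardJdbcUrls.get(index);
    }

    public static void main(String[] args) {
        WorkoutShardRouter router = new WorkoutShardRouter(List.of(
                "jdbc:mysql://workout-shard-1/workouts",
                "jdbc:mysql://workout-shard-2/workouts"));
        System.out.println(router.shardFor(123456L)); // always the same shard for this user
    }
}
```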

Costs kept climbing, and the team was swamped trying to scale and build features. The conversation inevitably happened: “Can we get more engineers?” The answer was tied to our budget: “We can’t afford it… unless you can save money on infrastructure.” That was the challenge. It’s a powerful motivation in a startup – you want cash going to growth, not just upkeep.

The solution involved changing how we stored older workout data. We asked ourselves “why keep all this data hot?” which led to the realization that most users rarely access workouts more than a month old. Anything older than a month or two got zipped up in batches and pushed into cheap AWS S3 storage. This required application changes to handle accessing and editing archived data, but it wasn’t an overly long project. Fairly quickly, we saved more than enough to hire that needed engineer.
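A rough sketch of that archiving step, assuming the AWS SDK for Java v2 – the bucket name, key layout, and serialization are placeholders, and the real project also had to handle reading and editing archived workouts:

```java
// A minimal sketch of the idea, not the original implementation: compress a batch
// of old workouts and push it to cheap S3 storage.
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;
import java.util.List;
import java.util.zip.GZIPOutputStream;

import software.amazon.awssdk.core.sync.RequestBody;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.PutObjectRequest;

public class WorkoutArchiver {

    private final S3Client s3 = S3Client.create();

    /** Gzip the serialized workouts and store them under a per-user, per-month key. */
    public void archive(long userId, String month, List<String> serializedWorkouts) throws Exception {
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        try (GZIPOutputStream gzip = new GZIPOutputStream(buffer)) {
            for (String workout : serializedWorkouts) {
                gzip.write(workout.getBytes(StandardCharsets.UTF_8));
                gzip.write('\n');
            }
        }
        PutObjectRequest request = PutObjectRequest.builder()
                .bucket("workout-archive")                       // hypothetical bucket
                .key("user/" + userId + "/" + month + ".gz")     // hypothetical key layout
                .build();
        s3.putObject(request, RequestBody.fromBytes(buffer.toByteArray()));
        // Only after a successful upload would the hot rows be deleted from MySQL.
    }
}
```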

This became our model for a couple of years: save money or grow revenue, then reinvest directly into engineering. We grew the team slowly, affordably, perhaps reaching 30 engineers out of 50 total staff at our peak. Being strict about technical spending was crucial. Scaling was a constant battle. I vividly remember those early startup days when everyone technical was perpetually on call. And every Sunday with good weather, our servers would be hammered: people were out running, tracking workouts, and for too long we’d buckle under the load, desperately trying to cope. We actually got a lifeline when AWS introduced EC2 instances with faster SSD ephemeral drives. For our IOPS-heavy database, that bought us at least a year or two of runway without needing a major re-architecture we couldn’t afford the time or people for.

Through all this, we learned to be super strict about what we worked on. In a cash-constrained startup, you can’t just chase shiny objects. Our fundamental philosophy became: always ask why. What’s the real problem? Does it need solving now? Is it important enough? Maybe the goal is a new feature, or saving money. Start there. Thinking carefully before starting projects probably saved us the most money over time.

This approach, this frugality, isn’t just about pinching pennies; it’s about spending wisely and not overdoing things. It’s something we’ve carried with us. At Too Good To Go, you see frugality reflected in the core mission – preventing food waste.

Applying Our Frugal Philosophy to Food Waste

We both eventually felt the pull back towards building something smaller after Endomondo’s acquisition and integration. The former Endomondo CEO, Mette Lykke, connected with people in late 2016 who had a brilliant idea: help retailers waste less by selling surplus food as surprise bags via an app. This was the genesis of Too Good To Go. Mette invested, then became CEO. She inherited a very small tech team and a platform built rapidly – great for proving the concept, questionable for scaling. She reached out.

I took the bait first, joining in January 2018. The task felt familiar: scale a promising startup whose tech wasn’t ready for the growth ahead, though the potential scale felt even larger this time. Morten followed about five months later as CTO, reuniting the ‘old crew’. Mette was clear: resources were limited. We needed to grow the team and tech sensibly, sustainably.

We started by asking why. We assessed the tech, applied spot-fixes, and kept the business running. But some early experiences were… shocking. Morten recalls one evening during the 2018 Soccer World Cup. Denmark was playing, so the small dev team was likely occupied. Unbeknownst to us, our French country manager was live on national TV. Since France wasn’t playing, people were watching. They apparently liked what they saw, downloaded the app en masse, and promptly crashed the entire system. At the time we did have autoscaling on compute, but nothing comparable on our Aurora MySQL database – we weren’t even using read replicas to scale reads, so the database quickly buckled under the volume of queries and took us down. Funny now, terrifying then. Our big TV break, and nothing worked! It took hours to recover. We only had perhaps 100,000 daily users then – tiny compared to today – but the system couldn’t handle the spike.
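For context, here is a bare-bones Java sketch of the kind of read/write split we were missing that night – read-heavy queries pointed at Aurora’s reader endpoint instead of everything hitting the single writer. The endpoints and credentials are placeholders, not our actual setup:

```java
// Illustrative only: route read-only queries to the Aurora reader endpoint so the
// single writer isn't overwhelmed. URLs and credentials are made up.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class DataSources {

    private static final String WRITER_URL =
            "jdbc:mysql://my-cluster.cluster-xyz.eu-west-1.rds.amazonaws.com/app";
    private static final String READER_URL =
            "jdbc:mysql://my-cluster.cluster-ro-xyz.eu-west-1.rds.amazonaws.com/app";

    /** Connections for inserts and updates go to the single writer instance. */
    public static Connection writer() throws SQLException {
        return DriverManager.getConnection(WRITER_URL, "app", System.getenv("DB_PASSWORD"));
    }

    /** Read-heavy endpoints (like the feed) use the reader endpoint, which spreads load across replicas. */
    public static Connection reader() throws SQLException {
        Connection connection = DriverManager.getConnection(READER_URL, "app", System.getenv("DB_PASSWORD"));
        connection.setReadOnly(true);
        return connection;
    }
}
```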

Image from French Television
This was the interview that took us down.

We knew we had to rebuild, but a full rewrite or running two systems in parallel felt wrong. We opted for a gradual migration: rewrite and replace one API or component at a time, connecting the new code (Java, replacing PHP) to roughly the same underlying database initially. I got the task of architecting the plan for the first piece.

The French TV incident highlighted the fragility, but there were other recurring pains, like the insane midnight rush caused by releasing all the next day’s food listings simultaneously. The initial implementation used a calendar day as the sales period – a day being defined as the CET date, with no timezone handling and flaky support for daylight-saving changes – which is why the system opened the next day’s sales at midnight. Users literally set alarms for it! These pain points guided our migration strategy: tackle the biggest bottlenecks first. The philosophy remained: ask why, avoid surprises, choose reliable, known tech. Be boring, not stale. Don’t risk the founders’ investment on experiments.
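To make the problem concrete, here is a toy java.time example written for this post (not the legacy code), contrasting the old “everything opens at midnight CET” behaviour with one possible per-store, timezone-aware alternative; the stores and opening times are invented:

```java
// Toy illustration of the "CET date" problem described above.
import java.time.LocalDate;
import java.time.LocalTime;
import java.time.ZoneId;
import java.time.ZonedDateTime;

public class ReleaseTime {

    /** Legacy behaviour: every listing opens at midnight CET, regardless of where the store is. */
    static ZonedDateTime legacyReleaseTime(LocalDate salesDate) {
        return salesDate.atStartOfDay(ZoneId.of("Europe/Copenhagen"));
    }

    /** Alternative: open listings in the store's own timezone, at a configurable local time. */
    static ZonedDateTime storeReleaseTime(LocalDate salesDate, ZoneId storeZone, LocalTime localOpening) {
        return ZonedDateTime.of(salesDate, localOpening, storeZone);
    }

    public static void main(String[] args) {
        LocalDate tomorrow = LocalDate.now().plusDays(1);
        System.out.println(legacyReleaseTime(tomorrow));                                        // midnight CET for everyone
        System.out.println(storeReleaseTime(tomorrow, ZoneId.of("America/New_York"), LocalTime.of(6, 0)));
    }
}
```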

Our first “why” question – “why did the site lock up?” – led us to find that our feed-generation API, written in PHP, had spawned tens of thousands of PHP processes that quickly consumed the available memory while waiting to connect to a single database instance, with no connection reuse and no separate connection settings for read replicas. Thus, the first target was the main feed generator, which hammered the database with inefficient queries, especially around location filtering. Moving from PHP to Java enabled better 2-tier caching and easy integration with services like Elasticsearch. The process also forced architectural rethinking – moving towards eventual consistency, separating read and write paths, and questioning whether every piece of data truly needed millisecond accuracy. This focus on scalability – considering server limits, data payloads, simplicity – persists.
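As a sketch of the 2-tier caching idea – an in-process cache in front of a shared cache, with the database only as a last resort – here is roughly what that shape looks like in Java. The RemoteCache interface stands in for something like Redis or Memcached; this is not the production feed code:

```java
// Rough sketch of a 2-tier cache: check the local in-process cache, then a shared
// remote cache, and only then run the expensive database query.
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

public class TwoTierFeedCache {

    /** Stand-in for a shared cache such as Redis or Memcached. */
    public interface RemoteCache {
        Optional<String> get(String key);
        void put(String key, String value);
    }

    // A real implementation would add TTL and eviction to this local map.
    private final ConcurrentHashMap<String, String> localCache = new ConcurrentHashMap<>();
    private final RemoteCache remoteCache;

    public TwoTierFeedCache(RemoteCache remoteCache) {
        this.remoteCache = remoteCache;
    }

    /** Local cache first, shared cache second, database last. */
    public String feedFor(String regionKey, Function<String, String> loadFromDatabase) {
        String local = localCache.get(regionKey);
        if (local != null) {
            return local;
        }
        String remote = remoteCache.get(regionKey).orElse(null);
        if (remote == null) {
            remote = loadFromDatabase.apply(regionKey); // the expensive query, now taken rarely
            remoteCache.put(regionKey, remote);
        }
        localCache.put(regionKey, remote);
        return remote;
    }
}
```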

Our ambition was to eliminate the last line of PHP by the end of 2018. Reality? It took two years. But crucially, the highest-risk, highest-load components were migrated within the first year, allowing us to sleep better. Less critical parts followed. Importantly, we never had to pause building business features – vital for a startup finding its product-market fit. Later, we even proxied requests through the new system back to the legacy one for low-priority endpoints, allowing incremental migration. The day the old system was finally turned off was definitely a celebration! Could we have done things differently? With 20/20 hindsight, always. But we kept asking ourselves the right questions and kept the business moving forward along the way.
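The proxying trick is simple to picture. Here is an illustrative Java sketch – the paths and the legacy host are made up – of a front service that handles migrated endpoints itself and transparently forwards everything else to the old PHP system:

```java
// Illustrative sketch of the proxy described above: serve migrated endpoints from the
// new Java service and forward the rest to the legacy PHP system.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Set;

public class LegacyProxy {

    private static final Set<String> MIGRATED_PATHS = Set.of("/feed", "/orders"); // hypothetical
    private static final String LEGACY_BASE_URL = "http://legacy-php.internal";   // placeholder

    private final HttpClient httpClient = HttpClient.newHttpClient();

    /** Handle migrated endpoints locally; transparently forward everything else to the old system. */
    public String handle(String path) throws Exception {
        if (MIGRATED_PATHS.contains(path)) {
            return handleInNewSystem(path);
        }
        HttpRequest request = HttpRequest.newBuilder(URI.create(LEGACY_BASE_URL + path)).GET().build();
        HttpResponse<String> response = httpClient.send(request, HttpResponse.BodyHandlers.ofString());
        return response.body();
    }

    private String handleInNewSystem(String path) {
        return "handled by the new Java service: " + path; // stand-in for the real handlers
    }
}
```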

Just as we celebrated that milestone, the business decided the next step was to launch in the US. Latency tests were poor. It was clear we needed a multi-datacenter setup. Management wisely insisted on a single global app experience. So, the PHP retirement party became the kickoff for our next major architectural challenge. Taking an application not built for geographic distribution and making it work across oceans was complex, involving routing, data consistency, and key management. Geographic expansion became the main engineering focus for about six months. But we asked ourselves: “does every part of the system need to be global?” This critical question guided our approach to designing a more efficient global architecture. We built an architecture that serves requests locally where possible, only crossing oceans when necessary. We’ve since expanded this to three data centers, including Australia, and the architecture still holds up.
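To give a flavour of that question in code, here is a purely illustrative Java sketch of a per-request routing decision. The regions and the data classification are made up, but it shows the principle of serving from the local region unless the data truly has to come from a single home region:

```java
// Not the actual routing layer: a sketch of "serve locally, cross oceans only when
// necessary" applied to a single request.
public class RegionRouter {

    enum Region { EU, US, AU }

    /** Illustrative classification: most data can be served from the user's local region. */
    static boolean canServeLocally(String dataType) {
        return switch (dataType) {
            case "stores", "listings", "orders" -> true;
            case "account" -> false; // hypothetical example of data pinned to a home region
            default -> true;
        };
    }

    static Region targetRegion(Region requestRegion, Region homeRegion, String dataType) {
        return canServeLocally(dataType) ? requestRegion : homeRegion;
    }

    public static void main(String[] args) {
        // A user in Australia browsing listings is served from AU; pinned data may cross to EU.
        System.out.println(targetRegion(Region.AU, Region.EU, "listings")); // AU
        System.out.println(targetRegion(Region.AU, Region.EU, "account"));  // EU
    }
}
```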

Looking back, both at Endomondo and Too Good To Go, the common thread is that constraints breed creativity and enforce frugality. Limited budgets forced us to be smarter, more focused, and deliberate in our technical choices. This approach taught us valuable lessons that continue to shape how we build technology today:

  1. Question everything: Our “always ask why” philosophy went beyond saving pennies. It became a powerful tool for distilling complex problems down to their essence, often revealing simpler solutions we might have overlooked.
  2. Incremental progress is key: Whether sharding databases at Endomondo or replacing PHP at Too Good To Go, taking measured steps allowed us to improve systems without disrupting the business. Each increment had to deliver tangible value – in stability, scalability, or cost savings.
  3. Embrace “boring” technology: We consistently chose proven, well-understood tools over cutting-edge alternatives. This approach might not be exciting, but it provided the predictability and reliability that growing startups desperately need.
  4. Targeted innovation: When faced with challenges like global expansion, we carefully evaluated which components truly needed new solutions. This targeted approach to innovation helped us manage complexity and costs effectively.

Frugality wasn’t just about saving money; it was about making wise investments in technology that solved real problems and enabled sustainable growth. By staying focused on actual needs rather than chasing the latest trends, we built systems that could scale with the business while keeping costs under control. These experiences have instilled in us a deep appreciation for pragmatic, thoughtful engineering – an approach we continue to apply in our work today.