A Brief History of Our New Digital World

8 min read · Jul 11, 2023


How did the change from dial-up to broadband internet and the emergence of the cloud change our world forever? How did new economy companies like Uber and Airbnb leverage these while contributing to the creation and bloom of open-source communities? In this article, we explore the history of digitalization in the past two decades, the main milestones, and the technologies that brought us to a new era of innovation, reshaping the way we live, work, and connect.

In the last twenty years, we have seen the world change rapidly. Our phones became miniature computers, we get most of our news from online sites instead of the paper, our cars can park themselves, and we can now get many things done with a few clicks on the Internet. Our lives went through a complete transformation in a mere two decades.

These changes that we have experienced are — in one way or the other — all related to the radical metamorphosis that took place in the world of software and connectivity. In fact, we can say that our current digital world originates from a number of innovations in technology that have happened in this relatively short period since the dawn of the new millennium.

The age of connectivity begins

To explain how it all started, we have to go back to the turn of the century. This was when the first main milestone occurred: the switch from dial-up modems to broadband Internet, which caused a significant shift toward today’s digital economy.

Before 2000, there were only a limited number of connected devices. Then broadband Internet kicked in, spreading to households en masse within a few years. Around 2003–2004, we experienced an explosion in the number of connected devices. After that, connectivity and the rapid innovation of technology became unstoppable: the first iPhone was unveiled in 2007, and 4G appeared in 2009, just to mention a couple of examples.

Today, household access to broadband is considered a basic utility in many countries; Internet penetration now reaches approximately 62.5% of the world's total population. Machine-to-machine (M2M) connectivity and the Internet of Things have emerged, and everything is connected at all times.

The birth of the ‘new economy’

There was another important watershed before the turn of the century that we have to mention here: the emergence of the cloud. It became available around 1997–98 in the US, and by 2003, in some regions of Europe as well. It offered an infrastructure of computing, storage, and bandwidth that had previously been available only to the privileged few, giving small companies and start-ups the opportunity to leverage this infrastructure too. This opened the door to business opportunities that were different from the usual, and within a few years, a completely new economy began to emerge. Tech startups that are now giants began popping up on the scene: LinkedIn was launched in 2003 and Facebook in 2004; Airbnb was founded in 2008, and Uber followed in 2009. They were disruptive and fundamentally different from the businesses they were starting to compete with: the big, traditional firms that were dominating the scene.

The reason why these new economy companies were capable of being born in the first place was the cloud. That is why we also refer to them as ‘cloud-native’: they were born on the cloud.

What these companies had in common were data, customers, and their new subscription models. They all have some form of messaging, chat, or other communication function embedded in their services, and their core business propositions are all driven by data and customer insight. These services produce a new pattern of data creation and consumption, with a requirement to process the data instantly. If the number of customers suddenly increases, they still have to provide the same quality and speed of service. Hence, these companies all needed distributed, highly scalable software platforms.

The bloom of open-source software

Since no such solution was available off the shelf back then, most of these companies had to start developing new types of software frameworks that fit their needs. As such, software was an elementary ingredient of their success. They, however, were not traditional software licensing houses. So when writing their new architectures, they open-sourced them and shared them with the community, hoping it would contribute to and improve their software. The result was a wave, starting around 2011–2012, of newly open-sourced distributed, scalable software architectures: Kafka came from LinkedIn, Cassandra from Facebook, Spark from UC Berkeley's AMPLab, and later Jaeger from Uber, just to mention a few.

This created a new open-source community, from which new software houses, like Confluent and Databricks, were born. These new firms started to provide commercial support and licensing around these open-source software packages, something that simply did not exist before.

But, one may ask, why didn’t they exist before? Why now, and why was it not possible for someone to come up with such solutions even a few years before this new economic phenomenon?

The answer lies in the requirements of the old and the new software worlds and the very different data processing capabilities they demand. In the former, where you have a limited number of connected devices and online users, you only need a fairly restricted amount of data sharing. The main architectures leveraged back then (and still today by some companies) mostly followed a simple client-server model. With the explosion in connected devices and online users, however, it quickly became clear that these traditional architectures would simply not be able to process these volumes and would inevitably become a bottleneck.

With this in mind, it was clear that the new world needed a fundamentally different model for processing the increased amount of data. It needed platforms that were always available, secure, and scalable. Platforms that could provide the same performance and user experience even if there’s a sudden surge in the number of users. Platforms that are created with larger data volumes in mind that fit the needs of today’s connected world.

The disruption becomes the new normal

To enable all this, the traditional client-server architecture was replaced by a so-called publish-subscribe paradigm. Instead of a single server, this model has many brokers, which are replicated and synchronized. Publishers can also be subscribers and vice versa, enabling users to both consume content and provide feedback, make remarks, and share content of their own.
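To make the contrast with client-server concrete, here is a minimal sketch of the publish-subscribe pattern in Python. It is an illustrative in-process toy, not a real broker like Kafka: the `Broker` class, topic names, and callbacks are all assumptions invented for this example. The key property it demonstrates is decoupling: the publisher only knows the topic, never the subscribers.

```python
from collections import defaultdict

class Broker:
    """A toy in-process publish-subscribe broker (illustration only)."""

    def __init__(self):
        # Maps each topic name to the list of subscriber callbacks.
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        """Register a callback to receive every message on a topic."""
        self.subscribers[topic].append(callback)

    def publish(self, topic, message):
        """Deliver a message to all current subscribers of a topic.

        The publisher never addresses subscribers directly; the broker
        fans the message out, so producers and consumers stay decoupled.
        """
        for callback in self.subscribers[topic]:
            callback(message)

# Two independent subscribers on the same topic.
broker = Broker()
received = []
broker.subscribe("news", lambda msg: received.append(("reader-1", msg)))
broker.subscribe("news", lambda msg: received.append(("reader-2", msg)))

# One publish fans out to both subscribers.
broker.publish("news", "broadband adoption keeps rising")
```

In a production system the broker would be a replicated cluster of processes, and the same participant could call both `subscribe` and `publish`, which is exactly the publisher-can-be-subscriber symmetry described above.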

As we often see in history, change and innovation tend, at least in their early stages, to face resistance from those profiting from the traditional ways. Incumbents protected their existing businesses and stuck to the old models.

The same cannot be said for the customers, though. They enjoyed the opportunities and advantages the new economy companies offered, and these services gained enormous traction. As a result, the world as we know it has completely changed, and the companies that resisted found their old business models challenged. They had no choice but to adapt.

Embrace the disruption or fall behind

While innovation may face some resistance initially, market players will begin to adapt once it catches on with the crowd. It was the same with the cloud: in time, large companies also realized its value. Today, storing data in the cloud is perfectly normal; in fact, many organizations seem to think that "moving everything to the cloud" can substitute for an IT strategy altogether.

Our fundamental belief is that this is an inappropriate way of leveraging the cloud. Nothing in this world is black and white. All of our projects take place in a hybrid setting, where on-premises and cloud deployments complement each other. An on-premises environment, as long as it can run cloud-native technologies, can live in perfect harmony with a pure cloud environment. Geopolitical, data governance, and data sovereignty requirements should also be kept in mind when selecting infrastructure partners. Just as there is no one-size-fits-all platform, the same holds true for infrastructure. It all comes down to use case types, cost, and security and privacy by design.

Along with scalability, real-time data is something all companies will have to embrace if they don't want to fall behind. In our current culture of immediate gratification, we are used to accessing anything in the blink of an eye. This customer attitude dominates the service industry, and companies that don't adapt to it will inevitably lose their competitive edge. The service you provide has to work immediately and at all times. If it doesn't, customers will choose someone else.

To this day, many companies have yet to initiate their digital transformation. The world is moving on, forcing them to become proactive and modernize a legacy IT estate that only allows for a reactive approach. It is crucial that they catch up if they want to stay relevant in tomorrow's digital economy.

About Klarrio

Our company originates from around the time when this whole new world started to emerge. In 2010, the founders of Klarrio were working at Technicolor, where they were given the opportunity to launch their own incubation program called Virdata. They set out to build the first platform as a service (PaaS) for what we now refer to as the Internet of Things. Back then, the phenomenon and the term were both still unknown.

Before 2010 and prior to Virdata, the Klarrio founders experienced the broadband, always-connected era by creating gateway firmware and doing standardization work on remote device management protocols like TR-069 as part of the connected home division of Thomson Telecom (later known as Technicolor).

With Virdata, our team built a new cloud using the open-source components that became available around that time. We have been working with open-source technologies ever since, building up over a decade of experience and expertise.

In 2016, we started Klarrio as a software company, leveraging all the knowledge and expertise accumulated over the last twenty years, having lived through these changes that shook the world to its core. Having worked in the days before unlimited cloud subscriptions, we were always forced to be efficient with computing, storage, and bandwidth consumption. We think carefully about how we write, structure, and test code, in an ongoing quest to pick the technologies that best fit our customers' projects.

For more information, you can visit our website, www.klarrio.com, or follow us on LinkedIn and Twitter.




Klarrio empowers you with tailor-made, scalable data platforms & microservices for real-time data processing across various cloud & on-premises infrastructures.