
Cloud, Edge, or Bare Metal? Making the Right DevOps Decisions


In the world of DevOps, the choice of infrastructure—whether it’s cloud, edge, or bare metal—can significantly impact the success of a project. Each environment has its strengths and weaknesses, and knowing when to use each is a critical skill for any DevOps engineer. My experiences across these environments have taught me valuable lessons about flexibility, adaptability, and making informed decisions. This blog post shares those lessons and offers insight into navigating the complex landscape of modern infrastructure.

The Early Days: Experimenting with the Cloud

My first experiences with what we now call “the cloud” began when I was just 12 years old. Of course, back then, the term “cloud computing” wasn’t in use, but the concepts were already there. According to the NIST definition, cloud computing is characterized by on-demand self-service, broad network access, resource pooling, rapid elasticity, and measured service. Even in 2002, I was able to leverage some of these principles through services that, in hindsight, were early forms of cloud computing.

One of my first forays into a cloud-like environment was a web hosting service called upage.de, run by a cousin of an old friend. This startup offered to host my website, a significant achievement for a 12-year-old. Although the platform looks dated now, it gave me a glimpse into the potential of web-based services long before the term “cloud” became mainstream.

The service offered by upage.de didn’t include a database or PHP support in its free tier, which led me to explore other options. I found db4free.net for database hosting and a free, ad-supported PHP host at funpic.de (no longer available). These services allowed me to cobble together a functional web presence, even if it meant users had to endure the occasional pop-up ad. While these early experiments were rudimentary, they laid the foundation for my understanding of cloud services and the flexibility they offer.

Between 2007 and 2012, I took a break from web development to focus on other interests, including enjoying life in Berlin and transitioning from a Windows user to a Mac user—a decision that would later prove advantageous in my development career. The cloud gained significant traction during this period, with services like Dropbox going viral and the term “cloud” becoming a buzzword. Amazon AWS was gaining popularity, making it possible to not just host web servers but also entire infrastructures (IaaS) and development platforms (PaaS) in the cloud.

A Break from Coding: A Brief Detour

After my cloud experiments, I found myself moving away from development, focusing instead on my career at Daimler Financial Services, where I managed Excel sheets rather than code. However, my passion for technology never waned, and in 2014, I started working in the Manufacturing Execution System (MES) department at Mercedes-Benz Passenger Cars. My role as a project manager and system owner (or what we might now call a technical product owner) involved intense integration tests with each release.

At the same time, I started pitching innovation ideas for the company, participating in hackathons and other competitions. Although I enjoyed the developer experience on my Mac, I quickly realized that my skills were rusty, and I lacked the depth of knowledge needed to excel in these competitions. However, my innate curiosity and willingness to learn kept me motivated.

Whenever I found myself with free time—especially during business trips to China—I dedicated my evenings and taxi rides to refreshing my technical skills. This led me back to Python and a framework called Django, which is used to build database-driven web applications. Django is a full-stack development framework similar to Java Spring Boot, and it was incredibly easy to get started with it on my MacBook. The command line proficiency I developed during this period was crucial, as most Python and Django tutorials required a good understanding of the terminal.
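Django’s biggest convenience is its ORM: you declare tables as Python classes and the framework generates the SQL. As a rough, stdlib-only sketch of the kind of work it automates (the table and column names here are made up for illustration, not taken from any real project):

```python
import sqlite3

# In Django you would declare something like
#   class Article(models.Model): title = models.CharField(max_length=200)
# and the framework would generate and run SQL similar to this for you.
conn = sqlite3.connect(":memory:")  # throwaway in-memory database
conn.execute("CREATE TABLE article (id INTEGER PRIMARY KEY, title TEXT)")
conn.execute("INSERT INTO article (title) VALUES (?)", ("Hello, Django",))

# Query the row back, as a Django QuerySet would.
titles = [row[0] for row in conn.execute("SELECT title FROM article")]
print(titles)  # ['Hello, Django']
conn.close()
```

Hiding this boilerplate behind model classes is exactly why the framework felt so easy to get started with on my MacBook.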

On-Premise: Developing Locally and Scaling Up

The transition from cloud-based development to on-premise work was a natural progression. The easiest way to start on-premise development is to work on your own computer, whether it’s a Mac, Linux, or Windows PC. On-premise simply means that you own or rent the hardware on which your applications run. When I began developing a tool to manage data artifacts for the Mercedes-Benz MES, I used Django on my Mac. This setup worked well, but when I tried to replicate it on my Windows company laptop, I encountered several challenges.

Installing Python and the necessary packages on a corporate Windows machine was far more difficult than on my personal Mac. Firewalls, proxy servers, and administrative privileges created numerous obstacles. However, with the help of colleagues from IT infrastructure, I eventually managed to get everything running smoothly. This experience taught me the importance of understanding the nuances of different operating systems and the value of a supportive network within a large organization.

One of the most significant challenges I faced was scaling my solution. To simulate Siemens PLCs at scale, I needed to deploy my Django stack on a Linux server. This seemed nearly impossible with my existing knowledge, but a colleague introduced me to Docker. Docker is a containerization platform that allows you to package your application and its dependencies into a portable container that can run on any machine, regardless of the underlying operating system. This was a game-changer for me.
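The heart of a Docker setup is the Dockerfile, which describes how the container image is built. A minimal sketch for a Django app like mine might look as follows; the base image, file names, and port are illustrative, not the exact configuration I used:

```dockerfile
# Start from an official slim Python base image.
FROM python:3.12-slim
WORKDIR /app

# Install dependencies first so this layer is cached between builds.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code and run the Django development server,
# listening on all interfaces inside the container.
COPY . .
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
```

With something like `docker build -t myapp .` and `docker run -p 8000:8000 myapp` (image tag hypothetical), the same container runs unchanged on a Mac, a corporate Windows laptop, or a Linux server.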

I became proficient with Docker, running containers on my laptop and transferring them to target machines with ease. This new skill even led to me sporting a Docker sweater in the office, a testament to how deeply I had embraced this technology. Docker not only simplified my development process but also paved the way for more complex projects, such as building a Kubernetes cluster on Raspberry Pis. I even hosted a remote seminar on that cluster build for my Codepals meetup group in Beijing.

Raspberry Pis are small, affordable computers that are perfect for experimenting with “bare metal” systems. Bare metal refers to physical hardware on which you install an operating system, rather than a virtual machine running on a server somewhere. Working with Raspberry Pis allowed me to gain hands-on experience with Linux systems, learning how to set them up without the convenience of a desktop environment.

Back to the Cloud: Scaling Beyond On-Premise

In 2019, I was promoted and relocated to China for an expatriate assignment. This presented an opportunity to dive back into cloud computing, this time with a platform called Tencent Cloud. Tencent Cloud is one of the biggest alternatives to Amazon Web Services (AWS), Google Cloud Platform (GCP), and Microsoft Azure, especially in China, where these Western services face regulatory challenges.

Getting started with Tencent Cloud was straightforward. I spun up a virtual machine (VM), selecting the CPU, RAM, and storage, and began experimenting. The pricing was reasonable, starting around €10 per month, with options to limit the maximum cost to avoid surprises. While I had previously experimented with AWS and GCP, I found Tencent’s user interface to be the most intuitive for my needs at the time.

However, my first attempt at hosting a website in China quickly ran into regulatory issues. The website was blocked because I hadn’t completed the ICP registration that mainland China requires for hosted websites. I learned that hosting in Hong Kong avoided these restrictions, so I migrated my setup there. This second attempt worked for two years, with the website accessible both from within China and by friends in Germany.

My experience with IaaS on Tencent Cloud taught me valuable lessons about the complexities of managing infrastructure in the cloud. Running a VM requires you to handle operating system updates, security hardening, and even reverse proxy configurations if you need to scale beyond a single server. This process can become overwhelming, especially as the number of components grows and each one requires regular maintenance.
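A reverse proxy is a typical example of that per-component maintenance. Even putting a single app server behind nginx means hand-writing and maintaining something like the following block (the domain and upstream port are placeholders, not my actual setup):

```nginx
server {
    listen 80;
    server_name example.com;  # placeholder domain

    location / {
        # Forward requests to the app server running on this machine.
        proxy_pass http://127.0.0.1:8000;
        # Preserve the original host and client IP for the app's logs.
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

Every such file is one more thing to update, back up, and debug when the stack grows.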

One of the most challenging aspects I encountered was dealing with CORS (Cross-Origin Resource Sharing). CORS is a security feature implemented by browsers to prevent web pages from making requests to a different domain than the one that served the web page. Configuring CORS correctly is crucial for any web application that interacts with APIs or other external services, and it was a learning curve I had to overcome.
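A server opts in to cross-origin requests by sending `Access-Control-Allow-Origin` (and related headers) in its responses. As a minimal sketch of that idea, here is a stdlib-only WSGI middleware that appends the headers; the allowed origin is a placeholder, and this is illustrative rather than the exact configuration I ran:

```python
# Hypothetical frontend origin that should be allowed to call this API.
ALLOWED_ORIGIN = "https://example.com"

def app(environ, start_response):
    # The actual application: returns a tiny JSON payload.
    start_response("200 OK", [("Content-Type", "application/json")])
    return [b'{"status": "ok"}']

def cors_middleware(wrapped):
    # Wrap a WSGI app so every response carries CORS headers.
    def middleware(environ, start_response):
        def cors_start_response(status, headers, exc_info=None):
            headers = headers + [
                ("Access-Control-Allow-Origin", ALLOWED_ORIGIN),
                ("Access-Control-Allow-Methods", "GET, POST, OPTIONS"),
            ]
            return start_response(status, headers, exc_info)
        return wrapped(environ, cors_start_response)
    return middleware

application = cors_middleware(app)
```

In practice frameworks handle this for you (Django has django-cors-headers, for instance), but understanding what the headers do is what gets you through the debugging sessions.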

Exploring the Edge: Simplifying DevOps

As I continued to develop my skills, I began working on a startup idea: a social network for developers to connect over coffee and exchange advice. However, as I set up multiple Dockerized servers, load balancers, and security measures, I quickly realized that managing all this infrastructure was consuming too much time and energy. This was the classic DevOps dilemma: balancing the need for robust infrastructure with the desire to focus on building the actual product.

A friend and fellow entrepreneur recommended that I explore Platform-as-a-Service (PaaS) offerings instead of managing everything myself. PaaS platforms like Firebase, Supabase, and Cloudflare Workers offer a way to abstract away much of the infrastructure complexity, allowing developers to focus on building their applications. Cloudflare Pages, for example, allows you to host your frontend code with minimal setup, while Cloudflare Workers can handle backend processing.

One of the most compelling aspects of Cloudflare’s offerings is their distributed infrastructure. Cloudflare is one of the largest Content Delivery Networks (CDNs) globally, and their core service is caching content as close as possible to the end user. This infrastructure not only improves performance but also enhances security and resilience against cyberattacks.

By leveraging Cloudflare Workers and Pages, I was able to offload much of the infrastructure burden, allowing me to focus on developing the core features of my application. This approach is often referred to as “edge computing,” where computation is performed as close as possible to the data source or end user, reducing latency and improving performance.

In the broader context of DevOps, edge computing represents a shift towards more distributed architectures. As more business applications move to SaaS (Software as a Service) models, edge computing offers a way to ensure reliability and performance even in the face of network outages or cloud service downtimes. For many companies, the “edge” is becoming synonymous with a backup cloud infrastructure that runs on-premise or in local data centers, providing an additional layer of resilience.

Conclusion: Making the Right DevOps Decisions

As I reflect on my experiences across cloud, on-premise, and edge computing, one thing becomes clear: there is no one-size-fits-all solution. Each environment has its own strengths and weaknesses, and the right choice depends on the specific needs of your project. Whether you’re scaling up a web application, managing data artifacts for a large enterprise, or building a new product from scratch, understanding the trade-offs between cloud, edge, and bare metal is crucial.

In DevOps, it’s essential to remain flexible and adaptable, avoiding the trap of getting stuck in a particular technology or framework. The key to success lies in being able to scale up and down as needed, leveraging the right tools for the job, and continuously learning and evolving with the technology landscape.

If there’s one lesson I’ve learned, it’s that DevOps is not about labels or buzzwords. It’s about making informed decisions that align with your project’s goals, whether that means deploying in the cloud, running on the edge, or managing bare metal servers. And sometimes, the best decision is to step back and simplify, focusing on what truly matters: delivering value to your users.

My experiences with cloud, edge, and bare metal infrastructures have shaped my approach to DevOps. Whether you’re just starting out or are a seasoned professional, I encourage you to explore all the options available, understand their implications, and choose the path that best aligns with your project’s needs. And remember: the best infrastructure is the one that helps you achieve your goals efficiently and effectively.

If you’re looking for a DevOps engineer with experience across cloud, edge, and bare metal environments, or if you need help making the right infrastructure decisions for your project, I’m here to help. Let’s connect and discuss how we can work together to achieve your goals.