Startups

How to run data on Kubernetes: 6 starting principles


Sylvain Kalache

Contributor

Sylvain Kalache is the co-founder of Holberton, an edtech company training digital talent in more than 10 countries. An entrepreneur and software engineer, he has worked in the tech industry for more than a decade. Part of the team that led SlideShare to be acquired by LinkedIn, he has written for CIO and VentureBeat.


Kubernetes is fast becoming an industry standard, with up to 94% of organizations deploying their services and applications on the container orchestration platform, according to one survey. A key reason companies deploy on Kubernetes is standardization, which lets advanced users double their productivity gains.

Standardizing on Kubernetes gives organizations the ability to deploy any workload, anywhere. But there was a missing piece: The technology assumed that workloads were ephemeral, meaning that only stateless workloads could be safely deployed on Kubernetes. The community has since changed that paradigm with features such as StatefulSets and StorageClasses, which make it possible to run data on Kubernetes.

While running stateful workloads on Kubernetes is possible, it is still challenging. In this article, I explain how to make it happen and why the effort is worth it.

Do it progressively

Kubernetes is on its way to being as popular as Linux and the de facto way of running any application, anywhere, in a distributed fashion. Using Kubernetes involves learning a lot of technical concepts and vocabulary. For instance, newcomers might struggle with the many Kubernetes logical units such as containers, pods, nodes and clusters.

If you are not running Kubernetes in production yet, don’t jump directly into data workloads. Instead, start by moving stateless applications, so you don’t risk losing data when things go sideways.

Understand the limitations and specificities

Once you are familiar with general Kubernetes concepts, dive into the specifics of stateful workloads. For example, because applications may have different storage needs, such as performance or capacity requirements, you must provide the correct underlying storage system.

What the industry generally calls storage “profiles” are termed StorageClasses in Kubernetes. They provide a way to describe the different classes of storage a Kubernetes cluster can offer. Storage classes can have different quality-of-service levels, such as I/O operations per second per GiB, backup policies, or arbitrary policies such as binding modes and allowed topologies.
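As an illustrative sketch, a StorageClass for a latency-sensitive database tier might look like the following. The provisioner and parameters depend entirely on your storage backend; the values below assume the AWS EBS CSI driver and are placeholders:

```yaml
# Hypothetical StorageClass for a latency-sensitive database tier.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-db
provisioner: ebs.csi.aws.com    # backend specific; swap in your own CSI driver
parameters:
  type: io2
  iops: "4000"                  # quality-of-service: provisioned IOPS
reclaimPolicy: Retain           # keep the volume even if the claim is deleted
allowVolumeExpansion: true      # lets PVCs using this class be resized later
volumeBindingMode: WaitForFirstConsumer  # bind where the pod is scheduled
```

A PersistentVolumeClaim then requests this class by name via `storageClassName: fast-db`.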

Another critical component to understand is StatefulSet. It is the Kubernetes API object used to manage stateful applications and offers key features such as:

  • Stable, unique network identifiers that let you keep track of volumes and detach and reattach them as you please.
  • Stable, persistent storage so that your data is safe.
  • Ordered, graceful deployment and scaling, which is required for many Day 2 operations.
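To see how these pieces fit together, here is a minimal StatefulSet sketch; the names, image and storage size are placeholders:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db        # headless Service; gives pods stable DNS names (db-0.db, db-1.db, ...)
  replicas: 3
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
        - name: postgres
          image: postgres:15
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:  # one PVC per pod, reattached to the same pod across restarts
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```

Pods are created and terminated in order (db-0, then db-1, and so on), which is what enables the ordered, graceful deployment and scaling mentioned above.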

While StatefulSet has been a successful replacement for the infamous PetSet (now deprecated), it is still imperfect and has limitations. For example, the StatefulSet controller has no built-in support for volume (PVC) resizing, which is a major challenge if your application’s dataset is about to grow beyond the currently allocated storage capacity. There are workarounds, but such limitations must be understood well ahead of time so that the engineering team knows how to handle them.
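One widely used workaround is to resize the PersistentVolumeClaims directly and then recreate the StatefulSet object without touching its pods. This sketch assumes the underlying StorageClass has `allowVolumeExpansion: true`; the resource names are placeholders:

```shell
# 1. Grow each PVC directly; volumeClaimTemplates cannot be edited in place.
for i in 0 1 2; do
  kubectl patch pvc "data-db-$i" \
    -p '{"spec":{"resources":{"requests":{"storage":"20Gi"}}}}'
done

# 2. Delete the StatefulSet object but leave its pods (and volumes) running.
kubectl delete statefulset db --cascade=orphan

# 3. Re-apply the manifest with the new size in volumeClaimTemplates
#    so that pods created in the future get matching claims.
kubectl apply -f db-statefulset.yaml
```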

Come up with a plan

Once you are comfortable with Kubernetes stateful concepts, you can progressively migrate your data workloads in a specific order. This allows you to learn from your mistakes and avoid being overwhelmed, because not all data technologies are equally easy to run on Kubernetes.

Established technologies, such as databases and storage, should be migrated first, and emerging tech, such as AI and ML, should come last. This is reflected in a recent report, which found that databases and persistent storage are the two most-run data workloads on Kubernetes. The main reason emerging tech lags behind is the lack of tooling for Day 2 operations, which we will explore in the next section.

Check for operator availability

Moving your stateful workloads to Kubernetes is only half the job, also known as Day 1. Now you need to handle Day 2 operations (one of the most discussed topics at the last KubeCon). This is where things get tricky. There are many Day 2 operations that Kubernetes cannot handle natively, such as patching and upgrading, backup and recovery, log processing, monitoring, scaling and tuning.

All these operations are application specific. For example, PostgreSQL and MySQL clusters require completely different procedures for electing a new primary server in a high-availability (HA) configuration. Kubernetes cannot possibly know every application’s specific Day 2 operations. This is where operators come in.

Operators are programmable extensions that perform operations that Kubernetes cannot handle natively. Operators provide intelligent, dynamic management capabilities by extending the functionality of the Kubernetes API. One of the most common uses is conducting these Day 2 operations. These operators aren’t developed by the Kubernetes maintainers but by third-party developers and organizations.
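In practice, an operator registers one or more CustomResourceDefinitions with the Kubernetes API server, and users then declare the desired state of their application as a custom resource that the operator’s controller reconciles. Here is a hypothetical example; the group, kind and fields are invented for illustration, as every real operator defines its own schema:

```yaml
# Hypothetical custom resource; "pgoperator.example.com" and its
# fields are invented for illustration.
apiVersion: pgoperator.example.com/v1
kind: PostgresCluster
metadata:
  name: orders-db
spec:
  replicas: 3
  version: "15"
  backup:                  # a Day 2 operation the operator automates
    schedule: "0 3 * * *"  # nightly at 03:00
    retention: 7d
```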

Before moving a data workload to Kubernetes, make sure there is an operator for it. OperatorHub does a great job of indexing them. With 282 operators available on the site, the distribution echoes what we discussed earlier: Some workloads have supporting tools, and some don’t. For example, the database category has 38 operators — there are eight for PostgreSQL alone — while the entire ML/AI category only has seven.

Pick the right level of operator capability

Having an operator for your technology isn’t enough, because operators can have different capabilities and often exist at various levels of maturity. The Operator Framework suggests a capability model that categorizes operators according to their features:

  • Level 1: Works for basic installation, such as automated application provisioning and configuration management.
  • Level 2: Supports seamless upgrades, such as patch and minor version updates.
  • Level 3: Handles the full app and storage lifecycle (backup, failure recovery, etc.).
  • Level 4: Provides deep insights, metrics, alerts, log processing and workload analysis.
  • Level 5: Offers automatic horizontal/vertical scaling, auto-config tuning, abnormality detection and scheduling tuning.

When choosing an operator, make sure its capabilities match your needs. If you are unsure which level is right for you, the Data on Kubernetes Report 2022 found that most organizations are looking for operators that are at least at Level 3. Having a backup for your stateful workloads sounds like a good idea.

If you can’t find an operator that matches your needs, don’t worry because most of them are open source. You can extend existing operators’ capabilities with internal development or, even better, contribute to the open source project.

Understand the operator

Operators’ extensibility is their strength, but it’s also their weakness. The lack of standards means they are programmed differently, so you must look at their config files to pick the format you like best.

What’s more, operators may use different technical routes to achieve the same goal. For example, one of the eight PostgreSQL operators, CloudNativePG, does not use StatefulSets and instead relies on its own custom controller. That’s quite unexpected, considering that StatefulSet is the foundation for stateful workloads on Kubernetes.

Its developers decided to go with this design because of the inability of StatefulSet to resize PVCs (as we discussed earlier). As the operator documentation explains, picking “different [design directions] lead to other compromises.” So when picking an operator, be sure to understand its implementation and trade-offs, and go with the one you are the most comfortable with.

It’s worth the effort

As you can see, running data on Kubernetes isn’t always easy, but the good news is that it’s worth the hard work: 54% of surveyed organizations attributed more than 10% of their revenue to the fact that they run data on Kubernetes. What’s more, 33% said it has a transformative impact on productivity and another 51% saw a significant positive impact.

As organizations increasingly adopt multicloud infrastructure to optimize their cost and infrastructure performance, Kubernetes has become the tool of choice. With an estimated 66% of countries having some sort of data privacy and consumer rights legislation, which often requires enforcing data sovereignty, companies must increasingly host user data in the countries they operate in. Kubernetes is here to stay.
