The industrial data revolution: What founders got wrong


Joe Hellerstein, Contributor
Joe Hellerstein is co-founder and chief strategy officer of Trifacta and the Jim Gray Chair of Computer Science at UC Berkeley.

In February 2010, The Economist published a report called “Data, data everywhere.” Little did we know then just how simple the data landscape actually was, at least compared with the data realities we face as we look to 2022.

In that Economist report, I spoke about society entering an “Industrial Revolution of Data,” which kicked off with the excitement around Big Data and continues into our current era of data-driven AI. Many in the field expected this revolution to bring standardization, with more signal and less noise. Instead, we have more noise, but a more powerful signal. That is to say, we have harder data problems with bigger potential business outcomes.

And, we’ve also seen big advances in artificial intelligence. What does that mean for our data world now? Let’s take a look back at where we were.

At the time of that Economist article, I was on leave from UC Berkeley to run a lab for Intel Research in collaboration with the campus. We were focused all the way back then on what we now call the Internet of Things (IoT).

At that time, we were talking about networks of tiny interconnected sensors being embedded in everything — buildings, nature, the paint in the walls. The vision was that we could measure the physical world and capture its reality as data, and we were exploring theories and building devices and systems toward that vision.

We were looking forward. But at that time, most of the popular excitement about data revolved around the rise of the web and search engines. Everybody was talking about the accessibility of masses of digital information in the form of “documents” — human-generated content intended for human consumption.

What we saw over the horizon was an even bigger wave of machine-generated data. That’s one aspect of what I meant by the “industrialization of data” — since data would be stamped out by machines, the volume would go up enormously. And that certainly happened.

The second aspect of the “Industrial Revolution of Data” that I expected was the emergence of standardization. Simply put, if machines are generating things, they’ll generate things in the same form every time, so we should have a much easier time understanding and combining data from myriad sources.

The precedents for standardization were in the classical Industrial Revolution, where there was an incentive for all parties to standardize on shared resources like transportation and shipping as well as on product specifications. It seemed like that should hold for the new Industrial Revolution of Data as well, and economics and other forces would drive standardization of data.

That did not happen at all.

In fact, the opposite happened. We got an enormous increase in “data exhaust” — byproducts of exponentially growing computation in the form of log files — but only a modest increase in standardized data.

And so, instead of having uniform, machine-oriented data, we got a massive increase in the variety of data and data types and a decrease in data governance.
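To make the lack of standardization concrete, here is a minimal sketch of what combining "data exhaust" actually requires. The log formats, field names, and parsers below are invented for illustration: two systems record the same kind of event, but each source needs its own hand-written adapter before the records can be compared.

```python
import json
import re

# Two hypothetical log lines describing the same event, in different
# formats -- illustrative only, not from any real system.
apache_line = '127.0.0.1 - - [10/Oct/2021:13:55:36] "GET /index.html" 200'
json_line = '{"ip": "127.0.0.1", "path": "/index.html", "status": 200}'

def parse_apache(line):
    # Pull the fields we care about out of the free-text format.
    m = re.match(r'(\S+) \S+ \S+ \[([^\]]+)\] "(\S+) (\S+)" (\d+)', line)
    ip, _ts, _method, path, status = m.groups()
    return {"ip": ip, "path": path, "status": int(status)}

def parse_json(line):
    # Even "structured" JSON needs mapping onto a shared schema.
    rec = json.loads(line)
    return {"ip": rec["ip"], "path": rec["path"], "status": rec["status"]}

# Only after per-source adapters do the records line up: the
# standardization happens after the fact, by hand, per source.
records = [parse_apache(apache_line), parse_json(json_line)]
assert records[0] == records[1]
```

Multiply this by thousands of sources and formats, and the integration burden dwarfs the data collection itself.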

In addition to data exhaust and machine-generated data, we started to have adversarial uses of data. This occurred because the people involved with data had many different incentives for its use.

Consider social media data and the recent conversations around “fake news.” The early 21st century has been a giant experiment in what makes digital information viral, not only for individuals but for brands or political interests looking to reach the masses.

Today, much of that content is in fact machine-generated, but it’s machine-generated for human consumption and human behavioral patterns. This is in contrast to the wide-eyed “by people, for people” web of years ago.

In short, today’s data production industry is incredibly high volume, but it is not tuned for standard data representations, not in the sense I expected at the time of those predictions over a decade ago.

The state of innovation: AI versus human input

One thing that has clearly advanced substantially in the past decade or so is artificial intelligence. The sheer volume of data we are able to access, process and feed into models has turned AI from science fiction into reality in a few short years.

But AI is not as helpful in the business data processing domain as we might expect — at least not yet. There is still a surprising disconnect between AI technology like natural language processing and structured data. Even though we’ve had some progress, for the most part, you can’t talk to your data and expect much back. There are some situations where you can Google for a quantitative question and get back a little table or chart, but that’s only if you ask just the right questions.

For the most part, AI advances are still pretty divorced from stuff like spreadsheets and log files and all these other more quantitative, structured data, including IoT data. It turns out that the traditional kinds of data, the kinds we’ve always put in databases, have been much harder to crack with AI than consumer applications like image search or simple natural language question answering.

Case in point: I encourage you to try asking Alexa or Siri to clean your data! It’s funny, but not very helpful.

Popular applications of AI haven’t yet carried over to the traditional data industry, but it’s not for lack of trying. Lots of smart people at both universities and companies have tried to crack the nut of traditional record-oriented data integration problems, without success.

Yet full automation still eludes the industry. Part of that is because it’s hard for humans to specify what they want out of data upfront. If you could actually say, “Here’s precisely what I’d like you to do with these 700 tables,” and follow up with clear goals, maybe an algorithm could do the task for you. But that’s not actually what happens. Instead, people see 700 tables, wonder what’s in there and start poking around. Only after a lot of poking do they have any clue what they might want to happen to those tables.

The poking around remains creative work because the space of ways to use the data is just so big and the metrics of what success looks like are so varied. You can’t just give the data to optimization algorithms to find the best choice of outcome.
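That "poking around" usually starts with profiling: summarizing each table before anyone can state a goal for it. The sketch below is a toy version of that step; the tables, column names, and the `profile` helper are all invented for illustration.

```python
# Hypothetical tables, standing in for the 700 a real analyst might face.
tables = {
    "orders": [
        {"id": 1, "customer": "a", "total": 19.99},
        {"id": 2, "customer": None, "total": 5.00},
    ],
    "customers": [
        {"name": "a", "region": "west"},
    ],
}

def profile(rows):
    # Summarize row count, column names, and per-column null rate.
    cols = rows[0].keys()
    nulls = {c: sum(r[c] is None for r in rows) / len(rows) for c in cols}
    return {"rows": len(rows), "columns": sorted(cols), "null_rate": nulls}

summaries = {name: profile(rows) for name, rows in tables.items()}
# A human scans these summaries to decide what to do next -- e.g., the
# missing value in orders.customer hints that a naive join on customer
# would silently drop rows.
```

The point is that the summaries don't answer the question; they are what lets a person figure out which question to ask, which is exactly the part that resists automation.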

Rather than waiting for full automation from AI, humans should get as much help as they can from AI, but actually retain some agency and identify what is or isn’t useful, then steer the next steps in a certain direction. That requires visualization and a bunch of feedback from the AI.

Understanding the impact of data and controlling data spread

One place AI has really shined, though, is in content recommendation. It turns out that computers are frighteningly effective at targeting and disseminating content. And oh boy, did we underestimate the incentives and impacts around that aspect of data and AI.

Back then, the ethical concerns we had around data and its uses in AI were mostly around privacy. I remember big debates about whether the public library should have digital records of the books you reserve. Similarly, there were controversies over grocery loyalty card programs. Shoppers didn’t want grocery chains tracking what food they bought, and when, and then targeting them with accompanying items.

That mentality has largely changed. Today, teenagers share radically more personal information on social media than the brand of food they purchase.

While I wouldn’t say that digital privacy is in a good state, it is arguably not the worst of our data problems today. There are issues such as state-funded actors trying to introduce mayhem into our social discourse — using data. Twenty years ago, very few people saw this stuff coming our way. I don’t think there was a great sense of the ethical questions of what could go wrong.

This leads to what’s next, and even currently in progress, in the evolution of our uses of data. What becomes the role of governments and of well-meaning legislation? Without predicting all the ways tools will be used, it’s hard to know how to govern and restrict them intelligently. Today, we are in a state where it seems like we need to figure out the controls or incentives around data and the way it is promulgated, but the tech is shifting faster than society is able to figure out risks and protections. It’s unsettling, to say the least.

So, were the predictions spot-on?

As a professor, I’d award it a passing grade, but not an A. There is substantially more data available to us with more uses than we probably ever could have imagined. That’s led to incredible advances in AI and machine learning along with analytics, but on many tasks, we’re still just scratching the surface, while on others we’re reaping the whirlwind. I am fascinated to see what the next 10 to 20 years will bring and look back on these issues again.
