
The industrial data revolution: What founders got wrong


Joe Hellerstein

Contributor

Joe Hellerstein is co-founder and chief strategy officer of Trifacta and the Jim Gray Chair of Computer Science at UC Berkeley.

In February 2010, The Economist published a report called “Data, data everywhere.” Little did we know then just how simple the data landscape actually was, at least compared with the data realities we’re facing as we look to 2022.

In that Economist report, I spoke about society entering an “Industrial Revolution of Data,” which kicked off with the excitement around Big Data and continues into our current era of data-driven AI. Many in the field expected this revolution to bring standardization, with more signal and less noise. Instead, we have more noise, but a more powerful signal. That is to say, we have harder data problems with bigger potential business outcomes.

And we’ve seen big advances in artificial intelligence. What does that mean for our data world now? Let’s take a look back at where we were.

At the time of that Economist article, I was on leave from UC Berkeley to run a lab for Intel Research in collaboration with the campus. Even back then, we were focused on what we now call the Internet of Things (IoT).

At that time, we were talking about networks of tiny interconnected sensors being embedded in everything — buildings, nature, the paint in the walls. The vision was that we could measure the physical world and capture its reality as data, and we were exploring theories and building devices and systems toward that vision.

We were looking forward. But at that time, most of the popular excitement about data revolved around the rise of the web and search engines. Everybody was talking about the accessibility of masses of digital information in the form of “documents” — human-generated content intended for human consumption.

What we saw over the horizon was an even bigger wave of machine-generated data. That’s one aspect of what I meant by the “industrialization of data” — since data would be stamped out by machines, the volume would go up enormously. And that certainly happened.

The second aspect of the “Industrial Revolution of Data” that I expected was the emergence of standardization. Simply put, if machines are generating things, they’ll generate things in the same form every time, so we should have a much easier time understanding and combining data from myriad sources.

The precedents for standardization were in the classical Industrial Revolution, where there was an incentive for all parties to standardize on shared resources like transportation and shipping as well as on product specifications. It seemed like that should hold for the new Industrial Revolution of Data as well, and economics and other forces would drive standardization of data.

That did not happen at all.

In fact, the opposite happened. We got an enormous increase in “data exhaust” — byproducts of exponentially growing computation in the form of log files — but only a modest increase in standardized data.

And so, instead of having uniform, machine-oriented data, we got a massive increase in the variety of data and data types and a decrease in data governance.
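To make that concrete, here is a minimal sketch (using made-up log lines and formats) of why machine-generated data still resists easy combination: even two well-behaved services emit records in different shapes, and each needs its own parser before the records share a schema.

```python
import re
from datetime import datetime

# Two hypothetical services emit "machine-generated" logs, yet in different,
# non-standard shapes; each needs its own parser before records can be combined.
ACCESS_LINE = '203.0.113.7 - - [12/Mar/2021:10:15:32 +0000] "GET /cart HTTP/1.1" 200 512'
APP_LINE = '2021-03-12T10:15:33Z level=error service=checkout msg="payment timeout"'

def parse_access_log(line):
    # Classic web-server access log: positional fields inside brackets and quotes.
    m = re.match(r'(\S+) \S+ \S+ \[([^\]]+)\] "(\S+) (\S+)[^"]*" (\d{3}) (\d+)', line)
    if not m:
        return None
    ip, ts, method, path, status, size = m.groups()
    return {
        "source": "web",
        "time": datetime.strptime(ts, "%d/%b/%Y:%H:%M:%S %z"),
        "event": f"{method} {path}",
        "status": int(status),
    }

def parse_app_log(line):
    # Application log: ISO timestamp followed by key=value pairs.
    ts, rest = line.split(" ", 1)
    fields = dict(re.findall(r'(\w+)=("[^"]*"|\S+)', rest))
    return {
        "source": "app",
        "time": datetime.fromisoformat(ts.replace("Z", "+00:00")),
        "event": fields.get("msg", "").strip('"'),
        "status": fields.get("level"),
    }

# Only after per-source wrangling do the records land in a shared shape.
for record in (parse_access_log(ACCESS_LINE), parse_app_log(APP_LINE)):
    print(record)
```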

In addition to data exhaust and machine-generated data, we started to have adversarial uses of data. This occurred because the people involved with data had many different incentives for its use.

Consider social media data and the recent conversations around “fake news.” The early 21st century has been a giant experiment in what makes digital information viral, not only for individuals but for brands or political interests looking to reach the masses.

Today, much of that content is in fact machine-generated, but it’s machine-generated for human consumption and tuned to human behavioral patterns. This is in contrast to the wide-eyed “by people, for people” web of years ago.

In short, today’s data production industry is incredibly high volume, but it is not tuned for standard data representations, at least not in the sense I expected when I made those predictions over a decade ago.

The state of innovation: AI versus human input

One thing that has clearly advanced substantially in the past decade or so is artificial intelligence. The sheer volume of data we are able to access, process and feed into models has changed AI from science fiction into reality in a few short years.

But AI is not as helpful in the business data processing domain as we might expect — at least not yet. There is still a surprising disconnect between AI technology like natural language processing and structured data. Even though we’ve made some progress, for the most part you can’t talk to your data and expect much back. There are some situations where you can Google a quantitative question and get back a little table or chart, but only if you ask just the right question.

For the most part, AI advances are still pretty divorced from stuff like spreadsheets and log files and all the other more quantitative, structured data — including IoT data. It turns out the traditional kinds of data, the kinds we’ve always put in databases, have been much harder to crack with AI than consumer applications like image search or simple natural language question answering.

Case in point: I encourage you to try asking Alexa or Siri to clean your data! It’s funny, but not very helpful.

Popular applications of AI haven’t yet carried over to the traditional data industry, but it’s not for lack of trying. Lots of smart people at universities and companies alike have been unable to crack the nut of traditional record-oriented data integration problems.

Yet full automation still eludes the industry. Part of that is because it’s hard for humans to specify what they want out of data upfront. If you could actually say, “Here’s precisely what I’d like you to do with these 700 tables,” and follow up with clear goals, maybe an algorithm could do the task for you. But that’s not actually what happens. Instead, people see 700 tables, wonder what’s in there and start poking around. Only after a lot of poking do they have any clue what they might want to happen to those tables.

The poking around remains creative work because the space of ways to use the data is just so big and the metrics of what success looks like are so varied. You can’t just give the data to optimization algorithms to find the best choice of outcome.

Rather than waiting for full automation, humans should get as much help as they can from AI while retaining agency: identify what is or isn’t useful, then steer the next steps in a chosen direction. That requires visualization and plenty of feedback from the AI.
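As a rough illustration of that human-in-the-loop pattern, here is a minimal sketch using pandas and a couple of hypothetical tables: an automated pass profiles each table and flags candidate issues, while the decision about what to actually do stays with the person poking around.

```python
import pandas as pd

def profile(table_name, df):
    """Automated first pass: summarize a table and flag candidate issues,
    leaving the decision about what to fix (if anything) to a human."""
    report = {"table": table_name, "rows": len(df), "columns": {}}
    for col in df.columns:
        series = df[col]
        col_report = {
            "dtype": str(series.dtype),
            "missing_pct": round(series.isna().mean() * 100, 1),
            "distinct": int(series.nunique(dropna=True)),
        }
        # Heuristic flags for a human to review, not actions taken automatically.
        flags = []
        if col_report["missing_pct"] > 20:
            flags.append("many missing values")
        if col_report["distinct"] == 1:
            flags.append("constant column")
        if series.dtype == object and col_report["distinct"] == len(df):
            flags.append("possible identifier")
        col_report["flags"] = flags
        report["columns"][col] = col_report
    return report

# Hypothetical tables standing in for the "700 tables" a person has to poke through.
tables = {
    "orders": pd.DataFrame({"id": ["a1", "b2", "c3"], "amount": [10.0, None, 25.5]}),
    "sensors": pd.DataFrame({"device": ["x", "x", "x"], "reading": [0.1, 0.2, 0.3]}),
}

for name, df in tables.items():
    print(profile(name, df))
```

The point of a sketch like this is not the heuristics themselves but the division of labor: the machine surfaces candidates quickly, and the human supplies the judgment about which findings matter and what should happen next.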

Understanding the impact of data and controlling data spread

One place AI has really shined, though, is in content recommendation. It turns out that computers are frighteningly effective at targeting and disseminating content. And oh boy, did we underestimate the incentives and impacts around that aspect of data and AI.

Back then, the ethical concerns we had around data and its uses in AI were mostly around privacy. I remember big debates about whether the public library should have digital records of the books you reserve. Similarly, there were controversies over grocery loyalty card programs. Shoppers didn’t want grocery chains keeping track of which foods they bought and when, then targeting them with related items.

That mentality has largely changed. Today, teenagers share radically more personal information on social media than the brand of food they purchase.

While I wouldn’t say that digital privacy is in a good state, it is arguably not the worst of our data problems today. There are issues such as state-funded actors trying to introduce mayhem into our social discourse — using data. Twenty years ago, very few people saw this stuff coming our way. I don’t think there was a great sense of the ethical questions of what could go wrong.

This leads to what’s next, and indeed already in progress, in the evolution of how we use data. What becomes the role of governments and of well-meaning legislation? Without predicting all the ways tools will be used, it’s hard to know how to govern and restrict them intelligently. Today, we are in a state where it seems like we need to figure out the controls or incentives around data and the way it is promulgated, but the tech is shifting faster than society is able to figure out risks and protections. It’s unsettling, to say the least.

So, were the predictions spot-on?

As a professor, I’d award those predictions a passing grade, but not an A. There is substantially more data available to us with more uses than we probably ever could have imagined. That’s led to incredible advances in AI and machine learning along with analytics, but on many tasks we’re still just scratching the surface, while on others we’re reaping the whirlwind. I am fascinated to see what the next 10 to 20 years will bring and to look back on these issues again.
