The industrial data revolution: What founders got wrong


Joe Hellerstein

Contributor
Joe Hellerstein is co-founder and chief strategy officer of Trifacta and the Jim Gray Chair of Computer Science at UC Berkeley.

In February 2010, The Economist published a report called “Data, data everywhere.” Little did we know then how simple the data landscape actually was, at least compared with the data realities we face as we look to 2022.

In that Economist report, I spoke about society entering an “Industrial Revolution of Data,” which kicked off with the excitement around Big Data and continues into our current era of data-driven AI. Many in the field expected this revolution to bring standardization, with more signal and less noise. Instead, we have more noise, but a more powerful signal. That is to say, we have harder data problems with bigger potential business outcomes.

We’ve also seen big advances in artificial intelligence. What does that mean for our data world now? Let’s take a look back at where we were.

At the time of that Economist article, I was on leave from UC Berkeley to run a lab for Intel Research in collaboration with the campus. We were focused all the way back then on what we now call the Internet of Things (IoT).

At that time, we were talking about networks of tiny interconnected sensors being embedded in everything — buildings, nature, the paint in the walls. The vision was that we could measure the physical world and capture its reality as data, and we were exploring theories and building devices and systems toward that vision.

We were looking forward. But at that time, most of the popular excitement about data revolved around the rise of the web and search engines. Everybody was talking about the accessibility of masses of digital information in the form of “documents” — human-generated content intended for human consumption.

What we saw over the horizon was an even bigger wave of machine-generated data. That’s one aspect of what I meant by the “industrialization of data” — since data would be stamped out by machines, the volume would go up enormously. And that certainly happened.

The second aspect of the “Industrial Revolution of Data” that I expected was the emergence of standardization. Simply put, if machines are generating things, they’ll generate things in the same form every time, so we should have a much easier time understanding and combining data from myriad sources.

The precedents for standardization were in the classical Industrial Revolution, where there was an incentive for all parties to standardize on shared resources like transportation and shipping as well as on product specifications. It seemed like that should hold for the new Industrial Revolution of Data as well, and economics and other forces would drive standardization of data.

That did not happen at all.

In fact, the opposite happened. We got an enormous increase in “data exhaust” — byproducts of exponentially growing computation in the form of log files — but only a modest increase in standardized data.

And so, instead of having uniform, machine-oriented data, we got a massive increase in the variety of data and data types and a decrease in data governance.
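To make the lack of standardization concrete, here is a minimal sketch (with hypothetical log lines and parser names of my own invention) of what “data exhaust” looks like in practice: two services record the same kind of event in entirely different shapes, so each source needs its own bespoke parser before the data can be combined.

```python
import re

# Two hypothetical log lines describing the same kind of event:
# one in Apache common-log style, one in key=value style.
apache_line = '127.0.0.1 - - [10/Oct/2021:13:55:36 +0000] "GET /index.html HTTP/1.1" 200 2326'
app_line = '2021-10-10T13:55:36Z level=info msg="request served" path=/index.html status=200'

def parse_apache(line):
    # Positional format: fields identified by their place in the line.
    m = re.match(r'(\S+) \S+ \S+ \[([^\]]+)\] "(\S+) (\S+) [^"]+" (\d+) (\d+)', line)
    return {"path": m.group(4), "status": int(m.group(5))}

def parse_keyvalue(line):
    # Self-describing format: fields identified by explicit keys.
    fields = dict(re.findall(r'(\w+)=("[^"]*"|\S+)', line))
    return {"path": fields["path"], "status": int(fields["status"])}

# Same underlying event, but neither parser works on the other's input.
print(parse_apache(apache_line))   # {'path': '/index.html', 'status': 200}
print(parse_keyvalue(app_line))    # {'path': '/index.html', 'status': 200}
```

Multiply this by thousands of services, each with its own ad hoc format, and the integration burden grows with every new data source rather than shrinking the way standardization would have promised.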

In addition to data exhaust and machine-generated data, we started to have adversarial uses of data. This occurred because the people involved with data had many different incentives for its use.

Consider social media data and the recent conversations around “fake news.” The early 21st century has been a giant experiment in what makes digital information viral, not only for individuals but for brands or political interests looking to reach the masses.

Today, much of that content is in fact machine-generated, but it’s machine-generated for human consumption and human behavioral patterns. This is in contrast to the wide-eyed “by people, for people” web of years ago.

In short, today’s data production industry is incredibly high volume, but it is not tuned for standard data representations, not in the sense I expected at the time of those predictions over a decade ago.

The state of innovation: AI versus human input

One thing that has clearly advanced substantially in the past decade or so is artificial intelligence. The sheer volume of data we are able to access, process and feed into models has changed AI from science fiction into reality in a few short years.

But AI is not as helpful in the business data processing domain as we might expect — at least not yet. There is still a surprising disconnect between AI technology like natural language processing and structured data. Even though we’ve had some progress, for the most part, you can’t talk to your data and expect much back. There are some situations where you can Google for a quantitative question and get back a little table or chart, but that’s only if you ask just the right questions.

For the most part, AI advances are still pretty divorced from stuff like spreadsheets and log files and all these other more quantitative, structured data — including IoT data. It turns out the traditional kinds of data, the kinds we’ve always put in databases, have been much harder to crack with AI than consumer applications like image search or simple natural language question answering.

Case in point: I encourage you to try asking Alexa or Siri to clean your data! It’s funny, but not very helpful.

Popular applications of AI haven’t yet carried over to the traditional data industry, but it’s not for lack of trying. Lots of smart people at both universities and companies haven’t been able to crack the nut of traditional record-oriented data integration problems.

Yet full automation eludes the industry. Part of that is because it’s hard for humans to specify what they want out of data upfront. If you could actually say, “Here’s precisely what I’d like you to do with these 700 tables,” and follow up with clear goals, maybe an algorithm could do the task for you. But that’s not actually what happens. Instead, people see 700 tables, wonder what’s in there and start poking around. Only after a lot of poking do they have any clue what they might want to happen to those tables.

The poking around remains creative work because the space of ways to use the data is just so big and the metrics of what success looks like are so varied. You can’t just give the data to optimization algorithms to find the best choice of outcome.

Rather than waiting for full automation from AI, humans should get as much help as they can from AI while retaining agency: identifying what is or isn’t useful, then steering the next steps in a chosen direction. That requires visualization and rich feedback from the AI.

Understanding the impact of data and controlling data spread

One place AI has really shined, though, is in content recommendation. It turns out that computers are frighteningly effective at targeting and disseminating content. And oh boy, did we underestimate the incentives and impacts around that aspect of data and AI.

Back then, the ethical concerns we had around data and its uses in AI were mostly around privacy. I remember big debates about whether the public library should have digital records of the books you reserve. Similarly, there were controversies over grocery loyalty card programs. Shoppers didn’t want grocery chains to keep track of what food they bought when and target them for accompanying items.

That mentality has largely changed. Today, teenagers share radically more personal information on social media than the brand of food they purchase.

While I wouldn’t say that digital privacy is in a good state, it is arguably not the worst of our data problems today. There are issues such as state-funded actors trying to introduce mayhem into our social discourse — using data. Twenty years ago, very few people saw this stuff coming our way. I don’t think there was a great sense of the ethical questions of what could go wrong.

This leads to what’s next, and even currently in progress, in the evolution of our uses of data. What becomes the role of governments and of well-meaning legislation? Without predicting all the ways tools will be used, it’s hard to know how to govern and restrict them intelligently. Today, it seems like we need to figure out the controls or incentives around data and the way it is promulgated, but the tech is shifting faster than society is able to figure out risks and protections. It’s unsettling, to say the least.

So, were the predictions spot-on?

As a professor, I’d award them a passing grade, but not an A. There is substantially more data available to us with more uses than we probably ever could have imagined. That’s led to incredible advances in AI and machine learning along with analytics, but on many tasks we’re still just scratching the surface, while on others we’re reaping the whirlwind. I am fascinated to see what the next 10 to 20 years will bring and to look back on these issues again.
