Smaller, open-source AI large language models offer enterprises the chance to leverage their own data assets for innovation and competitive advantage.

For the last 30 years, the dream of collecting, managing and making use of an organization's knowledge assets has never been truly realized. Systems for sharing information across the enterprise have grown more sophisticated, but they haven't been able to take the next step of turning the information that resides in digital files into usable knowledge. Data sits in ever larger silos, while real knowledge still resides in employees.

The rise of large language models (LLMs), however, is starting to make true knowledge management (KM) a reality. These models can extract meaning from digital data at a scale and speed beyond the capabilities of human analysts. The 2023 State of the CIO survey reveals that 71% of CIO respondents anticipate greater involvement in business strategy over the next three years, with 85% saying they are becoming more digital- and innovation-focused. Applying LLMs to an organization's knowledge assets has the potential to accelerate these trends.

Less is More

OpenAI's ChatGPT and DALL-E 2 generative AI (GenAI) models have revolutionized how we think about AI and what it can do. From writing poems to creating images, it's staggering how a computer can create new content from a few simple prompts. However, the LLMs used to perform these tasks are vast and expensive for OpenAI to offer. GPT-4 was trained on over 45 terabytes of text data using more than a thousand GPUs over 34 days, at a cost of almost $5 million in compute power. In 2022, OpenAI lost $540 million despite having raised $11.3 billion in funding rounds. Clearly, costs and operations at this scale are beyond most organizations wanting to develop their own LLMs.
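As a back-of-the-envelope check on those training-cost figures (the per-GPU-hour price below is an illustrative cloud rate assumed for this sketch, not a figure reported by OpenAI):

```python
# Rough sanity check of the training-cost figure quoted above.
# The $6/GPU-hour rate is an assumed, illustrative cloud price.
gpus = 1_000             # "more than a thousand GPUs"
days = 34
usd_per_gpu_hour = 6.0   # assumption for illustration

cost = gpus * days * 24 * usd_per_gpu_hour
print(f"~${cost:,.0f}")  # about $4.9 million -- consistent with "almost $5 million"
```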
However, the AI future for many enterprises lies in building and adapting much smaller models based on their own internal data assets. Rather than relying on APIs provided by firms such as OpenAI, with the attendant risk of uploading potentially sensitive data to third-party servers, new approaches are allowing firms to bring smaller LLMs in-house. Techniques for adjusting model parameters, along with new languages such as Mojo and AI programming frameworks like PyTorch, significantly reduce the compute resources and time needed to run AI programs.

Open is Better

Just as the web is built on open-source software and protocols, it's likely that many enterprise AI initiatives will be built on open-source models such as LLaMA and freely available techniques such as LoRA. According to a recently leaked Google memo, "The barrier to entry for training and experimentation has dropped from the total output of a major research organization to one person, an evening, and a beefy laptop." These barriers will only fall further, and the results improve, allowing startups and enterprises to build niche models focused on the specific needs of businesses and workflows.

From GenAI to SynthAI

Central to these developments is the move from AI systems that create new content from simple prompts to models that are trained on an enterprise's internal data and programmed to generate usable insights and recommendations. LLMs such as ChatGPT produce often-believable results, but it's not always clear how the data fed into the model was used, or whether the answers it gives are true or hallucinations. The recent case of a New York lawyer who used ChatGPT to generate court filings, citing presumably historical cases to support his client's claim, showed the dangers of relying on GenAI outputs: despite looking like genuine precedents, six of the cases listed never took place.
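The "one person, an evening, and a beefy laptop" claim above rests largely on adapter techniques such as LoRA: rather than updating a full weight matrix during fine-tuning, you train two small low-rank factors. A minimal parameter-count sketch, with illustrative dimensions not drawn from any specific model:

```python
# Minimal sketch of the LoRA idea: instead of updating a full d_out x d_in
# weight matrix W, train two small low-rank factors B (d_out x r) and
# A (r x d_in), so the update is W + B @ A. Dimensions below are illustrative.

def full_params(d_out: int, d_in: int) -> int:
    """Trainable parameters when fine-tuning the full weight matrix."""
    return d_out * d_in

def lora_params(d_out: int, d_in: int, r: int) -> int:
    """Trainable parameters for a rank-r adapter: B (d_out x r) + A (r x d_in)."""
    return d_out * r + r * d_in

# One 4096x4096 projection matrix (a plausible size for a LLaMA-class model):
full = full_params(4096, 4096)
lora = lora_params(4096, 4096, 8)
print(f"full fine-tune: {full:,} params")
print(f"LoRA (r=8):     {lora:,} params ({100 * lora / full:.2f}% of full)")
```

For this single matrix the adapter trains roughly 0.4% of the parameters a full fine-tune would touch, which is why such runs fit on commodity hardware.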
Silicon Valley venture capital firm A16Z recently outlined its belief that the future for AI in the workplace lies not necessarily with general-purpose LLMs like ChatGPT, but with more focused models designed to address specific business needs. It calls this SynthAI: models trained on proprietary data sets and optimized for discrete purposes, such as resolving customer support issues, summarizing market research results and creating personalized marketing emails. Applying the SynthAI approach to better managing a firm's data assets is a natural next stage in the AI revolution. Consulting firm BCG has applied it to 50 years' worth of archives, largely reports, presentations and data collected from surveys and client engagements. Previously, employees could only run keyword searches against these files and then read through each document to check its relevance. Now the system provides usable answers to questions. The knowledge management dream is becoming a reality.
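The shift BCG made, from keyword lookup to systems that answer questions, starts with retrieving the most relevant documents for a query. Below is a toy sketch of that retrieval step using a simple bag-of-words similarity; the document names and contents are invented, and a production system would use learned embeddings plus an LLM to synthesize the final answer:

```python
# Toy retrieval over an internal archive: score each document against a query
# by cosine similarity of word-count vectors and return the best match.
# Documents here are invented placeholders for an archive like BCG's.
import math
from collections import Counter

docs = {
    "report-1998": "survey results on consumer retail pricing trends",
    "deck-2005": "client engagement summary for supply chain optimization",
    "report-2021": "market research on retail pricing and consumer loyalty",
}

def vectorize(text: str) -> Counter:
    """Bag-of-words vector: word -> count."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def top_match(query: str) -> str:
    """Return the id of the document most similar to the query."""
    qv = vectorize(query)
    return max(docs, key=lambda d: cosine(qv, vectorize(docs[d])))

print(top_match("retail pricing trends"))
```

Swapping the word-count vectors for model-generated embeddings is what lets such a system match on meaning rather than exact keywords, which is the difference employees notice.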