Stardog Karaoke offers on-premises zero hallucination LLM service

Enterprise data management and knowledge graph company Stardog, headquartered in Arlington, Virginia, has been ahead of the curve since its start in 2006: even back then, founder and CEO Kendall Clark knew 21st century businesses would be defined by their data — and that digitizing it and making it accessible instantly, on demand, was going to be a huge market.

The company has raised $40 million in funding to date and counts US government agencies including NASA and the Department of Defense, as well as large enterprises Raytheon and Bosch, among its customers. And as the generative AI era has kicked into high gear, Stardog’s services have only come into greater demand.

A new device for a new era

Now, the company is launching “Karaoke,” an on-premises server designed with partners Nvidia and Supermicro that hosts Stardog’s Voicebox large language model (LLM) platform, a custom, enterprise-grade, fine-tuned Llama 2 variant publicly unveiled in October 2023. Voicebox lets users without any technical training type natural language queries into Stardog Cloud on their computers and have their questions answered with their own company’s structured data.

“We’re talking about a big bank, a manufacturer, a pharmaceutical, who is regulated and therefore can’t easily, or maybe can’t ever, move all their data to the cloud,” said Clark in a voice call interview with VentureBeat. “All these businesses need Gen AI, but most Gen AI is in the cloud. Karaoke is designed to step into that gap and effectively bring the cloud to you, put it adjacent to your data, and then give you the benefits of this democratized, self-serve data access.”

Karaoke is available in multiple sizes and configurations, from “Micro,” a single pizza-box-sized server, to “Enterprise” grade with 2,304 CPU cores, with a variety of options in between. The smallest configuration supports 500 concurrent users; the largest supports 20,000.

LLM as data science translator

The Voicebox LLM layer is designed to act, in effect, as a translator: it takes a user’s question (say, in the case of a government agency, “what trade on the Euro last quarter violated sanctions”) and translates it into the language of data science and programming, a relevant JavaScript, Python or SQL query that retrieves the information from the company’s Stardog Knowledge Graph.
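As a rough illustration of that translation step, the sketch below maps one canned question to a SQL query and runs it against a toy SQLite table. The function names and schema are invented for the example, not Stardog’s actual API, which targets the Stardog Knowledge Graph rather than a relational database.

```python
# Minimal sketch of the "LLM as translator" pattern (illustrative, not Stardog's code).
import sqlite3


def translate_to_sql(question: str) -> str | None:
    """Stand-in for the Voicebox LLM layer: map a natural-language question to
    a structured query. A production system would prompt a fine-tuned model
    here rather than match canned templates."""
    templates = {
        "what trade on the euro last quarter violated sanctions":
            "SELECT trade_id, counterparty, amount FROM trades "
            "WHERE currency = 'EUR' AND sanctions_flag = 1",
    }
    return templates.get(question.strip().lower().rstrip("?"))


def answer(question: str, conn: sqlite3.Connection):
    sql = translate_to_sql(question)
    if sql is None:
        return "I don't know how to answer your question."
    # Every row shown to the user comes straight from the company's own data.
    return conn.execute(sql).fetchall()


if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE trades (trade_id INTEGER, counterparty TEXT, "
                 "amount REAL, currency TEXT, sanctions_flag INTEGER)")
    conn.execute("INSERT INTO trades VALUES (1, 'Acme GmbH', 2500000.0, 'EUR', 1)")
    print(answer("What trade on the Euro last quarter violated sanctions?", conn))
```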

How to avoid hallucinations? Don’t show LLM outputs

Furthermore, by leveraging Stardog’s advanced knowledge graph and the newly developed Safety RAG (retrieval-augmented generation) design pattern, Voicebox promises a 100% hallucination-free experience. Clark told VentureBeat that, thanks to this implementation, enterprise customers no longer have to compromise on accuracy for the sake of advanced technology.

“It’s pretty easy to build a hallucination-free AI system,” he said. “You just have to do this one thing: never show the user anything that comes solely from the large language model. We never show an end user a fact that comes from the large language model. We only show them facts that come from their data right now.”

Instead of relying on the LLM to summarize a company’s data, or even retrieve it, the Voicebox LLM layer simply converts the user’s natural language queries into the more programmatic ones a trained data scientist would construct.

The “answer” the user receives is not an LLM’s response (token prediction). Instead, it is simply what comes back when the translated, programmatic version of the query runs against the company’s database. If the LLM layer doesn’t understand the user’s query, or what data would help answer it, it simply replies, “I don’t know how to answer your question.”
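That guard can be summarized in a few lines. The sketch below is not Stardog’s code; it simply illustrates the contract Clark describes, in which the only strings a user ever sees are rows returned by the database or a fixed fallback message, never free-form model output.

```python
# Illustrative guard pattern: raw LLM text is never surfaced to the user.
from dataclasses import dataclass, field
from typing import Callable


@dataclass
class Answer:
    rows: list = field(default_factory=list)  # facts pulled from the customer's own data
    message: str = ""                          # fixed fallback text, never raw model output


def guarded_answer(translated_query: str | None,
                   run_query: Callable[[str], list]) -> Answer:
    if translated_query is None:
        # The LLM could not map the question to known data, so nothing is guessed.
        return Answer(message="I don't know how to answer your question.")
    # Only rows returned by the database reach the user.
    return Answer(rows=run_query(translated_query))


print(guarded_answer(None, lambda q: []).message)  # "I don't know how to answer your question."
```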

In addition, Voicebox gives the user a digital trail showing where it retrieved the information, complete with citations and links.

As Clark explained it, the system shows “the answer to your question, a link where you can click to open the source. Our platform has traceability and lineage, so you can go verify these answers for yourself, to drive out even the possibility of having doubt or not trusting the data.”
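A rough sketch of what such a cited answer could look like follows; the data structure, field names and URL are assumptions made for illustration, not Stardog’s actual response format.

```python
# Illustrative only: pairing each returned fact with a verifiable source link.
from dataclasses import dataclass


@dataclass
class CitedFact:
    value: str       # the fact shown to the user, drawn from their own data
    source_uri: str  # a link the user can open to verify the underlying record


def render(facts: list[CitedFact]) -> str:
    # Display every fact alongside its citation so the answer can be traced.
    return "\n".join(f"{f.value}  [source: {f.source_uri}]" for f in facts)


print(render([CitedFact("Trade #1 flagged for sanctions review",
                        "https://data.example.com/trades/1")]))
```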

Pricing and availability

According to Clark, Stardog plans to offer its Voicebox LLM layer at $39 per user per month, and it can be used in clouds or virtual private clouds without the Karaoke box.

Pricing for the Karaoke box itself depends on the number of users and the hardware configuration chosen from the sizes above, but Clark said the hardware is leased to customers on a three-to-five-year timeframe.

For enterprises in regulated industries looking to harness the capabilities of generative AI while adhering to compliance mandates, Stardog’s Karaoke presents a possible solution.

As these tools become increasingly integral to business operations across various sectors, Stardog is well-positioned to lead the charge in the safe and effective deployment of on-premises GenAI applications. For more details on Stardog Karaoke and Voicebox, interested parties can visit Stardog’s website at www.stardog.com.

With Karaoke, Stardog not only addresses a significant market need but also sets a new standard for the deployment of secure, effective, and compliant AI technologies in the enterprise realm.


