Safe, Understandable, and Controlled Artificial Intelligence.

For real companies, real solutions, available today.
Danger Zone

Don't Trust the LLaMas

If we continue to put all of our effort into Large Language Models we may eventually reach AGI — and that's the worst case scenario.

LLMs (Llamas) are trained, not programmed. They learn by seeing huge amounts of data: the entirety of the internet, all of the best, worst, mundane, and interesting content humanity has created.

LLMs reflect humanity. They echo our best and our worst. 

There is another way...

Seriously?

Let's get serious. LLMs are amazing. They are a true breakthrough in Natural Language Understanding.
But when all you have is an LLM, everything looks like a prompt-engineering problem.
Use LLMs. But use them responsibly. Use protection.
About Us

Who Are We? 

We're a group of passionate engineers, based in Orlando, Florida. We believe strongly that the future of AI should not be decided by Silicon Valley – and we're doing something about it!
We believe that the best way to predict the future is to create it.
AI should be inherently safe, understandable, and controlled.
Our Solution

The Software Stack

Our software stack enables hallucination-free Artificial Intelligence, securely: an AI Safety Layer that separates private local data from LLMs.

Our software stack keeps your company safe from LLMs by separating the problem:
LLMs are great at interpreting natural language.
OGAR (Ontology Guided Augmented Retrieval) allows safe access to data in ways that LLMs alone cannot.
The result: hallucination-free interaction with your data, with extraction that stays within your software environment.
AI Agents
Voice Agents
Automation
Applications
Feeding Frenzy CRM
Feeding Frenzy Call Center
OGAR Enterprise
Intelligence Factory
Technology Stack
Extracts entities, facts, artifacts, and elements
Converts entities, facts, artifacts, and elements into code
Development platform on which natural language can be used to interface with extracted data
Data Sources
Documents
PDFs
Spreadsheets
Records
Legacy SQL Databases

Our Technology Stack Explained

AI Agents
Voice Agents
We currently deploy Voice Agents with Feeding Frenzy. These agents interface with customers and generate actionable outcomes for sales and customer support.
Applications
Feeding Frenzy CRM
A next generation CRM built with AI to optimize efficiency, conversions, and costs. Allows custom AI Agents. Integrates with Voice, SMS, and Email.
Feeding Frenzy Call Center
Supercharged calling software. Built in transcriptions, sentiment analysis, entity extraction, and semantic search. Integrates with Twilio, Freedom Voice, and more.
OGAR Enterprise
OGAR Enterprise transforms enterprise data retrieval with advanced ontology-driven AI, offering privacy, scalability, and precision in every search.
The Technology Stack
SemDB
SemDB goes beyond search, enabling you to act on information from documents, emails, audio, and more.
Buffaly
Buffaly integrates Large Language Models (LLMs) with software systems to create safe AI Agents.
OGAR.ai
OGAR (Ontology Guided Augmented Retrieval) is a new hybrid technology, developed by Intelligence Factory, incorporating the best that LLMs, Graph Based Approaches, and Traditional Programming have to offer.

Introducing OGAR: Ontology-Guided Augmented Retrieval

The OGAR white paper explores how to control LLMs in real-world use. Buffaly, built on OGAR, offers secure, industry-specific insights and a controllable AI solution through its ontology and ProtoScript.

Key concepts covered:
  • Challenges in controlling LLM behavior and minimizing risks like bias and inaccuracy.
  • Bridging the gap between language understanding and real-world actions.
  • Buffaly’s ontology and ProtoScript enabling transparent and executable AI-driven processes.
  • Buffaly as an abstraction layer separating language interpretation from functional execution.

Get the OGAR White Paper


Contact us

Want to see if we are a good fit? Have a great idea but lack the tools and knowledge to implement it? Contact us and we'll help you!

Recent Updates

From Black Boxes to Clarity: Buffaly's Transparent AI Framework

Matt Furnari
11/27/2024

The rapid advancement of Large Language Models (LLMs) has brought remarkable progress in natural language processing, empowering AI systems to understand and generate text with unprecedented fluency. Yet, these systems face a critical limitation: while they excel at processing language, they struggle to execute concrete actions or provide actionable insights grounded in real-world scenarios. This gap between language comprehension and practical execution is a fundamental challenge in AI development.

Enter Buffaly, powered by the groundbreaking Ontology-Guided Augmented Retrieval (OGAR) framework. Buffaly redefines how AI systems access, analyze, and act upon data by combining the structured clarity of ontologies with the dynamic reasoning capabilities of modern LLMs.

Why Buffaly Matters

Traditional LLMs operate as black boxes, generating outputs based on statistical patterns from vast datasets. While powerful, these systems often fall short when required to:
  • Handle complex reasoning.
  • Integrate structured and unstructured data sources.
  • Execute actions grounded in real-world contexts.
Buffaly addresses these limitations by introducing ontology-based AI, which brings structure, control, and transparency to AI systems. With Buffaly, organizations can seamlessly bridge the divide between language understanding and action execution, unlocking new possibilities in fields like healthcare, finance, and aerospace.

How Buffaly Works

Buffaly’s OGAR framework is built around three core innovations:
  • Structured Ontologies
    Buffaly uses ontologies — graph-based representations of knowledge — to define concepts, relationships, and rules in a precise and transparent manner. This structure provides a foundation for reasoning and decision-making, enabling Buffaly to interpret and act on complex queries with clarity and accuracy.
  • ProtoScript
    At the heart of Buffaly lies ProtoScript, a C#-based scripting language designed to create and manipulate ontologies programmatically. ProtoScript allows developers to map natural language inputs into structured actions, bridging the gap between language and execution effortlessly.
  • Dual Learning Modes
    Buffaly handles both structured data (e.g., database schemas) and unstructured data (e.g., emails, PDFs) with equal ease. This dual capability allows Buffaly to populate its knowledge base dynamically, learning incrementally without the need for costly retraining.
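The ontology idea above can be illustrated with a tiny concept graph. This is a generic Python sketch, not ProtoScript, and the concept names are invented for illustration:

```python
# A minimal concept graph: nodes are concepts, edges are typed relationships.
# This is a generic illustration, not Buffaly's actual ontology format.
ontology = {
    "ChocolateCookie": {"is_a": ["Cookie"], "made_from": ["Cocoa"]},
    "Cookie": {"is_a": ["Food"]},
}

def ancestors(concept: str) -> set[str]:
    # Walk "is_a" edges to collect every broader concept.
    seen = set()
    stack = [concept]
    while stack:
        for parent in ontology.get(stack.pop(), {}).get("is_a", []):
            if parent not in seen:
                seen.add(parent)
                stack.append(parent)
    return seen

print(sorted(ancestors("ChocolateCookie")))  # ['Cookie', 'Food']
```

Because the graph is explicit, every inference (here, that a ChocolateCookie is a Food) can be traced edge by edge, which is the transparency that black-box statistical models lack.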

What Sets Buffaly Apart?

Unlike traditional AI solutions, Buffaly integrates:
  • Actionability: Translates language into executable actions for real-world systems.
  • Dynamic Reasoning: Combines LLM insights with ontology-driven logic for advanced decision-making.
  • Industry-Specific Applications: Tailors solutions for sensitive fields, ensuring secure, domain-specific results.
By serving as both a semantic and operational bridge, Buffaly creates a transparent interface that not only interprets language but also understands its implications and executes relevant actions.

A Glimpse Into the Future

The integration of Buffaly’s structured ontology with the power of LLMs represents a paradigm shift in AI. It paves the way for systems that are not only capable of understanding human language but also of acting on it with precision and accountability. Over the next series of blog posts, we’ll explore Buffaly’s unique features, diving deeper into its transformative potential and how it is shaping the future of AI applications.

Are you ready to see what’s next? Stay tuned as we unravel the layers of Buffaly’s OGAR framework and its implications for AI innovation!
If you want to learn more about the OGAR framework, download the OGAR White Paper at OGAR.ai.

Bridging the Gap Between Language and Action: How Buffaly is Revolutionizing AI

Matt Furnari
11/26/2024


When Retrieval Augmented Generation (RAG) Fails

Matt Furnari
11/25/2024

Retrieval Augmented Generation (RAG) sounds like a dream come true for anyone working with AI language models. The idea is simple: enhance models like ChatGPT with external data so they can provide answers based on information beyond their original training. Need your AI to answer questions about your company's internal documents or recent events not covered in its training data? RAG seems like the perfect solution.

But when we roll up our sleeves and implement RAG in the real world, things get messy. Let's dive into why RAG isn't always the magic fix we hope for and explore the hurdles that can trip us up along the way.

The Allure of RAG

At its heart, RAG is about bridging gaps in an AI's knowledge:
  • Compute Embeddings: Break down your documents into chunks and convert them into embeddings—numerical representations that capture the essence of the text.
  • Store and Retrieve: Keep these embeddings in a database. When a question comes in, find the chunks whose embeddings are most similar to the question.
  • Augment the AI: Feed these relevant chunks to the AI alongside the question, giving it the context it needs to generate an informed answer.
In theory, this means your AI can tap into any knowledge source you provide, even if that information isn't part of its original training.
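The three steps above can be sketched end to end. In this minimal illustration a toy bag-of-words vector stands in for a real embedding model:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy embedding: a bag-of-words vector. A real system would call an
    # embedding model here; this stand-in only shows the pipeline shape.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# 1. Compute embeddings for document chunks and store them.
chunks = [
    "Chocolate cookies are made from the finest imported cocoa.",
    "Chocolate cookies sell for $4 a dozen.",
]
store = [(chunk, embed(chunk)) for chunk in chunks]

# 2. Retrieve the chunks most similar to the question.
question = "How much do chocolate cookies cost?"
ranked = sorted(store, key=lambda pair: cosine(embed(question), pair[1]),
                reverse=True)

# 3. Augment: the top chunks are prepended to the LLM prompt as context.
context = "\n".join(chunk for chunk, _ in ranked[:2])
prompt = f"Answer using this context:\n{context}\n\nQuestion: {question}"
```

The same three stages appear in every RAG system; only the embedding model, the vector store, and the prompt template change.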

The Reality Check

Despite its promise, implementing RAG isn't all smooth sailing. Here are some of the bumps you might hit on the road.

1. The Ever-Changing Embeddings

Embeddings are the foundation of RAG—they're how we represent text in a way that the AI can understand and compare. But here's the catch: embedding models keep evolving. New models offer better performance, but they come with their own embeddings that aren't compatible with the old ones.

So, you're faced with a dilemma:
  • Recompute All Embeddings: Every time a new model comes out, you could reprocess your entire document library to generate new embeddings. But if you're dealing with millions or billions of chunks, that's a hefty computational bill.
  • Stick with the Old Model: You might decide to keep using the old embeddings to save on costs. But over time, you miss out on improvements and possibly pay more for less efficient models.
  • Mix and Match: Use new embeddings for new documents and keep the old ones for existing data. But now your database is fragmented, and searching across different embedding spaces gets complicated.
There's no perfect solution. Some platforms, like SemDB.ai, try to ease the pain by allowing multiple embeddings in the same database, but the underlying challenge remains.
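One way to keep a mixed store manageable is to tag every vector with the model that produced it and only ever compare vectors from the same model. A hedged sketch, with hypothetical model names:

```python
from dataclasses import dataclass

@dataclass
class StoredEmbedding:
    doc_id: str
    model: str          # e.g. "embedder-v1" (hypothetical model name)
    vector: list[float]

store = [
    StoredEmbedding("doc-1", "embedder-v1", [0.1, 0.9]),
    StoredEmbedding("doc-2", "embedder-v2", [0.4, 0.6]),
]

def candidates(store: list[StoredEmbedding], query_model: str):
    # Only compare vectors produced by the same model as the query vector,
    # since embeddings from different models live in incompatible spaces.
    return [e for e in store if e.model == query_model]
```

This avoids comparing incompatible vectors, but it does not solve the fragmentation itself: a query embedded with "embedder-v1" simply cannot see documents stored under "embedder-v2".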

2. The Pronoun Problem

Language is messy. People use pronouns, references, and context that computers struggle with. Let's look at an example:
Original Text: "Chocolate cookies are made from the finest imported cocoa. They sell for $4 a dozen."
When we break this text into chunks for embeddings, we might get:
Chunk 1: "Chocolate cookies are made from the finest imported cocoa."
Chunk 2: "They sell for $4 a dozen."
Now, if someone asks, "How much do chocolate cookies cost?", the system searches for embeddings similar to the question. But Chunk 2 doesn't mention "chocolate cookies" explicitly—it uses "they." The AI might miss this chunk because the embedding doesn't match well with the question.

Solving It

One way to tackle this is by cleaning up the text before creating embeddings:
Chunk 1: "Chocolate cookies are made from the finest imported cocoa."
Chunk 2: "Chocolate cookies sell for $4 a dozen."
By replacing pronouns with the nouns they refer to, we make each chunk self-contained and easier for the AI to match with questions.
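This preprocessing step can be sketched with a naive substitution. A production pipeline would use a coreference-resolution model; here the antecedent is supplied by hand:

```python
import re

def resolve_pronouns(sentences: list[str], antecedent: str) -> list[str]:
    # Naive heuristic: replace a leading "They"/"It" with a known antecedent.
    # Real pipelines use a coreference-resolution model instead.
    return [re.sub(r"^(They|It)\b", antecedent, s) for s in sentences]

chunks = resolve_pronouns(
    ["Chocolate cookies are made from the finest imported cocoa.",
     "They sell for $4 a dozen."],
    antecedent="Chocolate cookies",
)
# chunks[1] == "Chocolate cookies sell for $4 a dozen."
```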

3. Navigating Domain-Specific Knowledge

Things get trickier with specialized or branded products. Imagine you have a product description like this:
"Introducing Darlings—the ultimate cookie experience that brings together the timeless flavors of vanilla and chocolate in perfect harmony... And at just $5 per dozen, indulgence has never been so affordable."
Extracting key facts:
Darlings are cookies.
Darlings combine vanilla and chocolate.
Darlings cost $5 per dozen.
Now, if someone asks, "How much are the chocolate and vanilla cookies?", they might not mention "Darlings" by name. The embeddings might prioritize more general chunks about chocolate or vanilla cookies, missing the specific info about Darlings.

4. The Limits of Knowledge Graphs

To overcome these issues, some suggest using Knowledge Graphs alongside RAG. Knowledge Graphs store information as simple relationships:
(Darlings, are, cookies)
(Darlings, cost, $5)
(Darlings, contain, chocolate and vanilla)
In theory, this structure makes it easy to retrieve specific facts. But reality isn't so tidy.
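A triple store like this can be queried by pattern matching, with `None` acting as a wildcard. A minimal sketch:

```python
# A knowledge graph as (subject, predicate, object) triples, with a lookup
# that answers simple questions by pattern matching.
triples = [
    ("Darlings", "are", "cookies"),
    ("Darlings", "cost", "$5 per dozen"),
    ("Darlings", "contain", "chocolate and vanilla"),
]

def query(subject=None, predicate=None, obj=None):
    # None acts as a wildcard, so query("Darlings", "cost") returns the price.
    return [t for t in triples
            if (subject is None or t[0] == subject)
            and (predicate is None or t[1] == predicate)
            and (obj is None or t[2] == obj)]

print(query("Darlings", "cost"))  # [('Darlings', 'cost', '$5 per dozen')]
```

Retrieval is exact and cheap when a question maps cleanly onto a triple; the trouble, as the next section shows, is that much real-world knowledge does not.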

The Complexity of Real-World Information

Not all knowledge fits neatly into simple relationships. Consider:
"Bob painted the room red on Tuesday because he was feeling inspired."
Trying to capture all the nuances of this sentence in a simple graph gets complicated quickly. You need more than just triplets—you need context, causation, and temporal information.

Conflicting Information

Knowledge Graphs also struggle with contradictions or exceptions. For example:
(Richard Nixon, is a, Quaker)
(Quakers, are, pacifists)
(Richard Nixon, escalated, the Vietnam War)
Does the graph conclude that Nixon is a pacifist? Real-world logic isn't always straightforward, and AI can stumble over these nuances.

5. The Human vs. Machine Conundrum

Humans are flexible thinkers. We handle ambiguity, context, and exceptions with ease. Computers, on the other hand, need clear, structured data. When we try to force the richness of human language and knowledge into rigid formats, we lose something important.

The Database Dilemma

All these challenges highlight a broader issue: how we store and retrieve data for AI systems. Balancing the need for detailed, accurate information with the limitations of current technology isn't easy.

Embedding databases can become unwieldy as they grow. Knowledge Graphs can help organize information but may oversimplify complex concepts. We're still searching for the best way to bridge the gap between human language and machine understanding.

So, What Now?

RAG isn't a lost cause—it just isn't a one-size-fits-all solution. To make it work better, we might need to:
  • Develop Smarter Preprocessing: Clean and prepare text in ways that make it easier for AI to understand, like resolving pronouns and simplifying sentences.
  • Embrace Hybrid Approaches: Combine embeddings with other methods, like traditional search algorithms or domain-specific rules, to improve accuracy.
  • Accept Imperfection: Recognize that AI has limitations and set realistic expectations about what it can and can't do.
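The hybrid idea can be sketched by blending a keyword-overlap score with a toy vector similarity. The weighting `alpha` is a tuning knob, not a recommended value, and the bag-of-words vector again stands in for a real embedding model:

```python
import math
from collections import Counter

def keyword_score(query: str, doc: str) -> float:
    # Fraction of query terms that appear verbatim in the document.
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / len(q) if q else 0.0

def vector_score(query: str, doc: str) -> float:
    # Toy bag-of-words cosine; a real system would use model embeddings.
    a, b = Counter(query.lower().split()), Counter(doc.lower().split())
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def hybrid_score(query: str, doc: str, alpha: float = 0.5) -> float:
    # Blend the exact-match and semantic signals into one ranking score.
    return alpha * keyword_score(query, doc) + (1 - alpha) * vector_score(query, doc)
```

The keyword component rescues queries that name a specific product (like "Darlings") even when the embedding space favors more general chunks.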

Final Thoughts

Retrieval Augmented Generation holds a lot of promise, but it's not a magic wand. By understanding its limitations and working to address them, we can build better AI systems that come closer to meeting our needs. It's an ongoing journey, and with each challenge, we learn more about how to bridge the gap between human knowledge and artificial intelligence.

Read More

SemDB: Solving the Challenges of Graph RAG

11/21/2024
In the beginning there was keyword search
Eventually word embeddings came along and we got Vector Databases and Retrieval Augmented...
Read more

Metagraphs and Hypergraphs with ProtoScript and Buffaly

11/20/2024
In Volodymyr Pavlyshyn's article, the concepts of Metagraphs and Hypergraphs are explored as a transformative framework for developing relational models in AI agents’ memory systems...
Read more

Chunking Strategies for Retrieval-Augmented Generation (RAG): A Deep Dive into SemDB's Approach

11/19/2024
In the ever-evolving landscape of AI and natural language processing, Retrieval-Augmented Generation (RAG) has emerged as a cornerstone technology...
Read more

Is Your AI a Toy or a Tool? Here’s How to Tell (And Why It Matters)

11/07/2024
As artificial intelligence (AI) becomes a powerful part of our daily lives, it’s amazing to see how many directions the technology is taking. From creative tools to customer service automation...
Read more

Stop Going Solo: Why Tech Founders Need a Business-Savvy Co-Founder (And How to Find Yours)

10/24/2024
Hey everyone, Justin Brochetti here, Co-founder of Intelligence Factory. We're all about building cutting-edge AI solutions, but I'm not here to talk about that today. Instead, I want to share...
Read more

Why OGAR is the Future of AI-Driven Data Retrieval

09/26/2024
When it comes to data retrieval, most organizations today are exploring AI-driven solutions like Retrieval-Augmented Generation (RAG) paired with Large Language Models (LLM)...
Read more

The AI Mirage: How Broken Systems Are Undermining the Future of Business Innovation

09/18/2024
Artificial Intelligence. Just say the words, and you can almost hear the hum of futuristic possibilities—robots making decisions, algorithms mastering productivity, and businesses leaping toward unparalleled efficiency...
Read more

A Sales Manager’s Perspective on AI: Boosting Efficiency and Saving Time

08/14/2024
As a Sales Manager, my mission is to drive revenue, nurture customer relationships, and ensure my team reaches their goals. AI has emerged as a powerful ally in this mission...
Read more

Prioritizing Patients for Clinical Monitoring Through Exploration

07/01/2024
RPM (Remote Patient Monitoring) CPT codes are a way for healthcare providers to get reimbursed for monitoring patients' health remotely using digital devices...
Read more

10X Your Outbound Sales Productivity with Intelligence Factory's AI for Twilio: A VP of Sales Perspective

06/28/2024
As VP of Sales, I'm constantly on the lookout for ways to empower my team and maximize their productivity. In today's competitive B2B landscape, every interaction counts...
Read more

Practical Application of AI in Business

06/24/2024
In the rapidly evolving tech landscape, the excitement around AI is palpable. But beyond the hype, practical application is where true value lies...
Read more

AI: What the Heck is Going On?

06/19/2024
We all grew up with movies of AI and it always seemed to be decades off. Then ChatGPT was announced and suddenly it's everywhere...
Read more

Paper Review: Compression Represents Intelligence Linearly

04/23/2024
This post is the latest in a series where we review a recent paper and try to pull out the salient points. I will attempt to explain the premise...
Read more

SQL for JSON

04/22/2024
Everything old is new again. A few years back, the world was on fire with key-value storage systems...
Read more

Telemedicine App Ends Gender Preference Issues with AWS Powered AI

04/19/2024
AWS machine learning enhances MEDEK telemedicine solution to ease gender bias for sensitive online doctor visits...
Read more