Christopher Stoll

Bloombuilder: Using Generative AI in Bloomfilter

In this video, I discuss AI and how Bloomfilter is incorporating it into its product. In future sessions, I will cover how the Bloomfilter team integrates AI into our SDLC and dive deep into how we implement AI in our codebase — including a short coding session to demonstrate how we actually code it. But today, I just want to talk about our product’s relationship with AI.

I started this conversation because of the cringe looks I get from some of my friends when I tell them we are building AI into our platform. There’s been so much hype around AI lately that some people in the industry — especially practitioners and implementers of technology — are becoming exhausted by it. And I understand that. Many products are simply adding AI prompt-to-content generators — often with dubious value — just to claim they have an AI-enabled platform.

At Bloomfilter, we are trying to do something a little different.

The first thing to recognize is that what the vast majority of people call AI is the use of large language models. We use large language models too, but we aren’t simply dumping our data into an LLM and hoping it makes sense of it.

At Bloomfilter, we gather a lot of information from software development systems. We commonly integrate with Jira, GitHub, and ADO, allowing us to extract a wealth of information from those platforms. The problem we — and all process mining platforms — face is the need to distill that information down for our customers.

Our platform offers various visualizations, but the software industry does not have its own equivalent of GAAP accounting. There really are no standardized measurement tools. So, every shop — driven by its current technical leadership — has its own methodology. Bloomfilter has not yet earned the right to claim that we know what is best for each shop, so we do our best to help them measure themselves on their own terms.

Updating a front end to accommodate the wide variety of measurement and reporting techniques has proven to be a challenge — especially when different customer needs conflict. We are currently in the process of redesigning our user experience to account for all that we have learned in this regard. However, we also want faster and more flexible ways of keeping up with our customers’ demands, and that is where large language models come into the picture.

As you can see, we are not immediately showing you text streaming across the screen. That’s because we aren’t just dumping information into an LLM and hoping it finds something useful. In many ways, we don’t want generative AI here; we don’t want it making up its own facts.

You will notice that I am working on my local device; that’s why debug information shows. The debug information demonstrates that we are using AI toolchains. We have an AI orchestration layer that knows about all the data we can provide. The orchestrator calls the appropriate tool or tools based on the question, and that tool fetches real data. The tool performs some basic formatting and then passes it back up the chain. The orchestrator then formats the data as requested by the user.
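The flow described above can be sketched roughly as follows. This is a minimal illustration only, with hypothetical tool names and stubbed data; Bloomfilter's actual orchestrator, tools, and data sources are not public, so every identifier here is an assumption.

```python
# Sketch of a tool-calling orchestration layer: the orchestrator routes a
# question to a tool, the tool fetches real data (stubbed here) and does
# basic formatting, and the orchestrator formats the final answer.
# All names and data below are hypothetical.

def cycle_time_tool(question: str) -> dict:
    """Fetch cycle-time data (stubbed) and do basic formatting."""
    rows = [{"card": "PROJ-101", "days": 4}, {"card": "PROJ-102", "days": 9}]
    return {"metric": "cycle_time", "rows": rows}

def throughput_tool(question: str) -> dict:
    """Fetch throughput data (stubbed) and do basic formatting."""
    return {"metric": "throughput", "rows": [{"week": "2024-W30", "cards": 12}]}

# The orchestrator knows which tools exist and what data each can provide.
TOOLS = {
    "cycle time": cycle_time_tool,
    "throughput": throughput_tool,
}

def orchestrate(question: str) -> str:
    """Route the question to the appropriate tool, then format its real data."""
    for keyword, tool in TOOLS.items():
        if keyword in question.lower():
            result = tool(question)
            lines = [f"{k}={v}" for row in result["rows"] for k, v in row.items()]
            return f"{result['metric']}: " + ", ".join(lines)
    return "No tool matched; fall back to asking the user to clarify."

print(orchestrate("What is our cycle time this sprint?"))
```

The important property, as described above, is that the language model never invents the numbers: a tool fetches real data, and the model's job is only routing and presentation.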

We are also training our own language models — models trained to better speak the language of software development. For example, we want our language models to be able to discuss story cards using well-formed Gherkin. We’ll talk more about that in a future video.
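To make "well-formed Gherkin" concrete, here is a hypothetical example of the kind of output we would want from such a model, along with a minimal structural check. The scenario text and the checker are illustrations I am assuming for this post, not Bloomfilter's actual training data or validation code.

```python
# A hypothetical story card expressed as well-formed Gherkin.
SCENARIO = """\
Feature: Sprint report export
  Scenario: Export a completed sprint as CSV
    Given a sprint with at least one completed story
    When the user clicks "Export CSV"
    Then a CSV file containing every completed story is downloaded
"""

def is_well_formed(text: str) -> bool:
    """Minimal check: Given, When, and Then each appear exactly once, in order."""
    steps = [line.strip().split()[0] for line in text.splitlines()
             if line.strip() and line.strip().split()[0] in ("Given", "When", "Then")]
    return steps == ["Given", "When", "Then"]

print(is_well_formed(SCENARIO))  # → True
```

A real validator would use a proper Gherkin parser rather than keyword matching, but this conveys the structure we want the model to speak fluently.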

So, that is just a brief introduction to what we are doing at Bloomfilter to incorporate the latest advancements in large language models into our product. Look out for future videos with more practitioner-focused themes — how our product and technology teams are using AI in their day-to-day work, and an in-depth look at how we are coding our AI chains.

Published: 2024-08-26
Bloombuilder · Generative AI