Recently, I had the pleasure of participating on two panels at Uphill Conference. Here's what I learned:
Nobody has THE killer product for LLMs yet
Obviously, we're not talking about products like ChatGPT or Copilot here - depending on your job, these products provide huge added value. However, if you look at the market at large, there doesn't seem to be a killer product out there (yet?). In a recent survey, Andreessen Horowitz found that the fastest-growing category is companion apps. How sustainable that growth will be? The jury is still out on that.
So is the third AI winter just around the corner? Not necessarily:
LLMs are incredibly useful - just not how you'd think
LLMs entered the mainstream with ChatGPT, and this UI pattern has shaped our collective thinking. The general-assistant pattern does have its merits, but it also falls severely short. Broadly speaking, it all boils down to information retrieval and processing, and the way all of these general assistants currently work is shaped by the profit-driven nature of the companies building them.
However, that's (usually) not the best approach to making a product actually user-friendly. If you're interested in learning more about how search COULD be, I recommend this paper.
So if not general assistants - what then? It's a mixed bag:
- Upstream tasks, such as generating synthetic data to train models on specific tasks, bootstrapping your eval sets, etc. (see the first sketch after this list).
- Search: embeddings are an extremely powerful way to search and can be implemented with relatively little effort (second sketch below); you can read more about that here.
- As «Wireframes» for different things, be it images, short videos or text.
- LLMs can generate alt attributes (third sketch below); you can read more about that here.
- RAG applications (last sketch below); you can read more about that here, here and here.
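To make the first point concrete, here's a minimal sketch of bootstrapping an eval set with synthetic data, assuming the OpenAI Python client; the model name, seed example and prompt are all placeholders, and the generated cases should be reviewed by hand before you trust them:

```python
# Sketch: bootstrapping an eval set with LLM-generated paraphrases of a
# seed example. A human still reviews the candidates before they enter
# the eval set.
import json

from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

seed = {"input": "Where do I change my billing address?", "label": "account"}

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; use whatever model fits your budget
    response_format={"type": "json_object"},
    messages=[{
        "role": "user",
        "content": (
            "Generate 5 paraphrases of this support request, keeping the "
            'same intent label. Return JSON of the form {"examples": [...]}. '
            + json.dumps(seed)
        ),
    }],
)

candidates = json.loads(response.choices[0].message.content)["examples"]
print(candidates)  # review manually before adding them to the eval set
```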
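The search item is just as compact in practice. A minimal sketch using the sentence-transformers library; the model name and documents are placeholders:

```python
# Sketch: embedding-based search. Encode the documents once, then rank
# them against a query by cosine similarity.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder model

documents = [
    "How to configure the staging environment",
    "Quarterly sales report 2023",
    "Onboarding checklist for new engineers",
]
doc_embeddings = model.encode(documents, convert_to_tensor=True)

query = "set up staging"
query_embedding = model.encode(query, convert_to_tensor=True)

# Cosine similarity between the query and every document, best first.
scores = util.cos_sim(query_embedding, doc_embeddings)[0]
ranked = sorted(zip(documents, scores), key=lambda pair: pair[1], reverse=True)
for doc, score in ranked:
    print(f"{float(score):.3f}  {doc}")
```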
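Alt attributes can be generated with a single call to a vision-capable model. A sketch, again assuming the OpenAI client and a hypothetical image URL; anything user-facing should still pass a human review:

```python
# Sketch: generating an alt attribute with a vision-capable model. The
# image URL is hypothetical.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; any vision-capable model works
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "Write a concise alt text (max 125 characters) for this image."},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/team-photo.jpg"}},
        ],
    }],
)

print(response.choices[0].message.content)
```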
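Finally, a bare-bones RAG sketch tying the previous two together: retrieve the best-matching snippet via embeddings, then have the LLM answer from that snippet only. Chunking, re-ranking and error handling are deliberately left out; the snippets and model names are placeholders:

```python
# Sketch: a bare-bones RAG loop. Retrieval narrows the context, the
# generation step is constrained to that context.
from openai import OpenAI
from sentence_transformers import SentenceTransformer, util

retriever = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder model
client = OpenAI()  # expects OPENAI_API_KEY in the environment

snippets = [
    "Refunds are processed within 14 days of the return request.",
    "Support is available Monday to Friday, 09:00-17:00 CET.",
]
snippet_embeddings = retriever.encode(snippets, convert_to_tensor=True)


def answer(question: str) -> str:
    # Retrieval step: pick the snippet closest to the question.
    scores = util.cos_sim(
        retriever.encode(question, convert_to_tensor=True), snippet_embeddings
    )[0]
    context = snippets[int(scores.argmax())]
    # Generation step: constrain the model to the retrieved context.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder
        messages=[
            {"role": "system",
             "content": "Answer only from the provided context. "
                        "Say 'I don't know' if the context is insufficient."},
            {"role": "user",
             "content": f"Context: {context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content


print(answer("How long do refunds take?"))
```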
What's true in «classic» software engineering is also true in ML/AI software engineering
Simplicity is king. The simpler your architecture, the more resilient your application will be. I'm not advocating that you understand your foundation model in all its details or that you design your own. I'm advocating that you create an architecture that focuses on simplicity above all else, with clear boundaries between its parts, using the right technology for each job. (Don't believe me? That's how we flew to the moon.)
There's a nice side benefit to this. If you only use an LLM / ML where you absolutely need it, you get a more resilient application - but you also save quite a lot of energy. Obviously, for a single customer the costs, even of the most advanced models, are not that high, but every little bit counts. You'll find more thoughts about sustainability here.
It's all about MLOps
Even on the panels geared towards products, the discussion kept returning to MLOps. Partly, that's down to the audience. Nevertheless, I'd say there's also a deeper reason behind it. Most engineers outside specialised departments or companies have little to no contact with ML, and (at least in my day-to-day) even fewer people on the business side have any idea about it.
Consequently, the discussion turns towards trust. However, if you try to apply «if this then that» thought patterns to an application powered even partly by ML or an LLM, you'll be in a world of hurt: either you end up talking past each other, or you improve one aspect while making another worse.
So any such application needs a robust pipeline, with tests not only for the «classic» parts but also for the ML ones. That's doable, but it takes thought and clever engineering.
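What could a test for the ML part look like? Here's a minimal sketch, with a stubbed-out classifier and an illustrative threshold - the point is the shape of the assertion, not the numbers:

```python
# Sketch of an ML-aware test (run with pytest): instead of asserting one
# exact output («if this then that»), run the model over a curated eval
# set and assert an aggregate quality bar.
EVAL_SET = [
    {"input": "Bern is the capital of Switzerland.", "expected": "factual"},
    {"input": "The moon is made of cheese.", "expected": "not_factual"},
    # ...in practice, a few dozen curated cases live here
]

PASS_BAR = 0.9  # illustrative; pick a threshold matching your risk profile


def classify(text: str) -> str:
    # Stand-in for the real model call (e.g. an LLM behind a fixed prompt);
    # stubbed here so the file is self-contained.
    return "factual" if "capital" in text else "not_factual"


def test_classifier_meets_quality_bar():
    hits = sum(
        1 for case in EVAL_SET if classify(case["input"]) == case["expected"]
    )
    accuracy = hits / len(EVAL_SET)
    # A threshold, not an exact match: single cases may flip between model
    # versions, but the aggregate must not regress.
    assert accuracy >= PASS_BAR, f"accuracy dropped to {accuracy:.2%}"
```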