Semantic Kernel (github.com/microsoft)
98 points by overbytecode on Nov 28, 2023 | 13 comments



I'm surprised by how uncommon SK is compared to LangChain. Microsoft is very active in the space and has a few other related LLM frameworks of a different nature: Semantic Memory and Guidance.


Semantic Memory (renamed to Kernel Memory - https://github.com/microsoft/kernel-memory) complements SK. Guidance's features are being absorbed into SK, following the departure of that team from Microsoft. Additionally, we have TypeChat (https://github.com/microsoft/TypeChat), which aims to ensure type-safe responses from LLMs. Most features of AutoGen are also being integrated into SK, along with Assistants. SK serves as the orchestration engine powering Microsoft Copilots.
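TypeChat itself is TypeScript, but the core idea - declare the shape you expect, validate the model's JSON reply against it, and re-prompt on failure - is easy to sketch in plain Python. The `SentimentResult` type and the reply strings below are made up for illustration, not TypeChat's API:

```python
import json
from dataclasses import dataclass

@dataclass
class SentimentResult:
    label: str          # expected: "positive", "negative", or "neutral"
    confidence: float   # expected: 0.0 .. 1.0

def parse_typed_response(raw: str) -> SentimentResult:
    """Validate a (hypothetical) LLM JSON reply against the expected shape."""
    data = json.loads(raw)
    result = SentimentResult(**data)  # raises TypeError on missing/extra keys
    if result.label not in {"positive", "negative", "neutral"}:
        raise ValueError(f"unexpected label: {result.label}")
    if not 0.0 <= result.confidence <= 1.0:
        raise ValueError(f"confidence out of range: {result.confidence}")
    return result

# A well-formed reply passes; a malformed one raises, so the caller can re-prompt.
ok = parse_typed_response('{"label": "positive", "confidence": 0.92}')
print(ok.label)  # positive
```

The point is that the failure is caught programmatically, so the calling code can feed the error back to the model instead of crashing downstream.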


Can you tell me more about how SK is the orchestration engine for Microsoft Copilots? Which ones specifically?


They've also got AutoGen and Promptflow; it seems like Microsoft just has random teams doing their own thing, with lots of overlap.


And all of them are of dubious quality.


We’ve been building the Langroid[1] Multi-Agent LLM framework, starting several months before AutoGen. Langroid has an elegant inter-agent orchestration mechanism[2], among many other things. We’ve taken a measured approach to avoid bloat and excess abstractions (unlike that other framework that I won’t mention :) )

[1] https://github.com/langroid/langroid

From the README:

Langroid is an intuitive, lightweight, extensible and principled Python framework to easily build LLM-powered applications, from ex-CMU and UW-Madison researchers. You set up Agents, equip them with optional components (LLM, vector-store and tools/functions), assign them tasks, and have them collaboratively solve a problem by exchanging messages.

[2] Docs on Task Delegation https://langroid.github.io/langroid/quick-start/multi-agent-...
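The agent/task pattern the README describes can be illustrated with a toy mock in plain Python. This is not Langroid's actual API - the canned "agents" below just stand in for LLM calls:

```python
# Toy illustration of the agent/task pattern: a Task routes messages
# between Agents until one of them signals it is done.

class Agent:
    def __init__(self, name, respond):
        self.name = name
        self.respond = respond  # stand-in for an LLM call

class Task:
    """Passes each message to the next agent in round-robin order."""
    def __init__(self, agents):
        self.agents = agents

    def run(self, message, max_turns=6):
        transcript = [message]
        for turn in range(max_turns):
            agent = self.agents[turn % len(self.agents)]
            reply = agent.respond(transcript[-1])
            transcript.append(f"{agent.name}: {reply}")
            if "DONE" in reply:
                break
        return transcript

# Two cooperating "agents": one extracts a number, one doubles it.
extractor = Agent("extractor", lambda m: "".join(c for c in m if c.isdigit()))
doubler = Agent("doubler", lambda m: f"DONE {int(m.split(': ')[-1]) * 2}")
log = Task([extractor, doubler]).run("double the number 21")
print(log[-1])  # doubler: DONE 42
```

In the real framework each `respond` would be an LLM-backed agent with its own tools and vector store; the orchestration loop is the interesting part.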


Seems nice if you're using C# or Java. It also supports Python, but for that Simon's llm library is nice because he designed it as both a library and a command-line tool: https://github.com/simonw/llm


Well you see, Langchain rolls off the tongue better, so it has more broad appeal despite being a pile of crap.


I'm using Semantic Kernel to build more complex AI systems. The technology is still maturing, and you deal with things like missing abstractions. The planners are intriguing and promising, and they'll be more manageable once assistants are available to make things more modular. Still, it's the best thing I've seen for building maintainable generative AI solutions.


What application are you using SK for? I'm struggling to come up with good use-cases in my work.


We are a .NET shop (server side at least), so the opportunity to find something that works with .NET was a huge plus for us. I've heard similar comments from Java developers regarding SK for Java.

I have been following Semantic Kernel since April and I have been focused on Kernel Memory recently.

Having to learn LLMs, how they work, prompting, vector databases, RAG, tokens, chunking, etc. is hard enough: if we had had to switch to Python entirely, it would have been too much to handle.

In other words: while probably everything we need is out there (and oftentimes in more mature forms), we could not also take on an entirely new way of doing basic things like microservices and REST/GraphQL interfaces, and re-learn how to write code and which frameworks to use...
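As a tiny example of one item from that learning list, here is a minimal fixed-size chunker with overlap in Python. The sizes and text are illustrative; real pipelines usually chunk by tokens rather than words:

```python
# Fixed-size chunking with overlap: each chunk shares a few words with
# the previous one, so content cut at a boundary stays retrievable.

def chunk_words(text: str, size: int = 5, overlap: int = 2) -> list[str]:
    words = text.split()
    step = size - overlap
    chunks = []
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + size]))
        if start + size >= len(words):
            break
    return chunks

doc = "one two three four five six seven eight nine"
print(chunk_words(doc))
# ['one two three four five', 'four five six seven eight', 'seven eight nine']
```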

We will use Python, too, and the fact that we can have C# and Python in pipelines will allow us to scale our development team with people who are familiar with Python. So once again, a big plus for a small .NET shop. That was the primary reason SK was picked.

The fact that SK is highly modular (it allows switching models and model providers, including self-hosted ones, and it allows for custom connectors, such as the one to Elasticsearch we are building) was the second reason.

---

So, at the beginning (up to August) we made sure all the things we needed from SK worked. They did, for the most part.

We can use custom embeddings/models from Hugging Face, and we have access to Llama, and right now both OpenAI and Azure OpenAI.

My biggest concern was pre-filtering large quantities of data before starting to do semantic searches. The approach SK used was a bit too restrictive and oversimplified.
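For concreteness, the pattern I wanted is roughly "filter by metadata first, then rank only the survivors semantically." A toy Python sketch, with made-up records and two-dimensional vectors standing in for real embeddings:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

records = [
    {"id": 1, "tenant": "acme",  "year": 2023, "vec": [0.9, 0.1]},
    {"id": 2, "tenant": "acme",  "year": 2021, "vec": [0.8, 0.6]},
    {"id": 3, "tenant": "other", "year": 2023, "vec": [0.9, 0.1]},
]

def search(query_vec, tenant, min_year, top_k=5):
    # 1) cheap structured filter first...
    candidates = [r for r in records
                  if r["tenant"] == tenant and r["year"] >= min_year]
    # 2) ...then the expensive semantic ranking on what's left
    ranked = sorted(candidates,
                    key=lambda r: cosine(query_vec, r["vec"]),
                    reverse=True)
    return ranked[:top_k]

hits = search([1.0, 0.0], tenant="acme", min_year=2022)
print([r["id"] for r in hits])  # [1]
```

In a real vector store the filter is pushed down into the index query rather than done in Python, but the ordering of the two steps is the point.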

So I started looking into solutions, and found that what I intended to do (filtering-wise) is very similar to what the KM team is already doing. I was happy to see my problems were being addressed, and we adapted accordingly.

I am trying to see how well KM saves us from reinventing the wheel, and so far I am optimistic. The fact that I have access to the KM code and can easily debug it in C# makes us confident we can find ways to work around (or fix) issues and see what it's truly doing.

Finally, the Semantic Kernel team is incredibly active and responsive. They host weekly Teams meetings with the community and are very dedicated to making this work. The amount of blog posts and YouTube videos they produce is quite large.


Gone are the days of seeing "kernel" in a title and getting juicy OS development content :)

old fart signs out


the days of coding are going to go :-) from assembly to JavaScript to AI... where no man has gone before



