Closed as not planned
Labels: stale (Stale issue will be autoclosed soon)
Description
When I run the embedding example KernelMemorySaveAndLoad.cs, after the weights and context are loaded I get this error: LLamaSharp/LLamaSharp/ggml/src/ggml.c:2703: GGML_ASSERT(ggml_can_mul_mat(a,b)) failed
However, if I run a chat session with the same model, it works perfectly. So something seems to be wrong with the embedding part of the KernelMemory integration, possibly WithLLamaSharpTextEmbeddingGeneration.
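To check whether the failure is in the embedding path itself rather than in KernelMemory, it may help to call the embedder directly. The sketch below is an assumption-laden isolation test: the `LLamaEmbedder` constructor, the `Embeddings` flag on `ModelParams`, and the `GetEmbeddings` signature may differ in 0.23.0, so treat it as a starting point only.

```csharp
// Sketch: generate embeddings directly with LLamaEmbedder, bypassing
// KernelMemory, to see whether the same GGML_ASSERT fires.
// NOTE: API names/signatures are assumptions and may differ by version.
using LLama;
using LLama.Common;

var parameters = new ModelParams("qwen2.5-3b-instruct-q4_k_m.gguf")
{
    ContextSize = 2048,
    GpuLayerCount = 99,
    Embeddings = true   // assumption: the context must be created in embedding mode
};

using var weights = LLamaWeights.LoadFromFile(parameters);
using var embedder = new LLamaEmbedder(weights, parameters);

// If the bug is in the embedding path, this call should hit the same assert.
var embeddings = await embedder.GetEmbeddings("Hello, world!");
Console.WriteLine($"Embedding length: {embeddings[0].Length}");
```

If this minimal call crashes too, the problem is in LLamaSharp/llama.cpp embedding inference rather than in the KernelMemory wiring.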
Reproduction Steps
Here is my code:
using LLama;
using LLama.Common;
using LLamaSharp.KernelMemory;
using Microsoft.KernelMemory;
using Microsoft.KernelMemory.Configuration;
using Microsoft.KernelMemory.DocumentStorage.DevTools;
using Microsoft.KernelMemory.FileSystem.DevTools;
using Microsoft.KernelMemory.MemoryStorage.DevTools;
using System.Diagnostics;
namespace LSKMRAG
{
    class Program
    {
        static void Main(string[] args)
        {
            ChatQwen chat = new ChatQwen();
            chat.ChatQwenMain().GetAwaiter().GetResult();
        }
    }

    public class ChatQwen
    {
        static string StorageFolder => Path.GetFullPath($"./storage-{nameof(ChatQwen)}");
        static bool StorageExists => Directory.Exists(StorageFolder) && Directory.GetDirectories(StorageFolder).Length > 0;
        string modelPath = Path.Combine(Directory.GetCurrentDirectory(), "qwen2.5-3b-instruct-q4_k_m.gguf");

        public async Task ChatQwenMain()
        {
            Console.ForegroundColor = ConsoleColor.Yellow;
            Console.WriteLine(
                """
                This program uses the Microsoft.KernelMemory package to ingest documents
                and store the embeddings as local files so they can be quickly recalled
                when this application is launched again.
                """);

            IKernelMemory memory = CreateMemoryWithLocalStorage(modelPath);

            Console.ForegroundColor = ConsoleColor.Yellow;
            if (StorageExists)
            {
                Console.WriteLine(
                    """
                    Kernel memory files have been located!
                    Information about previously analyzed documents has been loaded.
                    """);
            }
            else
            {
                Console.WriteLine(
                    """
                    Existing kernel memory was not found.
                    Documents will be analyzed (slow) and information saved to disk.
                    Analysis will not be required the next time this program is run.
                    Press ENTER to proceed...
                    """);
                Console.ReadLine();
                await IngestDocuments(memory);
            }
        }

        private static IKernelMemory CreateMemoryWithLocalStorage(string modelPath)
        {
            InferenceParams infParams = new() { AntiPrompts = ["\n\n"] };
            LLamaSharpConfig lsConfig = new(modelPath) { DefaultInferenceParams = infParams };

            var parameters = new ModelParams(modelPath)
            {
                ContextSize = 2048,
                GpuLayerCount = 99,
                MainGpu = lsConfig.MainGpu,
                SplitMode = lsConfig.SplitMode
            };

            SearchClientConfig searchClientConfig = new()
            {
                MaxMatchesCount = 1,
                AnswerTokens = 100,
            };

            TextPartitioningOptions parseOptions = new()
            {
                MaxTokensPerParagraph = 300,
                // MaxTokensPerLine = 100,
                OverlappingTokens = 30
            };

            SimpleFileStorageConfig storageConfig = new()
            {
                Directory = StorageFolder,
                StorageType = FileSystemTypes.Disk,
            };

            SimpleVectorDbConfig vectorDbConfig = new()
            {
                Directory = StorageFolder,
                StorageType = FileSystemTypes.Disk,
            };

            Console.ForegroundColor = ConsoleColor.Blue;
            Console.WriteLine($"Kernel memory folder: {StorageFolder}");
            Console.ForegroundColor = ConsoleColor.DarkGray;

            return new KernelMemoryBuilder()
                .WithSimpleFileStorage(storageConfig)
                .WithSimpleVectorDb(vectorDbConfig)
                .WithLLamaSharpDefaults(lsConfig)
                .WithSearchClientConfig(searchClientConfig)
                .With(parseOptions)
                .Build();
        }

        // ... others same as KernelMemorySaveAndLoad.cs
    }
}
Environment & Configuration
- Operating system: Win10 / Win11
- .NET runtime version: .NET 8
- LLamaSharp version: 0.23.0
- CUDA version (if you are using cuda backend): 12.4 / 12.6
- CPU & GPU device: Nvidia RTX 4060 Laptop / Nvidia RTX 4090 Laptop
Known Workarounds
This seems to be related to llama.cpp: I found a possibly related upstream issue (#12517), and the corresponding bug in llama.cpp was resolved just last week in #12545.