Fix model loading time through prefetching the file on another thread by CoderRC · Pull Request #734 · ggml-org/llama.cpp · GitHub
Fix model loading time through prefetching the file on another thread #734

Closed
wants to merge 11 commits into from
Trying again to fix error on windows compilation C2589: '(': illegal token
CoderRC authored Apr 3, 2023
commit 32d0fe7e92b393749bc9c9bdd5b05033a15c47e4
22 changes: 21 additions & 1 deletion ggml.h
@@ -779,7 +779,27 @@ int ggml_cpu_has_vsx(void);

#if defined(_WIN32) && !defined(_POSIX_THREADS)
#define WIN32_LEAN_AND_MEAN
#include <handleapi.h>
#if !defined(min) && !defined(max)
#include <Windows.h>
#ifdef min
#undef min
#endif
#ifdef max
#undef max
#endif
#elif defined(min) && defined(max)
#include <Windows.h>
#elif !defined(min)
#include <Windows.h>
#ifdef max
#undef max
#endif
#elif !defined(max)
#include <Windows.h>
#ifdef min
#undef min
#endif
#endif
#else
#include <unistd.h>
#endif