Comment by embedding-shape

1 hour ago

> even if can't use it atm (not got the h/w - only 96gb on M2 Max).

Not sure if it works differently on macOS, but with CUDA + DeepSeek-V4-Flash-IQ2XXS-w2Q2K-AProjQ8-SExpQ8-OutQ8-chat-v2-imatrix.gguf I can fit it within 96GB of VRAM together with context, so in theory you should be able to as well, unless macOS reserves some GB of RAM/VRAM for the OS/display by default.
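
For reference, a minimal sketch of how one might load it with llama-cpp-python and full GPU offload (the context size and the call itself are illustrative assumptions, not the exact setup I ran):

```python
# Sketch only: load the GGUF with every layer offloaded to the GPU and a
# modest context, so weights + KV cache stay within ~96GB. The n_ctx value
# is an assumption; larger contexts grow the KV cache and memory use.
from llama_cpp import Llama

llm = Llama(
    model_path="DeepSeek-V4-Flash-IQ2XXS-w2Q2K-AProjQ8-SExpQ8-OutQ8-chat-v2-imatrix.gguf",
    n_gpu_layers=-1,  # -1 = offload all layers to the GPU
    n_ctx=8192,       # assumed context size for illustration
)

print(llm("Hello", max_tokens=16)["choices"][0]["text"])
```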