Ollama Models API

eas/openchat (llama family)

q4_k_m quantization only. Now uses an 8k context size.
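A minimal sketch of querying this model through the Ollama HTTP API, assuming a local Ollama server on the default port 11434 and that the model has already been pulled (e.g. with `ollama pull eas/openchat`). The `num_ctx` option mirrors the 8k context size noted above; the prompt text is just an illustration.

```python
import requests

# Assumed default local Ollama endpoint.
OLLAMA_URL = "http://localhost:11434/api/generate"

payload = {
    "model": "eas/openchat",
    "prompt": "Briefly explain what q4_k_m quantization trades off.",
    "stream": False,
    # Match the 8k context size mentioned in the model description.
    "options": {"num_ctx": 8192},
}

response = requests.post(OLLAMA_URL, json=payload, timeout=300)
response.raise_for_status()
print(response.json()["response"])
```

The same request can be made with `curl` or any HTTP client; only the model name and the `num_ctx` value are specific to this listing.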

79 pulls · 1 tag · updated 8 months ago