Run MemGPT 🧠 with Local LLMs | Without OpenAI
In this video, I will show you how to run MemGPT with local LLMs by serving them through an API server (Textgen WebUI). I will walk you through the process step by step.
Commands:
Running WebUI API:
python server.py --api --api-blocking-port 5050 \
  --model airoboros-l2-70b-3.1.2.Q4_K_M.gguf \
  --loader llama.cpp --n-gpu-layers 1 --n_ctx 4096 \
  --threads 8 --threads-batch 8 --n_batch 512
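Once the WebUI API is up, MemGPT needs to be pointed at the local endpoint instead of OpenAI. A minimal sketch, assuming MemGPT (at the time of the video) reads the `BACKEND_TYPE` and `OPENAI_API_BASE` environment variables for its local-LLM backend, and that the blocking API is on port 5050 as in the command above; check the MemGPT repo docs for your version:

```shell
# Tell MemGPT to use the Textgen WebUI backend instead of OpenAI.
# These variable names are assumptions based on MemGPT's local-LLM
# setup docs around this release; verify against the repo.
export BACKEND_TYPE=webui
export OPENAI_API_BASE=http://localhost:5050

# Then launch MemGPT from its repo as usual, e.g.:
# python3 main.py
```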
Let's Connect:
☕ Buy me a Coffee: https://ko-fi.com/promptengineering
🔴 Support my work on Patreon: Patreon.com/PromptEngineering
🦾 Discord: https://discord.com/invite/t4eYQRUcXB
💼 Consulting: https://calendly.com/engineerprompt/consulting-call
Links:
MemGPT website: memgpt.ai
MemGPT Repo: https://github.com/cpacker/MemGPT
MemGPT Video: https://youtu.be/AITOuBUi9pg
Timestamps:
[00:00] Intro
[01:10] Installing Textgen WebUI
[05:35] Setup MemGPT
[08:30] Oobabooga TextGen WebUI API
[11:43] Run MemGPT with Local LLMs