Portfolio Chatbot
1 · Motivation 
I wanted something smarter than a static “About Me” page—
a way for visitors to ask me anything about my background, projects, or skills,
and for me to know when someone’s genuinely interested.
But I didn’t want to:
- hard-code a brittle FAQ
- pay for a full vector database
- miss potential leads while I’m offline
So I built a lean, 150-line Python assistant that:
- loads my résumé from a simple JSON file (no DB)
- routes queries to the right section (projects, skills, etc.)
- calls GPT-4o-mini using OpenAI function calling
- alerts me via Pushover when someone asks a question it can't answer or shares their email
The result is a minimal, self-contained chatbot for my personal site that runs without a database and only needs an OpenAI key and a Pushover token to function.
2 · Architecture 
Key decisions
| Problem | Why not the obvious fix | Solution |
|---|---|---|
| Retrieval over a ~5 KB résumé | A vector DB is overkill | Regex router over JSON loaded in memory |
| Unknown questions | Manually checking an inbox doesn't scale | `record_unknown_question` tool → Pushover push |
| Lead capture | Contact forms attract spam | GPT asks for an email + a one-liner, then calls `record_user_details` |
3 · Knowledge Base Structure 
`data/chunks.json`

```json
[
  {
    "id": "summary",
    "section": "summary",
    "text": "Bangalore-based CS undergrad focusing on applied ML…"
  },
  {
    "id": "exp_videoverse",
    "section": "experience",
    "text": "ML Engineer Intern — VideoVerse (Aug–Nov 2024)…"
  },
  {
    "id": "prj_fire_detection",
    "section": "project",
    "repo": "https://github.com/NeuralNoble/fire-detection",
    "text": "Drone fire-detection system using YOLOv8-Nano…"
  },
  {
    "id": "skills_core",
    "section": "skills",
    "text": "Python · PyTorch · TensorFlow · AWS · Docker…"
  }
]
```
Note: keep each chunk ≤ 350 tokens so that three chunks plus the system prompt stay well within GPT-4o-mini's context window and per-request token cost stays low.
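A quick way to enforce that budget is to count tokens whenever the file changes; a minimal sketch, assuming the `tiktoken` package (the GPT-4o family uses the `o200k_base` encoding):

```python
import json

import tiktoken  # OpenAI's tokenizer library

# o200k_base is the encoding used by the GPT-4o model family.
enc = tiktoken.get_encoding("o200k_base")

with open("data/chunks.json") as f:
    chunks = json.load(f)

for chunk in chunks:
    n_tokens = len(enc.encode(chunk["text"]))
    if n_tokens > 350:
        print(f"{chunk['id']}: {n_tokens} tokens, consider splitting")
```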
4 · Core Logic 
Router & context builder
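There isn't much to it: a keyword match picks a section, and that section's chunks become the prompt context. A minimal sketch (the patterns and function names here are illustrative, not necessarily the exact ones in the repo):

```python
import json
import re

# Load the résumé chunks once at startup; no database involved.
with open("data/chunks.json") as f:
    CHUNKS = json.load(f)

# A few trigger patterns per résumé section.
SECTION_PATTERNS = {
    "experience": r"\b(experience|intern|work|job)\b",
    "project": r"\b(project|built|repo|github)\b",
    "skills": r"\b(skills?|stack|tools?|languages?|frameworks?)\b",
}

def route(user_msg: str) -> str:
    """Return the résumé section a query most likely refers to."""
    msg = user_msg.lower()
    for section, pattern in SECTION_PATTERNS.items():
        if re.search(pattern, msg):
            return section
    return "summary"  # safe default when nothing matches

def build_context(user_msg: str) -> str:
    """Join the routed section's chunks into the context string (`ctx` below)."""
    section = route(user_msg)
    return "\n\n".join(c["text"] for c in CHUNKS if c["section"] == section)
```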
Pushover helpers

```python
def push(msg): ...
def record_user_details(email, name="n/a", notes=""): push(...)
def record_unknown_question(question): push(...)
```
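Filled in, these helpers are just a POST to Pushover's messages endpoint; a sketch assuming the application token and user key live in environment variables (the names `PUSHOVER_TOKEN` and `PUSHOVER_USER` are my convention, not necessarily the repo's):

```python
import os

import requests

def push(msg):
    """Send a push notification through the Pushover REST API."""
    requests.post(
        "https://api.pushover.net/1/messages.json",
        data={
            "token": os.environ["PUSHOVER_TOKEN"],  # application API token
            "user": os.environ["PUSHOVER_USER"],    # user key
            "message": msg,
        },
        timeout=10,
    )

def record_user_details(email, name="n/a", notes=""):
    push(f"New lead: {name} <{email}> · {notes}")
    return {"recorded": "ok"}  # handed back to the model as the tool result

def record_unknown_question(question):
    push(f"Unanswered question: {question}")
    return {"recorded": "ok"}
```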
LLM call

```python
messages = [
    {"role": "system", "content": persona},             # who the bot speaks as
    {"role": "system", "content": f"Context:\n{ctx}"},  # chunks picked by the router
    {"role": "user", "content": user_msg},
]

response = openai.chat.completions.create(
    model="gpt-4o-mini",
    messages=messages,
    tools=TOOLS,  # the two recorder functions, exposed via function calling
    max_tokens=256,
)
```
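`TOOLS` is the standard OpenAI function-calling schema for the two recorder helpers; the descriptions below are my wording, but the shape is what the chat completions API expects:

```python
TOOLS = [
    {
        "type": "function",
        "function": {
            "name": "record_user_details",
            "description": "Record a visitor's contact details so I can follow up.",
            "parameters": {
                "type": "object",
                "properties": {
                    "email": {"type": "string"},
                    "name": {"type": "string"},
                    "notes": {"type": "string"},
                },
                "required": ["email"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "record_unknown_question",
            "description": "Log a question the assistant could not answer.",
            "parameters": {
                "type": "object",
                "properties": {"question": {"type": "string"}},
                "required": ["question"],
            },
        },
    },
]
```

When the model decides to use a tool, the response carries `tool_calls` instead of (or alongside) text, so one dispatch pass routes them to the helpers above; a sketch:

```python
import json

msg = response.choices[0].message
if msg.tool_calls:
    for call in msg.tool_calls:
        args = json.loads(call.function.arguments)
        if call.function.name == "record_user_details":
            record_user_details(**args)        # fires a Pushover lead alert
        elif call.function.name == "record_unknown_question":
            record_unknown_question(**args)    # fires an "unanswered" alert
reply = msg.content  # may be None when the turn was a pure tool call
```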
5 · Running Locally 
```bash
git clone https://github.com/NeuralNoble/portfolio-bot
cd portfolio-bot
python -m venv .venv && source .venv/bin/activate
pip install -r requirements.txt
# add OPENAI_API_KEY + Pushover keys to .env
python chat.py
```
Open http://localhost:7860 and start a conversation.
6 · Things I’d Improve Next 
- Swap regex routing for a MiniLM classifier to catch synonyms (“stack”, “tech”).
- Add a Whisper endpoint for voice queries.
- Cache answers in SQLite to cut token cost for repeated FAQs (rough sketch below).
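For that last item, the idea is to key a small SQLite table on the normalized question text so repeat FAQs skip the OpenAI call entirely; a rough sketch of what it could look like:

```python
import sqlite3

db = sqlite3.connect("cache.db")
db.execute("CREATE TABLE IF NOT EXISTS answers (question TEXT PRIMARY KEY, answer TEXT)")

def cached_answer(question):
    """Return a previously stored answer for this question, or None."""
    row = db.execute(
        "SELECT answer FROM answers WHERE question = ?",
        (question.strip().lower(),),
    ).fetchone()
    return row[0] if row else None

def store_answer(question, answer):
    db.execute(
        "INSERT OR REPLACE INTO answers VALUES (?, ?)",
        (question.strip().lower(), answer),
    )
    db.commit()
```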
Questions, suggestions, or want to fork it? Ping me on LinkedIn or open an issue on GitHub.
