The problem
Hobby Lobby's Corporate Information Systems Network Security team had MISP (Malware Information Sharing Platform) available but no standardized deployment pattern, no automated community-feed ingestion, and no documented onboarding for new contributors. Threat intelligence is a force multiplier that's only as good as how reliably it gets operationalized — when analysts pull indicators manually, and only when they remember to, the value is barely above zero.
The team needed three things:
- A reproducible MISP deployment that didn't require tribal knowledge to spin up
- Automated ingestion from a curated set of community threat feeds
- Clear documentation so the next contributor — intern, analyst, or new hire — could be productive without starting from scratch
That was my project for the summer.
Why this internship mattered to me
This was my first time doing security work in a Fortune 500 environment. My background up to that point was Microsoft 365 administration, IT support, and lab-based offensive practice (TryHackMe, Hack The Box, the usual). What I needed — what every entry-level cybersecurity candidate needs but rarely gets — was time inside a working SOC, on real infrastructure, with real signals.
Working embedded with the Network Security team, I got to see things you can't learn from a course: how alerts actually get prioritized when a dozen fire at once, what "tuning" a detection looks like in production, how analysts decide when something is a real threat versus benign noise that happens to look suspicious. The MISP project was important; the daily proximity to working analysts was where most of the learning happened.
What I built
Three deliverables, in roughly this order:
1. Containerized deployment
Rebuilt the MISP environment as a Docker Compose-based deployment — MISP server, database, cache, all wired together with a single command. Configuration that had previously lived in someone's head moved into a checked-in template file and a documented setup process.
The win wasn't speed. The win was repeatability. The team could spin up a clean instance for testing in minutes, and rolling back a broken change was trivial. Before, both of those operations were "ask the person who set it up."
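To give a sense of the shape such a deployment takes — this is illustrative only, not the team's actual configuration; the image tags, service names, and the `template.env` filename are placeholders — a minimal compose file for a MISP-style stack (server, database, cache) looks like:

```yaml
# Illustrative sketch, not the production configuration.
services:
  misp:
    image: misp/misp-docker:latest   # placeholder image reference
    env_file: template.env           # checked-in configuration template
    ports:
      - "443:443"
    depends_on:
      - db
      - redis
  db:
    image: mariadb:10.11
    env_file: template.env
    volumes:
      - db_data:/var/lib/mysql      # persist the database across restarts
  redis:
    image: redis:7                   # MISP's cache/queue backend

volumes:
  db_data:
```

With something like this checked in, "spin up a clean instance" is `docker compose up -d` and rollback is tearing the stack down and bringing it back up from the committed template — which is exactly the repeatability win described above.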
2. Feed ingestion automation
MISP is most valuable when it's continuously pulling and enriching indicators from external sources. I wrote Python scripts to pull from a curated list of community threat feeds — CIRCL OSINT, abuse.ch (URLhaus, ThreatFox), AlienVault OTX, and a few others — normalize their formats, and push the indicators into MISP on a scheduled cadence.
Each feed had its own quirks: different rate limits, different schemas, different reliability profiles. The scripts handled them per-source with proper error handling and logging, so a partial feed failure didn't take down the whole ingestion run.
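The isolation pattern described above can be sketched in a few lines. This is a simplified stand-in, not the internship code: the `Feed` dataclass, the `normalize` stub, and the feed names are hypothetical, and the real scripts pushed results into MISP via `pymisp` rather than returning a dict.

```python
import logging
from dataclasses import dataclass
from typing import Callable, Dict, List

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("feed-ingest")

@dataclass
class Feed:
    name: str
    fetch: Callable[[], List[str]]  # real code would hit the feed URL, honoring its rate limit

def normalize(raw: str) -> str:
    # Placeholder -- each real feed needs its own schema-specific parsing.
    return raw.strip().lower()

def run_ingestion(feeds: List[Feed]) -> Dict[str, List[str]]:
    """Process each feed independently so one failure can't abort the whole run."""
    results: Dict[str, List[str]] = {}
    for feed in feeds:
        try:
            indicators = [normalize(i) for i in feed.fetch()]
            # The real script would push these into MISP here (e.g. via pymisp).
            results[feed.name] = indicators
            log.info("%s: ingested %d indicators", feed.name, len(indicators))
        except Exception:
            # Log and move on -- a partial failure stays partial.
            log.exception("%s: feed failed, continuing with remaining feeds", feed.name)
    return results
```

The key line is the `try/except` around each feed, not around the loop: a dead feed produces a logged error and an empty slot, and every other feed still lands.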
3. Documentation
A setup runbook for the deployment, an analyst quick-start guide for using MISP in daily work, and architecture notes explaining the design choices for future maintainers. This was the deliverable I expected to spend the least time on and ended up spending the most. Good documentation is harder than it looks.
The stack
- MISP (containerized)
- Docker, Docker Compose
- Python (`requests`, `pymisp`)
- Linux (Ubuntu)
- Splunk (downstream consumer of MISP indicators)
- Bash for scheduling and orchestration glue
What surprised me
The hardest part wasn't the technology — it was the threat-feed governance. I'd assumed "pull from open-source threat feeds" was a solved problem and I'd just be writing glue code. What actually took the most thought was deciding which feeds to trust, which indicators to auto-ingest versus flag for analyst review, and how to prevent low-quality feeds from creating alert fatigue downstream in the SIEM. That's a judgment call problem, not an engineering problem.
Documentation wasn't a separate task — it was the work. I'd thought of docs as "the writeup at the end." In practice, every time I documented something, I found a design decision that didn't survive being written down. Good docs forced me to make better choices in the implementation. The docs were the spec, not the postmortem.
SOC analysts care about different things than security researchers do. I came in with a researcher's mindset: novel techniques, sophisticated attack chains, interesting findings. The SOC team cared about: was the alert reproducible, what's the false-positive rate, what's the runbook, can the next-shift analyst pick it up at 3am. That's a different value system, and learning it changed how I think about security work — and is a big part of why I now write about the bridge between SOC and AI red teaming.
What I'd do differently
A queue-based worker for ingestion. The feed ingestion script was a single cron-triggered Python process. That's fine when feeds are healthy. When a feed is slow or returning errors, it slows everything else down. A queue-based worker (Celery + Redis, or similar) would isolate per-feed failures and scale better as more sources got added.
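The per-feed isolation I'd want can be sketched with stdlib primitives — a thread pool is effectively a queue plus workers. This is the shape of the idea, not a production design (a real version would use Celery + Redis for durable, cross-process queues, and the function names here are mine):

```python
from concurrent.futures import ThreadPoolExecutor, as_completed
from typing import Callable, Dict, List, Tuple

def ingest_parallel(
    feeds: List[str],
    fetch_one: Callable[[str], list],
    max_workers: int = 4,
) -> Tuple[Dict[str, list], Dict[str, str]]:
    """Run each feed in its own worker so a slow or erroring feed
    delays only its own slot, not the entire ingestion run."""
    results: Dict[str, list] = {}
    errors: Dict[str, str] = {}
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = {pool.submit(fetch_one, feed): feed for feed in feeds}
        for fut in as_completed(futures):
            feed = futures[fut]
            try:
                results[feed] = fut.result()
            except Exception as exc:
                errors[feed] = str(exc)  # recorded, but the run continues
    return results, errors
```

Compared with the cron-triggered single process, the difference is that a feed hanging for ten minutes costs one worker slot instead of stalling everything queued behind it.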
Per-feed reliability scoring. Some community feeds are higher signal-to-noise than others. The current ingestion treats them all equally. A scoring system that tracked per-feed signal quality over time and adjusted ingestion priority accordingly would be a real upgrade.
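One simple way to implement that scoring — a sketch of the idea, not something I built; the class name, the 0.5 starting score, and the smoothing factor are all arbitrary choices — is an exponentially weighted moving average over analyst feedback per feed:

```python
from typing import Dict, List

class FeedScore:
    """Rolling signal-quality score per feed: each ingested indicator that an
    analyst confirms useful nudges the score toward 1.0; each false positive
    nudges it toward 0.0. Higher-scoring feeds get ingestion priority."""

    def __init__(self, alpha: float = 0.1):
        self.alpha = alpha                    # smoothing: higher = reacts faster
        self.scores: Dict[str, float] = {}    # feed name -> current score

    def record(self, feed: str, useful: bool) -> None:
        prev = self.scores.get(feed, 0.5)     # unknown feeds start neutral
        outcome = 1.0 if useful else 0.0
        self.scores[feed] = (1 - self.alpha) * prev + self.alpha * outcome

    def priority_order(self) -> List[str]:
        """Feeds sorted best-first -- ingest these earliest / most often."""
        return sorted(self.scores, key=self.scores.get, reverse=True)
```

The EWMA means a historically good feed isn't tanked by one bad day, but a feed that degrades steadily drifts down the priority list on its own.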
Tighter integration with Splunk. I designed for MISP-as-source-of-truth, but in practice analysts spent most of their time in Splunk. A closer integration — surfacing MISP context inline on Splunk alerts via lookups — would have meant the threat intel I was ingesting actually got used in real triage, not just sat there waiting to be queried.
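The lookup half of that integration is mechanically simple. As a sketch (column names and the export function are my own invention, not a MISP or Splunk standard), you'd periodically export MISP attributes to a CSV lookup file that Splunk can join against alert fields:

```python
import csv
from typing import IO, Iterable, Mapping

def misp_to_splunk_lookup(attributes: Iterable[Mapping], out: IO[str]) -> None:
    """Flatten MISP attributes into a Splunk lookup CSV so alert searches
    can enrich indicator fields inline via a lookup, instead of requiring
    analysts to pivot into MISP manually."""
    writer = csv.DictWriter(
        out, fieldnames=["indicator", "type", "threat_level", "misp_event"]
    )
    writer.writeheader()
    for attr in attributes:
        writer.writerow({
            "indicator": attr["value"],
            "type": attr["type"],
            "threat_level": attr.get("threat_level", "unknown"),
            "misp_event": attr.get("event_id", ""),
        })
```

On the Splunk side, a search could then do something like `| lookup misp_indicators indicator AS dest_ip`, so MISP context appears on the alert itself — which is where the analysts actually are.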
Outcome
By the end of the internship: a reproducible MISP environment with daily auto-updated indicators from a curated set of community feeds, plus documentation that the next contributor could actually use. I presented the work to senior IT staff at the end-of-internship review.
The bigger outcome was personal. Four months embedded with a working SOC taught me what I'd been trying to learn from books and labs for two years — what security operations actually look like when production is on the line. The technical skills (Splunk, MISP, Docker, Python automation) transferred cleanly into my next chapter of work in AI security and adversarial evaluation. The operational skills — triage discipline, runbook thinking, how to write a detection that survives a real shift — are the ones I rely on most.
Where this fits
This work lives in my SOC & Defensive Security Portfolio on GitHub, alongside other defensive security case studies and methodology notes. Specific findings, indicators, internal architecture details, and any client-confidential material are not published — that's all under standard internship NDA. The methodology and lessons are public; the proprietary details remain with the team.
If you're a hiring manager evaluating candidates for an entry-level SOC role and want to talk about what this work taught me, the fastest way is the contact form or email.