← Home

Now

A snapshot of where my attention is right now — what I'm working on, building, learning, and open to. Updated as things change.

What I'm doing

AI Evaluation & Safety contract work — ongoing red-teaming and adversarial-testing engagements against frontier LLMs on independent platforms. Most of it is under NDA, but the methodology lives in my AI Red Teaming Frameworks case study: jailbreak taxonomy, prompt-injection testing framework, automated adversarial suite, and multimodal attack-surface analysis.

In parallel, I'm actively job-searching for full-time roles where the AI-security and traditional-SOC sides of my work both get used. The thesis I keep coming back to: I bring enterprise security rigor to AI safety, and AI-native thinking to security operations. Most postings ask for one side or the other; the strongest fit is the role that wants both.

What I'm building

Three things are getting most of my evening / weekend cycles right now:

  • Adverse Insight — a 3-agent contract risk analyzer (Streamlit + OpenAI) that runs Extractor → Adversarial Scorer → Negotiator passes over a contract draft. Live demo. Currently iterating on the scorer's calibration and adding edge-case test contracts.
  • AI Red Teaming Frameworks — the methodology repo that backs my red-team contract work. Keeping the jailbreak taxonomy current as new attack patterns appear (agent-abuse vectors and tool-use exploitation are the active frontier right now). The interactive Pattern Detector demo uses an early slice of it.
  • This portfoliochimaukachukwu.com is treated as a real product. Recent ships: the red-team demo, three case studies, a branded 404, sitemap and feed, an accessibility pass (skip links, focus rings, AA contrast across the dark theme), and a fresh OG card.

What I'm learning

Two threads, deliberately kept in parallel because I think they sharpen each other:

  • Frontier AI red team. Newer attack surfaces — agent-abuse and tool-use exploitation, multimodal prompt injection (image + text composite payloads), persistent-memory attacks on stateful agents. The taxonomy I maintain has had to grow to accommodate these; the regex-pattern library lags the structural categories on purpose.
  • SOC and detection depth. Splunk SPL practice, detection-engineering thinking (writing the kind of rules my Hobby Lobby MISP work would have needed), AWS and Azure security configurations, and getting fluent in tools I touched at the corporate level but didn't own. The CCEP and CTIGA pathways from Red Team Leaders have been useful pressure here.

The bet is that fluency in both sides — and the ability to translate between them — is a more durable skill than depth in either one alone. Every model upgrade obsoletes specific exploits; structural thinking about attack-and-defense doesn't.

What I'm open to

I'm actively interviewing for the following kinds of roles, weighted roughly evenly:

  • AI Security Analyst / AI Red Teamer (offensive or evaluation-focused)
  • SOC Analyst (Tier 1 / 2) at organizations that take AI threats seriously
  • GRC / AI Governance roles (NIST AI RMF, ISO 42001, OWASP LLM Top 10 territory)
  • Hybrid roles that explicitly span both sides — the "bridge" hire

U.S.-based remote, hybrid (Oklahoma City metro), or on-site for the right team. Green card holder, eligible for Secret-level clearance (don't currently hold an active one).

If anything here sounds like a fit for your team — or you'd just like to chat about AI red teaming, the AI-security/SOC bridge, or any of the projects above — reach out. The fastest channel is email: chima.ukachukwu.sec@gmail.com.

About this page

This is a /now page — a convention started by Derek Sivers: a single page that summarizes what you're currently focused on, in lieu of having to explain it in every reply to "how have you been?" If you want a sense of where I'd be if we ran into each other this month, this page is the answer.