Jason Pubal

Building an Application Security Assistant with GenAI

Empower your development team with a self‑service AppSec advisor: this post shows you how to build a Generative AI chatbot that delivers instant, policy‑grounded guidance on secure coding and internal processes. You’ll learn how Retrieval‑Augmented Generation (RAG) uses FAISS to index OWASP and company documents, how LangChain orchestrates loading, chunking, embedding, and query handling, and how a carefully crafted PromptTemplate shapes answers to be precise, actionable, and traceable. We walk through a Python proof of concept, from dropping PDFs into a folder to an interactive chat loop, and explore real‑world integrations, including embedding the bot in Slack or MS Teams and adding it to CI/CD pipelines. By the end, you’ll have both the code blueprint and the strategic vision to roll out your own enterprise AppSec chatbot.
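
As a rough sketch of the pipeline the post builds (the docs folder name, chunk sizes, and model are placeholder assumptions, not the post's exact code), a minimal LangChain + FAISS RAG loop might look like this:

```python
# Minimal RAG sketch: index PDFs with FAISS, answer questions with LangChain.
# Assumes langchain, langchain-community, langchain-openai, faiss-cpu, and pypdf
# are installed and OPENAI_API_KEY is set in the environment.
from langchain_community.document_loaders import PyPDFDirectoryLoader
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain_community.vectorstores import FAISS
from langchain_openai import OpenAIEmbeddings, ChatOpenAI
from langchain_core.prompts import PromptTemplate

# 1. Load and chunk the OWASP/policy PDFs dropped into ./docs
docs = PyPDFDirectoryLoader("docs/").load()
chunks = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100).split_documents(docs)

# 2. Embed the chunks and build a FAISS index
index = FAISS.from_documents(chunks, OpenAIEmbeddings())

# 3. Prompt template that grounds answers in the retrieved context
prompt = PromptTemplate.from_template(
    "You are an application security advisor. Using only the context below, "
    "give precise, actionable guidance and cite the source document.\n\n"
    "Context:\n{context}\n\nQuestion: {question}\nAnswer:"
)
llm = ChatOpenAI(model="gpt-4o-mini")  # model choice is an assumption

# 4. Interactive chat loop: retrieve relevant chunks, then ask the LLM
while True:
    question = input("AppSec question (or 'quit'): ")
    if question.lower() == "quit":
        break
    context = "\n\n".join(d.page_content for d in index.similarity_search(question, k=4))
    print(llm.invoke(prompt.format(context=context, question=question)).content)
```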

Read More
Jason Pubal

From Whiteboards to LLMs: Automating STRIDE Threat Models with GenAI

Threat modeling is critical—but manual approaches are slow and hard to scale. STRIDE GPT is an open-source GenAI tool that generates high-quality threat models from architectural context using the STRIDE framework. In this post, we’ll show how to run STRIDE GPT, use it with a demo app, and explore the prompt engineering that makes it work.
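
STRIDE GPT's real prompts live in its repository; purely as an illustration of the idea, asking an LLM for a STRIDE-style threat model might look like the sketch below (the system description, model name, and JSON shape are placeholders, not STRIDE GPT's implementation):

```python
# Illustrative only: ask an LLM for a STRIDE threat model of a described system.
# This is not STRIDE GPT's code; the model name and output format are assumptions.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

architecture = """
Internet-facing Flask web app behind an AWS ALB; authenticates users via OIDC;
stores order data in PostgreSQL (RDS); calls a third-party payments API over HTTPS.
"""

prompt = f"""You are a threat modeling expert. Using the STRIDE framework
(Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service,
Elevation of Privilege), identify threats for the system below. Return a JSON
object with a "threats" list; each entry has "stride_category", "threat", and
"mitigation".

System description:
{architecture}"""

response = client.chat.completions.create(
    model="gpt-4o-mini",
    response_format={"type": "json_object"},
    messages=[{"role": "user", "content": prompt}],
)
print(json.dumps(json.loads(response.choices[0].message.content), indent=2))
```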

Read More
Jason Pubal

Prompt Engineering in Cybersecurity: From Fundamentals to Advanced Techniques

Prompt engineering is a critical skill for cybersecurity professionals looking to leverage generative AI effectively and safely. By crafting clear, context-rich prompts, teams can enhance tasks like vulnerability analysis, secure code generation, and threat modeling—turning LLMs into powerful tools for automation, insight, and resilience.
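
To make the idea concrete, here is an illustrative contrast between a vague prompt and a context-rich one for vulnerability analysis (the code snippet, prompt wording, and model are invented for this example, not taken from the post):

```python
# Illustration of a context-rich prompt for vulnerability analysis.
from openai import OpenAI

client = OpenAI()

code_snippet = """
def get_user(conn, username):
    return conn.execute(f"SELECT * FROM users WHERE name = '{username}'")
"""

# A vague prompt leaves the model guessing about scope, depth, and output format:
vague_prompt = "Is this code secure?"

# A context-rich prompt sets the role, the task, and the expected structure:
context_rich_prompt = f"""You are an application security engineer reviewing Python code.
Task: identify vulnerabilities in the snippet below, explain the impact,
map each finding to a CWE, and suggest a fixed version of the code.

{code_snippet}"""

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": context_rich_prompt}],
)
print(response.choices[0].message.content)
```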

Read More
Jason Pubal

How Do LLMs Work?

Ever wondered how Large Language Models like ChatGPT actually work? They’re just predicting the next token—one step at a time! Learn how LLMs generate text, handle context, and why they sometimes hallucinate.
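
As a quick, minimal sketch of next-token prediction (using GPT-2 via Hugging Face transformers as a stand-in small model, not necessarily what the post uses), you can inspect the probabilities a model assigns to candidate next tokens:

```python
# Sketch of next-token prediction with a small open model (GPT-2).
# Requires: pip install transformers torch
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "SQL injection is a vulnerability that allows an attacker to"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    logits = model(input_ids).logits  # shape: (1, seq_len, vocab_size)

# The model assigns a probability to every token in its vocabulary;
# generation just picks one token and repeats the process.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)

for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(token_id.item()):>12}  {prob.item():.3f}")
```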

Read More
Jason Pubal

Running a Local LLM with LM Studio and Connecting via Chatbox on Mobile

This guide walks through setting up LM Studio to run a local LLM and connecting to it from a mobile device with Chatbox. Running an LLM locally brings privacy, offline access, and reduced latency. You’ll install LM Studio, start its built-in API server, and interact with the model from your phone through Chatbox, giving you a private, customizable AI assistant running entirely on your own machine.
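
Chatbox talks to LM Studio through its OpenAI-compatible local server (typically http://localhost:1234/v1, but check your LM Studio settings); the same endpoint can be exercised from Python, as in this sketch where the model identifier is a placeholder:

```python
# Talking to LM Studio's local server with the OpenAI Python client.
# The endpoint below is LM Studio's usual default; adjust to your settings.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:1234/v1",  # LM Studio's local OpenAI-compatible endpoint
    api_key="lm-studio",                  # any non-empty string; no real key is needed
)

response = client.chat.completions.create(
    model="local-model",  # placeholder: use the model identifier shown in LM Studio
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Why run an LLM locally instead of in the cloud?"},
    ],
)
print(response.choices[0].message.content)
```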

Read More
Jason Pubal

Getting Started with the OpenAI API

The OpenAI API provides access to powerful AI models like ChatGPT, enabling you to integrate advanced natural language processing capabilities into your cybersecurity applications. Whether you want to build a chatbot, generate text, or summarize information, the OpenAI API is a great tool to get started with. This guide will walk you through setting up an OpenAI account, generating an API key, and writing a simple Python script to interact with the API.
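
A minimal script of the kind the post describes might look like the following, assuming the openai Python package (v1 or later) is installed and OPENAI_API_KEY is set in your environment; the model name is an assumption:

```python
# Minimal OpenAI API example: summarize a vulnerability finding.
# Requires: pip install openai, with OPENAI_API_KEY set in your environment.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY automatically

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: any chat-capable model works here
    messages=[
        {"role": "system", "content": "You are a concise security analyst."},
        {"role": "user", "content": "Summarize the risk of reflected XSS in two sentences."},
    ],
)
print(response.choices[0].message.content)
```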

Read More