This is my living security, privacy, and AI journal.

Welcome to my security and privacy journal

This is a timeline of my journey in security, privacy, and AI: certifications, small projects, and things I’ve learned.


Not polished case studies—battle scars, progress, receipts.

WHY

As a designer and engineer I want the software products and AI solutions I build to be trusted, stable, and safe from the start.


Through this journal, I explore how to create tools and systems that are usable, trustworthy, and compliant.


To deepen this work, I'm also experimenting with how to build machine learning models and implement AI tools effectively and responsibly.

SECURITY / PRIVACY

Read Security + Privacy Product Playbook →

Updated every week with new content I’m learning from my Security+ and IAPP CIPT courses.

Security+: In Progress

Gives me the foundation to protect systems.

IAPP CIPT: In Progress

Helps me design with privacy baked in — not patched on later.

MACHINE LEARNING

📌 Overview of the ML Process - 9/12/25

📌 Stream of design thoughts - 9/14/25

Keeping the User in the Loop in Cursor and AI Tooling

  • Motion signals activity - Animation is important in the UI design of AI agents, especially for telling the user when an action is in progress; there needs to be a loading animation.

    • Ex: The “thinking” copy was interesting; it let me know the agent was processing my information before executing. I want to research more about that kind of human-like reassurance.

    • Real-time activity logs give confidence - Letting the user know exactly what’s going on helps demystify the “magic” of AI and gives them confidence the AI is working, because they can see it actively processing. It also lets the developer keep up with what the AI is doing in real time.

    • Notifications - The user has the freedom to move on to other tasks while the AI agent processes their request.

    • Frustration: At times I was unsure whether the AI agent was actively taking my request. Cursor’s small three-dot jump animation was way too subtle for me to see. I had to ask the AI whether it had started the process; it said yes, then showed me in real time everything it was doing.

    • How UI state affects user understanding (see the sketch after this list):

      • No motion + no context + long wait time = did not receive input

      • Motion + no context + long wait time = input received, but possibly stuck on something

      • Motion + consistent context + long wait time = input received, process naturally takes long

  • Agent performance-related metrics

    • Letting the user know certain metrics that could influence the user experience (e.g. 88% of context used).
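
This isn’t any real Cursor API; it’s just a minimal Python sketch, with hypothetical names, that encodes the state-to-interpretation mapping above:

    # Hypothetical sketch: map observable agent UI signals to what the user likely concludes.
    # None of these names come from Cursor or any real tool; they just encode the list above.
    def interpret_agent_state(motion: bool, context: bool, long_wait: bool) -> str:
        if long_wait and not motion and not context:
            return "did not receive input"
        if long_wait and motion and not context:
            return "input received, but possibly stuck on something"
        if long_wait and motion and context:
            return "input received, process naturally takes long"
        return "working normally"

    # Example: the frustrating case I hit - motion too subtle to notice and no status copy.
    print(interpret_agent_state(motion=False, context=False, long_wait=True))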

📌 Building and training my first model (that doesn't know the difference between cubism and surrealism) - 9/16/25

Worked on following the instructions I took down while watching the Fast.AI 1.1 video.


Started following the instructions, then came across a DuckDuckGo API error. It turns out DuckDuckGo doesn’t like scrapers, so I thought about how to get data into the Kaggle notebook just to get some kind of result from the model.


Decided to upload my own photos to create an AI Art Critic that can tell the difference between different art styles. I created 4 different folders with 5 images each, for a dataset of 20 images, which is a small set, but it was something to work with.


With this I came across a “ValueError: This DataLoader does not contain any batches” error. I did some debugging and the images were present, but Kaggle kept displaying the error that there weren’t any batches to load.


Found out it was because of my smaller dataset: with bs=32, there weren’t enough training images to fill even one batch (the training DataLoader drops incomplete batches), so Fast.AI wasn’t registering my images. I changed bs=32 to bs=4 to accommodate the smaller data size.
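
For reference, here’s a minimal sketch of that data-loading step, assuming the DataBlock pattern from the lesson notebook; the path and folder names are placeholders for my four art-style folders, and the exact transforms may differ from what I actually ran:

    from fastai.vision.all import *

    # Placeholder path: one subfolder per art style (e.g. cubism/, surrealism/), 5 images each.
    path = Path('/kaggle/input/art-styles')

    dls = DataBlock(
        blocks=(ImageBlock, CategoryBlock),
        get_items=get_image_files,
        splitter=RandomSplitter(valid_pct=0.2, seed=42),
        get_y=parent_label,                        # folder name becomes the label
        item_tfms=[Resize(192, method='squish')],
    ).dataloaders(path, bs=4)                      # bs=4 instead of 32 so the tiny training set still fills batches

    dls.show_batch(max_n=8)                        # sanity check: view some sample images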


With this I was able to view some images from the dataset, so I moved on to training the model using the code in the Fast.AI Kaggle notebook.
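
The training call itself is the standard fastai pattern from that notebook; I’m assuming resnet18 and three epochs here, which may not match the exact settings I ran:

    from fastai.vision.all import *

    # `dls` is the DataLoaders built in the sketch above.
    learn = vision_learner(dls, resnet18, metrics=error_rate)
    learn.fine_tune(3)   # prints train/valid loss and error_rate for each epoch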


My results were terrible, as expected from such a small dataset: overfitting.

[Image: Sample images identified from the dataset]

[Image: Model training results]

The error rate did start to get better as training went through more iterations, though.


I tried testing my model on other images via URL. I used 2 Cubism images and both came back as ‘Surrealism’ with over 0.9 confidence, which is obviously wrong. Still, it was interesting to load my own dataset, create my model, view the results, and even test it on new data, even though the predictions were wrong.
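
A minimal sketch of that URL test, continuing from the learner above (the URL is a placeholder, not the actual image I used):

    from fastai.vision.all import PILImage
    from fastdownload import download_url

    # Download a candidate image by URL, then ask the model for its best guess.
    download_url('https://example.com/cubism-painting.jpg', 'test.jpg')  # placeholder URL

    pred, pred_idx, probs = learn.predict(PILImage.create('test.jpg'))
    print(f"Prediction: {pred}, confidence: {probs[pred_idx]:.2f}")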

More insights to come! ❤️ 🛠️

©2025 Brandi Nichols. All Rights Reserved.
