AI Enthusiast

I use AI as part of a deliberate stack: coding in the editor, reasoning in the browser, models on the desktop, and automation in the background. Here is the tooling I rely on, how it fits together, and what I have shipped with it.

Core stack

Primary IDE & assistant

Cursor

My main vehicle for coding and day-to-day AI-assisted work inside the repo. Cursor connects to models from whichever providers you enable in settings (for example Anthropic, OpenAI, and others, depending on plan and region). There is no single fixed “model under the hood”; you choose what runs per task.

Research & synthesis

ChatGPT

General-purpose reasoning: exploring adjacent ideas, analyzing long texts, comparing options, and filtering signal from noise before I commit to an approach in code or design.

Visual manipulation

Higgsfield

Image and visual workflows: generation and manipulation when the brief needs fast visual iteration beyond static comps.

Local open-source models

Ollama

Running LLMs locally for privacy-sensitive experiments, offline prompts, and apps that call a local API without sending data to the cloud.
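
A minimal sketch of what that looks like, assuming Ollama is serving on its default port (11434) and a model tag such as llama3 has already been pulled; the prompt and model name are placeholders:

```typescript
// Ask a locally running Ollama model for a completion via its HTTP API.
// Nothing here leaves the machine: the request goes to localhost only.
async function askLocal(prompt: string): Promise<string> {
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "llama3", // any locally pulled model tag works here
      prompt,
      stream: false,   // one JSON object instead of a token stream
    }),
  });
  if (!res.ok) throw new Error(`Ollama returned ${res.status}`);
  const data = await res.json();
  return data.response; // the generated text
}

askLocal("Summarize this draft without sending it to the cloud.")
  .then(console.log)
  .catch(console.error);
```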

Process automation

n8n

Workflow automation for chaining services, schedules, and webhooks so repetitive glue work happens without babysitting scripts.
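
As a sketch of the webhook side: an n8n workflow that starts with a Webhook trigger node can be kicked off from any script. The port is n8n's local default; the path and payload below are hypothetical and would match whatever the trigger node is configured with:

```typescript
// Trigger an n8n workflow through its Webhook node.
// http://localhost:5678 is n8n's default local address; the
// "deploy-digest" path and payload shape are made-up examples.
async function triggerWorkflow(): Promise<void> {
  const res = await fetch("http://localhost:5678/webhook/deploy-digest", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ project: "portfolio", event: "deploy" }),
  });
  if (!res.ok) throw new Error(`n8n webhook failed: ${res.status}`);
}

triggerWorkflow().catch(console.error);
```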

Terminal agent

OpenCode

The AI assistant I use from the terminal for agent-style runs (opencode run "…"): multi-step tasks, repo-aware edits, and automation-friendly workflows alongside the editor.
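
Because it runs from the terminal, it also scripts cleanly. A minimal sketch of folding an agent run into a larger automation; the prompt and repo path are placeholders:

```typescript
import { execFile } from "node:child_process";

// Drive an agent-style OpenCode run from a script so repo-aware
// tasks can slot into larger automations. Prompt and cwd are examples.
execFile(
  "opencode",
  ["run", "Add JSDoc comments to every exported function in src/utils.ts"],
  { cwd: "/path/to/repo" }, // run inside the target repository
  (err, stdout, stderr) => {
    if (err) {
      console.error("opencode run failed:", stderr);
      return;
    }
    console.log(stdout); // the agent's report of what it did
  },
);
```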

How it fits together

Cursor and my project files sit at the center. Around that: local models (Ollama), automation (n8n), research (ChatGPT), terminal agents (OpenCode), and visual tools (Higgsfield). Experiments like Google’s newer surfaces sit beside this loop.

[Diagram: how the stack fits together. Cursor sits on the code; local and cloud tools plug in around it, with experiments such as Antigravity and Gemini alongside.]

Additional AI experience

Google Antigravity

Google’s agent-first development surface. I use it to explore how autonomous agent, editor, and browser loops compare to my usual Cursor + terminal workflow.

Gemini Canvas

Long-form reasoning and canvas-style collaboration inside Gemini for drafts, structured breakdowns, and visual planning alongside other tools.

Stitch (by Google)

Google’s UI experiment tool for turning prompts into interface directions. Good for quick explorations before refining in Figma or code.

Certificates

Generative AI programs on Coursera (IBM, SkillUp). Open each PDF to view the full certificate.

Generative AI for UI/UX Design (Specialization)

Coursera · three-course specialization · Dec 13, 2025
Covers introduction & applications, prompt engineering basics, and the future of UX/UI design.

Generative AI: Prompt Engineering Basics

IBM · Coursera · Dec 11, 2025

Generative AI: The Future of UX/UI Design

SkillUp · Coursera · Dec 13, 2025

Generative AI: Introduction and Applications

IBM · Coursera · Dec 5, 2025

AI-assisted projects