AI-Assisted Development: A Crash Course

From Prompting Fundamentals to Advanced Persona-Driven Workflows

Antreas Antoniou

Introduction to the Players

Today we'll focus on Cursor, an "AI-first" code editor built for deep AI collaboration.

But it's helpful to know the other major players:

  • Codex Suite (OpenAI): Not just a single model, but a suite of tools. This includes a powerful cloud-based agent for tackling large tasks and the open-source Codex CLI for pairing with a lightweight agent in your terminal.
  • Claude Code (Anthropic): A powerful CLI collaborator you install directly into your terminal. It uses agentic search to understand your entire codebase and can handle entire workflows from issue to PR.

Why Cursor for This Tutorial?

While other tools are powerful, Cursor is unique in its focus on a deep, project-aware, and configurable AI partnership.

It treats the AI not as a command-line tool, but as a genuine collaborator that can be taught and molded.

The core of this is the .cursorrules file, which allows us to move beyond simple prompting into creating persistent, persona-driven assistants. This is what we'll be mastering today.

Principles of Effective Prompting

How LLMs "Think": A Practical Mental Model

It's All Statistics: An LLM isn't thinking; it's a powerful next-token prediction engine. Its goal is to find the most statistically likely response based on the massive amount of text and code it was trained on.

Simulated Understanding: The "understanding" you perceive is an emergent property of these patterns. This is why psychological cues work. By providing context, you guide the model toward a better statistical neighborhood.

Finding the Vibe (The Art of Context): Think of it like tuning a radio or finding a dance partner. Your goal is to provide the perfect context to lead the AI to the right answer. This includes:

  • The right code snippets.
  • A clear instruction and goal.
  • A specific persona or role.
  • Examples of the desired output.

Unlocking Your Assistant's Potential

Global vs. Project Rules: A Strategy

Both rule sets are critical for efficiency. The key is to layer them correctly.

Global Rules (User Settings)

Define your universal principles and personal style here. How do you, the developer, always want the AI to behave? (e.g., "Always use pathlib for paths," "Never use single-letter variable names.")

Project Rules (.cursorrules)

Define project-specifics here. This file overrides the global rules and should contain the project's tech stack, coding conventions, and the specific AI persona for this context.
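As a sketch, a minimal project `.cursorrules` for a hypothetical FastAPI service might look like the following (every name and convention here is illustrative — adapt it to your project):

```
# Persona
You are a Senior Python Engineer working on this FastAPI service.

# Tech Stack
- Python 3.12, FastAPI, SQLAlchemy, pytest
- Always use pathlib for filesystem paths

# Conventions
- Format all code with black; use conventional commit messages
- Ask clarifying questions before any large refactor
```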

Live Demo: Cursor Deep Dive

Now, I'll walk you through my personal setup and workflow:

  • Cursor Settings: How I configure the IDE itself for maximum efficiency.
  • Environment & Shortcuts: Useful configurations in my shell and key shortcuts I use daily.
  • My .cursorrules in Action: A look at my global rules and how I use the rule_templates to switch personas for different tasks.

Anatomy of a Good Rules File

A good rules file goes beyond simple instructions. It defines a persona, a philosophy, and a workflow. The possibilities are endless, but here are some key sections to consider:

  • ➔ Persona: Who is the AI? ("You are a Senior DevOps Engineer...")
  • ➔ Interaction Guidelines: How should it behave? ("Be concise," "Ask clarifying questions...")
  • ➔ Core Tech Stack: What tools should it prefer? ("Always use pytest for tests," "Default to FastAPI for new services...")
  • ➔ Coding Style: Enforce project standards. ("Format code with black," "Use conventional commit messages...")
  • ➔ Personal Pitfalls: Help you with your own habits. ("If I am over-engineering, remind me of the MVP principle...")
  • ➔ Project-Specific Data: Give it knowledge it couldn't have otherwise. ("The user authentication service is located at auth.service.internal...")

The Self-Improving Assistant

The most powerful .cursorrules files include a directive for the AI to help improve its own rules.

**Proactive Rule Updates:** If you identify a significant piece of feedback... you MUST proactively propose an update to this .cursorrules file.

This creates a feedback loop where your assistant learns and grows with the project.

An AI for Your Brain: Tailoring Rules to YOU

This is where AI assistance becomes truly transformative. Codify your own context, goals, and even your values into the rules to build a partner that understands and complements your unique workflow.

  • ➔ Your Background: Give the AI your resume. "My background is in physics, not front-end. Explain CSS concepts step-by-step using analogies from physics if possible."
  • ➔ Your Goals: Tell the AI your "why". "My primary goal is to learn Rust. Prioritize idiomatic Rust solutions and explain the concepts as we go."
  • ➔ Your Cognitive Style: How do you work best? "I'm a visual thinker; you must generate a Mermaid diagram when we discuss architecture."
  • ➔ Your Pitfalls: What are your common traps? "I have ADHD; if I go on a tangent, gently guide me back to our current objective." or "I struggle with perfectionism; if I debate minor details, remind me that 'perfect is the enemy of good'."
  • ➔ Your Value System: What principles guide your work? "My highest value is user privacy. Always challenge any feature that collects user data and propose the most private-by-default implementation." or "I value open-source; prefer solutions that use open-source libraries over proprietary ones."

Why AI Chats "Age"

The Golden Age: At the start of a chat, the context is small and clean. The AI is highly focused and effective.

The "Lost in the Middle" Problem: As the chat history grows, models tend to pay more attention to the very beginning and the very end of the conversation. This is a known architectural limitation, and critical details mentioned in the middle can get "lost" or ignored.

Context Overload (often loosely called "catastrophic forgetting"): Eventually, the context becomes so large and noisy that performance degrades significantly. The AI forgets key constraints or reverts to generic behavior. This is your cue to start a new session.

When and How to Start a New Session

When to Reset: Start a new session when the task changes significantly, or when you notice the AI is consistently forgetting instructions. Don't be afraid to start fresh!

The "Context Carry-Over" Technique:

  1. Use the Export Chat feature to get a full transcript of your conversation.
  2. Open the transcript and manually copy the most critical pieces of context: key decisions, final code snippets, and important constraints.
  3. This curation is the most important step. A small, highly relevant context is far more powerful than a large, noisy one.
  4. Paste this curated context as the very first prompt in your new chat session to get the AI up to speed instantly.

Small Projects & "From Scratch"

Context Strategy: Full Ingestion.

For small projects, the AI can and should ingest every file. Use the @ symbol in Cursor to add the whole directory to the context.

Pro-Tip: For speed, you can write a simple script (e.g., Python, shell) to concatenate all relevant files (.py, .md, etc.) into a single .txt file. You can then provide this single file to the AI, which is much faster than having it read files one by one.
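A minimal sketch of such a concatenation script (the function name, output filename, and suffix list are assumptions — adjust them to your project):

```python
from pathlib import Path

def ingest_content(root: str, suffixes=(".py", ".md"), out_file="context.txt") -> Path:
    """Concatenate all matching files under `root` into a single text file."""
    root_path = Path(root)
    parts = []
    for path in sorted(root_path.rglob("*")):
        if path.is_file() and path.suffix in suffixes:
            # Label each chunk with its relative path so the AI can cite files.
            parts.append(f"\n--- {path.relative_to(root_path)} ---\n")
            parts.append(path.read_text(encoding="utf-8", errors="replace"))
    out_path = Path(out_file)
    out_path.write_text("".join(parts), encoding="utf-8")
    return out_path
```

You can then drop the resulting file into the chat (or `@`-mention it) as a single context artifact.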

Medium-Sized Projects

Context Strategy: Efficient Partial Ingestion.

The entire codebase might be too large to fit in the context window.

Focus on providing the most relevant parts: the core modules you're working on, the API definitions, the database schema, and key README files.

Again, a script can be used to gather these key files into a single context summary for the AI.

Large-Scale & Legacy Projects

Context Strategy: The "Directory Map" Approach.

Full ingestion is impossible and undesirable.

The Strategy: Use a script to map the entire directory structure, including function and class signatures from key files, into a single markdown file (directory_map.md).
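One way to sketch such a mapping script for Python codebases, using the standard-library `ast` module (the function name and output format are assumptions):

```python
import ast
from pathlib import Path

def map_directory(root: str, out_file: str = "directory_map.md") -> Path:
    """Summarise every .py file under `root` as markdown with top-level signatures."""
    lines = ["# Directory map"]
    for path in sorted(Path(root).rglob("*.py")):
        lines.append(f"\n## {path.relative_to(root)}")
        tree = ast.parse(path.read_text(encoding="utf-8"))
        for node in tree.body:
            if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
                args = ", ".join(a.arg for a in node.args.args)
                lines.append(f"- def {node.name}({args})")
            elif isinstance(node, ast.ClassDef):
                lines.append(f"- class {node.name}")
                # List methods one level deep, without their argument lists.
                for item in node.body:
                    if isinstance(item, (ast.FunctionDef, ast.AsyncFunctionDef)):
                        lines.append(f"  - def {item.name}(...)")
    out = Path(out_file)
    out.write_text("\n".join(lines), encoding="utf-8")
    return out
```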

Workflow:

  1. Start your session by giving the AI the directory_map.md. It now has a high-level "map" of the entire codebase.
  2. When you need to work on a specific feature, ask the AI: "Based on the map, which files are most relevant for implementing X?"
  3. Add only those specific files to the context. This allows you to navigate massive codebases effectively.

Configuring Your Cursor IDE

Beyond the .cursorrules, you can fine-tune the editor itself in the settings:

  • AI Model Selection: Choose your preferred models (e.g., GPT-4o, Claude 3 Opus).
  • Temperature Settings: Adjust the "creativity" vs. determinism of the AI.
  • Editor & Linter Integrations: Configure format-on-save and other helpers.
  • Global Rules File: Point to a global .cursorrules file to use as a default.

Choosing Your AI Collaborator

There's no single "best" model. Use the right tool for the job.

Gemini 2.5 Pro (Daily Driver)

Your go-to for most tasks. Excellent for its large context window, making it ideal for ingesting and reasoning over entire codebases or large documents.

GPT-4o (The Planner)

An exceptional planner and reasoner. Use it for breaking down complex problems, planning software architecture, or for general-purpose creative and logical tasks.

Claude 4 Sonnet (Second Opinion)

When your primary model gets stuck or provides a confusing answer, switch to Sonnet. It often provides a different perspective that can help you get unstuck.

Claude 4 Opus (The Specialist)

The model for when things get serious. Use it for your most difficult software engineering problems. While its context handling may be less broad, its implementation quality is top-tier.

Supercharging Your Workflow

The real power comes from teaching your assistant to use tools you build for it.

1. Create Your Own Scripts

Build helpers like ingest_content.py or map_directory.py. Then, teach the AI to use them in your .cursorrules file, allowing you to trigger complex actions with natural language.

2. Connect to External Services

For advanced uses, Cursor can interact with external servers (MCPs) you create. This gives the AI persistent state, memory, and access to other APIs, making it a true extension of your workflow.

The 10x Workflow: A Structured Approach

To gain massive efficiency, you need a structured, responsive partnership with your AI. This is a workflow that balances AI speed with human oversight.

Phase 1: Planning & Alignment

  1. Discuss the Feature: Massage the ideas back and forth. Crucially, tell the AI: "Do not act until I approve the plan."
  2. Demand a Plan: Ask for a plan, architecture, and stack if it's complex.
  3. Create Milestones: Have the AI create a features_built/feature_name/milestones.md file to track its own progress.
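For instance, a milestones file for a hypothetical feature might start out like this (the feature name and checklist items are illustrative):

```
# milestones.md — feature: user-search (hypothetical example)

- [x] Agree on plan and interfaces
- [ ] Write tests for search_users()
- [ ] Implement search_users(), one step at a time
- [ ] All tests pass
- [ ] Document and add a usage example
```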

Phase 2: Test-Driven Development

  1. Define Interfaces & Write Tests First: Decide on the function/class interfaces before implementation. Then, ask the AI to write tests.
  2. Skim-Review the Tests: Ensure the tests cover the main use cases and edge cases. This is your safety net.
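As a sketch of this phase, here is a hypothetical feature (`slugify`, an invented example) with its agreed interface and pytest-style tests written up front; the stub implementation shown is what the AI would later fill in properly:

```python
import re

# Interface agreed in planning for a hypothetical feature.
def slugify(text: str) -> str:
    """Lowercase `text` and join its alphanumeric words with hyphens."""
    return "-".join(re.findall(r"[a-z0-9]+", text.lower()))

# Tests written before trusting the implementation — your safety net.
def test_lowercases_and_hyphenates():
    assert slugify("Hello World") == "hello-world"

def test_strips_punctuation():
    assert slugify("Rust: The Book!") == "rust-the-book"

def test_empty_input():
    assert slugify("") == ""
```

Skim-reviewing these three tests takes seconds, yet they pin down case handling, punctuation, and the empty edge case before any implementation work begins.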

Phase 3: Implementation & Verification

  1. Step-by-Step Implementation: Ask the AI to develop code one step at a time. DO NOT SKIP THE REVIEW.
  2. Warning: Blindly accepting code is how you lose work. The AI may "fix" a trivial issue by deleting a complex module if left unsupervised.
  3. Run ALL Tests: After every significant change, run the full test suite. Repeat until all tests pass.

Warning: The "Ground Truth" Spiral

This is a catastrophic failure mode. It begins when a test fails and the AI is unsure what is correct: the test or the feature.

The Spiral:

  1. You tell the AI "fix the tests."
  2. The AI sees a flawed test and changes your working feature code to be incorrect to satisfy the bad test.
  3. This change causes other, correct tests to fail.
  4. You say "fix the new failing tests." The AI, now believing the feature is wrong, continues to "fix" or delete other features.

You are the Arbiter of Truth. You must explicitly tell the AI which part is correct.

Phase 4: Finalization

  1. Verify with Examples: Have the AI write simple execution examples (e.g., in if __name__ == "__main__":). Run them and inspect the output together.
  2. Document and Comment: Once everything is working, ask the AI to thoroughly document the code.
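A verification script for step 1 can be as small as this (the feature, `word_count`, is a hypothetical stand-in):

```python
# example_run.py — a throwaway verification script for a hypothetical feature.
def word_count(text: str) -> dict:
    """Count occurrences of each whitespace-separated word."""
    counts = {}
    for word in text.lower().split():
        counts[word] = counts.get(word, 0) + 1
    return counts

if __name__ == "__main__":
    # Run a concrete example and inspect the output together with the AI.
    sample = "the quick brown fox jumps over the lazy dog"
    print(word_count(sample))
```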

Workshop 1: 5-Minute Website

Goal: Build a cool, simple website about any topic you want.

Steps:

  1. Brainstorm (1 min): If you're stuck for ideas, ask your AI to generate some and "dance" with it until you find one you like.
  2. Build (4 mins): Ask the AI to generate the HTML, CSS, and JavaScript.
  3. Serve: Use Python's built-in web server (python -m http.server) or another language of your choice to serve the site locally.

Workshop 2: The Co-Developer

Goal: Tackle a more complex, multi-file task together.

We will brainstorm a feature for an existing codebase. I will drive, and you can follow along on your own machine, tackling the same problem or a similar one. We will apply the structured workflow we just learned to build and test the feature safely and efficiently.

Further Resources

System Prompt Examples:

Past Talks on LLM Prompting/Usage:

Deep Dive into LLMs:

Discussion & Questions

Q&A