From Prompting Fundamentals to Advanced Persona-Driven Workflows
Antreas Antoniou
Today we'll focus on Cursor, an "AI-first" code editor built for deep AI collaboration.
But it's helpful to know the other major players:
While other tools are powerful, Cursor is unique in its focus on a deep, project-aware, and configurable AI partnership.
It treats the AI not as a command-line tool, but as a genuine collaborator that can be taught and molded.
The core of this is the .cursorrules file, which allows us to move beyond simple prompting into creating persistent, persona-driven assistants. This is what we'll be mastering today.
(Example prompt: "…calculate_average that takes a list of numbers and returns a float.")

It's All Statistics: An LLM isn't thinking; it's a powerful next-token prediction engine. Its goal is to find the most statistically likely response based on the massive amount of text and code it was trained on.
Simulated Understanding: The "understanding" you perceive is an emergent property of these patterns. This is why psychological cues work. By providing context, you guide the model toward a better statistical neighborhood.
Finding the Vibe (The Art of Context): Think of it like tuning a radio or finding a dance partner: your goal is to provide the perfect context to lead the AI to the right answer.
Both rule sets are critical for efficiency. The key is to layer them correctly.
Global Rules: Define your universal principles and personal style here. How do you, the developer, always want the AI to behave? (e.g., "Always use pathlib for paths," "Never use single-letter variable names.")
Project Rules (.cursorrules): Define project specifics here. This file overrides the global rules and should contain the project's tech stack, coding conventions, and the specific AI persona for this context.
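As a sketch, a minimal project-level .cursorrules might look like the following (the stack, conventions, and persona below are hypothetical examples, not recommendations):

```markdown
# Persona
You are a senior Python backend engineer on this project.

# Tech Stack (hypothetical example)
- Python 3.11, FastAPI, PostgreSQL

# Conventions
- Always use pathlib for filesystem paths.
- Write tests with pytest; place them under tests/.

# Workflow
- Propose a plan before making multi-file changes.
```

The point is layering: anything stated here wins over your global rules for this project only.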
Now, I'll walk you through my personal setup and workflow:
.cursorrules in Action: A look at my global rules and how I use the rule_templates to switch personas for different tasks.

A good rules file goes beyond simple instructions. It defines a persona, a philosophy, and a workflow. The possibilities are endless, but here are some key sections to consider:
Tech stack (e.g., "Use pytest for tests," "Default to FastAPI for new services...")

Conventions (e.g., "Format with black," "Use conventional commit messages...")

Domain vocabulary (e.g., service names like auth.service.internal...)

The most powerful .cursorrules files include a directive for the AI to help improve its own rules.
**Proactive Rule Updates:** If you identify a significant piece of feedback... you MUST proactively propose an update to this .cursorrules file.
This creates a feedback loop where your assistant learns and grows with the project.
This is where AI assistance becomes truly transformative. Codify your own context, goals, and even your values into the rules to build a partner that understands and complements your unique workflow.
The Golden Age: At the start of a chat, the context is small and clean. The AI is highly focused and effective.
The "Lost in the Middle" Problem: As the chat history grows, models tend to pay more attention to the very beginning and the very end of the conversation. This is a known architectural limitation, and critical details mentioned in the middle can get "lost" or ignored.
Catastrophic Forgetting: Eventually, the context becomes so large and noisy that performance degrades significantly. The AI forgets key constraints or reverts to generic behavior. This is your cue to start a new session.
When to Reset: Start a new session when the task changes significantly, or when you notice the AI is consistently forgetting instructions. Don't be afraid to start fresh!
The "Context Carry-Over" Technique:
Use the Export Chat feature to get a full transcript of your conversation.

Context Strategy: Full Ingestion.
For small projects, the AI can and should ingest every file. Use the @ symbol in Cursor to add the whole directory to the context.
Pro-Tip: For speed, you can write a simple script (e.g., Python, shell) to concatenate all relevant files (.py, .md, etc.) into a single .txt file. You can then provide this single file to the AI, which is much faster than having it read files one by one.
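A minimal sketch of such a concatenation script (the extension list, labeling format, and output name are assumptions; adapt them to your project):

```python
"""Concatenate relevant project files into one context file for the AI."""
from pathlib import Path


def build_context(root: Path, extensions=(".py", ".md"), out_name="context.txt") -> Path:
    """Bundle matching files under root into a single labeled text file."""
    out_path = root / out_name
    parts = []
    for path in sorted(root.rglob("*")):
        if path.suffix in extensions and path.is_file() and path != out_path:
            # Label each file so the AI knows where each chunk came from.
            parts.append(
                f"# ===== {path.relative_to(root)} =====\n"
                + path.read_text(encoding="utf-8", errors="ignore")
            )
    out_path.write_text("\n\n".join(parts), encoding="utf-8")
    return out_path


if __name__ == "__main__":
    # Hypothetical usage: bundle the current project directory.
    build_context(Path("."))
```

You then attach the single context.txt to the chat instead of adding files one by one.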
Context Strategy: Efficient Partial Ingestion.
The entire codebase might be too large to fit in the context window.
Focus on providing the most relevant parts: the core modules you're working on, the API definitions, the database schema, and key README files.
Again, a script can be used to gather these key files into a single context summary for the AI.
Context Strategy: The "Directory Map" Approach.
Full ingestion is impossible and undesirable.
The Strategy: Use a script to map the entire directory structure, including function and class signatures from key files, into a single markdown file (directory_map.md).
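One way to sketch such a mapping script is with Python's ast module (the output format is an assumption, and this version only extracts top-level signatures from .py files):

```python
"""Build directory_map.md: every Python file plus its top-level signatures."""
import ast
from pathlib import Path


def signatures(path: Path) -> list:
    """Return top-level function/class signatures from one Python file."""
    try:
        tree = ast.parse(path.read_text(encoding="utf-8"))
    except (SyntaxError, UnicodeDecodeError, ValueError):
        return []  # Skip files that aren't valid Python source.
    sigs = []
    for node in tree.body:
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            args = ", ".join(a.arg for a in node.args.args)
            sigs.append(f"def {node.name}({args})")
        elif isinstance(node, ast.ClassDef):
            sigs.append(f"class {node.name}")
    return sigs


def build_map(root: Path) -> str:
    """Render a markdown map of the codebase, one section per file."""
    lines = ["# Directory Map", ""]
    for path in sorted(root.rglob("*.py")):
        lines.append(f"## {path.relative_to(root)}")
        lines.extend(f"- {sig}" for sig in signatures(path))
        lines.append("")
    return "\n".join(lines)


if __name__ == "__main__":
    Path("directory_map.md").write_text(build_map(Path(".")), encoding="utf-8")
```

The resulting markdown is tiny compared to the codebase, so it fits comfortably in the context window.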
Workflow:
Provide the AI with directory_map.md at the start of a new session. It now has a high-level "map" of the entire codebase.

Beyond the .cursorrules file, you can fine-tune the editor itself in the settings:
For example, you can specify a .cursorrules file to use as a default.

There's no single "best" model. Use the right tool for the job.
Your go-to for most tasks. Excellent for its large context window, making it ideal for ingesting and reasoning over entire codebases or large documents.
An exceptional planner and reasoner. Use it for breaking down complex problems, planning software architecture, or for general-purpose creative and logical tasks.
When your primary model gets stuck or provides a confusing answer, switch to Sonnet. It often provides a different perspective that can help you get unstuck.
The model for when things get serious. Use it for your most difficult software engineering problems. While its context handling may be less broad, its implementation quality is top-tier.
The real power comes from teaching your assistant to use tools you build for it.
Build helpers like ingest_content.py or map_directory.py. Then, teach the AI to use them in your .cursorrules file, allowing you to trigger complex actions with natural language.
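For instance, a .cursorrules entry teaching the assistant about such a helper might read like this (the trigger phrase and the exact invocation are illustrative, not a fixed syntax):

```markdown
## Tools
- When I say "map the repo", run `python map_directory.py` and read the
  resulting directory_map.md before answering architecture questions.
- When I say "bundle the project", run `python ingest_content.py` and use
  the generated context file as your source of truth.
```

After that, "map the repo" in chat is enough to trigger the whole action.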
For advanced uses, Cursor can interact with external servers (MCPs) you create. This gives the AI persistent state, memory, and access to other APIs, making it a true extension of your workflow.
To gain massive efficiency, you need a structured, responsive partnership with your AI. This is a workflow that balances AI speed with human oversight.
Have the AI maintain a features_built/feature_name/milestones.md file to track its own progress.

A catastrophic failure mode to watch for: it begins when a test fails and the AI is unsure which is correct, the test or the feature.
The Spiral:
You are the Arbiter of Truth. You must explicitly tell the AI which part is correct.
Ask the AI for small, self-contained scripts (e.g., guarded by if __name__ == "__main__":). Run them and inspect the output together.

Goal: Build a cool, simple website about any topic you want.
Steps:
Use Python's built-in server (python -m http.server) or another language of your choice to serve the site locally.

Goal: Tackle a more complex, multi-file task together.
We will brainstorm a feature for an existing codebase. I will drive, and you can follow along on your own machine, tackling the same problem or a similar one. We will apply the structured workflow we just learned to build and test the feature safely and efficiently.
System Prompt Examples:
Past Talks on LLM Prompting/Usage:
Deep Dive into LLMs:
Q&A