Prompt Comparison

See how small wording changes reshape output. Compare vague requests with the precise prompts that produce reliable results.

Be Specific

Vague requests produce vague results. Specific requests produce specific results.

Avoid
"Figure out why this code isn't working."
Better
"Deconstruct this function line by line to identify the failure point."

Think in Flows

Instead of single actions, request complete sequences with proper lifecycle management.

Avoid
"Make a function that connects to the database."
Better
"Design the complete lifecycle for database connections: startup, connection pooling, query execution, and graceful shutdown."
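To see why the second prompt works better, here is a minimal sketch of the kind of structure it asks for, using Python's standard-library sqlite3 and a fixed-size pool. The class name, pool size, and database path are illustrative assumptions, not a prescribed design:

```python
import sqlite3
from contextlib import contextmanager
from queue import Queue


class ConnectionPool:
    """Complete lifecycle: startup, pooling, query execution, graceful shutdown."""

    def __init__(self, db_path: str, size: int = 4):
        # Startup: open a fixed number of connections up front.
        self._pool: Queue = Queue(maxsize=size)
        for _ in range(size):
            self._pool.put(sqlite3.connect(db_path, check_same_thread=False))

    @contextmanager
    def connection(self):
        # Pooling: borrow a connection, and always return it when done.
        conn = self._pool.get()
        try:
            yield conn
        finally:
            self._pool.put(conn)

    def execute(self, sql: str, params=()):
        # Query execution: run a statement through a pooled connection.
        with self.connection() as conn:
            return conn.execute(sql, params).fetchall()

    def close(self):
        # Graceful shutdown: drain the pool and close each connection.
        while not self._pool.empty():
            self._pool.get().close()
```

Notice that every phase named in the prompt maps to a distinct piece of code. A single "connect to the database" function would have produced only the two lines inside `__init__`.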

Demand Verification

Force the AI to verify its own work through systematic auditing processes.

Avoid
"Is this code correct?"
Better
"Conduct a hallucination audit on this implementation: verify each function works as claimed and identify any logical inconsistencies."

Use a Systematic Approach

Break complex tasks into structured phases: Plan → Execute → Audit.

Avoid
"Refactor this code to be better."
Better
"Lay out a plan to refactor this authentication system. Then execute the plan in full per codebase standards. Finally, conduct a hallucination audit."

Use Recall

Explicitly reference previous context to maintain coherence across conversations.

Avoid
"What was that thing we talked about before?"
Better
"Recall the authentication pattern we discussed in message #3 and apply it to this new endpoint."

Set Boundaries

Clearly define whether you want refactoring (improve existing) or scaffolding (build structure).

Avoid
"Make a new authentication system."
Better
"Scaffold an authentication system with route stubs, middleware hooks, and type definitions—no implementation yet."
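The distinction matters because "scaffold" should produce structure with no behavior. A sketch of what that might look like in Python follows; the route names, types, and middleware hook are illustrative assumptions:

```python
from dataclasses import dataclass
from typing import Callable


# Type definitions: the shapes the implementation will eventually fill in.
@dataclass
class Credentials:
    username: str
    password: str


@dataclass
class Session:
    token: str
    user_id: int


# Middleware hook: a placeholder that will later wrap real request handling.
def require_auth(handler: Callable) -> Callable:
    def wrapper(*args, **kwargs):
        raise NotImplementedError("auth check not implemented yet")
    return wrapper


# Route stubs: signatures and types only, no implementation yet.
def login(creds: Credentials) -> Session:
    raise NotImplementedError


def logout(session: Session) -> None:
    raise NotImplementedError
```

Everything compiles and type-checks, yet nothing runs. That is the point of scaffolding: the boundary between structure and implementation is explicit, so the follow-up prompt can fill in one stub at a time.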

These comparisons show how systematic, structured prompts consistently outperform ad-hoc requests.