Cross-Platform AI Workflows - Multi-Tool Integration
Learn to design and optimize workflows that combine multiple AI tools (ChatGPT, Claude, Copilot, Cursor, Perplexity) for maximum efficiency and output quality.
Prompt text
My project and context:
- Project type: [TYPE - e.g. "full-stack web application", "content marketing", "data analysis", "research project"]
- Main task: [TASK - e.g. "developing a new feature", "writing articles", "data analysis", "creating documentation"]
- Current workflow: [CURRENT STATE - e.g. "I only use ChatGPT", "manual switching between tools", "inefficient process"]
- Available tools: [TOOLS - e.g. "ChatGPT Plus, GitHub Copilot, Claude Pro", "free tiers", "company licenses"]
- Time budget: [TIME - e.g. "2 hours a day for development", "a full workday", "side project, 10h/week"]
- Pain points: [PROBLEMS - e.g. "I lose time switching tools", "I duplicate work", "inconsistent outputs"]
Create for me:
1. **Current State Analysis**
- Where I lose the most time
- Which parts of the process are inefficient
- What AI could speed up or improve
- Where I need human oversight
2. **Optimized Multi-Tool Workflow**
- Which tool to use for which phase
- Handoff points between tools (how to pass context)
- Parallel vs. sequential tasking
- Context management strategy
- Templates and reusable prompts
3. **Concrete Workflow Patterns**
For each main task, propose:
- **Tool Selection:** which AI tool for which subtask
- **Sequencing:** in what order to work through tasks
- **Context Passing:** how to hand outputs between tools
- **Quality Gates:** where to check quality
- **Automation Opportunities:** what to automate
4. **Integration Points**
- How to connect Copilot/Cursor with Claude/ChatGPT
- When to use Perplexity vs. ChatGPT browsing
- How to combine code generation with documentation
- Best practices for context sharing
- Tools and scripts for automation
5. **Workflow Templates**
**Template 1: Software Development**
```
1. Research/Planning: Perplexity + Claude (research, architecture)
2. Code Generation: Cursor Composer + Copilot (implementation)
3. Code Review: Claude (code quality, security, best practices)
4. Documentation: ChatGPT (README, API docs, comments)
5. Testing: Cursor/Copilot (test generation, debugging)
```
**Template 2: Content Creation**
```
1. Topic Research: Perplexity (sources, data, insights)
2. Outline Creation: Claude (structure, flow, SEO)
3. Draft Writing: ChatGPT (long-form content)
4. Editing/Polish: Claude (style, consistency, quality)
5. SEO Optimization: ChatGPT (keywords, meta, structure)
```
**Template 3: Data Analysis**
```
1. Data Exploration: ChatGPT Code Interpreter (initial analysis)
2. Statistical Analysis: Claude (advanced statistics, interpretation)
3. Visualization: Python + Copilot (chart generation)
4. Insight Generation: Perplexity (context, benchmarks)
5. Report Writing: ChatGPT (executive summaries)
```
Create similar templates for [PROJECT TYPE].
6. **Context Management Strategy**
- How to organize conversations
- Naming conventions for projects/chats
- Context compression techniques
- Knowledge base building
- Workflow documentation
7. **Automation & Scripts**
- CLI tools for quick tool switching
- Browser extensions for the workflow
- Keyboard shortcuts setup
- Template prompts/snippets
- Integration scripts
8. **Measuring Effectiveness**
- KPI tracking (time saved, quality, productivity)
- Bottleneck identification
- ROI calculation per tool
- Continuous improvement process
9. **Common Pitfalls & Solutions**
- Context loss when switching tools
- Inconsistent outputs
- Over-reliance on a single tool
- Poor handoff management
- Inefficient context sharing
10. **Advanced Patterns**
- Ensemble methods (kombinace outputs)
- Verification workflows (cross-checking)
- Iterative refinement loops
- Parallel processing
- Fallback strategies
Focus on:
✅ Practical, actionable steps
✅ Specific tools and their roles
✅ Measurable improvement metrics
✅ Real-world examples
✅ Quick wins + long-term optimization
Example usage
Input:
Main task: Developing new features + documentation + testing
Current workflow: I only use GitHub Copilot in VS Code and do everything there
Available tools: GitHub Copilot Individual, Claude Free, ChatGPT Free; I can invest $30/month
Time budget: 4 hours a day (side project)
Pain points: Copilot is good at code gen but bad at architecture planning and documentation. I often have to google best practices. I write tests by hand and it takes forever.
Output:
## Optimizing Your Development Process 🚀
---
## 1. Current State Analysis
### ⏰ Where You Lose the Most Time
**Current Time Distribution (estimated):**
| Activity | Time/day | % | Inefficiency |
|----------|---------|---|--------------|
| Architecture planning | 30 min | 12.5% | ⚠️ Copilot is not a design tool |
| Code writing | 90 min | 37.5% | ✅ OK (Copilot's strength) |
| Googling best practices | 45 min | 18.75% | 🔴 Manual research |
| Writing tests | 60 min | 25% | 🔴 Repetitive work |
| Documentation | 15 min | 6.25% | 🔴 Keeps getting postponed (boring) |
**Total: 240 min/day, ~30% inefficient (72 min/day lost)**
### 🎯 Where AI Can Help
**High-Impact Opportunities:**
1. **Architecture Planning (30 min → 10 min)** 💡
- Claude Pro: acts as an experienced architect, proposes structure and API design
- Perplexity: finds best practices and design patterns
- **Savings: 20 min/day**
2. **Research & Best Practices (45 min → 10 min)** 📚
- Perplexity: aggregates current best practices with citations
- Claude: explains context and trade-offs
- **Savings: 35 min/day**
3. **Test Writing (60 min → 20 min)** ✅
- Copilot: generates unit tests
- Claude: suggests edge cases and integration test scenarios
- **Savings: 40 min/day**
4. **Documentation (15 min → 15 min, but BETTER)** 📝
- ChatGPT: generates README, API docs, and code comments
- Claude: reviews the docs and adds missing sections
- **Quality improvement, not time savings**
**Total Potential Savings: 95 min/day (~40% time saved)**
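The savings arithmetic above can be double-checked with a few lines of shell (values taken from the four opportunities; integer division rounds 39.6% down to 39, which the text rounds up to ~40%):

```shell
#!/bin/sh
# Sanity-check the daily savings figures from the analysis above
TOTAL=240                   # current minutes/day
SAVED=$((20 + 35 + 40 + 0)) # planning + research + tests + docs
PCT=$((SAVED * 100 / TOTAL))
echo "Saved: ${SAVED} min/day (~${PCT}%)"
```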
### 🛠️ Tool Gap Analysis
| Need | Current Tool | Gap | Better Solution |
|------|-------------|-----|-----------------|
| Architecture design | Copilot | ❌ Copilot generates code, not architecture | Claude (reasoning) |
| Code generation | Copilot | ✅ Good | Keep Copilot |
| Best practices research | Google | ❌ Slow, fragmented | Perplexity (aggregated answers) |
| Test generation | Copilot | ⚠️ Basic tests only | Copilot + Claude (edge cases) |
| Documentation | Manual | ❌ Keeps getting postponed | ChatGPT (batch generation) |
| Code review | Self | ❌ Missing errors | Claude (security, performance review) |
---
## 2. Optimized Multi-Tool Workflow
### 🎯 Recommended Tool Stack ($30 Budget)
**Option A: Maximum Versatility ($30/mo)**
- ✅ **GitHub Copilot Individual** ($10/mo) - Keep for code gen
- ✅ **Claude Pro** ($20/mo) - Architecture, review, complex reasoning
- ❌ **ChatGPT Free** - Use for documentation
- ❌ **Perplexity Free** - Use for quick research
**Option B: Maximum Coding Power ($30/mo)**
- ✅ **Cursor Pro** ($20/mo) - Includes Copilot-like features + Composer
- ✅ **Claude Free** - Architecture planning (the daily message limit is enough)
- ✅ **Perplexity Free** - Research when needed
- ✅ **$10 saved** or invest in ChatGPT Plus for docs
**Recommendation: Option B (Cursor Pro + Claude Free)**
- Cursor Composer replaces Copilot + VS Code
- Multi-file editing = faster feature implementation
- Claude Free is enough for planning (you won't hit the limits with 4h/day of work)
- Perplexity Free adequate for occasional research
### 📋 Workflow Blueprint
**Phase-Based Tool Allocation:**
```
Feature Development Lifecycle → Tool Assignment
1. 📐 PLANNING (15 min)
└─ Claude: Architecture design, API contracts, data models
2. 🔍 RESEARCH (10 min)
└─ Perplexity: Best practices, libraries, design patterns
3. 💻 IMPLEMENTATION (90 min)
├─ Cursor Composer: Multi-file feature implementation
├─ Cursor Chat: Context-aware help during coding
└─ Cursor CMD+K: Quick inline edits
4. ✅ TESTING (20 min)
├─ Cursor: Generate basic unit tests
└─ Claude: Design edge case scenarios, integration tests
5. 🔎 CODE REVIEW (10 min)
└─ Claude: Security audit, performance review, best practices check
6. 📝 DOCUMENTATION (15 min)
├─ ChatGPT Free: Generate README, API docs, inline comments
└─ Claude: Review docs for completeness, clarity
Total: 160 min (vs 240 min current) = 33% faster
```
---
## 3. Concrete Workflow Patterns
### Pattern 1: New Feature Implementation 🆕
**Scenario:** User authentication with JWT tokens
**Step-by-Step Multi-Tool Process:**
#### Step 1: Architecture Planning (Claude, 10 min)
**Prompt Template:**
```
Context: Next.js 14 app router + Node.js Express API + PostgreSQL
Feature: JWT authentication system
Design:
1. Database schema for users table
2. API endpoints (/auth/register, /auth/login, /auth/refresh)
3. Middleware for protected routes
4. Frontend auth context structure
5. Security best practices
Consider:
- Token storage (httpOnly cookies vs localStorage)
- Refresh token rotation
- CSRF protection
- Rate limiting
- Password hashing (bcrypt)
```
**Claude Output:** Architecture design document → Save to `docs/auth-architecture.md`
#### Step 2: Best Practices Research (Perplexity, 5 min)
**Query:**
```
JWT authentication best practices 2024:
- Token expiration times
- Refresh token strategies
- Security headers
- OWASP recommendations
```
**Perplexity Output:** Curated best practices with sources → Add to architecture doc
#### Step 3: Database Schema (Cursor Composer, 10 min)
**Composer Prompt:**
```
Create PostgreSQL migration for users table based on:
[paste Claude's schema design]
Include:
- UUID primary key
- Email (unique, indexed)
- Password hash
- Created/updated timestamps
- Refresh token storage
```
**Files Created:**
- `migrations/001_create_users_table.sql`
- `migrations/002_add_refresh_tokens_table.sql`
#### Step 4: Backend Implementation (Cursor Composer, 40 min)
**Composer Multi-File Prompt:**
```
Implement JWT auth system:
1. src/models/User.js - Sequelize model
2. src/controllers/authController.js - Register, login, refresh, logout
3. src/middleware/authMiddleware.js - Protected route middleware
4. src/routes/auth.js - Auth routes
5. src/utils/jwtHelper.js - Token generation/verification
Use:
- bcrypt for password hashing
- jsonwebtoken library
- httpOnly cookies for tokens
- Express async error handling
```
**Cursor generates all 5 files simultaneously using Composer Mode**
#### Step 5: Frontend Implementation (Cursor Composer, 30 min)
**Composer Prompt:**
```
Create Next.js 14 auth system:
1. app/contexts/AuthContext.tsx - Auth provider
2. app/hooks/useAuth.ts - Auth hook
3. app/api/auth/[...nextauth]/route.ts - API routes
4. app/components/LoginForm.tsx - Login UI
5. app/components/ProtectedRoute.tsx - Route protection
6. middleware.ts - Next.js middleware for auth
Features:
- Automatic token refresh
- Loading states
- Error handling
- Redirect after login
- Persist auth state
```
**Cursor generates complete auth frontend**
#### Step 6: Testing (Cursor + Claude, 20 min)
**Cursor for Unit Tests:**
```
Generate Jest tests for:
- authController.test.js (register, login, refresh)
- authMiddleware.test.js (token validation)
- jwtHelper.test.js (token generation/verification)
Include:
- Happy path scenarios
- Error cases (invalid tokens, expired tokens)
- Edge cases (missing headers, malformed requests)
```
**Claude for Test Scenarios:**
```
Review auth implementation and suggest:
1. Integration test scenarios (full auth flow)
2. Security test cases (XSS, CSRF, timing attacks)
3. Edge cases I might have missed
4. Performance test scenarios (token refresh under load)
```
**Files Created:**
- `tests/unit/authController.test.js`
- `tests/integration/authFlow.test.js`
- `tests/security/authSecurity.test.js`
#### Step 7: Security Review (Claude, 10 min)
**Claude Security Audit Prompt:**
```
Review this JWT auth implementation for security issues:
[paste key files: authController.js, authMiddleware.js]
Check for:
- SQL injection vulnerabilities
- XSS risks
- CSRF protection gaps
- Insecure token storage
- Missing rate limiting
- Weak password requirements
- Timing attack vulnerabilities
- Missing security headers
Provide:
- Security score (1-10)
- Critical issues (must fix)
- Recommended improvements
- OWASP compliance check
```
**Claude Output:** Security audit report → Fix critical issues
#### Step 8: Documentation (ChatGPT Free, 10 min)
**ChatGPT Prompt:**
```
Generate documentation:
1. README.md section:
- How to set up auth
- Environment variables needed
- Usage examples (register, login, protected routes)
2. API.md:
- Endpoint documentation
- Request/response examples
- Error codes
3. SECURITY.md:
- Security considerations
- Best practices for deployment
- Token management guidelines
Based on code:
[paste authController.js and routes]
```
**ChatGPT Output:** Complete documentation → Review with Claude for accuracy
#### Step 9: Integration & Testing (Manual + Cursor, 20 min)
1. **Manual Testing:**
- Test registration flow
- Test login flow
- Test token refresh
- Test protected routes
- Check error handling
2. **Cursor Chat for Debugging:**
```
Bug: Refresh token not rotating on /auth/refresh
Here's the current implementation:
[paste code]
Help me debug and fix.
```
---
### Pattern 2: Bug Fix & Debugging 🐛
**Scenario:** Payment processing failing intermittently
**Multi-Tool Debugging Process:**
#### Step 1: Error Analysis (Cursor Chat, 5 min)
**In Cursor, select error logs and prompt:**
```
Analyze this payment error:
[paste error stack trace]
Help me:
1. Identify root cause
2. Suggest fix
3. Prevent future occurrences
```
#### Step 2: Research Known Issues (Perplexity, 5 min)
**Query:**
```
Stripe payment webhook timeout errors Next.js:
- Common causes
- Solutions from GitHub issues
- Best practices for webhook handling
```
#### Step 3: Solution Design (Claude, 5 min)
**Claude Prompt:**
```
Design solution for Stripe webhook timeouts:
Current implementation:
[paste webhook handler code]
Requirements:
- Handle long-running operations (>30s)
- Ensure idempotency
- Implement retry logic
- Add proper logging
Suggest:
- Architecture changes
- Queue system (if needed)
- Monitoring approach
```
#### Step 4: Implementation (Cursor, 20 min)
**Cursor Composer:**
```
Implement Claude's solution:
1. Add job queue (BullMQ)
2. Refactor webhook handler
3. Add idempotency keys
4. Implement retry logic
5. Add logging/monitoring
```
#### Step 5: Testing (Cursor + Manual, 15 min)
- Cursor generates tests
- Manual testing with Stripe test mode
- Verify idempotency
- Test retry logic
**Total: 50 min (vs 2-3 hours manual debugging)**
---
### Pattern 3: Code Refactoring 🔄
**Scenario:** Legacy component needs modernization
**Refactoring Workflow:**
#### Step 1: Refactoring Plan (Claude, 10 min)
**Prompt:**
```
Analyze this React component and create refactoring plan:
[paste legacy component code - 500 lines, class-based, mixed concerns]
Goals:
- Convert to functional component
- Extract business logic to custom hooks
- Improve performance (memoization)
- Better error handling
- TypeScript types
- Test coverage
Provide:
- Step-by-step refactoring plan
- Risk assessment
- Estimated effort
- Testing strategy
```
**Claude Output:** Detailed refactoring roadmap
#### Step 2: Incremental Refactoring (Cursor Composer, 60 min)
**Phase 1: Convert to Functional (Composer):**
```
Convert UserProfile class component to functional:
- Use hooks (useState, useEffect, useCallback)
- Preserve all functionality
- Maintain props interface
```
**Phase 2: Extract Logic (Composer):**
```
Extract business logic to custom hooks:
1. useUserData - data fetching
2. useUserPermissions - permission checking
3. useUserActions - CRUD operations
Move to hooks/ directory
```
**Phase 3: Performance (Cursor CMD+K):**
```
Optimize UserProfile performance:
- Add React.memo
- Memoize callbacks with useCallback
- Memoize derived data with useMemo
- Split into smaller sub-components
```
**Phase 4: TypeScript (Cursor):**
```
Add TypeScript types:
- Props interface
- State types
- API response types
- Custom hook return types
```
#### Step 3: Testing (Cursor + Claude, 30 min)
**Cursor:** Generate React Testing Library tests
**Claude:** Design complex interaction test scenarios
#### Step 4: Review (Claude, 10 min)
**Claude Final Review:**
```
Review refactored component:
1. Code quality improvements
2. Performance gains
3. Maintainability score
4. Missing test coverage
5. Potential issues
```
**Total: 110 min (vs 4-6 hours manual refactoring)**
---
## 4. Integration Points & Context Passing
### 🔗 Seamless Tool Transitions
#### Transition 1: Claude (Planning) → Cursor (Implementation)
**Best Practice: Structured Output Format**
**Claude Prompt (ends with):**
```
Format output as:
1. File structure (tree format)
2. Each file with:
- File path
- Code implementation
- Dependencies needed
This will be directly pasted into Cursor Composer.
```
**Claude Output:**
```
File Structure:
src/
├── models/
│   └── User.js
├── controllers/
│   └── authController.js
└── middleware/
    └── authMiddleware.js
--- File: src/models/User.js ---
const { DataTypes } = require('sequelize');
...
[complete implementation]
--- File: src/controllers/authController.js ---
...
```
**Cursor Composer Prompt:**
```
Implement these files:
[paste Claude's structured output]
Ensure:
- Correct imports
- Proper error handling
- TypeScript types
```
**Result:** Seamless handoff, no context loss ✅
#### Transition 2: Perplexity (Research) → Claude (Design)
**Perplexity Research Output → Notion/Docs:**
Save Perplexity findings to shared doc (e.g., Notion page)
**Claude Design Prompt:**
```
Based on this research: [Notion link or paste key findings]
Design [system/feature/architecture]
```
**Benefit:** Claude has full research context ✅
#### Transition 3: Cursor (Code) → Claude (Review)
**Export Code for Review:**
**Option A: Direct Paste (Small Changes)**
```
Review this implementation:
File: authController.js
[paste code, max ~500 lines]
Check:
- Security issues
- Performance problems
- Best practices violations
```
**Option B: Summary for Large Changes**
```
I implemented JWT auth system:
Key Files:
1. authController.js - Handles login/register/refresh
2. authMiddleware.js - Validates tokens
3. jwtHelper.js - Token generation/verification
Architecture decisions:
- httpOnly cookies for token storage
- Refresh token rotation every 7 days
- bcrypt with 12 rounds for passwords
Please review:
- Is architecture secure?
- Any critical vulnerabilities?
- Performance concerns?
- Better approaches?
[paste critical code sections only, ~200 lines]
```
**Result:** Effective review without overwhelming Claude's context ✅
#### Transition 4: Claude (Review Findings) → Cursor (Fixes)
**Claude Review Output (structured):**
```
Security Issues:
1. 🔴 CRITICAL: SQL injection in line 45
Fix: Use parameterized queries
2. ⚠️ WARNING: Missing rate limiting
Fix: Add express-rate-limit middleware
Code to fix:
--- authController.js:45 ---
BEFORE: db.query(`SELECT * FROM users WHERE email = '${email}'`)
AFTER: db.query('SELECT * FROM users WHERE email = ?', [email])
```
**Cursor Prompt:**
```
Fix these security issues:
[paste Claude's structured findings]
Apply fixes to:
- authController.js
- Add rate limiting middleware
```
**Result:** Direct, actionable fixes implemented ✅
---
### 📋 Context Management Best Practices
#### 1. Project Documentation Hub (Notion/Obsidian)
**Structure:**
```
Project Root/
├── Architecture/
│   ├── System Design.md (Claude outputs)
│   ├── API Contracts.md
│   └── Database Schema.md
├── Research/
│   ├── Best Practices.md (Perplexity findings)
│   ├── Library Comparisons.md
│   └── Design Patterns.md
├── Implementation/
│   ├── Feature Specs.md
│   └── Code Review Notes.md (Claude reviews)
└── Decisions/
    └── ADRs/ (Architecture Decision Records)
```
**Workflow:**
1. Claude outputs → Save to Architecture/
2. Perplexity research → Save to Research/
3. Claude reviews → Save to Implementation/
4. Reference docs in all tool prompts
**Benefits:**
- ✅ Persistent context across sessions
- ✅ No information loss
- ✅ Easy reference in prompts
- ✅ Team knowledge sharing
#### 2. Conversation Naming Convention
**Format:** `[Tool]-[Project]-[Feature]-[Date]`
**Examples:**
- `Claude-NextApp-Auth-2024-01-15`
- `Cursor-NextApp-PaymentAPI-2024-01-16`
- `Perplexity-NextApp-StripeIntegration-2024-01-17`
**Benefits:**
- ✅ Easy to find past conversations
- ✅ Understand context at a glance
- ✅ Track decision timeline
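A tiny helper can generate names in this format so you never drift from the convention (a sketch; the script name `chat-name.sh` is illustrative):

```shell
#!/bin/sh
# chat-name.sh — print a conversation name as [Tool]-[Project]-[Feature]-[Date]
# Usage: ./chat-name.sh Claude NextApp Auth
TOOL=$1
PROJECT=$2
FEATURE=$3
printf '%s-%s-%s-%s\n' "$TOOL" "$PROJECT" "$FEATURE" "$(date +%Y-%m-%d)"
```

Pipe the result through `pbcopy` and paste it as the chat title when you open a new conversation.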
#### 3. Cross-Tool Context Compression
**When context is too large for single prompt:**
**Technique: Summary → Details Pattern**
**Step 1:** Get Claude to summarize
```
Summarize this architecture document in bullet points:
- Key decisions
- Critical components
- Dependencies
- Constraints
[paste full doc]
```
**Step 2:** Use summary in other tools
```
Cursor Composer: Implement auth based on this design:
Summary:
- JWT tokens in httpOnly cookies
- Refresh token rotation
- PostgreSQL for user storage
- bcrypt password hashing
[add specific implementation details only where needed]
```
**Result:** Efficient context passing without token waste ✅
#### 4. Template Library
**Create Reusable Prompt Templates:**
**Template: Feature Planning (Claude)**
```markdown
# Feature Planning Template
Context:
- Project: [PROJECT_NAME]
- Tech Stack: [STACK]
- Feature: [FEATURE_DESCRIPTION]
Design:
1. Architecture overview
2. Database schema changes
3. API endpoints needed
4. Frontend components
5. Testing strategy
6. Security considerations
7. Performance implications
Output Format:
- File structure tree
- Code snippets for each file
- Dependencies to install
- Configuration changes
```
**Template: Code Review (Claude)**
```markdown
# Code Review Template
Context: [FEATURE_NAME] implementation
Files Changed:
1. [FILE_PATH] - [PURPOSE]
2. [FILE_PATH] - [PURPOSE]
Review Focus:
- ⚠️ Security vulnerabilities
- ⚡ Performance bottlenecks
- 📏 Code quality & best practices
- 🔄 Maintainability concerns
- ✅ Test coverage gaps
Provide:
- Priority ranking (🔴 Critical → 🟡 Nice-to-have)
- Specific line numbers
- Fix suggestions with code
```
**Template: Research Query (Perplexity)**
```markdown
# Research Query Template
Topic: [TOPIC]
Questions:
1. What are current best practices for [TOPIC] in 2024?
2. What are common pitfalls and how to avoid them?
3. What are the top recommended libraries/tools?
4. What do industry experts recommend?
5. Are there any security/performance considerations?
Format:
- Bullet points
- Include sources (with credibility assessment)
- Comparison tables where applicable
```
**Store templates in:** `~/ai-workflows/templates/`
**Benefits:**
- ✅ Consistent prompt quality
- ✅ Faster workflow execution
- ✅ Easier onboarding for team
- ✅ Reproducible results
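A one-time bootstrap for that directory might look like this (the file names mirror the templates above; the stub content is arbitrary):

```shell
#!/bin/sh
# Create the template library referenced by the prompt templates above
TEMPLATES_DIR="${HOME}/ai-workflows/templates"
mkdir -p "$TEMPLATES_DIR"
for name in feature-planning code-review research-query; do
  f="${TEMPLATES_DIR}/${name}.md"
  # Only create a stub if the template does not exist yet
  [ -f "$f" ] || printf '# %s template\n' "$name" > "$f"
done
ls -1 "$TEMPLATES_DIR"
```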
---
## 5. Automation & Scripts
### 🤖 CLI Tools for Workflow Automation
#### Script 1: Multi-Tool Context Exporter
**Purpose:** Export Cursor project context for Claude/ChatGPT
**File:** `~/scripts/export-context.sh`
```bash
#!/bin/bash
# Export project context for AI tools
PROJECT_DIR=$1
OUTPUT_FILE="${PROJECT_DIR}/ai-context.md"
echo "# Project Context Export" > "$OUTPUT_FILE"
echo "Generated: $(date)" >> "$OUTPUT_FILE"
echo "" >> "$OUTPUT_FILE"
# File structure
echo "## File Structure" >> "$OUTPUT_FILE"
tree -L 3 -I 'node_modules|dist|build' "$PROJECT_DIR" >> "$OUTPUT_FILE"
echo "" >> "$OUTPUT_FILE"
# Key files content
echo "## Key Files" >> "$OUTPUT_FILE"
for file in package.json README.md tsconfig.json; do
  if [ -f "${PROJECT_DIR}/${file}" ]; then
    echo "### ${file}" >> "$OUTPUT_FILE"
    echo "\`\`\`" >> "$OUTPUT_FILE"
    cat "${PROJECT_DIR}/${file}" >> "$OUTPUT_FILE"
    echo "\`\`\`" >> "$OUTPUT_FILE"
    echo "" >> "$OUTPUT_FILE"
  fi
done
# Recent git changes
echo "## Recent Changes" >> "$OUTPUT_FILE"
git -C "$PROJECT_DIR" log --oneline -n 10 >> "$OUTPUT_FILE"
echo "✅ Context exported to ${OUTPUT_FILE}"
echo "📋 Ready to paste into Claude/ChatGPT"
```
**Usage:**
```bash
./export-context.sh ~/projects/my-nextjs-app
# Outputs: ~/projects/my-nextjs-app/ai-context.md
# Paste into Claude for architecture discussions
```
#### Script 2: Claude Review Automation
**Purpose:** Send code changes to Claude for automated review
**File:** `~/scripts/claude-review.sh`
```bash
#!/bin/bash
# Automated code review with Claude
# Get files changed in the last commit
CHANGED_FILES=$(git diff --name-only HEAD~1 HEAD)
# Build the review prompt
PROMPT="Review these changes:
Files Changed:
${CHANGED_FILES}
Changes:
"
# Append per-file diffs
for file in $CHANGED_FILES; do
  PROMPT="${PROMPT}
--- ${file} ---
$(git diff HEAD~1 HEAD -- "$file")
"
done
PROMPT="${PROMPT}
Check for:
- Security issues
- Performance problems
- Best practices violations
- Missing tests"
# Option A: pipe to a CLI that calls the Claude API
# ("claude-api" is a placeholder — substitute whatever client you use)
# echo "$PROMPT" | claude-api review
# Option B: copy to clipboard for manual review (macOS)
echo "$PROMPT" | pbcopy
echo "📋 Review prompt copied to clipboard. Paste into Claude."
```
**Usage:**
```bash
git commit -m "Add auth feature"
./claude-review.sh # Auto-generates review prompt
```
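If you want the review prompt generated on every commit without remembering to run the script, a git `post-commit` hook can trigger it (a sketch; assumes the script path above and skips merge commits):

```shell
#!/bin/sh
# .git/hooks/post-commit — build a Claude review prompt after each commit
# Install: copy this file to .git/hooks/post-commit and chmod +x it
# Skip merge commits (no merges among the commits given to rev-list → count is 0)
if [ "$(git rev-list --no-walk --count --merges HEAD)" -eq 0 ]; then
  ~/scripts/claude-review.sh
fi
```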
#### Script 3: Workflow Session Manager
**Purpose:** Track AI tool usage and switch contexts efficiently
**File:** `~/scripts/ai-session.sh`
```bash
#!/bin/bash
# AI Workflow Session Manager
ACTION=$1
TOOL=$2
case $ACTION in
  start)
    # Create a new session
    SESSION_ID="$(date +%Y%m%d-%H%M%S)-${TOOL}"
    mkdir -p ~/ai-sessions/"$SESSION_ID"
    echo "📝 Started $TOOL session: $SESSION_ID"
    echo "$SESSION_ID" > ~/.current-ai-session
    # Open the tool
    case $TOOL in
      cursor) open -a "Cursor" ;;
      claude) open "https://claude.ai" ;;
      chatgpt) open "https://chat.openai.com" ;;
      perplexity) open "https://perplexity.ai" ;;
    esac
    ;;
  log)
    # Log the current context (the message is the second argument)
    SESSION_ID=$(cat ~/.current-ai-session)
    MESSAGE=$2
    echo "[$(date +%H:%M:%S)] $MESSAGE" >> ~/ai-sessions/"$SESSION_ID"/log.txt
    echo "✅ Logged to session $SESSION_ID"
    ;;
  export)
    # Export session notes
    SESSION_ID=$(cat ~/.current-ai-session)
    cat ~/ai-sessions/"$SESSION_ID"/log.txt
    ;;
  switch)
    # Switch to a different tool
    "$0" start "$TOOL"
    ;;
esac
```
**Usage:**
```bash
# Start Cursor session
./ai-session.sh start cursor
# Log what you're doing
./ai-session.sh log "Implementing auth with Cursor Composer"
# Switch to Claude for review
./ai-session.sh switch claude
./ai-session.sh log "Reviewing auth implementation"
# Export session timeline
./ai-session.sh export
```
**Output:**
```
[14:30:15] Implementing auth with Cursor Composer
[14:55:22] Reviewing auth implementation
[15:10:45] Fixed security issues found in review
```
#### Script 4: Template Prompt Generator
**Purpose:** Quick access to prompt templates
**File:** `~/scripts/ai-prompt.sh`
```bash
#!/bin/bash
# Generate prompts from templates
TEMPLATE=$1
TEMPLATES_DIR=~/ai-workflows/templates
case $TEMPLATE in
  plan)
    pbcopy < "$TEMPLATES_DIR/feature-planning.md"
    echo "📋 Feature planning template copied"
    ;;
  review)
    pbcopy < "$TEMPLATES_DIR/code-review.md"
    echo "📋 Code review template copied"
    ;;
  research)
    pbcopy < "$TEMPLATES_DIR/research-query.md"
    echo "📋 Research template copied"
    ;;
  bug)
    pbcopy < "$TEMPLATES_DIR/bug-debugging.md"
    echo "📋 Debugging template copied"
    ;;
  list)
    echo "Available templates:"
    ls -1 "$TEMPLATES_DIR" | sed 's/\.md$//'
    ;;
  *)
    echo "Usage: ai-prompt.sh {plan|review|research|bug|list}"
    ;;
esac
```
**Usage:**
```bash
./ai-prompt.sh plan # Copy planning template
# Paste into Claude, fill in [PLACEHOLDERS]
./ai-prompt.sh review # Copy review template
# Paste into Claude with code
./ai-prompt.sh list # See all templates
```
---
### ⌨️ Keyboard Shortcuts Setup
#### Alfred/Raycast Workflows (macOS)
**Workflow 1: Quick AI Tool Switcher**
**Trigger:** `ai [tool]`
**Actions:**
- `ai cursor` → Opens Cursor + runs session script
- `ai claude` → Opens Claude + copies context
- `ai chat` → Opens ChatGPT
- `ai plex` → Opens Perplexity
**Implementation (Alfred):**
```bash
# Alfred Script Filter
query=$1
case $query in
  cursor) open -a "Cursor" && ~/scripts/ai-session.sh start cursor ;;
  claude) open "https://claude.ai" && ~/scripts/export-context.sh "$(pwd)" ;;
  chat) open "https://chat.openai.com" ;;
  plex) open "https://perplexity.ai" ;;
esac
```
**Workflow 2: Context Exporter**
**Trigger:** `context`
**Action:** Exports current project context to clipboard
```bash
~/scripts/export-context.sh "$(pwd)"
pbcopy < ai-context.md
echo "📋 Context copied. Ready for Claude/ChatGPT."
```
**Workflow 3: Template Inserter**
**Trigger:** `prompt [template-name]`
**Actions:**
- `prompt plan` → Copies planning template
- `prompt review` → Copies review template
- `prompt research` → Copies research template
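Workflow 3 can simply delegate to the `ai-prompt.sh` script from the Automation section (a generic wrapper; launcher-specific metadata such as Raycast script-command headers is omitted):

```shell
#!/bin/bash
# Launcher wrapper for "prompt [template-name]": forwards the name to
# ai-prompt.sh, which copies the matching template to the clipboard
TEMPLATE="$1"
if [ -z "$TEMPLATE" ]; then
  ~/scripts/ai-prompt.sh list # no argument → show available templates
else
  ~/scripts/ai-prompt.sh "$TEMPLATE"
fi
```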
---
### 🔌 Browser Extensions for Workflow
#### Extension 1: AI Context Manager
**Purpose:** Save webpage content directly to AI tools
**Features:**
- Right-click → "Send to Claude"
- Right-click → "Research with Perplexity"
- Save code snippets with context
- Export conversation history
**DIY with Bookmarklets:**
```javascript
// Send selection to Claude (copies it together with the source URL)
javascript:(function(){
  const text = window.getSelection().toString();
  const url = window.location.href;
  const context = `From: ${url}\n\n${text}`;
  navigator.clipboard.writeText(context);
  alert('📋 Context copied! Paste into Claude.');
})();
```
**Save it as a bookmark and click it while text is selected**
#### Extension 2: Code Snippet Manager
**Purpose:** Build reusable prompt library
**Recommended:** Espanso (text expander)
**Config:** `~/Library/Application Support/espanso/match/ai.yml`
```yaml
matches:
  # Quick Claude prompts
  - trigger: ":plan"
    replace: |
      Design architecture for [FEATURE]:
      Requirements:
      -
      Tech Stack:
      -
      Considerations:
      -
  - trigger: ":review"
    replace: |
      Review this code:
      [PASTE CODE]
      Check for:
      - Security
      - Performance
      - Best practices
      - Test coverage
  - trigger: ":research"
    replace: |
      Research best practices for [TOPIC] in 2024:
      - Current approaches
      - Common pitfalls
      - Recommended tools
      - Expert opinions
  # Quick file exports
  - trigger: ":export"
    replace: "{{output}}"
    vars:
      - name: output
        type: shell
        params:
          cmd: "cat ai-context.md"
```
**Usage:**
- Type `:plan` → Auto-expands to planning template
- Type `:review` → Auto-expands to review template
- Type `:export` → Inserts project context
---
## 6. Measuring Effectiveness
### 📊 KPI Tracking Framework
#### Metrics to Track
| Metric | Before AI | Target | Measurement Method |
|--------|-----------|--------|-------------------|
| Feature development time | 8 hours | 5 hours | Time tracking per feature |
| Bug fix time | 2 hours | 45 min | Ticket resolution time |
| Code review time | 1 hour | 15 min | PR review duration |
| Documentation time | 30 min | 10 min | Docs commit timestamps |
| Testing coverage | 60% | 85% | Jest coverage reports |
| Production bugs | 5/month | 2/month | Error monitoring |
| Developer satisfaction | 6/10 | 9/10 | Weekly self-assessment |
#### Tracking Template (Weekly)
**File:** `~/ai-workflows/metrics/week-[DATE].md`
```markdown
# AI Workflow Metrics - Week of [DATE]
## Time Savings
| Task | Traditional Time | AI-Assisted Time | Savings | Tool Used |
|------|-----------------|------------------|---------|-----------|
| Auth feature | 8h | 5h | 3h (37%) | Cursor + Claude |
| Payment bug | 2h | 45min | 1h 15min (62%) | Cursor + Perplexity |
| Refactor UserProfile | 6h | 2h | 4h (67%) | Cursor Composer |
| API docs | 45min | 10min | 35min (78%) | ChatGPT |
**Total Savings:** 8h 50min over 20h work week = 44% time saved
## Quality Improvements
- ✅ Zero security issues (Claude review)
- ✅ Test coverage 87% (up from 65%)
- ✅ Zero production bugs
- ✅ All PRs reviewed in <30min
## Tool Usage
- **Cursor:** 15h (code implementation)
- **Claude:** 3h (planning, review)
- **ChatGPT:** 1h (documentation)
- **Perplexity:** 1h (research)
## ROI Calculation
**Cost:** $30/mo
**Time Saved:** ~35h/month (8.75h/week * 4)
**Hourly Rate:** $50 (example)
**Value:** $1,750/month
**ROI:** 5,733% 🚀
## Learnings This Week
- Cursor Composer is amazing for multi-file refactoring
- Claude's architecture planning saves hours of debugging later
- Perplexity is faster than Google for technical research
- ChatGPT docs are good enough for first draft
## Next Week Goals
- Try ensemble method (Claude + ChatGPT architecture)
- Build automated code review pipeline
- Create more prompt templates
```
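The ROI arithmetic in the weekly template can be scripted so the numbers stay consistent from week to week. A minimal sketch, where the variable values are the example figures from the template above, not real measurements:

```bash
#!/usr/bin/env sh
# Compute monthly value and ROI from time saved (example figures from the template).
hours_saved=35      # ~8.75h/week * 4
hourly_rate=50      # $/hour (example)
monthly_cost=30     # tool subscriptions

value=$((hours_saved * hourly_rate))                    # $1750/month
roi=$(( (value - monthly_cost) * 100 / monthly_cost ))  # integer percent
echo "Value: \$${value}/month, ROI: ${roi}%"
# → Value: $1750/month, ROI: 5733%
```

Swap in your own tracked hours and rate each month; the formula is `(value - cost) / cost * 100`.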
#### Monthly Dashboard
**Track Long-Term Trends:**
```markdown
# Monthly AI Workflow Performance
## Time Savings Trend
| Month | Hours Saved | Efficiency Gain | Top Tool |
|-------|-------------|-----------------|----------|
| Jan 2024 | 20h | 25% | Cursor |
| Feb 2024 | 35h | 44% | Cursor + Claude |
| Mar 2024 | 45h | 56% | Full workflow |
## Quality Metrics
| Month | Test Coverage | Production Bugs | Code Review Time |
|-------|--------------|-----------------|------------------|
| Jan | 65% | 5 | 60min |
| Feb | 78% | 3 | 30min |
| Mar | 87% | 1 | 15min |
## ROI
- **Total Investment:** $90 (3 months @ $30/mo)
- **Total Time Saved:** 100 hours
- **Total Value:** $5,000 (@ $50/hour)
- **Lifetime ROI:** 5,455%
```
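Creating the weekly metrics file by hand is the step that tends to get skipped. A one-liner-style helper can stamp out the skeleton; the paths follow the `~/ai-workflows/metrics/week-[DATE].md` convention from the template, and the rest is an assumption:

```bash
#!/usr/bin/env sh
# Create this week's metrics file (if missing), named per the convention above.
mkdir -p "$HOME/ai-workflows/metrics"
week_file="$HOME/ai-workflows/metrics/week-$(date +%F).md"
[ -f "$week_file" ] || printf '# AI Workflow Metrics - Week of %s\n' "$(date +%F)" > "$week_file"
echo "$week_file"
```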
---
## 7. Common Pitfalls & Solutions
### ❌ Pitfall 1: Context Loss When Switching Tools
**Problem:** Decisions made in one tool get lost when moving from Claude → Cursor → ChatGPT
**Symptoms:**
- Implementing different architecture than planned
- Forgetting edge cases discussed in research
- Inconsistent coding style
**Solutions:**
✅ **Solution 1: Central Documentation Hub**
- Save ALL AI outputs to Notion/Obsidian
- Reference docs in every prompt
- Example: "Based on architecture doc: [link]"
✅ **Solution 2: Session Logging**
- Use `ai-session.sh` script to log each transition
- Export session timeline at end of feature
- Review timeline before moving to next task
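The `ai-session.sh` script is referenced but not shown. A minimal sketch of what such a logger might look like; the script name comes from the text, while the log path and log format are assumptions:

```bash
#!/usr/bin/env sh
# Hypothetical sketch of ai-session.sh: log each tool transition with a
# timestamp so the session timeline can be reviewed before the next task.
LOG="${AI_SESSION_LOG:-$HOME/ai-workflows/sessions/$(date +%F).log}"

ai_session() {
  mkdir -p "$(dirname "$LOG")"
  case "$1" in
    start|switch) printf '%s %s %s\n' "$(date +%T)" "$1" "$2" >> "$LOG" ;;
    timeline)     cat "$LOG" ;;
  esac
}
```

Usage: `ai_session start cursor`, then `ai_session switch claude` at each handoff, and `ai_session timeline` at the end of the feature.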
✅ **Solution 3: Explicit Context Handoff**
**Bad Transition:**
```
Claude: [designs auth system]
→ Switch to Cursor
Cursor: "Implement JWT auth" ❌ Missing context
```
**Good Transition:**
```
Claude: [designs auth system, saves to docs/auth-design.md]
→ Switch to Cursor
Cursor: "Implement auth based on: [paste auth-design.md content]" ✅
```
---
### ❌ Pitfall 2: Over-Reliance on Single Tool
**Problem:** Using only Cursor for everything (planning, coding, review, docs)
**Why It's Bad:**
- Cursor is optimized for code generation, not architecture planning
- Claude is better at security review
- ChatGPT is faster for documentation
- Missing tool-specific strengths
**Solutions:**
✅ **Solution: Task-Specific Tool Selection**
**Decision Tree:**
```
Task Type → Best Tool
Planning/Architecture → Claude (reasoning strength)
Research → Perplexity (cited sources)
Code Implementation → Cursor (multi-file, context)
Code Generation (small) → Copilot (speed)
Security Review → Claude (thoroughness)
Documentation → ChatGPT (speed, quality)
Debugging → Cursor Chat (project context)
```
**Track Your Usage:**
```markdown
# This Week's Tool Distribution
❌ BAD: Cursor 95%, Claude 5%, Others 0%
✅ GOOD: Cursor 60%, Claude 25%, ChatGPT 10%, Perplexity 5%
```
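If transitions are logged with the `ai-session.sh` script from Pitfall 1, the distribution can be computed rather than guessed. A sketch assuming a log of whitespace-separated `time action tool` lines (note it counts transitions, not hours):

```bash
#!/usr/bin/env sh
# Count tool usage share from a session log with "HH:MM:SS action tool" lines.
count_tools() {
  awk '{ n[$3]++; total++ } END { for (t in n) printf "%s %.0f%%\n", t, 100 * n[t] / total }' "$1"
}
```

Usage: `count_tools ~/ai-workflows/sessions/2024-03-01.log`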
---
### ❌ Pitfall 3: Inconsistent Outputs (Different Tools, Different Styles)
**Problem:** Claude suggests one architecture, Cursor implements another
**Example:**
```
Claude: "Use repository pattern with dependency injection"
Cursor: [generates direct database queries in controllers] ❌
```
**Solutions:**
✅ **Solution 1: Explicit Style Guide in Every Prompt**
**Create:** `CODING_STANDARDS.md`
```markdown
# Project Coding Standards
## Architecture Patterns
- Repository pattern for data access
- Dependency injection for services
- API routes in /api directory
- Services in /services directory
## Code Style
- TypeScript strict mode
- ESLint + Prettier
- Functional components (React)
- Async/await (no .then())
## Testing
- Jest + React Testing Library
- 80%+ coverage target
- Test files next to source files
```
**Then in EVERY Cursor prompt:**
```
Implement [feature] following project standards:
[paste CODING_STANDARDS.md]
Ensure:
- Repository pattern
- TypeScript strict
- 80%+ test coverage
```
✅ **Solution 2: Ensemble Verification**
**Process:**
1. Claude designs architecture
2. Cursor implements
3. **Claude reviews implementation** against original design
4. Fix inconsistencies
**Claude Review Prompt:**
```
Compare this implementation against the original design:
Original Design (from earlier conversation):
[paste Claude's architecture doc]
Actual Implementation:
[paste Cursor's code]
Check:
- Does implementation match design?
- Any deviations?
- Are deviations improvements or mistakes?
```
---
### ❌ Pitfall 4: Poor Handoff Management (Information Loss)
**Problem:** Context lost between tools
**Example:**
```
Perplexity: "Best practice is to use httpOnly cookies for JWT"
→ Switch to Cursor
Cursor: [implements localStorage JWT] ❌ Lost research finding
```
**Solutions:**
✅ **Solution: Structured Handoff Format**
**Perplexity Research Output (structured):**
```markdown
# JWT Authentication Research
## Key Findings
### Security Best Practices
1. ✅ Use httpOnly cookies (prevents XSS)
2. ✅ Implement refresh token rotation
3. ✅ Set short access token expiry (15min)
4. ❌ NEVER use localStorage (XSS vulnerable)
### Implementation Requirements
- Cookie: httpOnly, Secure, SameSite=Strict
- Access token: 15min expiry
- Refresh token: 7 day expiry, rotate on use
[Save to: docs/auth-research.md]
```
**Cursor Prompt (references research):**
```
Implement JWT auth following research findings:
[paste auth-research.md]
Ensure:
- httpOnly cookies (NOT localStorage)
- 15min access token expiry
- Refresh token rotation
```
---
### ❌ Pitfall 5: Inefficient Context Sharing (Token Waste)
**Problem:** Pasting entire codebase into Claude/ChatGPT
**Example:**
```
Prompt: "Review my app"
[pastes 50 files, 10,000 lines] ❌
- Hits token limit
- Expensive
- Unfocused review
```
**Solutions:**
✅ **Solution: Targeted Context Sharing**
**Bad Review Request:**
```
Review my entire app:
[50 files pasted]
```
**Good Review Request:**
```
Review JWT auth security:
Key Files (300 lines total):
1. authController.js - Login/register logic
2. authMiddleware.js - Token validation
3. jwtHelper.js - Token generation
Focus on:
- SQL injection risks
- XSS vulnerabilities
- Token security
[paste only these 3 files, ~300 lines]
```
**Result:**
- ❌ Bad: entire codebase pasted, token limits hit, generic review
- ✅ Good: ~300 focused lines, specific, actionable review
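A tiny helper in this spirit, concatenating only the files named in a focused review request (the file names below are the hypothetical examples from above):

```bash
#!/usr/bin/env sh
# Sketch: gather only the files relevant to a focused review, each under a
# heading, so the result can be pasted straight into Claude/ChatGPT.
collect_review_context() {
  for f in "$@"; do
    [ -f "$f" ] || continue        # silently skip missing files
    printf '\n## %s\n' "$f"
    cat "$f"
  done
}
```

Usage: `collect_review_context authController.js authMiddleware.js jwtHelper.js > review-context.md`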
---
## 8. Advanced Patterns
### 🎯 Pattern 1: Ensemble Methods (Combining Outputs)
**When to Use:** Critical decisions, security reviews, complex architecture
**Process:**
#### Step 1: Get Multiple Perspectives
**Prompt Same Question to 2-3 Tools:**
**Claude:**
```
Design authentication system for Next.js app:
- Requirements: [details]
- Security: Enterprise-grade
- Scale: 100k users
```
**ChatGPT:**
```
Design authentication system for Next.js app:
- Requirements: [same details]
- Security: Enterprise-grade
- Scale: 100k users
```
**Perplexity:**
```
What are best practices for Next.js authentication at scale?
- Enterprise security requirements
- 100k users
- Include sources
```
#### Step 2: Compare & Synthesize
**Create Comparison Matrix:**
| Aspect | Claude Recommendation | ChatGPT Recommendation | Perplexity Sources |
|--------|---------------------|----------------------|-------------------|
| Token Storage | httpOnly cookies | httpOnly cookies | ✅ Industry consensus |
| Session Management | Redis | Database | ✅ Redis for scale |
| Refresh Strategy | Rotation | Rotation | ✅ OWASP recommended |
| **Difference** | Recommends Redis | Suggests DB first | - |
**Decision:**
- Agree on: httpOnly cookies, token rotation ✅
- Investigate: Redis vs Database for sessions
- Final choice: Redis (better for 100k scale, backed by sources)
#### Step 3: Validation Review
**Send Final Design Back to Claude:**
```
I synthesized these three recommendations:
[paste comparison matrix]
My final decision:
- httpOnly cookies ✅
- Redis sessions ✅
- Token rotation ✅
Does this design have any critical flaws I missed?
```
**Result:** High-confidence architecture with multiple validations ✅
---
### 🎯 Pattern 2: Verification Workflows (Cross-Checking)
**When to Use:** Security-critical code, production deployments, refactoring
**Multi-Stage Verification:**
#### Stage 1: Implementation (Cursor)
```
Implement payment processing:
- Stripe integration
- Webhook handling
- Idempotency
- Error handling
```
**Output:** Payment processing code
#### Stage 2: Security Review (Claude)
```
Security audit this payment code:
[paste payment processing code]
Check for:
- API key exposure
- Webhook signature verification
- Idempotency key handling
- Error message leakage
- SQL injection
- XSS risks
```
**Output:** Security findings list
#### Stage 3: Cross-Verification (ChatGPT)
```
Second opinion on these security findings:
Original Code:
[paste code]
Claude's Findings:
[paste findings]
Questions:
1. Are these findings valid?
2. Any critical issues Claude missed?
3. Any false positives?
```
**Output:** Validated security findings
#### Stage 4: Fix & Re-Review (Cursor + Claude)
```
Cursor: Implement fixes from validated findings
Claude: Review fixes to confirm issues resolved
```
**Result:** Defense-in-depth security validation ✅
---
### 🎯 Pattern 3: Iterative Refinement Loops
**When to Use:** Complex features, performance optimization, UX improvements
**Loop Structure:**
#### Iteration 1: MVP Implementation
**Cursor:**
```
Implement basic search feature:
- Full-text search
- Display results
- Basic relevance ranking
```
**Output:** Basic search working ✅
#### Iteration 2: Performance Review
**Claude:**
```
Review search performance:
[paste implementation]
Optimize for:
- 10k+ records
- <100ms response time
- Efficient queries
```
**Claude Suggestions:**
- Add database index on search column
- Implement query debouncing
- Add pagination
- Use database FTS instead of LIKE queries
#### Iteration 3: Apply Optimizations
**Cursor:**
```
Apply performance optimizations:
[paste Claude's suggestions]
Implement:
- PostgreSQL full-text search indexes
- Debouncing (300ms)
- Pagination (50 results/page)
```
**Output:** Fast search ✅
#### Iteration 4: UX Review
**ChatGPT:**
```
Review search UX:
Current features:
- Search input with debouncing
- Paginated results
- Relevance ranking
Suggest improvements for:
- User feedback (loading states)
- Error handling
- Empty states
- Keyboard navigation
```
**ChatGPT Suggestions:**
- Loading spinner
- "No results" message
- Highlight search terms in results
- Arrow key navigation
#### Iteration 5: Final Polish
**Cursor:**
```
Add UX improvements:
[paste ChatGPT's suggestions]
```
**Output:** Polished, fast, user-friendly search ✅
**Result:** Iterative refinement from MVP → production-ready feature
---
### 🎯 Pattern 4: Parallel Processing (Simultaneous Tasks)
**When to Use:** Independent features, batch operations, research tasks
**Scenario:** Implement 3 independent features simultaneously
**Parallel Track 1: Authentication**
```
Cursor Session 1: Implement JWT auth
Claude Session 1: Review auth security
Timeline: Hours 0-2
```
**Parallel Track 2: Payment Processing**
```
Cursor Session 2: Integrate Stripe
Perplexity Session 1: Research Stripe best practices
Timeline: Hours 0-2 (simultaneous)
```
**Parallel Track 3: Documentation**
```
ChatGPT Session 1: Generate API docs
Timeline: Hours 0-2 (simultaneous)
```
**Convergence Point (Hour 3):**
```
Claude: Review all 3 features together
- Integration points
- Consistency check
- Combined testing strategy
```
**Result:** 3 features delivered in ~3 hours (2h of parallel tracks + 1h convergence review) instead of ~7 hours sequential ✅
---
### 🎯 Pattern 5: Fallback Strategies (When Tools Fail)
**When to Use:** AI hallucinations, incorrect outputs, context limitations
**Multi-Level Fallback Chain:**
#### Level 1: Primary Tool (Cursor)
```
Task: Implement WebSocket real-time chat
Tool: Cursor Composer
```
**If Cursor generates broken code or hallucinates libraries:**
#### Level 2: Alternative Tool (Claude)
```
Claude: Cursor generated this WebSocket code but it's not working:
[paste Cursor's code]
Error:
[paste error message]
Please debug and provide correct implementation.
```
**If Claude can't solve it:**
#### Level 3: Research + Manual (Perplexity + Docs)
```
Perplexity: How to implement WebSocket chat in Next.js 14?
- Include working code examples
- Common errors and solutions
- Best libraries to use
```
**Then manually implement based on research**
#### Level 4: Community + Docs
```
- Check official Next.js docs
- Search GitHub issues
- Stack Overflow
- Ask in Discord/Slack communities
```
**Result:** Always have a path forward when AI fails ✅
---
## 9. 30-Day Implementation Roadmap
### Week 1: Foundation Setup
**Day 1-2: Tool Setup**
- ✅ Install Cursor Pro ($20/mo)
- ✅ Set up Claude Free account
- ✅ Set up ChatGPT Free account
- ✅ Set up Perplexity Free account
- ✅ Install Alfred/Raycast
- ✅ Install Espanso for text expansion
**Day 3-4: Workflow Scripts**
- ✅ Create `export-context.sh`
- ✅ Create `ai-session.sh`
- ✅ Create `ai-prompt.sh`
- ✅ Set up Alfred workflows
- ✅ Configure Espanso snippets
**Day 5-7: Template Library**
- ✅ Create feature planning template
- ✅ Create code review template
- ✅ Create research template
- ✅ Create debugging template
- ✅ Create `CODING_STANDARDS.md`
**Metrics:** Track time spent on setup tasks (baseline)
---
### Week 2: Basic Workflow Practice
**Day 8-10: Single-Tool Mastery**
- Practice Cursor Composer (implement 2-3 small features)
- Practice Claude planning (design 2-3 features)
- Practice ChatGPT docs (generate docs for existing features)
**Day 11-14: Two-Tool Workflows**
- Claude (plan) → Cursor (implement)
- Perplexity (research) → Cursor (implement)
- Cursor (code) → Claude (review)
**Metrics:**
- Track time per task (compare to historical baseline)
- Note friction points
- Document what works well
---
### Week 3: Multi-Tool Integration
**Day 15-17: Three-Tool Workflows**
- Perplexity (research) → Claude (design) → Cursor (implement)
- Cursor (implement) → Claude (review) → ChatGPT (document)
**Day 18-21: Full Workflow Practice**
- Implement complete feature using all tools
- Track handoffs between tools
- Measure end-to-end time savings
**Metrics:**
- Time savings vs baseline
- Quality improvements (bugs, test coverage)
- Subjective satisfaction rating
---
### Week 4: Optimization & Automation
**Day 22-24: Workflow Refinement**
- Identify bottlenecks
- Optimize slow transitions
- Add more automation scripts
- Refine templates based on usage
**Day 25-28: Advanced Patterns**
- Try ensemble method on critical decision
- Practice verification workflow on security code
- Use iterative refinement on complex feature
**Day 29-30: Measurement & Iteration**
- Calculate Week 4 metrics
- Compare to Week 1 baseline
- Document learnings
- Plan next month improvements
**Expected Results:**
- 30-50% time savings by Week 4
- Higher code quality (fewer bugs)
- Better documentation
- Higher confidence in outputs
---
## 10. Quick Reference Cheat Sheet
### 🎯 Decision Tree: Which Tool When?
```
Task Type → Recommended Tool
Planning/Architecture → Claude (reasoning strength)
Research & Best Practices → Perplexity (cited sources)
Code Implementation → Cursor (multi-file context)
Quick Code Snippets → Copilot (speed)
Security Review → Claude (thoroughness)
Documentation → ChatGPT (speed)
Debugging → Cursor Chat (project context)
Refactoring → Cursor Composer (multi-file)
Test Generation → Cursor + Claude (coverage + edge cases)
Performance Optimization → Claude (analysis) + Cursor (implementation)
```
### ⚡ Quick Commands
```bash
# Export project context
./export-context.sh ~/projects/my-app
# Start new AI session
./ai-session.sh start cursor
# Copy template prompt
./ai-prompt.sh plan
# Switch tools
./ai-session.sh switch claude
```
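The `export-context.sh` script is assumed rather than shown anywhere in this section. One plausible minimal sketch; the output path and file selection are illustrative choices, not a prescribed format:

```bash
#!/usr/bin/env sh
# Hypothetical sketch of export-context.sh: dump a compact project summary
# (file list + package.json) into one markdown file for pasting into a chat tool.
project="${1:-.}"
out="$project/ai-context.md"
{
  echo "# Project Context: $(basename "$project")"
  echo '## Files'
  find "$project" -type f -not -path '*/node_modules/*' -not -path '*/.git/*' | head -50
  echo '## package.json'
  cat "$project/package.json" 2>/dev/null || true
} > "$out"
echo "Context exported to $out"
```

Usage: `./export-context.sh ~/projects/my-app`, then paste `ai-context.md` into the first prompt of a session.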
### 📋 Prompt Templates
**Feature Planning (Claude):**
```
Design [FEATURE] for [PROJECT]:
Requirements:
- [REQ 1]
- [REQ 2]
Tech Stack: [STACK]
Provide:
- Architecture overview
- File structure
- Implementation code
- Dependencies
- Testing strategy
```
**Code Review (Claude):**
```
Review [FILES]:
[PASTE CODE]
Check:
- Security vulnerabilities
- Performance issues
- Best practices
- Test coverage gaps
```
**Research (Perplexity):**
```
Best practices for [TOPIC] in 2024:
- Current approaches
- Common pitfalls
- Recommended tools
- Expert sources
```
---
## Conclusion & Next Steps
### ✅ Immediate Actions (This Week)
1. **Set Up Tool Stack:**
- Install Cursor Pro ($20/mo)
- Create Claude Free account
- Set up ChatGPT Free + Perplexity Free
2. **Create Workflow Scripts:**
- Download `export-context.sh`, `ai-session.sh`, `ai-prompt.sh`
- Set up Alfred/Raycast workflows
- Configure Espanso text snippets
3. **First Feature With New Workflow:**
- Pick small feature (2-4 hour task)
- Follow Pattern 1 (Feature Implementation)
- Track time and results
### 📊 Success Metrics to Track
**Weekly Dashboard:**
```markdown
| Metric | Baseline | Week 1 | Week 4 | Target |
|--------|----------|--------|--------|--------|
| Feature Dev Time | 8h | 7h | 5h | 5h |
| Bug Fix Time | 2h | 1.5h | 45min | 45min |
| Code Review Time | 1h | 45min | 15min | 15min |
| Test Coverage | 60% | 70% | 85% | 85% |
| Production Bugs | 5/mo | 4/mo | 2/mo | 2/mo |
| Satisfaction | 6/10 | 7/10 | 9/10 | 9/10 |
```
### 🎯 30-Day Goals
- **Week 1:** Tool setup + basic workflows
- **Week 2:** Two-tool workflow mastery
- **Week 3:** Multi-tool integration practice
- **Week 4:** 40%+ time savings, high-quality outputs
### 🚀 Long-Term Vision
**3 Months:**
- Fully automated workflow
- 50%+ productivity gain
- Custom integrations and scripts
- Team workflow templates
**6 Months:**
- Advanced automation (CI/CD integration)
- Ensemble methods for critical code
- Comprehensive prompt library
- Measurable ROI documentation
---
## 🎁 Bonus: Tool Comparison Matrix
| Feature | Cursor | Claude | ChatGPT | Perplexity | Copilot |
|---------|--------|--------|---------|------------|---------|
| **Code Generation** | ⭐⭐⭐⭐⭐ | ⭐⭐⭐ | ⭐⭐⭐ | ⭐ | ⭐⭐⭐⭐⭐ |
| **Multi-File Editing** | ⭐⭐⭐⭐⭐ | ⭐ | ⭐ | ⭐ | ⭐⭐ |
| **Architecture Design** | ⭐⭐ | ⭐⭐⭐⭐⭐ | ⭐⭐⭐⭐ | ⭐⭐ | ⭐ |
| **Security Review** | ⭐⭐ | ⭐⭐⭐⭐⭐ | ⭐⭐⭐ | ⭐⭐ | ⭐⭐ |
| **Research** | ⭐ | ⭐⭐⭐ | ⭐⭐⭐ | ⭐⭐⭐⭐⭐ | ⭐ |
| **Documentation** | ⭐⭐ | ⭐⭐⭐⭐ | ⭐⭐⭐⭐⭐ | ⭐⭐ | ⭐⭐⭐ |
| **Speed** | ⭐⭐⭐⭐ | ⭐⭐⭐ | ⭐⭐⭐⭐ | ⭐⭐⭐⭐⭐ | ⭐⭐⭐⭐⭐ |
| **Context Window** | ⭐⭐⭐⭐ | ⭐⭐⭐⭐⭐ | ⭐⭐⭐ | ⭐⭐⭐ | ⭐⭐⭐ |
| **Cost** | $20/mo | $20/mo | Free | Free | $10/mo |
**Best Combinations:**
1. **Budget: $30/mo**
- Cursor Pro ($20) + Claude Free + Perplexity Free
- Best for: Full-stack development
2. **Budget: $40/mo**
- Cursor Pro ($20) + Claude Pro ($20)
- Best for: Complex architecture + security-critical apps
3. **Budget: $50/mo**
- Cursor Pro ($20) + Claude Pro ($20) + ChatGPT Plus ($20)
- Best for: Maximum productivity across all tasks
**Your Recommended Stack: Cursor Pro + Claude Free ($20/mo total)**
- Covers 90% of your needs
- Leaves $10 in budget for occasional ChatGPT Plus month
- Claude Free's 100 msgs/day is enough for 4h/day of work
---
**Good luck with your new AI-powered workflow! 🚀 Remember: The goal isn't to use the most expensive tools, but to use the RIGHT tool for each task.** 💡