Productivity | Expert | Claude

Decision journal and critical thinking framework

Create a decision journal system for tracking decisions and improving critical thinking.

Prompt text

Length: Long
Design a decision journal and critical thinking framework for [ROLE/DECISION TYPE]. Frequency: [DAILY/WEEKLY/MAJOR DECISIONS ONLY]. Structure:

1) **Decision Journal Basics** - purpose (improve decision quality over time, reduce cognitive biases, learn from outcomes), what to track (major decisions only vs all decisions; threshold: > 2h of thinking, > $500 impact, or affects others), format (Notion database, Obsidian notes, spreadsheet), frequency (log a decision within 24h of making it, review monthly).
2) **Decision Entry Template** - decision summary (one sentence: "Should I hire candidate X for a senior role?"), date decided, category (career, product, hiring, investment, personal), context (what led to this decision, time pressure, information available, constraints, stakeholders), options considered (minimum 3 alternatives: A) hire candidate X, B) keep looking, C) promote internally; list pros/cons for each), decision criteria (what factors matter: technical skills 40%, culture fit 30%, experience 20%, cost 10%), final decision (which option was chosen, reasoning why, confidence level 1-10, expected outcome).
3) **Pre-Decision Analysis** - problem framing (what am I really trying to solve? am I solving the right problem?), assumptions (what am I assuming is true? what if I'm wrong?), information gathering (what data do I have? what's missing? where can I get it?), bias check (am I anchored to the first option? confirmation bias? sunk cost fallacy? availability bias?), second-order thinking (and then what? consequences of consequences? long-term effects?).
4) **Decision-Making Frameworks** - Expected Value calculation (Option A: 70% success × $100k gain - 30% failure × $20k loss = $64k EV), Regret Minimization (will I regret NOT doing this in 10 years?), Reversibility Check (Type 1 irreversible vs Type 2 reversible decisions → if reversible, decide fast), 10-10-10 Rule (how will I feel 10 minutes, 10 months, and 10 years from now?), Eisenhower Matrix (urgent vs important), Pre-Mortem (imagine the decision failed; what went wrong?).
5) **Bias Detection** - common biases checklist (confirmation bias: seeking evidence that confirms your preference; sunk cost fallacy: continuing because you have already invested; anchoring: over-relying on the first piece of information; availability bias: relying on recent or memorable examples; overconfidence: thinking you know more than you do; groupthink: conforming to the team's opinion without critical evaluation), debiasing tactics (consider the opposite, play devil's advocate, seek disconfirming evidence, consult an outside view, wait 24h before deciding).
6) **Collaborative Decisions** - when to involve others (high impact, affects the team, lack of expertise, complex problem), decision-making roles (driver who makes the call, approver who holds veto power, contributors who give input, informed who need to know), facilitation (structure the discussion, capture options, force-rank, vote if needed), disagreement protocol (strong opinions loosely held, disagree and commit, document dissent).
7) **Post-Decision Review** - when to review (30 days, 90 days, or 6 months after the decision, depending on its time horizon), what to assess (did the outcome match expectations? was the decision quality good regardless of outcome? what was luck vs skill? what would I do differently?), update the entry (add a review section, note the outcome, record lessons, calculate prediction accuracy), learning extraction (patterns across decisions, repeated mistakes, biases you fall for, what improves your decisions).
8) **Decision Quality vs Outcome** - separate process from result (good decision + bad luck = bad outcome, still a good decision; bad decision + good luck = good outcome, still a bad decision), focus on improving the process (did I gather the right information? consider enough options? think clearly? reduce bias?), probabilistic thinking (a decision with a 70% chance of success will fail 30% of the time = normal; judge over time, not by a single instance).
9) **Metrics and Tracking** - decision count (how many major decisions made per month), decision speed (fast for reversible, slow for irreversible), confidence calibration (when you say you are 80% confident, is the outcome right 80% of the time? track the actual percentage), decision accuracy by category (are you better at product decisions than hiring?).
10) **Review Routine** - monthly review (read all decisions from the month, identify patterns, update outcomes for older decisions, extract lessons), quarterly analysis (decision accuracy by category, bias patterns, which decision types you excel at vs struggle with, adjust the framework), annual summary (year in review of major decisions, biggest lessons learned, decision-quality improvement over time).

Including: template Notion database structure, example decision entries with a full pre-mortem and post-review, bias checklist, framework comparison (when to use which), real examples, troubleshooting tips.
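The Expected Value framework listed above is simple probability-weighted arithmetic, and putting it in code forces the assumptions (probabilities, payoffs, cost) into the open where they can be challenged. A minimal sketch in Python; the function is illustrative and the dollar figures are the example numbers from the prompt, not real data:

```python
def expected_value(outcomes, cost=0.0):
    """Expected value of a decision: probability-weighted payoffs minus upfront cost.

    outcomes: list of (probability, payoff) pairs; probabilities must sum to 1.
    """
    assert abs(sum(p for p, _ in outcomes) - 1.0) < 1e-9, "probabilities must sum to 1"
    return sum(p * payoff for p, payoff in outcomes) - cost

# Example from the prompt: 70% chance of a $100k gain, 30% chance of a $20k loss.
option_a = expected_value([(0.70, 100_000), (0.30, -20_000)])
print(option_a)  # 64000.0
```

A negative EV does not automatically kill an option (the calculation ignores strategic or learning value), but it makes the inputs explicit so they can be debated one by one.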

Example usage

Input:

Role: Product manager at SaaS startup. Decision type: Product strategy, feature prioritization, hiring. Goal: Make better decisions, reduce bias, learn faster.

Output:

[17,000+ word decision journal system with:

- Notion database structure (fields: Decision Summary text, Date Decided date, Category select product/hiring/strategy, Context long text, Options Considered multi-line with bullets, Decision Criteria weighted list, Final Decision text, Confidence 1-10, Expected Outcome text, Review Date, Actual Outcome, Lessons Learned, Accuracy % calculated).
- Real decision entry example, "Should we build an AI feature or improve the core product?", logged Mar 1, 2025. Context: users requesting AI capabilities, competitors launching AI, but the core product has bugs and churn is 5%; 2 weeks to decide for the Q2 roadmap; pressure from the CEO to add AI; team split 50/50. Options: A) build the AI feature (pros: competitive parity, marketing buzz, potential new revenue; cons: 3 months of dev time, team distraction from the core, may not reduce churn, risky unproven tech); B) fix the core product (pros: reduces churn, improves satisfaction, less risky; cons: not exciting, competitors pull ahead, harder to sell); C) do both with a split team (pros: covers both; cons: split focus, longer timelines, neither done well). Decision criteria: impact on churn 40%, competitive position 25%, revenue potential 20%, team capability 15%. Pre-decision analysis: problem reframing (am I solving "how to compete" or "how to retain customers"? the real problem is churn, not AI), assumptions check (assuming AI will attract users, but there is no evidence; assuming core bugs cause churn, which is true based on support tickets), bias detection (anchoring on AI hype, confirmation bias from reading only positive AI articles, FOMO about competitors). Second-order thinking: if we build AI, it needs ongoing maintenance forever, requires an ML hire, and we may promise features we can't deliver → reputation risk; if we fix the core, churn drops → revenue increases → we can afford AI later from a stronger foundation. Decision frameworks: pre-mortem for the AI option (failed because we rushed an unpolished AI feature, users still churned from core bugs, the team burned out); Expected Value (AI option: 30% success × $200k revenue + 70% failure × $0 - $150k cost = -$90k EV; core fix: 80% success × $300k saved from churn dropping 5%→3% + 20% no change × $0 - $50k cost = +$190k EV); regret minimization (in 10 years we won't regret being 6 months behind on AI, we will regret losing customers). Final decision: Option B, fix the core product. Confidence: 8/10. Expected outcome: churn drops from 5% to 3% in 90 days, NPS improves from 35 to 45, revisit AI in Q3 from a stronger foundation.
- 90-day review added June 1, 2025. Actual outcome: churn dropped to 3.2% (close to prediction ✓), NPS improved to 43 (close ✓), $280k saved from reduced churn, team morale improved from shipping fixes, the competitor's AI feature had bugs and poor reviews (dodged a bullet). Lessons: trust data over hype, fix the foundation before adding features, the pre-mortem was spot-on about AI risks, the Expected Value framework worked well. Decision quality: 9/10 (good process, good outcome). Accuracy: 95% of the outcome matched expectations.
- Bias patterns identified across 15 decisions in 3 months: bias toward shiny new features confirmed in 60% of product decisions; overconfidence in estimates ("2 weeks" actually took 3-4 weeks in 70% of cases); anchoring on competitor moves rather than customer needs in 40% of strategy decisions.
- Monthly review routine: first Friday of the month, review all decisions from the previous month; spent 1h analyzing 5 major decisions; noted that churn-focused decisions had 85% accuracy vs only 60% for new-feature decisions = insight to prioritize retention over growth features.
- Decision quality improvements tracked: Month 1: 60% accuracy, Month 2: 70%, Month 3: 75% = the learning is working.
- Templates and frameworks: Notion database with rollup formulas calculating average confidence vs actual accuracy (calibration at 72% = okay, needs improvement), bias checklist used before every decision, Expected Value calculator built in Notion.]
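For journals kept outside Notion, the entry template maps naturally onto a plain data structure. A sketch of the worked example above as a Python dataclass; the field names mirror the fields listed in the sample output, but the class itself is an illustrative assumption, not part of the prompt:

```python
from dataclasses import dataclass

@dataclass
class DecisionEntry:
    summary: str
    date_decided: str
    category: str            # e.g. product / hiring / strategy
    context: str
    options: list            # alternatives considered, with pros/cons
    criteria: dict           # factor -> weight; weights should sum to 1.0
    final_decision: str
    confidence: int          # 1-10
    expected_outcome: str
    actual_outcome: str = "" # filled in at the post-decision review
    lessons: str = ""

entry = DecisionEntry(
    summary="Should we build an AI feature or improve the core product?",
    date_decided="2025-03-01",
    category="product",
    context="Users requesting AI, competitors launching AI, churn at 5%",
    options=["A) build AI feature", "B) fix core product", "C) split the team"],
    criteria={"churn impact": 0.40, "competitive position": 0.25,
              "revenue potential": 0.20, "team capability": 0.15},
    final_decision="B) fix core product",
    confidence=8,
    expected_outcome="Churn 5% -> 3% in 90 days, NPS 35 -> 45",
)
assert abs(sum(entry.criteria.values()) - 1.0) < 1e-9  # weights must total 100%
```

Leaving `actual_outcome` and `lessons` empty at logging time mirrors the journal workflow: the entry is created within 24h of the decision and completed at the 30/90-day review.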
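The calibration figure in the sample output (average confidence vs actual accuracy) can be computed directly from the journal. A minimal sketch, assuming each logged decision carries a 1-10 confidence score and a resolved true/false outcome; the field names are hypothetical, not the Notion schema:

```python
from collections import defaultdict

def calibration(entries):
    """Group decisions by stated confidence and compare to the actual hit rate.

    entries: iterable of dicts with 'confidence' (1-10) and 'correct' (bool).
    Returns {confidence: (stated probability, actual hit rate, sample count)}.
    """
    buckets = defaultdict(list)
    for e in entries:
        buckets[e["confidence"]].append(e["correct"])
    return {
        conf: (conf / 10, sum(hits) / len(hits), len(hits))
        for conf, hits in sorted(buckets.items())
    }

# Illustrative log: four decisions rated 8/10, one rated 5/10.
log = [
    {"confidence": 8, "correct": True},
    {"confidence": 8, "correct": True},
    {"confidence": 8, "correct": True},
    {"confidence": 8, "correct": False},
    {"confidence": 5, "correct": True},
]
print(calibration(log))  # {5: (0.5, 1.0, 1), 8: (0.8, 0.75, 4)}
```

When the stated probability consistently exceeds the hit rate, that is the overconfidence pattern the journal is meant to surface; small sample counts per bucket mean the numbers only become meaningful over months of logging.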
