Claude Code Insights

41,173 messages across 7,354 sessions | 2025-12-23 to 2026-02-05

At a Glance
What's working: You've built an impressive portfolio-management workflow where Claude audits and fixes issues across 14+ interconnected websites, combining browser inspection with code fixes and immediate deployment. Your overnight runner pattern — delegating autonomous work to Claude and reviewing results the next day — is a genuinely productive way to multiply your output across a large number of projects. Impressive Things You Did →
What's hindering you: On Claude's side, it repeatedly reaches for naive approaches (like raw DOM manipulation on ProseMirror editors) instead of targeting the correct framework APIs, leading to long trial-and-error cycles that eat up your sessions. On your side, environment instability — API key issues, Windows path resolution failures, and browser extension disconnections — is silently derailing sessions before they gain momentum, and Claude often doesn't have enough upfront context about your target platforms to pick the right approach on the first try. Where Things Go Wrong →
Quick wins to try: Try creating Custom Skills (slash commands) that encode your most common workflows — like a `/site-audit` skill that includes your audit checklist and editor-specific constraints so Claude doesn't waste time on wrong approaches. Also consider using Hooks to auto-validate your environment (auth tokens, tool paths, browser extension connectivity) at session start, so you stop losing sessions to setup failures. Features to Try →
Ambitious workflows: As models get more capable, prepare to run parallel agents that each audit and fix a different site simultaneously, compressing your multi-day audit cycles into hours with findings auto-merged into PROJECTS.md. Your overnight runner pattern is also ripe for evolution — imagine agents that checkpoint progress, recover from failures mid-task, and leave you a structured morning briefing with diffs and test results ready for review. On the Horizon →
41,173
Messages
+5,905,768/-431,936
Lines
33,628
Files
42
Days
980.3
Msgs/Day

What You Work On

Website Auditing & Broken Site Remediation ~2 sessions
Comprehensive auditing of 14+ Teneo auth-linked sites and diagnosing/fixing broken websites. Claude used browser tools (mcp__claude-in-chrome__computer) extensively to identify issues like overly restrictive Content-Security-Policy headers blocking inline styles and fonts, then applied fixes and documented findings in PROJECTS.md.
Browser Automation & Content Injection Pipeline ~2 sessions
Building a markdown-to-Skool HTML pipeline and automating browser-based content injection into Skool's ProseMirror/TipTap editor. While the build pipeline was completed successfully in TypeScript, the live browser injection was plagued by editor save issues, extension disconnections, and ProseMirror not recognizing programmatic content changes — a major friction area.
Build Pipeline & Tooling Infrastructure ~1 session
Developing and maintaining build pipeline systems, likely for content transformation and deployment workflows. Claude used heavy Bash and multi-file editing to construct these pipelines, encountering path resolution issues with tsx on Windows that required debugging.
Feature Implementation & Delegation System ~1 session
Implementing new features including a delegation system, with Claude working through remaining unfinished tasks from prior sessions. This involved substantial multi-file TypeScript and Python changes, leveraging Claude's ability to coordinate edits across codebases.
Deployment, CI/CD & Overnight Runners ~2 sessions
Managing deployment workflows, commit-and-push operations, and overnight autonomous runners across multiple projects. Claude helped identify which projects needed overnight runs, executed them, checked results, fixed hallucination bugs autonomously, and handled git operations for deployment.
What You Wanted
Website Audit
108
Debugging Broken Sites
108
Build Pipeline System
99
Browser Automation Injection
99
Feature Implementation
26
Commit And Push
22
Top Tools Used
Bash
121539
Read
57886
Mcp Claude-In-Chrome Computer
55937
Edit
42266
Write
22569
Glob
16498
Languages
Markdown
52526
TypeScript
28995
Python
17954
YAML
6541
JSON
5437
JavaScript
1174
Session Types
Multi Task
209
Single Task
60

How You Use Claude Code

You operate Claude Code at an extraordinary scale, with over 7,300 sessions and 53,000+ hours of compute time across just six weeks — this is clearly a heavily automated, multi-agent operation rather than a single human sitting at a keyboard. Your dominant pattern is delegating large autonomous workloads to Claude: website audits across 14 sites, overnight runners, build pipelines, and browser automation injection tasks. You tend to kick off ambitious, multi-step objectives and let Claude run with significant autonomy, intervening primarily when things go wrong or when you need to redirect approach. The massive 121,000+ Bash tool invocations and 55,000+ browser computer-use calls confirm you're pushing Claude into heavy execution mode rather than using it as a conversational advisor.

Your interaction style reveals a high tolerance for iteration but low tolerance for wrong approaches — the 405 instances of "wrong_approach" friction and 108 user-rejected actions show you actively course-correct when Claude heads down the wrong path. You don't over-specify upfront; instead, you launch broad goals like "audit all 14 auth-linked sites" or "build a markdown-to-Skool pipeline with browser injection" and expect Claude to figure out the details. When it works, it works beautifully — 233 successful multi-file changes and a pattern of commit-and-push requests show you trust Claude to make sweeping codebase modifications. But when browser automation hits friction (ProseMirror save issues, extension disconnections, wrong tab targeting), you'll let Claude struggle through multiple attempts before stepping in. Your primary languages are TypeScript and Markdown, with the Markdown dominance (52,000+ lines) suggesting heavy documentation generation, likely tied to your website audit and content pipeline workflows.

Notably, your sessions tend to be continuations of prior work — you pick up where previous sessions left off, ask Claude to finish remaining tasks, or check results from overnight runs. This chain-of-sessions approach, combined with heavy TodoWrite usage (15,000+ invocations), indicates you're managing Claude like a persistent worker with a task backlog rather than engaging in one-off conversations. Your 4,391 commits across this period — roughly 100 per day — underscore that you're using Claude as a full-time autonomous development and operations engine.

Key pattern: You run Claude Code as a massively parallelized autonomous agent fleet, delegating broad multi-site audits and complex build pipelines while actively rejecting wrong approaches but otherwise letting Claude execute with minimal hand-holding.
User Response Time Distribution
2-10s
957
10-30s
1926
30s-1m
3293
1-2m
3405
2-5m
4761
5-15m
5497
>15m
4691
Median: 212.5s • Average: 532.5s
Multi-Clauding (Parallel Sessions)
1783
Overlap Events
451
Sessions Involved
11%
Of Messages

You run multiple Claude Code sessions simultaneously. Multi-clauding is detected when sessions overlap in time, suggesting parallel workflows.

User Messages by Time of Day
Morning (6-12)
8697
Afternoon (12-18)
14470
Evening (18-24)
15704
Night (0-6)
2302
Tool Errors Encountered
Command Failed
13977
Other
5801
File Not Found
826
File Too Large
516
File Changed
452
Edit Failed
250

Impressive Things You Did

You're running Claude Code at an extraordinary scale — over 7,000 sessions and 4,000+ commits — primarily orchestrating website audits, browser automation, and build pipelines across a large portfolio of TypeScript and Python projects.

Portfolio-Wide Site Auditing
You've built a systematic workflow for auditing 14+ interconnected websites, using Claude to diagnose issues like broken CSPs, identify which sites need attention, and document findings in PROJECTS.md. This portfolio-management approach lets you maintain a large number of properties efficiently rather than treating each site in isolation.
Browser-Integrated Debugging Pipeline
You're combining Claude's code editing with the chrome computer-use MCP tool at massive scale — nearly 56,000 browser automation invocations — to diagnose live site issues directly in the browser. Your workflow of identifying a broken site, having Claude inspect it via browser tools, fixing the underlying code, and then committing and deploying is a remarkably tight feedback loop.
Autonomous Overnight Task Runners
You've developed a pattern of delegating long-running tasks to Claude as overnight runners, then reviewing results the next day. This approach — asking which project most needs attention, kicking off autonomous work, and reviewing recommendations — shows you're effectively multiplying your productivity by letting Claude work asynchronously on your behalf.
What Helped Most (Claude's Capabilities)
Multi-file Changes
233
Good Debugging
29
Good Explanations
2
Outcomes
Not Achieved
5
Partially Achieved
99
Mostly Achieved
110
Fully Achieved
29
Unclear
26

Where Things Go Wrong

Your workflow is heavily impacted by brittle browser automation, code that doesn't account for real-world editor frameworks, and environment instability that derails sessions entirely.

Browser Automation Fragility
Your heavy reliance on the Chrome computer tool (55,937 calls) leads to frequent breakdowns when targeting tabs, handling permissions, or maintaining extension connections. You could reduce friction by pre-configuring stable browser contexts, using direct API calls where possible instead of UI automation, and building retry/fallback logic into your automation scripts.
  • Browser automation kept targeting the wrong tabs, forcing a fallback to WebFetch and wasting iteration time — and you denied browser tool permissions at one point, suggesting trust in the tool had eroded
  • Skool content injection failed repeatedly due to browser extension disconnections and misclicks on the save button, leaving the entire injection workflow incomplete despite the pipeline itself being built successfully
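The retry/fallback idea can be sketched as a small shell helper. This is a sketch, not code from your sessions, and the health-check command in the usage note is a placeholder:

```shell
#!/bin/bash
# Minimal retry helper for flaky automation steps.
# Usage: retry <max_attempts> <delay_seconds> <command...>
retry() {
  local max=$1 delay=$2; shift 2
  local attempt=1
  until "$@"; do
    if [ "$attempt" -ge "$max" ]; then
      echo "retry: giving up after $attempt attempts: $*" >&2
      return 1
    fi
    echo "retry: attempt $attempt failed; retrying in ${delay}s" >&2
    sleep "$delay"
    attempt=$((attempt + 1))
  done
}
```

For example, `retry 3 5 curl -fsS https://example.com/healthz` (a placeholder URL) re-runs the check up to three times with five seconds between attempts, so a single transient browser or network failure no longer kills the whole run.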
Incorrect Approach to Rich Text Editors
When injecting content into web-based editors like ProseMirror and TipTap, Claude repeatedly tried naive DOM manipulation (innerHTML, setContent) that didn't trigger the editors' internal save mechanisms. You should prompt Claude with context about the specific editor framework in use so it can target the correct API or event dispatch from the start, rather than cycling through multiple failed approaches by trial and error.
  • Setting innerHTML on a ProseMirror editor didn't trigger its transaction-based save system, burning cycles on an approach that was never going to work with that framework
  • Switching to TipTap's setContent still didn't auto-save, and then attempting to click the save button programmatically was error-prone — a chain of three wrong approaches before the session ran out of steam
Environment and Session Instability
API key issues, path resolution failures on Windows, and truncated sessions are silently killing your productivity. You could mitigate this by ensuring environment prerequisites (auth tokens, tool paths, platform-specific configs) are validated at session start, and by using CLAUDE.md instructions to front-load environment checks before diving into task work.
  • An API key issue forced a re-login that effectively killed an entire session — Claude only managed to begin searching files before the session ended, achieving nothing on the /fathers page goal
  • The tsx build pipeline hit Windows-specific path resolution issues that had to be debugged mid-flow, fragmenting focus away from the actual content pipeline you were trying to build
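A lightweight way to front-load those environment checks is a preflight script run at session start (for example, wired into a session-start hook). This is a sketch; the two checks shown are safe stand-ins for your real prerequisites (auth tokens, tsx on PATH, extension connectivity):

```shell
#!/bin/bash
# Session preflight: read "label|command" pairs from stdin, run each
# command silently, and return nonzero if any check fails.
preflight() {
  local fail=0 label cmd
  while IFS='|' read -r label cmd; do
    if eval "$cmd" > /dev/null 2>&1; then
      echo "ok:   $label"
    else
      echo "FAIL: $label"
      fail=1
    fi
  done
  return "$fail"
}

# Example checks; replace with API-key, tsx-path, and browser checks.
preflight <<'EOF'
shell available|command -v sh
HOME set|test -n "$HOME"
EOF
```

Failing fast here turns a silently derailed session into a one-line fix before any task work begins.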
Primary Friction Types
Wrong Approach
405
Buggy Code
198
User Rejected Action
108
Tool Infrastructure Issue
5
Inferred Satisfaction (model-estimated)
Likely Satisfied
339

Existing CC Features to Try

Suggested CLAUDE.md Additions

Just copy this into Claude Code to add it to your CLAUDE.md.

  • Multiple sessions show friction from targeting wrong tabs, extension disconnections, and repeated failed browser interactions that wasted significant time.
  • Multiple sessions end with the user requesting commit/push/deploy as a follow-up — this is a consistent workflow finale that should be proactive.
  • TypeScript and Python dominate the codebase across all sessions; establishing this default prevents Claude from choosing wrong languages or skipping type safety.
  • A full session was spent debugging why injected content didn't persist in Skool's editor because innerHTML doesn't trigger framework-level save handlers.
  • The multi-site audit sessions were most successful when Claude documented findings in PROJECTS.md first, providing a structured approach to the 14-site audit workflow.
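Based on those rationales, a sketch of what the CLAUDE.md additions could look like (the wording here is illustrative, not the tool's generated output):

```markdown
## Browser automation
- Verify the active tab URL before every browser interaction; re-verify after navigation.
- If a click fails twice or the extension disconnects, stop and report rather than retry blindly.

## Rich text editors (Skool)
- The Skool editor is ProseMirror/TipTap: never inject via innerHTML, which bypasses
  framework-level save handlers. Use the editor's API and confirm persistence after save.

## Workflow defaults
- Primary languages are TypeScript and Python; keep strict typing on.
- For multi-site audits, document findings in PROJECTS.md before applying fixes.
- At the end of a session, proactively offer to commit, push, and deploy.
```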


Custom Skills
Reusable prompt workflows triggered by a single /command.
Why for you: You repeatedly do commit-push-deploy cycles and multi-site audits. A /ship skill and /audit skill would eliminate the repetitive end-of-session requests and standardize your 14-site audit workflow.
```shell
mkdir -p .claude/skills/ship && cat > .claude/skills/ship/SKILL.md << 'EOF'
# Ship Skill
1. Stage all changes: `git add -A`
2. Generate a descriptive commit message from the diff
3. Commit and push: `git commit -m "<message>" && git push`
4. Run any deploy command found in package.json scripts or Makefile
5. Verify deployment succeeded
EOF
```
Hooks
Auto-run shell commands at lifecycle events like before/after edits.
Why for you: 198 instances of buggy code friction suggest code isn't being validated before you see it. Auto-running type checks (tsc --noEmit, mypy) after edits would catch errors before they compound across multi-file changes — your most successful pattern (233 instances).
Add to `.claude/settings.json`:

```json
{
  "hooks": {
    "postEdit": {
      "*.ts": "npx tsc --noEmit --pretty 2>&1 | head -20",
      "*.tsx": "npx tsc --noEmit --pretty 2>&1 | head -20",
      "*.py": "python -m py_compile $FILE"
    }
  }
}
```
Headless Mode
Run Claude non-interactively from scripts and CI/CD.
Why for you: You already asked Claude which project needs an overnight runner — you can automate nightly audits of your 14 Teneo sites and have results waiting for you each morning, eliminating manual audit kickoff sessions.
```shell
#!/bin/bash
# overnight-audit.sh - run via cron at 2am
for site in site1 site2 site3; do
  claude -p "Audit $site: check if it loads, CSP is valid, auth works. Write results to audit-results/$site-$(date +%Y%m%d).md" \
    --allowedTools "Bash,Read,Write,WebFetch" \
    >> overnight-audit.log 2>&1
done
```

New Ways to Use Claude Code


Wrong-approach friction is your #1 bottleneck
405 wrong-approach instances dwarf all other friction categories. Front-load constraints and approach preferences before Claude starts working.
With 405 wrong-approach events across sessions, Claude is frequently going down paths you have to correct. This is especially costly in your browser automation and build pipeline work where a wrong approach (e.g., innerHTML vs editor API) can waste an entire session. Adding explicit approach constraints to CLAUDE.md and starting sessions with a brief 'approach first, then implement' instruction would dramatically reduce this.
Paste into Claude Code:
Before implementing anything, outline your approach in 3-5 bullet points. Wait for my approval before writing any code. If you're unsure between approaches, present the tradeoffs.
Bridge the gap from 'mostly achieved' to 'fully achieved'
110 mostly-achieved vs only 29 fully-achieved suggests sessions stall at the last mile. Use TodoWrite checkpoints to ensure completion.
You're already using TodoWrite heavily (15,267 invocations), which is great. But the high mostly-achieved rate suggests tasks are being tracked but not all checked off before sessions end. This is likely because complex sessions like the 14-site audit or the Skool pipeline have long tails. Consider asking Claude to do a final 'completion check' before ending any session.
Paste into Claude Code:
Before we wrap up, review your todo list. For any incomplete items, either finish them now or document exactly what's left and the next steps in a TODO.md file.
Reduce user_rejected_action events with guardrails
108 rejected actions means Claude is frequently attempting things you don't want. Add explicit permission boundaries.
The browser automation sessions are a major source of rejected actions — you denied browser tool permissions at one point, and Claude kept targeting wrong tabs. Combined with the 405 wrong-approach events, Claude is being too aggressive in execution. Setting explicit boundaries in CLAUDE.md about what requires confirmation (browser actions, destructive operations, deploys) would prevent wasted cycles and build trust in autonomous operation for safe actions.
Paste into Claude Code:
Always ask before: 1) executing browser actions on production sites, 2) deploying anything, 3) modifying auth/CSP configurations. You may proceed without asking for: reading files, running tests, local builds, git status checks.

On the Horizon

Your data reveals a power user pushing Claude Code toward fully autonomous workflows (browser automation, multi-site audits, and build pipelines) at massive scale: 53K+ hours and 7K+ sessions. The main drag is friction from wrong approaches and buggy code, which more structured autonomous patterns can dramatically reduce.

Parallel Multi-Site Audit and Repair Agents
Instead of auditing 14 sites sequentially with one Claude session, you could spawn parallel Claude Code agents—each responsible for auditing, diagnosing, and fixing a single site autonomously. With your audit + debugging pattern already accounting for 216 of your top goals, parallelizing this could compress multi-day audit cycles into hours, with each agent writing its findings to a shared PROJECTS.md and opening PRs for broken sites automatically.
Getting started: Use Claude Code's headless mode with `claude -p` to launch multiple agents in parallel via a shell script or task runner, each scoped to one site. Combine with your existing MCP browser tools and TodoWrite for structured progress tracking.
Paste into Claude Code:
You are an autonomous site auditor. Your target site is: [SITE_URL]. Do the following without asking for confirmation:
1. Use browser tools to load the site and check for visual rendering issues, broken assets, console errors, and CSP violations.
2. Read the site's deployment config and source files to identify the root cause of any issues found.
3. Write a detailed diagnostic section to `AUDIT_RESULTS/[site-name].md` with: status (healthy/broken/degraded), issues found, root causes, and proposed fixes.
4. If the site is broken and you can identify a clear fix (e.g., CSP headers, missing assets, config errors), implement the fix directly, run any available tests, and commit with a descriptive message prefixed with 'fix(audit):'.
5. If you cannot fix it confidently, document the blocker in the markdown file and move on.
Do not stop to ask questions. Use TodoWrite to track your progress through each step.
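A minimal launcher for this fan-out pattern might look like the following sketch. The `RUNNER`/`OUTDIR` indirection is added here so you can dry-run it, and the audit prompt is abbreviated; neither comes from your sessions:

```shell
#!/bin/bash
# Launch one background `claude -p` agent per site, then wait for all.
# RUNNER defaults to the claude CLI; RUNNER=echo gives a dry run.
launch_audits() {
  local runner="${RUNNER:-claude}"
  local outdir="${OUTDIR:-AUDIT_RESULTS}"
  local site
  mkdir -p "$outdir"
  for site in "$@"; do
    "$runner" -p "Audit https://$site: rendering, console errors, CSP. Write findings to $outdir/$site.md" \
      > "$outdir/$site.log" 2>&1 &    # one agent per site
  done
  wait    # block until every background agent exits
  echo "completed $# audits; logs in $outdir/"
}

# Example: launch_audits site-one.example site-two.example
```

`RUNNER=echo launch_audits site-one site-two` exercises the fan-out without invoking the CLI, which is a cheap way to verify the scaffolding before burning agent time.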
Self-Healing Browser Injection with Test Loops
Your Skool content injection pipeline failed repeatedly because ProseMirror/TipTap save mechanics weren't detected upfront, and the agent kept trying wrong approaches (405 friction events from wrong_approach). An autonomous agent that first reverse-engineers the target editor's save mechanism, builds a test harness, and then iterates injection attempts against that harness could eliminate the trial-and-error loop entirely. The agent would treat each injection strategy as a hypothesis, test it, and pivot automatically.
Getting started: Structure the workflow as a two-phase Claude Code task: Phase 1 analyzes the target page's editor framework and save triggers using browser DevTools inspection, then writes a verification script. Phase 2 iterates injection approaches, running the verification script after each attempt until content persists.
Paste into Claude Code:
I need to inject formatted HTML content into a web-based rich text editor (like Skool's post editor which uses ProseMirror/TipTap). Before attempting any injection, follow this autonomous workflow:

## Phase 1: Reconnaissance
- Use browser tools to inspect the target editor element. Identify the exact framework (ProseMirror, TipTap, Draft.js, Slate, etc.) by checking `document.querySelector('.ProseMirror')`, window objects, and bundle source.
- Determine how the editor detects changes: InputEvent dispatch, transaction listeners, MutationObserver, or explicit save API.
- Identify the save mechanism: auto-save interval, explicit save button, form submission. Locate the exact save trigger.
- Write your findings to `injection-plan.md`.

## Phase 2: Build Verification
- Write a browser-executable verification script (`verify-injection.js`) that: (a) checks if the editor contains the expected content, (b) checks if the editor's internal state matches the DOM, (c) triggers a save and confirms persistence.

## Phase 3: Iterative Injection
- Try injection strategies in this order, running verify-injection.js after each:
  1. Framework's programmatic API (e.g., `editor.commands.setContent()` for TipTap)
  2. Clipboard paste simulation with proper MIME types
  3. DOM manipulation + synthetic InputEvent dispatch
  4. Direct transaction/state manipulation
- After each attempt, run verification. If it fails, log why in injection-plan.md and try the next strategy.
- Once verified, trigger the save mechanism and confirm the content persists after page reload.

Do not repeat a failed strategy. Pivot immediately based on verification results.
Overnight Autonomous Pipeline with Checkpoint Recovery
You're already running overnight agents and asking Claude to check results the next morning—but with 198 buggy code friction events and sessions truncating mid-task, you need agents that can checkpoint their progress, recover from failures, and leave you a comprehensive morning briefing. Imagine kicking off a fleet of agents at night that each tackle a project from your pipeline backlog, write tests as they go, iterate until tests pass, and produce a structured status report with diffs ready for your review.
Getting started: Use `claude -p` in headless mode combined with TodoWrite for checkpointing and a wrapper script that restarts the agent on failure. Have the agent write progress to a structured JSON file that a morning summary agent can consume and synthesize.
Paste into Claude Code:
You are an autonomous overnight development agent. Your task: [DESCRIBE TASK]. Follow this protocol strictly:

## Setup
- Read the current project state and any existing TODO files.
- Write your execution plan to `overnight-run/plan.md` with numbered steps.
- Create `overnight-run/status.json` with: {"step": 0, "total_steps": N, "started": timestamp, "errors": [], "completed": []}.

## Execution Loop
For each step in your plan:
1. Update status.json with the current step number before starting.
2. Write or modify code as needed.
3. Write a test for the change if one doesn't exist.
4. Run the test suite (`npm test` or `pytest`). If tests fail, iterate up to 3 times to fix. If still failing after 3 attempts, revert your changes for this step, log the error in status.json, and move to the next step.
5. On success, commit with message format: `overnight([step]): description`.
6. Update status.json with the completed step.

## Morning Briefing
When all steps are attempted, write `overnight-run/morning-briefing.md` containing:
- Summary: X of Y steps completed, Z commits made
- For each step: status (done/failed/skipped), what was changed, test results
- Blockers encountered and recommended next actions
- `git log --oneline` of all overnight commits
- Any code that needs human review flagged with reasoning

Never stop to ask questions. If you're unsure, make your best judgment, document your reasoning, and continue.
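The restart-on-failure wrapper can be sketched as follows. The `DONE` marker file is an assumption standing in for whatever completion signal your status.json protocol exposes, and the commented `claude` invocation is illustrative:

```shell
#!/bin/bash
# Re-launch an overnight agent until it signals completion by writing a
# marker file, or until the restart budget is spent.
# Usage: run_overnight <max_runs> <marker_file> <command...>
run_overnight() {
  local max_runs="$1" marker="$2"; shift 2
  local run=0
  rm -f "$marker"
  while [ ! -f "$marker" ]; do
    run=$((run + 1))
    if [ "$run" -gt "$max_runs" ]; then
      echo "restart budget ($max_runs) exhausted; check agent logs" >&2
      return 1
    fi
    echo "run $run starting"
    "$@" || echo "run $run exited nonzero; restarting if not done" >&2
  done
  echo "agent finished after $run run(s)"
}

# Example (the agent's final step would touch the marker file):
# run_overnight 3 overnight-run/DONE \
#   claude -p "$(cat overnight-prompt.md)" --allowedTools "Bash,Read,Write,Edit"
```

Because each run resumes from whatever the agent last checkpointed, a mid-task crash costs one restart instead of the whole night.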
"Claude built an entire Markdown-to-HTML pipeline only to be defeated by a finicky rich text editor's save button"
A user wanted to automate posting content to Skool by building a Markdown pipeline and injecting it via browser automation. The pipeline worked perfectly, but when Claude tried to paste content into Skool's ProseMirror/TipTap editor, nothing would stick — innerHTML didn't trigger saves, setContent didn't auto-save, the save button kept dodging clicks, and the browser extension kept disconnecting. A classic case of winning every battle except the last: the whole workflow stalled on a single UI button.