Matkalkyl development documents and activity
This page collects reference documents and statistical presentations of development time.
The new version of Matkalkyl is based on MK Framework, written in JavaScript. Below is the API documentation for the framework and related sub-projects.
Fredrik is the developer of Matkalkyl and tracks all daily activities using EntryLog. Here is a statistical live view based on datapoints relating to the development of Matkalkyl.
Many of the diagrams are based on a points model: a spec for calculating points that value the impact of time spent.
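The spec itself defines the exact formula; as a minimal sketch of the shape described later on this page (points grow with quality and duration up to a cap, and overlong low-quality sessions accrue a penalty), something like the following could compute a session's impact, penalty, and net. All constants, weights, and the function name are invented for illustration:

```javascript
// Hypothetical sketch of a points model: impact rewards weighted minutes
// up to a cap, penalty charges overlong low-quality time. The cap,
// weights, and penalty rate are assumptions, not taken from the spec.
const IMPACT_CAP_MINUTES = 90;                      // assumed duration cap
const QUALITY_WEIGHT = { 1: 0.5, 2: 1.0, 3: 1.5 }; // assumed Q1/Q2/Q3 weights

function sessionPoints(minutes, qLevel) {
  const weight = QUALITY_WEIGHT[qLevel] ?? 0;
  // Impact: weighted minutes, but duration past the cap earns nothing.
  const impact = Math.min(minutes, IMPACT_CAP_MINUTES) * weight;
  // Penalty: minutes past the cap on low-quality (Q1) sessions count against you.
  const overlong = Math.max(0, minutes - IMPACT_CAP_MINUTES);
  const penalty = qLevel === 1 ? overlong * 0.5 : 0;
  return { impact, penalty, net: impact - penalty };
}
```

Under these made-up constants, a 120-minute Q1 session would earn 45 impact but pay 15 penalty (net 30), while a focused 60-minute Q3 session would net 90.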
Stacked 7-day view of recorded development time. Each bar is one day, split by entry (color), so you can see both the daily total and how time was distributed across projects/tasks.
A compact 365-day calendar heatmap of total time per day. Darker cells mean more recorded time relative to the maximum day in the window, making streaks, breaks, and “heavy weeks” visible at a glance.
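The shading rule described above (each cell's darkness relative to the busiest day) could be sketched as a simple normalization; the function name and the 0–1 intensity scale are assumptions for illustration:

```javascript
// Map each day's total seconds to a 0..1 shade, relative to the
// busiest day in the window. An all-zero window stays fully light.
function heatmapIntensities(dailySeconds) {
  const max = Math.max(...dailySeconds, 0);
  return dailySeconds.map(s => (max === 0 ? 0 : s / max));
}
```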
Year-scale trend compressed into monthly totals. Each point is one month (sum of all spans in that month), with one line per entry—use it to spot long-term momentum, seasonality, and major shifts in focus.
Short-term trend over the last 14 days. Each point is a day’s total per entry, making it easy to see recent bursts, gaps, and whether the current week is trending up or down compared to the previous one.
A 90-day “net value” view where each day is stacked by entry and measured in points rather than minutes. Points increase with quality and duration (with a cap), but long low-quality sessions accumulate penalty—negative bars signal days where time spent was dominated by low quality and/or overlong sessions.
Rolling 7-day average of quality composition. For each day, the chart shows the percentage split of scored time across Q1/Q2/Q3 (unscored time is excluded), revealing whether the recent week is drifting toward higher-quality sessions or not.
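The rolling split described above can be sketched as: for each day, sum scored minutes per Q-level over the trailing 7 days and convert to percentages, with unscored time excluded entirely. The per-day input shape (`{ q1, q2, q3 }` minutes) is an assumption:

```javascript
// Rolling quality composition: trailing-window sums per Q-level,
// expressed as percentages of scored time only.
function rollingQualityShare(days, window = 7) {
  return days.map((_, i) => {
    const slice = days.slice(Math.max(0, i - window + 1), i + 1);
    const sums = slice.reduce(
      (acc, d) => ({ q1: acc.q1 + d.q1, q2: acc.q2 + d.q2, q3: acc.q3 + d.q3 }),
      { q1: 0, q2: 0, q3: 0 }
    );
    const total = sums.q1 + sums.q2 + sums.q3; // unscored time never enters
    if (total === 0) return { q1: 0, q2: 0, q3: 0 };
    return {
      q1: (100 * sums.q1) / total,
      q2: (100 * sums.q2) / total,
      q3: (100 * sums.q3) / total,
    };
  });
}
```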
Daily stacked minutes across 90 days, split into Q1/Q2/Q3. Unlike the percentage view, this shows absolute volume: you can see whether higher quality is increasing because you worked better, worked more, or both.
A ranked table of individual sessions (spans) from the last 90 days: best net sessions, worst net sessions, and the biggest penalties. This highlights which specific work blocks drove progress, and which patterns (often long low-quality spans) caused the most damage.
Decomposition of the 90-day points model into three daily lines: Impact (positive contribution), Penalty (wasted/overlong low-quality cost), and Net (impact minus penalty). It’s a diagnostic view: rising Impact with rising Penalty suggests “more work” without better structure.
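The daily decomposition could be sketched as an aggregation over scored sessions, assuming each session already carries its impact and penalty from the points spec; the session shape and function name here are illustrative:

```javascript
// Group sessions by date and sum Impact, Penalty, and Net per day.
// Net is defined, as in the chart, as impact minus penalty.
function dailyDecomposition(sessions) {
  const byDay = new Map();
  for (const s of sessions) {
    const row = byDay.get(s.date) ?? { impact: 0, penalty: 0, net: 0 };
    row.impact += s.impact;
    row.penalty += s.penalty;
    row.net += s.impact - s.penalty;
    byDay.set(s.date, row);
  }
  return byDay;
}
```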
Each dot is a single session from the last 90 days. X-axis is session length (minutes) and Y-axis is the quality score level, so clusters show your “typical” session sizes per quality, and outliers reveal long sessions that still stayed high-quality (or failed).
Each dot is one scored session: minutes on the X-axis and net points on the Y-axis (impact minus penalty). This makes “productive length” visible—where net peaks—and shows when longer sessions start to turn negative due to penalty.
Daily efficiency over 90 days measured as net points per scored hour. It answers “how good was the time I actually scored?” and helps separate high-output short days from long days with diluted quality.
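The efficiency metric itself is a straightforward ratio; a minimal sketch, with the function name assumed and unscored days mapped to zero:

```javascript
// Net points per scored hour: daily net divided by scored time in hours.
function netPerScoredHour(netPoints, scoredMinutes) {
  if (scoredMinutes === 0) return 0; // no scored time, no efficiency reading
  return netPoints / (scoredMinutes / 60);
}
```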
Distribution of session lengths over 90 days, bucketed into time ranges. Stacking by Q-level (plus unscored) shows whether your time is fragmenting into tiny spans or drifting into marathon blocks—and what quality those blocks tend to have.
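The bucketing behind this histogram could be sketched as follows; the boundary values are assumptions chosen for illustration, not taken from the actual chart:

```javascript
// Assign a session length to a duration bucket. Boundaries are upper
// bounds in minutes; anything past the last boundary is open-ended.
const BUCKETS = [15, 30, 60, 120]; // assumed boundaries

function bucketLabel(minutes) {
  let lower = 0;
  for (const upper of BUCKETS) {
    if (minutes < upper) return `${lower}-${upper} min`;
    lower = upper;
  }
  return `${lower}+ min`;
}
```

Counting sessions per label (optionally keyed by Q-level) then yields the stacked histogram.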
Same idea as the span-length histogram, but split by entry so each entry forms its own stacked group. This reveals which projects produce short focused bursts versus long sessions, and whether those sessions skew toward Q1/Q2/Q3.
Share-of-time breakdown across the full 365-day window. Each slice is an entry’s total seconds, letting you see which areas dominated the year and whether the portfolio of work is balanced or concentrated.