
Matkalkyl development documents and activity

This page collects reference documents and statistical presentations of development time.

API Docs for projects

The new version of Matkalkyl is built on the MK Framework, written in JavaScript. Below is the API documentation for the framework and related sub-projects.

Development time

Fredrik is the developer of Matkalkyl and tracks all daily activities using EntryLog. Below is a live statistical view based on the datapoints relating to Matkalkyl's development.

Many of the diagrams are based on a points model; see the spec on calculating points to value the impact of time.
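
As a rough illustration only, here is a minimal sketch of such a model in JavaScript. The quality weights, cap, and penalty rate are placeholder assumptions, not the values from the actual spec:

```js
// Minimal sketch of a quality/duration points model. All constants are
// assumed placeholders -- the real values live in the points spec.
const QUALITY_WEIGHT = { 1: 0.5, 2: 1.0, 3: 1.5 }; // Q1 (low) .. Q3 (high)
const CAP_MINUTES = 90;      // assumed cap: minutes beyond this earn no impact
const PENALTY_PER_MIN = 0.6; // assumed cost per low-quality minute past the cap

function scoreSpan(span) {
  // span: { minutes: number, quality: 1 | 2 | 3 } -- assumed shape
  const impact = Math.min(span.minutes, CAP_MINUTES) * QUALITY_WEIGHT[span.quality];
  const overlong = Math.max(span.minutes - CAP_MINUTES, 0);
  // Long low-quality sessions accumulate penalty, so net can go negative.
  const penalty = span.quality === 1 ? overlong * PENALTY_PER_MIN : 0;
  return { impact, penalty, net: impact - penalty };
}
```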

Weekly score: last full week (Mon–Sun) → normalized to %

Week (stacked)

Stacked 7-day view of recorded development time. Each bar is one day, split by entry (color), so you can see both the daily total and how time was distributed across projects/tasks.
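
This kind of stacked view reduces to a day-by-entry aggregation. A sketch, assuming each EntryLog span exposes an entry name, a start timestamp, and a duration in seconds (the field names are assumptions):

```js
// Group spans into per-day, per-entry totals; each day's map becomes one
// stacked bar with one color per entry. Span shape is an assumption.
function dailyTotalsByEntry(spans) {
  const days = new Map(); // "YYYY-MM-DD" -> Map(entry -> seconds)
  for (const span of spans) {
    const day = new Date(span.start).toISOString().slice(0, 10);
    if (!days.has(day)) days.set(day, new Map());
    const perEntry = days.get(day);
    perEntry.set(span.entry, (perEntry.get(span.entry) ?? 0) + span.seconds);
  }
  return days;
}
```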

365 days (heatmap, total)

A compact 365-day calendar heatmap of total time per day. Darker cells mean more recorded time relative to the maximum day in the window, making streaks, breaks, and “heavy weeks” instantly visible.
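
The shading comes down to one ratio per cell. A sketch of that normalization, assuming per-day totals have already been summed:

```js
// Map each day's total onto [0, 1] relative to the heaviest day in the
// window; the renderer turns that ratio into cell darkness.
function heatmapIntensities(totalsPerDay /* Map("YYYY-MM-DD" -> seconds) */) {
  const max = Math.max(...totalsPerDay.values(), 1); // guard: empty window
  const cells = new Map();
  for (const [day, seconds] of totalsPerDay) cells.set(day, seconds / max);
  return cells;
}
```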

365 days (monthly sum, line, per datapoint)

Year-scale trend compressed into monthly totals. Each point is one month (sum of all spans in that month), with one line per entry—use it to spot long-term momentum, seasonality, and major shifts in focus.

14 days (daily, line, per datapoint)

Short-term trend over the last 14 days. Each point is a day’s total per entry, making it easy to see recent bursts, gaps, and whether the current week is trending up or down compared to the previous one.

90 days (impact points, stacked, score-weighted; negative = low quality)

A 90-day “net value” view where each day is stacked by entry and measured in points rather than minutes. Points increase with quality and duration (with a cap), but long low-quality sessions accumulate penalty—negative bars signal days where time spent was dominated by low quality and/or overlong sessions.

Quality mix (7-day rolling average, % of scored minutes per quality level)

Rolling 7-day average of quality composition. For each day, the chart shows the percentage split of scored time across Q1/Q2/Q3 (unscored time is excluded), revealing whether the recent week is drifting toward higher-quality sessions or not.
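
A sketch of the rolling computation, assuming per-day scored minutes already split into Q1/Q2/Q3 (the field names are assumptions; unscored minutes never enter the input):

```js
// For each day, sum the quality split over the trailing 7-day window
// and express each level as a percentage of scored minutes.
function rollingQualityMix(days, window = 7) {
  // days: [{ q1, q2, q3 }, ...] in minutes, oldest first -- assumed shape
  return days.map((_, i) => {
    const slice = days.slice(Math.max(0, i - window + 1), i + 1);
    const sum = { q1: 0, q2: 0, q3: 0 };
    for (const d of slice) { sum.q1 += d.q1; sum.q2 += d.q2; sum.q3 += d.q3; }
    const total = sum.q1 + sum.q2 + sum.q3 || 1; // guard: no scored time
    return { q1: 100 * sum.q1 / total, q2: 100 * sum.q2 / total, q3: 100 * sum.q3 / total };
  });
}
```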

Quality mix (90d): daily minutes stacked by Q1/Q2/Q3

Daily stacked minutes across 90 days, split into Q1/Q2/Q3. Unlike the percentage view, this shows absolute volume: you can see whether higher quality is increasing because you worked better, worked more, or both.

Top spans (90d): best/worst sessions + biggest penalty

A ranked table of individual sessions (spans) from the last 90 days: best net sessions, worst net sessions, and the biggest penalties. This highlights which specific work blocks drove progress, and which patterns (often long low-quality spans) caused the most damage.
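
The table itself is a few sorts over scored spans. A sketch, assuming each span has already been given net and penalty values (for instance by the scoreSpan sketch above):

```js
// Rank scored spans three ways: best net, worst net, biggest penalty.
function rankSpans(scored, n = 10) {
  const byNet = [...scored].sort((a, b) => b.net - a.net);
  const byPenalty = [...scored].sort((a, b) => b.penalty - a.penalty);
  return {
    best: byNet.slice(0, n),               // highest net sessions
    worst: byNet.slice(-n).reverse(),      // lowest net sessions
    biggestPenalty: byPenalty.slice(0, n), // costliest sessions
  };
}
```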

90 days (daily trend): Net vs Impact vs Penalty (lines; penalty is positive magnitude)

Decomposition of the 90-day points model into three daily lines: Impact (positive contribution), Penalty (wasted/overlong low-quality cost), and Net (impact minus penalty). It’s a diagnostic view: rising Impact with rising Penalty suggests “more work” without better structure.

Scatter (90d): minutes vs quality (per span)

Each dot is a single session from the last 90 days. X-axis is session length (minutes) and Y-axis is the quality score level, so clusters show your “typical” session sizes per quality, and outliers reveal long sessions that still stayed high-quality (or failed).

Scatter (90d): minutes vs net points (per span)

Each dot is one scored session: minutes on the X-axis and net points on the Y-axis (impact minus penalty). This makes “productive length” visible—where net peaks—and shows when longer sessions start to turn negative due to penalty.

90 days (efficiency): Net per hour (net points / scored hours)

Daily efficiency over 90 days measured as net points per scored hour. It answers “how good was the time I actually scored?” and helps separate high-output short days from long days with diluted quality.
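
The metric is a single division, guarded against days with nothing scored. A sketch with assumed field names:

```js
// Net points per scored hour for one day; null when nothing was scored.
function netPerHour(day /* { netPoints, scoredMinutes } -- assumed shape */) {
  if (day.scoredMinutes === 0) return null; // no scored time, no data point
  return day.netPoints / (day.scoredMinutes / 60);
}
```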

90 days (histogram): span length distribution (marathon creep / fragmentation)

Distribution of session lengths over 90 days, bucketed into time ranges. Stacking by Q-level (plus unscored) shows whether your time is fragmenting into tiny spans or drifting into marathon blocks—and what quality those blocks tend to have.
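
A sketch of the bucketing, with assumed bucket edges (the chart's real ranges may differ):

```js
// Count spans per minute-range bucket, stacked by quality plus unscored.
const EDGES = [15, 30, 60, 120, 240]; // assumed upper edges in minutes
const LABELS = ['<15', '15-30', '30-60', '60-120', '120-240', '240+'];

function spanLengthHistogram(spans) {
  const bins = LABELS.map(label => ({ label, q1: 0, q2: 0, q3: 0, unscored: 0 }));
  for (const span of spans) {
    let i = EDGES.findIndex(edge => span.minutes < edge);
    if (i === -1) i = EDGES.length; // beyond the last edge -> "240+"
    bins[i][span.quality ? `q${span.quality}` : 'unscored'] += 1;
  }
  return bins;
}
```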

Histogram (90d): minutes bins, split by entry (stacked by Q + unscored)

Same idea as the span-length histogram, but split by entry so each entry forms its own stacked group. This reveals which projects produce short focused bursts versus long sessions, and whether those sessions skew toward Q1/Q2/Q3.

365 days (pie, by entry)

Share-of-time breakdown across the full 365-day window. Each slice is an entry’s total seconds, letting you see which areas dominated the year and whether the portfolio of work is balanced or concentrated.
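
The slice sizes are shares of the grand total. A sketch, assuming per-entry totals in seconds:

```js
// Convert per-entry totals into percentage shares for the pie chart.
function entryShares(totalsPerEntry /* Map(entry -> seconds) */) {
  const grand = [...totalsPerEntry.values()].reduce((a, b) => a + b, 0) || 1;
  return new Map([...totalsPerEntry].map(([e, s]) => [e, (100 * s) / grand]));
}
```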