Skip to content

Dimensionals

Started: Feb 4, 2026 | Completed: Feb 6, 2026 | Status: done

Remaining Work

  • dimensionals blink a lot
  • hide occluded dimensions !!!!!!!
  • tab between dimensionals
  • arrow-key nudging (±unit of precision)

Done

  • witness lines should align with world axes
    • they likely should NOT be drawn as visually parallel lines
      • visually parallel lines look crooked and intuitively wrong
  • dimensionals sometimes do not extend outward from SO
  • show dimensions -> intersection lines turn black and get thicker
  • add button "show/hide dimensionals" a boolean that render reacts to
  • add angulars (like dimenionals but for angles)

Goal

Render 3 dimensionals. Then make them editable — changing a dimension value updates the geometry.

A dimensional is a decoration attached to an edge, containing: terminator arrows, dimension text, dimension lines, witness lines.
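
A minimal sketch of that decoration as a type, assuming illustrative field names (these are not the actual R_Dimensions.ts types — dimensions are derived on render, so nothing like this is persisted):

```typescript
// Hypothetical shape of a dimensional; field names are illustrative.
type Vec2 = { x: number; y: number };
type Segment = [Vec2, Vec2];

interface Dimensional {
  axis: "x" | "y" | "z";            // which SO axis is measured
  value: number;                     // the dimension text value (axis size)
  witnessLines: [Segment, Segment];  // projected witness lines
  dimensionLine: Segment;            // runs between the witness lines
  arrows: [Vec2, Vec2];              // terminator arrow tips
  textAnchor: Vec2;                  // dimension text is centered here
}

// Text sits at the midpoint of the dimension line.
function textAnchorOf(dimensionLine: Segment): Vec2 {
  const [a, b] = dimensionLine;
  return { x: (a.x + b.x) / 2, y: (a.y + b.y) / 2 };
}
```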

Geometry SOT (source of truth) is SO (three axes, each has size and position relative to origin). O_Scene accesses the SOT and renders in reaction. User interacts with the 2D projection displayed on the camera.

Each edge is parallel to three others around the SO. Algorithm A selects one as the attached edge. Each edge belongs to two faces. Algorithm B selects which face's plane to draw the witness lines in. When the chosen edge points towards the camera, the text gets occluded by the witness and dimension lines; Algorithm C detects this and either inverts the dimension lines and arrows or hides the entire dimensional.

Algorithm A — edge selection

Silhouette edge detection. A silhouette edge has one front-facing and one back-facing adjacent face. For each axis, there are always exactly 2 silhouette edges; prefer the one whose front-facing adjacent face is most toward the viewer (most negative winding).

  • must be on the circumference — not an edge in front of the SO
  • there will always be two such edges per axis; prefer the front-facing one
  • if the projected witness lines are too close for the text, hide the dimensional
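
A minimal sketch of Algorithm A, assuming each edge carries the world-space normals of its two adjacent faces. Names here (`Edge`, `isSilhouette`, `pickEdge`) are illustrative; the real entry point is find_best_edge_for_axis.

```typescript
type Vec3 = [number, number, number];

const dot = (a: Vec3, b: Vec3) => a[0] * b[0] + a[1] * b[1] + a[2] * b[2];

interface Edge {
  faceNormals: [Vec3, Vec3]; // normals of the two faces sharing this edge
}

// Front-facing: the normal opposes the view direction (normal · view < 0).
// A silhouette edge has exactly one front-facing and one back-facing face.
function isSilhouette(edge: Edge, viewDir: Vec3): boolean {
  const [na, nb] = edge.faceNormals;
  return (dot(na, viewDir) < 0) !== (dot(nb, viewDir) < 0);
}

// Of the two silhouette edges per axis, prefer the one whose front-facing
// face is most toward the viewer (most negative normal · view).
function pickEdge(edges: Edge[], viewDir: Vec3): Edge | undefined {
  let best: Edge | undefined;
  let bestScore = Infinity;
  for (const e of edges.filter(e => isSilhouette(e, viewDir))) {
    const score = Math.min(dot(e.faceNormals[0], viewDir),
                           dot(e.faceNormals[1], viewDir));
    if (score < bestScore) { bestScore = score; best = e; }
  }
  return best;
}
```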

Algorithm B — witness plane

Pick the witness direction most perpendicular to the edge on screen. Project each candidate axis (the two perpendicular to the edge axis) to screen, compute cross product with the edge's screen direction. The candidate with the largest cross product magnitude wins.

  • optimizes directly for visual spread via screen-space perpendicularity
  • handles edge cases where face normal · camera gives poor results
  • witness lines are perspective-correct (per-vertex 3D projection, not shared screen-space offset). dimension line, witness gap, and extension are all computed as 3D offsets along witness_dir, then projected — so witness lines diverge naturally while the dimension line stays parallel to the measured edge
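
The selection step can be sketched as follows, assuming the edge direction and both candidate axes are already projected to screen space (function and variable names are illustrative, not the actual edge_witness_direction signature):

```typescript
type Vec2 = [number, number];

// 2D cross product (scalar): |a||b| sin(angle between a and b).
const cross2 = (a: Vec2, b: Vec2) => a[0] * b[1] - a[1] * b[0];

function pickWitnessDir(edgeScreenDir: Vec2, candidates: Vec2[]): Vec2 {
  const norm = (v: Vec2): Vec2 => {
    const len = Math.hypot(v[0], v[1]) || 1;
    return [v[0] / len, v[1] / len];
  };
  const e = norm(edgeScreenDir);
  let best = candidates[0];
  let bestMag = -1;
  for (const c of candidates) {
    // After normalizing, |cross| is |sin| of the angle to the edge,
    // so the most screen-perpendicular candidate scores highest.
    const mag = Math.abs(cross2(e, norm(c)));
    if (mag > bestMag) { bestMag = mag; best = c; }
  }
  return best;
}
```

Normalizing before taking the cross product makes the comparison purely about angle, so a foreshortened (short on screen) but perpendicular axis still wins.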

Algorithm C — crunch detection

When the chosen edge points towards the camera, the text gets occluded by the witness and dimension lines; detect this and either invert the dimension lines and arrows or hide the entire dimensional.

The "gap" needed for text is not the text's pixel width — it's the distance the dimension line travels across the text's bounding box:

gap = textWidth * |ux| + textHeight * |uy| + padding

Where (ux, uy) is the unit vector along the dimension line.

  • compute gap using projected text bounding box onto dimension line direction
  • if lineLen >= gap + arrows: normal layout (text centered, arrows at ends pointing in)
  • if lineLen >= gap but < gap + arrows: inverted layout (arrows outside, pointing in toward text)
  • if lineLen < gap: hide the dimensional (including witness lines)
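
The rules above as a sketch — here `arrows` is the total space both terminator arrows need, and the default padding is an assumption:

```typescript
type Layout = "normal" | "inverted" | "hidden";

// Gap = distance the dimension line travels across the text's bounding box:
// project the box onto the line's unit direction (ux, uy).
function textGap(textWidth: number, textHeight: number,
                 ux: number, uy: number, padding = 4): number {
  return textWidth * Math.abs(ux) + textHeight * Math.abs(uy) + padding;
}

function chooseLayout(lineLen: number, gap: number, arrows: number): Layout {
  if (lineLen >= gap + arrows) return "normal";   // text centered, arrows at ends pointing in
  if (lineLen >= gap) return "inverted";          // arrows outside, pointing in toward text
  return "hidden";                                // too crunched: hide witness lines too
}
```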

Work Phases

  • 1 draw the dimensionals, algorithms A and B
  • 2 stretch the SO → dimensionals react
  • 3 edit the text → SO reacts

Implement

  • dimension data structure (derived on render, no persistent state)
  • witness lines (3D-projected, perpendicular to measured edge)
  • dimension line (parallel, between witnesses, with text gap)
  • terminator arrows
  • dimension text (value centered on dimension line)
  • Algorithm A — silhouette edge detection
  • Algorithm B — witness plane via screen-space perpendicularity
  • Algorithm C — crunch detection with projected text gap
  • edit interaction (click text, type value, apply)

Test manually

  • shows 3 dimensions (one per axis), always
  • dimensions update when geometry dragged
  • edit dimension value — geometry resizes accordingly

Artifacts

  • R_Dimensions.ts — extracted from Render.ts. Contains render_dimensions, render_axis_dimension, find_best_edge_for_axis, edge_witness_direction, draw_dimension_3d

Key Decisions

  • dimensions are derived on render (no persistent data structure) — YAGNI
  • Algorithm A uses silhouette edges, not front-face + distance-from-center heuristic
  • Algorithm B uses screen-space perpendicularity (cross product), not face normal · camera or projected length
  • Algorithm C uses projected text bounding box for gap calculation, not raw text width
  • witness lines are perspective-correct: each vertex's witness direction is computed by projecting vertex + witness_dir independently, so witness lines converge toward vanishing points instead of being forced parallel. dimension line stays parallel to edge because all offsets use the same 3D translation (witness_dir * d), then project
  • repeater clones skip dimensionals except: first fireblock gets repeat-axis dimensional; last fireblock gets repeat-axis dimensional too if its length differs from the first (shortened bookend bay)