This article describes several least-squares error measures for delta-rational chords. They have the advantage of not fixing a particular interval in the chord when constructing the chord of best fit. However, like any other numerical measure of concordance or error, you should take them with a grain of salt.
== Conventions and introduction ==
The idea motivating least-squares error measures on a chord as an approximation to a given delta signature is the following (for simplicity, let’s talk about the fully DR case first):
We want the error of a chord 1 : r<sub>1</sub> : r<sub>2</sub> : ... : r<sub>n</sub> (in increasing order), with n > 1, in the linear domain as an approximation to a fully delta-rational chord with signature +δ<sub>1</sub> +δ<sub>2</sub> ... +δ<sub>n</sub>, i.e. a chord

<math>x : (x + \Delta_1) : (x + \Delta_2) : \cdots : (x + \Delta_n)</math>

with root real-valued harmonic x, where <math>\Delta_i = \delta_1 + \delta_2 + \cdots + \delta_i</math> is the delta signature +δ<sub>1</sub> +δ<sub>2</sub> ... +δ<sub>n</sub> written cumulatively. For example, the chord 4:5:6 realizes the signature +1 +1 exactly, with x = 4, Δ<sub>1</sub> = 1, and Δ<sub>2</sub> = 2.
We want to measure the error without having to fix any dyad (as one might naively fix a dyad and measure errors in the other deltas). To do this we solve a least-squares error problem: use a root-sum-square error function and optimize x to minimize that function.
=== Domain and error model ===
We have two choices:
* the domain: whether to measure the linear (frequency ratio) error or the logarithmic (cents) error;
* the error model: the collection of intervals to sum over:
** Rooted: only intervals from the root real-valued harmonic x are compared.
** Pairwise: all intervals in the chord are compared.
** All-steps: only intervals between adjacent notes are compared.
The method for solving the problem will also differ depending on the number of variables involved (only one variable x in the fully delta-rational case).
We arrive at the following general formula. Let <math>M</math> be the error model, regarded as a set of index pairs <math>(i, j)</math> with <math>0 \le i < j \le n</math> (setting <math>r_0 = 1</math> and <math>\Delta_0 = 0</math> for the root), and let <math>f</math> be the domain function (the identity or <math>\ln</math>). Then the error function to be minimized by optimizing <math>x</math> and any free deltas is

<math>E(x) = \sqrt{\sum_{(i,j) \in M} \left( f\!\left(\frac{x + \Delta_j}{x + \Delta_i}\right) - f\!\left(\frac{r_j}{r_i}\right) \right)^2}.</math>
{| class="wikitable"
! Domain !! Error model !! Error function
|-
| rowspan="3" | Linear || Rooted || <math>\sqrt{\sum_{i=1}^{n} \left( \frac{x + \Delta_i}{x} - r_i \right)^2}</math>
|-
| Pairwise || <math>\sqrt{\sum_{0 \le i < j \le n} \left( \frac{x + \Delta_j}{x + \Delta_i} - \frac{r_j}{r_i} \right)^2}</math>
|-
| All-steps || <math>\sqrt{\sum_{i=1}^{n} \left( \frac{x + \Delta_i}{x + \Delta_{i-1}} - \frac{r_i}{r_{i-1}} \right)^2}</math>
|-
| rowspan="3" | Logarithmic (nepers) || Rooted || <math>\sqrt{\sum_{i=1}^{n} \left( \ln\frac{x + \Delta_i}{x} - \ln r_i \right)^2}</math>
|-
| Pairwise || <math>\sqrt{\sum_{0 \le i < j \le n} \left( \ln\frac{x + \Delta_j}{x + \Delta_i} - \ln\frac{r_j}{r_i} \right)^2}</math>
|-
| All-steps || <math>\sqrt{\sum_{i=1}^{n} \left( \ln\frac{x + \Delta_i}{x + \Delta_{i-1}} - \ln\frac{r_i}{r_{i-1}} \right)^2}</math>
|}
To convert nepers to cents, multiply by <math>\frac{1200}{\ln 2} \approx 1731.234</math>.
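To make the table concrete, here is a minimal Python sketch of these error functions (the name <code>dr_error</code> and its signature are illustrative, not from this article):

<syntaxhighlight lang="python">
import math

def dr_error(x, cum_deltas, ratios, domain="linear", model="rooted"):
    """Least-squares DR error of chord 1:r_1:...:r_n at root harmonic x.

    cum_deltas: cumulative deltas [Delta_1, ..., Delta_n]
    ratios:     chord ratios [r_1, ..., r_n] above a root of 1
    """
    # Prepend the root: Delta_0 = 0 and r_0 = 1.
    d = [0.0] + list(cum_deltas)
    r = [1.0] + list(ratios)
    n = len(cum_deltas)
    # The error model is the set of index pairs summed over.
    if model == "rooted":
        pairs = [(0, j) for j in range(1, n + 1)]
    elif model == "pairwise":
        pairs = [(i, j) for i in range(n + 1) for j in range(i + 1, n + 1)]
    else:  # "all-steps"
        pairs = [(i, i + 1) for i in range(n)]
    # The domain function f is the identity (linear) or the natural log.
    f = (lambda t: t) if domain == "linear" else math.log
    return math.sqrt(sum((f((x + d[j]) / (x + d[i])) - f(r[j] / r[i])) ** 2
                         for i, j in pairs))
</syntaxhighlight>

For example, <code>dr_error(4.02, [1, 2], [5/4, 3/2])</code> measures 1 : 5/4 : 3/2 against the signature +1 +1 with the root harmonic held at 4.02.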
== Solution methods ==
=== Analytic (FDR case) ===
==== Rooted linear ====
Setting the derivative with respect to x to 0 gives us the closed-form solution

<math>x = \frac{\sum_{i=1}^{n} \Delta_i^2}{\sum_{i=1}^{n} (r_i - 1)\,\Delta_i},</math>

which can be plugged back into

<math>\sqrt{\sum_{i=1}^{n} \left( \frac{x + \Delta_i}{x} - r_i \right)^2}</math>

to obtain the least-squares linear error.
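A quick check of the closed form (illustrative code; <code>rooted_linear_fit</code> is not a name from this article):

<syntaxhighlight lang="python">
import math

def rooted_linear_fit(cum_deltas, ratios):
    """Closed-form least-squares root harmonic x for the rooted linear error."""
    x = (sum(d * d for d in cum_deltas)
         / sum((r - 1) * d for d, r in zip(cum_deltas, ratios)))
    err = math.sqrt(sum(((x + d) / x - r) ** 2
                        for d, r in zip(cum_deltas, ratios)))
    return x, err
</syntaxhighlight>

For the exact chord 4:5:6 (cumulative deltas [1, 2], ratios [5/4, 3/2]), this recovers x = 4 with zero error.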
=== Grid method (FDR case) ===
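A minimal sketch of one possible grid approach, assuming it means scanning the error function over a grid of candidate x values and repeatedly refining around the best point (every name and constant below is an illustrative assumption, not this article's implementation):

<syntaxhighlight lang="python">
def grid_minimize(err, x_lo=0.5, x_hi=1000.0, points=2000, refinements=8):
    """Coarse-to-fine grid scan of err(x) over [x_lo, x_hi]."""
    for _ in range(refinements):
        step = (x_hi - x_lo) / points
        xs = [x_lo + k * step for k in range(points + 1)]
        best = min(xs, key=err)
        # Zoom in to one grid cell on either side of the current best.
        x_lo, x_hi = max(x_lo, best - step), min(x_hi, best + step)
    return best
</syntaxhighlight>

For instance, <code>grid_minimize(lambda x: dr_error(x, [1, 2], [1.26, 1.51]))</code> would approximate the optimal root harmonic under the rooted linear model sketched earlier.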
=== Partially DR (one related delta set, one free variable) ===
Suppose we wish to approximate a target delta signature of the form +δ<sub>1</sub> +? +δ<sub>3</sub> with the chord 1 : r<sub>1</sub> : r<sub>2</sub> : r<sub>3</sub> (where the +? is free to vary). By a derivation similar to the above, the least-squares problem is

<math>\min_{x > 0,\, y}\ \sqrt{ \left( \frac{x + \delta_1}{x} - r_1 \right)^2 + \left( \frac{x + \delta_1 + y}{x} - r_2 \right)^2 + \left( \frac{x + \delta_1 + y + \delta_3}{x} - r_3 \right)^2 }</math>

where y represents the free delta +?.
We can set the partial derivatives with respect to x and y of the inner expression equal to zero (since the derivative of sqrt() is never 0) and use SymPy to solve the system:
<syntaxhighlight lang="python">
import sympy

# Root harmonic x and the free delta y (standing in for the +? slot).
x = sympy.Symbol("x", real=True)
y = sympy.Symbol("y", real=True)
d1 = sympy.Symbol("\\delta_{1}", real=True)
d3 = sympy.Symbol("\\delta_{3}", real=True)
r1 = sympy.Symbol("r_1", real=True)
r2 = sympy.Symbol("r_2", real=True)
r3 = sympy.Symbol("r_3", real=True)

# Squared rooted linear error of the chord x : x+d1 : x+d1+y : x+d1+y+d3.
err_squared = (
    ((x + d1) / x - r1) ** 2
    + ((x + d1 + y) / x - r2) ** 2
    + ((x + d1 + y + d3) / x - r3) ** 2
)

# Solve the stationarity conditions exactly.
err_squared_x = sympy.diff(err_squared, x)
err_squared_y = sympy.diff(err_squared, y)
sympy.nonlinsolve([err_squared_x, err_squared_y], [x, y])
</syntaxhighlight>

The unique solution with x > 0 is

<math>x = \frac{2\delta_1^2 + \delta_3^2}{2\delta_1 (r_1 - 1) + \delta_3 (r_3 - r_2)}, \qquad y = \frac{x (r_2 + r_3 - 2) - \delta_3}{2} - \delta_1.</math>
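Continuing the SymPy session above, an illustrative sanity check (not from this article): the exact chord 4:5:6:7 against the signature +1 +? +1 should recover x = 4 and y = 1.

<syntaxhighlight lang="python">
# Substitute the exact chord 4:5:6:7 (ratios 5/4, 3/2, 7/4) with d1 = d3 = 1.
vals = {d1: 1, d3: 1, r1: sympy.Rational(5, 4),
        r2: sympy.Rational(3, 2), r3: sympy.Rational(7, 4)}
x_opt = (2 * d1**2 + d3**2) / (2 * d1 * (r1 - 1) + d3 * (r3 - r2))
y_opt = (x_opt * (r2 + r3 - 2) - d3) / 2 - d1
print(x_opt.subs(vals), sympy.simplify(y_opt.subs(vals)))  # prints: 4 1
</syntaxhighlight>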
=== Partially DR (one related delta set, arbitrary free deltas) ===
We similarly include a free variable y<sub>i</sub> to be optimized for every additional +?, after coalescing strings of consecutive +?'s into segments and after trimming leading and trailing free delta segments.
The L-BFGS-B (limited-memory Broyden-Fletcher-Goldfarb-Shanno with Box constraints) algorithm is a bounded quasi-Newton method particularly well-suited for this problem:
* '''Gradient-based:''' Uses numerical gradient information to find precise search directions, unlike derivative-free methods (Nelder-Mead, Powell) which explore via simplex transformations or conjugate directions
* '''Quasi-Newton approximation:''' Builds up an approximation of the Hessian (second derivative matrix) using only gradient evaluations, avoiding expensive exact Hessian computation
* '''Memory efficient:''' Limited-memory variant stores only the last m gradient differences (typically m=5-10), making it suitable for problems with many free variables
* '''Box constraints:''' Handles bound constraints such as x > 0 directly via gradient projection; the implementation additionally adds a tiny logarithmic barrier penalty to keep iterates away from x = 0
* '''Smooth objective:''' The sum-of-squared-errors objective function is smooth and differentiable, ideal for gradient-based optimization
'''Performance comparison''' (from test cases):
{| class="wikitable"
! Method !! Iterations !! Typical Error !! Notes
|-
| L-BFGS-B || 14-20 || 0.000359-0.000681 || Fastest convergence, lowest errors
|-
| Nelder-Mead || 200-400 || 0.000465-0.000993 || Derivative-free, slower but robust
|-
| Powell || 150-250 || 0.000566-0.000681 || Conjugate directions, moderate speed
|}
'''Implementation details:'''
* Variables: [x, y<sub>1</sub>, y<sub>2</sub>, ..., y<sub>k</sub>] where k = number of interior free segments
* Bounds: x ∈ (10<sup>-6</sup>, ∞), y<sub>i</sub> ∈ (-∞, ∞)
* Normalization: Scales deltas to keep x ≈ 5 for numerical stability
* Multiple starting points: Tries 4-5 random initializations and returns best result
* Barrier weight: 10<sup>-10</sup> (very small to minimize interference with true objective)
The gradient-based approach allows L-BFGS-B to converge in typically 15-20 iterations versus 200-400 for derivative-free methods, making it the preferred choice for PDR optimization.
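As a concrete illustration of the recipe above, here is a compact sketch using scipy.optimize.minimize with method="L-BFGS-B". The function <code>fit_pdr</code>, the choice of the rooted linear model, and the starting ranges are illustrative assumptions; the bounds, barrier weight, and multi-start count follow the bullets above, and the normalization step is omitted.

<syntaxhighlight lang="python">
import math
import random
import numpy as np
from scipy.optimize import minimize

def fit_pdr(signature, ratios, n_starts=5, barrier=1e-10):
    """Fit the root harmonic x and free deltas of a partially DR signature.

    signature: deltas above the root, with None marking each free delta (+?).
    ratios:    chord ratios [r_1, ..., r_n] above a root of 1.
    Returns (x, free_deltas, error) for the rooted linear error model.
    """
    free_idx = [i for i, d in enumerate(signature) if d is None]

    def unpack(v):
        deltas = list(signature)
        for i, yi in zip(free_idx, v[1:]):
            deltas[i] = yi
        return v[0], np.cumsum(deltas)

    def objective(v):
        x, cum = unpack(v)
        sse = sum(((x + c) / x - r) ** 2 for c, r in zip(cum, ratios))
        # Tiny logarithmic barrier keeps iterates away from x = 0.
        return sse - barrier * math.log(v[0])

    bounds = [(1e-6, None)] + [(None, None)] * len(free_idx)
    best = None
    for _ in range(n_starts):
        v0 = [random.uniform(1.0, 10.0)] + [random.uniform(-2.0, 2.0)
                                            for _ in free_idx]
        res = minimize(objective, v0, method="L-BFGS-B", bounds=bounds)
        if best is None or res.fun < best.fun:
            best = res
    x, cum = unpack(best.x)
    err = math.sqrt(sum(((x + c) / x - r) ** 2 for c, r in zip(cum, ratios)))
    return x, list(best.x[1:]), err
</syntaxhighlight>

For the chord and signature of the previous section, <code>fit_pdr([1, None, 1], [5/4, 3/2, 7/4])</code> should land near the analytic solution x = 4, y = 1.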
== External links ==
* [https://inthar-raven.github.io/delta/ Inthar's DR chord explorer (includes least-squares error calculation in both domains and multiple error models)]

[[Category:Atypical ratios]]