DR error measures

From Xenharmonic Reference


== External links ==
* [https://inthar-raven.github.io/delta/ Inthar's DR chord explorer (includes least-squares error calculation in both domains and multiple error models)]
[[Category:Atypical ratios]]

Revision as of 21:19, 12 December 2025

This is a technical or mathematical page. While the subject may be of some relevance to music, the page treats the subject in technical language.

This article describes several least-squares error measures for delta-rational chords. They have the advantage of not fixing a particular interval in the chord when constructing the chord of best fit. However, like any other numerical measure of concordance or error, you should take them with a grain of salt.

== Conventions and introduction ==

The idea motivating least-squares error measures on a chord as an approximation to a given delta signature is the following (for simplicity, let's treat the fully DR case first):

We want the error of a chord 1:r<sub>1</sub>:r<sub>2</sub>:...:r<sub>n</sub> (in increasing order), with n > 1, in the linear domain as an approximation to a fully delta-rational chord with signature +δ<sub>1</sub> +δ<sub>2</sub> ... +δ<sub>n</sub>, i.e. a chord

<math>x : x + \delta_1 : \cdots : x + \sum_{l=1}^{n} \delta_l</math>

with root real-valued harmonic x. Let <math>D_0 = 0,\ D_i = \sum_{k=1}^{i} \delta_k</math> be the delta signature +δ<sub>1</sub> +δ<sub>2</sub> ... +δ<sub>n</sub> written cumulatively.

We want to measure the error without having to fix any dyad (as one might naively fix a dyad and measure errors in the other deltas). To do this we solve a least-squares error problem: use a root-sum-square error function and optimize x to minimize that function.

=== Domain and error model ===

We have two choices to make:

* whether to measure the linear (frequency ratio) error or the logarithmic (cents) error (called the ''domain'');
* the collection of intervals to sum over (called the ''error model''):
** '''Rooted''': only intervals from the root real-valued harmonic x are considered.
** '''Pairwise''': all intervals in the chord are compared.
** '''All-steps''': only intervals between adjacent notes are compared.

The method used to solve the problem will also differ depending on the number of variables involved (only one variable x in the fully delta-rational case).

We arrive at the following general formula. Let [n] = {0, 1, 2, ..., n}, let <math>I \subseteq \binom{[n]}{2}</math> be the error model, and let <math>f_D</math> represent the domain function (the identity or log). Then, with the convention <math>r_0 = 1</math>, the error function to be minimized by optimizing x and any free deltas is:

<math>\sum_{\{i,j\} \in I,\ i < j} \left( f_D\!\left( \frac{x + D_j}{x + D_i} \right) - f_D\!\left( \frac{r_j}{r_i} \right) \right)^2.</math>

=== Error function for various domains and error models ===

{| class="wikitable"
! Domain !! Error model !! Error function
|-
| rowspan="3" | Linear || Rooted || <math>\sum_{i=1}^{n} \left( \frac{x + D_i}{x} - r_i \right)^2 = \sum_{i=1}^{n} \left( 1 + \frac{D_i}{x} - r_i \right)^2</math>
|-
| Pairwise || <math>\sum_{0 \le i < j \le n} \left( \frac{x + D_j}{x + D_i} - \frac{r_j}{r_i} \right)^2</math>
|-
| All-steps || <math>\sum_{0 \le i < n} \left( \frac{x + D_{i+1}}{x + D_i} - \frac{r_{i+1}}{r_i} \right)^2</math>
|-
| rowspan="3" | Logarithmic (nepers) || Rooted || <math>\sum_{i=1}^{n} \left( \log \frac{x + D_i}{r_i x} \right)^2</math>
|-
| Pairwise || <math>\sum_{0 \le i < j \le n} \left( \log \frac{x + D_j}{x + D_i} - \log \frac{r_j}{r_i} \right)^2</math>
|-
| All-steps || <math>\sum_{0 \le i < n} \left( \log \frac{x + D_{i+1}}{x + D_i} - \log \frac{r_{i+1}}{r_i} \right)^2</math>
|}

To convert nepers to cents, multiply by <math>\frac{1200}{\log 2}</math>.
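The table above can be collected into a single routine. Below is a minimal sketch (the function name and argument conventions are ours, not from the article), assuming the conventions D<sub>0</sub> = 0 and r<sub>0</sub> = 1:

```python
import math

def dr_error(x, deltas, ratios, domain="linear", model="rooted"):
    """Sum-of-squared errors of the chord x : x+D_1 : ... : x+D_n against
    the target 1 : r_1 : ... : r_n, with D_0 = 0 and r_0 = 1 by convention."""
    D = [0.0]
    for d in deltas:                      # cumulative delta signature
        D.append(D[-1] + d)
    r = [1.0] + list(ratios)
    n = len(r) - 1
    if model == "rooted":                 # only intervals from the root
        pairs = [(0, j) for j in range(1, n + 1)]
    elif model == "pairwise":             # all intervals in the chord
        pairs = [(i, j) for i in range(n + 1) for j in range(i + 1, n + 1)]
    elif model == "all-steps":            # intervals between adjacent notes
        pairs = [(i, i + 1) for i in range(n)]
    else:
        raise ValueError(model)
    f = (lambda t: t) if domain == "linear" else math.log
    return sum((f((x + D[j]) / (x + D[i])) - f(r[j] / r[i])) ** 2
               for i, j in pairs)
```

For an exactly delta-rational chord such as 5:6:7:8 (deltas +1 +1 +1, ratios 1.2, 1.4, 1.6), every domain/model combination gives zero error at x = 5.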

== Solution methods ==

=== Analytic (FDR case) ===

==== Rooted linear ====

Setting the derivative with respect to x to 0 (equivalently, substituting u = 1/x, which makes the objective quadratic in u) gives us the closed-form solution

<math>x = \frac{\sum_{i=1}^{n} D_i^2}{\sum_{i=1}^{n} D_i (r_i - 1)},</math>

which can be plugged back into

<math>\sum_{i=1}^{n} \left( 1 + \frac{D_i}{x} - r_i \right)^2</math>

to obtain the least-squares linear error.
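As a quick sanity check, the closed form can be compared against the objective numerically. A minimal sketch (the helper names are ours), derived via the substitution u = 1/x, under which the objective is quadratic:

```python
def rooted_linear_best_x(D, r):
    """Closed-form minimizer of sum_i (1 + D_i/x - r_i)^2: substituting
    u = 1/x makes the objective quadratic in u, and the single normal
    equation gives u = sum_i D_i (r_i - 1) / sum_i D_i^2."""
    return sum(d * d for d in D) / sum(d * (ri - 1.0) for d, ri in zip(D, r))

def rooted_linear_error(x, D, r):
    # Rooted linear sum-of-squared errors at a given root harmonic x
    return sum((1.0 + d / x - ri) ** 2 for d, ri in zip(D, r))
```

For the chord 1 : 1.2 : 1.4 : 1.6 (exactly 5:6:7:8) with cumulative deltas D = (1, 2, 3), the optimum is x = 5 with zero error; for an inexact chord, the closed-form x beats nearby values of x.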

=== Grid method (FDR case) ===

Suppose we wish to approximate a target delta signature of the form +δ<sub>1</sub> +? +δ<sub>3</sub> with the chord 1:r<sub>1</sub>:r<sub>2</sub>:r<sub>3</sub> (where the +? is free to vary). By a derivation similar to the above, the least-squares problem is

<math>\underset{x,\ y}{\text{minimize}}\ \left( \frac{x + \delta_1}{x} - r_1 \right)^2 + \left( \frac{x + \delta_1 + y}{x} - r_2 \right)^2 + \left( \frac{x + \delta_1 + y + \delta_3}{x} - r_3 \right)^2,</math>

where y represents the free delta +?.

We can set the partial derivatives of the inner expression with respect to x and y equal to zero (minimizing the squared error is equivalent to minimizing its square root, since the derivative of sqrt is never 0 on positive inputs) and use SymPy to solve the resulting system:

import sympy

# Root real-valued harmonic x and free delta y
x = sympy.Symbol("x", real=True)
y = sympy.Symbol("y", real=True)
# Fixed deltas and target frequency ratios (the middle delta is the free
# one, represented by y rather than by a symbol \delta_{2})
d1 = sympy.Symbol("\\delta_{1}", real=True)
d3 = sympy.Symbol("\\delta_{3}", real=True)
r1 = sympy.Symbol("r_1", real=True)
r2 = sympy.Symbol("r_2", real=True)
r3 = sympy.Symbol("r_3", real=True)
# Squared rooted linear error of x : x+d1 : x+d1+y : x+d1+y+d3 against 1 : r1 : r2 : r3
err_squared = ((x + d1) / x - r1) ** 2 + ((x + d1 + y) / x - r2) ** 2 + ((x + d1 + y + d3) / x - r3) ** 2
# Stationarity conditions
err_squared_x = sympy.diff(err_squared, x)
err_squared_y = sympy.diff(err_squared, y)
sympy.nonlinsolve([err_squared_x, err_squared_y], [x, y])

The unique solution with x > 0 is

<math>x = \frac{2\delta_1 + \delta_3 + 2y}{r_2 + r_3 - 2},</math>

<math>y = \frac{-2\delta_1^2 r_1 + \delta_1^2 r_2 + \delta_1^2 r_3 - \delta_1 \delta_3 r_1 + \delta_1 \delta_3 r_2 - \delta_1 \delta_3 r_3 + \delta_1 \delta_3 + \delta_3^2 r_2 - \delta_3^2}{2\delta_1 r_1 - 2\delta_1 - \delta_3 r_2 + \delta_3 r_3}.</math>
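These expressions can be verified numerically: at the closed-form point, both partial derivatives of the squared error should vanish. A minimal sketch (function names are ours), using central finite differences:

```python
def pdr_error(x, y, d1, d3, r1, r2, r3):
    # Squared rooted linear error of x : x+d1 : x+d1+y : x+d1+y+d3
    return (((x + d1) / x - r1) ** 2
            + ((x + d1 + y) / x - r2) ** 2
            + ((x + d1 + y + d3) / x - r3) ** 2)

def closed_form_xy(d1, d3, r1, r2, r3):
    # The closed-form solution for the +d1 +? +d3 signature, transcribed directly
    y = ((-2*d1**2*r1 + d1**2*r2 + d1**2*r3 - d1*d3*r1 + d1*d3*r2
          - d1*d3*r3 + d1*d3 + d3**2*r2 - d3**2)
         / (2*d1*r1 - 2*d1 - d3*r2 + d3*r3))
    x = (2*d1 + d3 + 2*y) / (r2 + r3 - 2)
    return x, y
```

For the exactly DR chord 5:6:7:8 (δ<sub>1</sub> = δ<sub>3</sub> = 1, ratios 1.2, 1.4, 1.6) this gives x = 5, y = 1 with zero error; for a perturbed chord, the finite-difference gradient at the closed-form point is numerically zero.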

=== Partially DR (one related delta set, arbitrary free deltas) ===

We similarly include a free variable y<sub>i</sub> to be optimized for every additional +?, after coalescing strings of consecutive +?'s into segments and after trimming leading and trailing free delta segments.

The L-BFGS-B (limited-memory Broyden-Fletcher-Goldfarb-Shanno with Box constraints) algorithm is a bounded quasi-Newton method particularly well-suited for this problem:

* '''Gradient-based:''' Uses numerical gradient information to find precise search directions, unlike derivative-free methods (Nelder-Mead, Powell), which explore via simplex transformations or conjugate directions
* '''Quasi-Newton approximation:''' Builds up an approximation of the Hessian (the matrix of second derivatives) using only gradient evaluations, avoiding expensive exact Hessian computation
* '''Memory efficient:''' The limited-memory variant stores only the last m gradient differences (typically m = 5-10), making it suitable for problems with many free variables
* '''Box constraints:''' Handles the bound x > 0 directly via its box constraints; a small logarithmic barrier penalty additionally discourages x from approaching 0
* '''Smooth objective:''' The sum-of-squared-errors objective function is smooth and differentiable, ideal for gradient-based optimization

'''Performance comparison''' (from test cases):

{| class="wikitable"
! Method !! Iterations !! Typical Error !! Notes
|-
| L-BFGS-B || 14-20 || 0.000359-0.000681 || Fastest convergence, lowest errors
|-
| Nelder-Mead || 200-400 || 0.000465-0.000993 || Derivative-free, slower but robust
|-
| Powell || 150-250 || 0.000566-0.000681 || Conjugate directions, moderate speed
|}

'''Implementation details:'''
* Variables: [x, y<sub>1</sub>, y<sub>2</sub>, ..., y<sub>k</sub>], where k = number of interior free segments
* Bounds: x ∈ (10<sup>-6</sup>, ∞), y<sub>i</sub> ∈ (-∞, ∞)
* Normalization: scales deltas to keep x ≈ 5 for numerical stability
* Multiple starting points: tries 4-5 random initializations and returns the best result
* Barrier weight: 10<sup>-10</sup> (very small, to minimize interference with the true objective)

The gradient-based approach allows L-BFGS-B to converge in typically 15-20 iterations versus 200-400 for derivative-free methods, making it the preferred choice for PDR optimization.
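A minimal sketch of this setup, assuming SciPy is available. It optimizes the signature +1 +? +1 against the exactly DR chord 1 : 1.2 : 1.4 : 1.6; for simplicity it relies on SciPy's native box constraint instead of the barrier penalty described above, and uses a single starting point rather than multiple random initializations:

```python
from scipy.optimize import minimize

def objective(v, d1=1.0, d3=1.0, r=(1.2, 1.4, 1.6)):
    # Squared rooted linear error for the signature +d1 +? +d3,
    # with v = [x, y] (root real-valued harmonic and free delta)
    x, y = v
    cum = (d1, d1 + y, d1 + y + d3)   # cumulative deltas D_1, D_2, D_3
    return sum(((x + D) / x - ri) ** 2 for D, ri in zip(cum, r))

res = minimize(objective, x0=[3.0, 0.5], method="L-BFGS-B",
               bounds=[(1e-6, None), (None, None)])
# The chord 5:6:7:8 fits the signature exactly, so res.x should be
# close to (5, 1) with a near-zero objective value.
```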

==== Pseudocode ====