DR error measures

From Xenharmonic Reference

Revision as of 06:04, 12 December 2025
This is a technical or mathematical page. While the subject may be of some relevance to music, the page treats the subject in technical language.

This article describes several least-squares error measures for delta-rational chords. They have the advantage of not fixing a particular interval in the chord when constructing the chord of best fit. However, like any other numerical measure of concordance or error, you should take them with a grain of salt.

== Fixed-root linear error ==

Fixed-root linear error (here linear means "in frequency space, not pitch space") measures error by optimizing a real-valued root harmonic so that the cumulative intervals above it best match the target chord's DR signature.

=== Fully DR ===

The idea motivating least-squares linear error on a chord as an approximation to a given delta signature is the following (for simplicity, let’s talk about the fully DR case first):

Say we want the error of a chord <math>1:f_1:f_2:\cdots:f_n</math> (in increasing order), with <math>n > 1</math>, in the linear domain as an approximation to a fully delta-rational chord with signature <math>{+}\delta_1\, {+}\delta_2 \cdots {+}\delta_n</math>, i.e. a chord

<math>x : x+\delta_1 : \cdots : x+\textstyle\sum_{l=1}^n \delta_l.</math>

We wish to minimize the following frequency-domain error function by optimizing <math>x</math>, where <math>D_i = \textstyle\sum_{l=1}^{i} \delta_l</math> denotes the cumulative delta up to the <math>i</math>-th note above the root:

<math>\sum_{i=1}^n \left(\frac{x + D_i}{x} - f_i\right)^2 = \sum_{i=1}^n \left(1 + \frac{D_i}{x} - f_i\right)^2.</math>

Setting the derivative to 0 gives us the closed-form solution

<math>x = \frac{\sum_{i=1}^n D_i^2}{\sum_{i=1}^n D_i\,(f_i - 1)},</math>

which can be plugged back into

<math>\sqrt{\sum_{i=1}^n \left(1 + \frac{D_i}{x} - f_i\right)^2}</math>

to obtain the least-squares linear error.
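As a quick numerical check, the closed-form optimum can be evaluated directly. The sketch below (the helper name is illustrative, not from the article; it uses NumPy) recovers the exact root for the just chord 4:5:6, which realizes the signature +1+1 exactly, so the error vanishes:

```python
import numpy as np

# Illustrative helper: least-squares linear error of the chord
# 1 : f_1 : ... : f_n against a fully DR signature (+d_1)(+d_2)...(+d_n).
def fixed_root_linear_error(f, deltas):
    f = np.asarray(f, dtype=float)
    D = np.cumsum(deltas)                       # cumulative deltas D_i
    x = np.sum(D ** 2) / np.sum(D * (f - 1))    # closed-form optimal root
    err = np.sqrt(np.sum((1.0 + D / x - f) ** 2))
    return x, err

# 4:5:6 realizes the signature +1+1 exactly, so the optimum is x = 4
# and the least-squares linear error is 0.
x, err = fixed_root_linear_error([5/4, 6/4], [1.0, 1.0])
```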

=== Partial DR ===

Suppose we wish to approximate a target delta signature of the form <math>{+}\delta_1\,{+}?\,{+}\delta_3</math> with the chord <math>1:f_1:f_2:f_3</math> (where the <math>{+}?</math> is free to vary). By a derivation similar to the above, the least-squares problem is

<math>\underset{x,\,y}{\operatorname{minimize}}\ \sqrt{\left(\frac{x+\delta_1}{x} - f_1\right)^2 + \left(\frac{x+\delta_1+y}{x} - f_2\right)^2 + \left(\frac{x+\delta_1+y+\delta_3}{x} - f_3\right)^2},</math>

where <math>y</math> represents the free delta <math>{+}?</math>.

We can set the partial derivatives of the radicand with respect to <math>x</math> and <math>y</math> equal to zero (since the square root is strictly increasing, minimizing the radicand is equivalent) and use SymPy to solve the system:

import sympy

x = sympy.Symbol("x", real=True)
y = sympy.Symbol("y", real=True)
d1 = sympy.Symbol("\\delta_{1}", real=True)
d3 = sympy.Symbol("\\delta_{3}", real=True)
f1 = sympy.Symbol("f_1", real=True)
f2 = sympy.Symbol("f_2", real=True)
f3 = sympy.Symbol("f_3", real=True)

# Radicand of the error function; its stationary points are the candidates.
err_squared = (
    ((x + d1) / x - f1) ** 2
    + ((x + d1 + y) / x - f2) ** 2
    + ((x + d1 + y + d3) / x - f3) ** 2
)

# Solve both partial derivatives = 0 simultaneously.
derr_dx = sympy.diff(err_squared, x)
derr_dy = sympy.diff(err_squared, y)
print(sympy.nonlinsolve([derr_dx, derr_dy], [x, y]))

The unique solution with <math>x > 0</math> is

<math>(x, y) = \left( \frac{2\delta_1 + \delta_3 + \dfrac{2\left(-2\delta_1^2 f_1 + \delta_1^2 f_2 + \delta_1^2 f_3 - \delta_1 \delta_3 f_1 + \delta_1 \delta_3 f_2 - \delta_1 \delta_3 f_3 + \delta_1 \delta_3 + \delta_3^2 f_2 - \delta_3^2\right)}{2\delta_1 f_1 - 2\delta_1 - \delta_3 f_2 + \delta_3 f_3}}{f_2 + f_3 - 2},\ \frac{-2\delta_1^2 f_1 + \delta_1^2 f_2 + \delta_1^2 f_3 - \delta_1 \delta_3 f_1 + \delta_1 \delta_3 f_2 - \delta_1 \delta_3 f_3 + \delta_1 \delta_3 + \delta_3^2 f_2 - \delta_3^2}{2\delta_1 f_1 - 2\delta_1 - \delta_3 f_2 + \delta_3 f_3} \right).</math>
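Plugging concrete values into the closed-form solution is a useful sanity check. The chord 4:5:6:7 realizes the signature +1+?+1 exactly with <math>x = 4</math> and <math>y = 1</math>, so the error should vanish (plain Python, values illustrative):

```python
# Target signature +1 +? +1 approximated by the just chord 4:5:6:7.
d1, d3 = 1.0, 1.0
f1, f2, f3 = 5/4, 6/4, 7/4

# Shared numerator and denominator of the closed-form solution.
N = (-2*d1**2*f1 + d1**2*f2 + d1**2*f3 - d1*d3*f1 + d1*d3*f2
     - d1*d3*f3 + d1*d3 + d3**2*f2 - d3**2)
D = 2*d1*f1 - 2*d1 - d3*f2 + d3*f3

y = N / D
x = (2*d1 + d3 + 2*N/D) / (f2 + f3 - 2)

# For this exactly-DR chord the squared error at the optimum is zero.
err_sq = (((x + d1)/x - f1)**2 + ((x + d1 + y)/x - f2)**2
          + ((x + d1 + y + d3)/x - f3)**2)
```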

In general, we include one free variable to be optimized for every additional +?, after first coalescing each string of consecutive +?'s into a single one (omitting the middle notes) and trimming any leading and trailing +?'s.

Todo: The L-BFGS-B algorithm is suited to bounded optimization problems in up to five variables (the base real-valued harmonic plus four free deltas, a realistic upper bound on real-world uses of partial DR), so let's talk about that.
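As a sketch of that approach (assuming SciPy is available; the chord and signature here are illustrative, not from the article), the partial-DR squared error for +1+?+1 can be handed directly to L-BFGS-B with a positivity bound on the root:

```python
import numpy as np
from scipy.optimize import minimize

d1, d3 = 1.0, 1.0               # target signature +1 +? +1
f = np.array([5/4, 6/4, 7/4])   # chord 4:5:6:7 (realizes the signature exactly)

def err_squared(params):
    # Squared linear error of x : x+d1 : x+d1+y : x+d1+y+d3 against 1:f1:f2:f3.
    x, y = params
    notes = np.array([x + d1, x + d1 + y, x + d1 + y + d3])
    return np.sum((notes / x - f) ** 2)

# L-BFGS-B enforces the bound x > 0; the free delta y is unbounded.
res = minimize(err_squared, x0=[1.0, 0.5], method="L-BFGS-B",
               bounds=[(1e-6, None), (None, None)])
```

For this exactly-DR target the optimizer should land near (x, y) = (4, 1) with error near zero.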

== Fixed-root logarithmic error ==

=== Fully DR ===

The error function to be minimized is

<math>1200 \sqrt{\sum_{i=1}^n \Bigg(\log_2 \frac{x + D_i}{f_i x} \Bigg)^2}.</math>
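There is no simple closed form here, but the one-dimensional minimization over <math>x</math> is easy numerically. A sketch assuming SciPy, with the chord 4:5:6 against the signature +1+1 as an illustrative target:

```python
import numpy as np
from scipy.optimize import minimize_scalar

f = np.array([5/4, 6/4])   # chord 4:5:6
D = np.array([1.0, 2.0])   # cumulative deltas of the signature +1+1

def log_error_cents(x):
    # 1200 * sqrt(sum of squared log2 residuals), i.e. the error in cents.
    return 1200 * np.sqrt(np.sum(np.log2((x + D) / (f * x)) ** 2))

# The chord realizes the signature exactly at x = 4, where the error is 0.
res = minimize_scalar(log_error_cents, bounds=(0.1, 100.0), method="bounded")
```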

== Floating-root linear error ==

== Floating-root logarithmic error ==