DR error measures

From Xenharmonic Reference
Revision as of 01:48, 13 December 2025

This is a technical or mathematical page. While the subject may be of some relevance to music, the page treats the subject in technical language.

This article will describe several least-squares error measures for delta-rational chords. They have the advantage of not fixing a particular interval in the chord when constructing the chord of best fit. However, like any other numerical measure of concordance or error, you should take them with a grain of salt.

Conventions and introduction

The idea motivating least-squares error measures on a chord as an approximation to a given delta signature is the following (for simplicity, let’s talk about the fully DR case first):

We want the error of a chord <math>1:r_1:r_2:\dots:r_n</math> (in increasing order), with ''n'' > 1, in the linear domain as an approximation to a fully delta-rational chord with signature <math>+\delta_1 +\delta_2 \dots +\delta_n</math>, i.e. a chord

<math>x : x+\delta_1 : \dots : x+\sum_{l=1}^{n}\delta_l</math>

with root real-valued harmonic ''x''. Let <math>D_0 = 0,\ D_i = \sum_{k=1}^{i}\delta_k</math> be the delta signature <math>+\delta_1 +\delta_2 \dots +\delta_n</math> written cumulatively.

We want to measure the error without having to fix any dyad (as one might naively fix a dyad and measure errors in the other deltas). To do this we solve a least-squares error problem: use a root-sum-square error function and optimize x to minimize that function.

Domain and error model

We have two choices:

  • to measure either the linear (frequency ratio) error or the logarithmic (cents) one (called the domain);
  • the collection of intervals to sum over (which we call the error model):
    • Rooted: Only intervals from the root real-valued harmonic x are chosen.
    • Pairwise: All intervals in the chord are compared.
    • All-steps: Only intervals between adjacent notes are compared.

The method to solve the problem will also differ depending on the number of variables involved (only one variable ''x'' in the fully delta-rational case).

We arrive at the following general formula: Let <math>[n] = \{0, 1, 2, \dots, n\}</math>, let <math>I \subseteq \binom{[n]}{2}</math> be the error model, and let <math>f_D</math> represent the domain function (identity or log). Then the error function to be minimized by optimizing ''x'' and any free deltas is:

<math>\sum_{i<j,\ \{i,j\} \in I} \left( f_D\!\left(\frac{x+D_j}{x+D_i}\right) - f_D\!\left(\frac{r_j}{r_i}\right) \right)^2.</math>

{| class="wikitable"
|+ Error functions for the various domains and error models
! Domain !! Error model !! Error function
|-
| rowspan="3" | Linear || Rooted || <math>\sum_{i=1}^{n}\left(\frac{x+D_i}{x}-r_i\right)^2 = \sum_{i=1}^{n}\left(1+\frac{D_i}{x}-r_i\right)^2</math>
|-
| Pairwise || <math>\sum_{0 \le i < j \le n}\left(\frac{x+D_j}{x+D_i}-\frac{r_j}{r_i}\right)^2</math>
|-
| All-steps || <math>\sum_{0 \le i < n}\left(\frac{x+D_{i+1}}{x+D_i}-\frac{r_{i+1}}{r_i}\right)^2</math>
|-
| rowspan="3" | Logarithmic (nepers) || Rooted || <math>\sum_{i=1}^{n}\left(\log\frac{x+D_i}{r_i x}\right)^2</math>
|-
| Pairwise || <math>\sum_{0 \le i < j \le n}\left(\log\frac{x+D_j}{x+D_i}-\log\frac{r_j}{r_i}\right)^2</math>
|-
| All-steps || <math>\sum_{0 \le i < n}\left(\log\frac{x+D_{i+1}}{x+D_i}-\log\frac{r_{i+1}}{r_i}\right)^2</math>
|}

To convert nepers to cents, multiply by <math>\frac{1200}{\log 2}</math>.
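The general formula above can be made concrete with a short Python sketch (not part of the article; the function and variable names are ad hoc). The error model is passed in as a collection of index pairs, and the domain function is either the identity (linear) or the natural logarithm (nepers):

```python
import math
from itertools import accumulate

def dr_error(x, deltas, ratios, model, domain=lambda t: t):
    """Sum of squared errors for the chord x : x+D_1 : ... : x+D_n as an
    approximation to 1:r_1:...:r_n, summed over the index pairs in `model`.
    `domain` is the identity (linear) or math.log (logarithmic, in nepers)."""
    D = [0] + list(accumulate(deltas))   # cumulative deltas, D_0 = 0
    r = [1.0] + list(ratios)             # r_0 = 1 for the root
    return sum((domain((x + D[j]) / (x + D[i])) - domain(r[j] / r[i])) ** 2
               for i, j in model)

# The three error models for a chord with n = 2 upper notes:
n = 2
rooted = [(0, j) for j in range(1, n + 1)]
pairwise = [(i, j) for i in range(n + 1) for j in range(i + 1, n + 1)]
all_steps = [(i, i + 1) for i in range(n)]

# 4:5:6 realizes the signature +1 +1 exactly, so the error vanishes at x = 4
err = dr_error(4, [1, 1], [5 / 4, 3 / 2], pairwise, domain=math.log)
```

Any other index set over <math>[n]</math> can be supplied the same way, so hybrid error models are possible in principle.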

Solution methods

Analytic

Fully DR, rooted linear

Setting the derivative with respect to ''x'' to 0 gives the condition <math>\sum_{i=1}^{n} D_i \left(1 + \frac{D_i}{x} - r_i\right) = 0</math>, and hence the closed-form solution

<math>x = \frac{\sum_{i=1}^{n} D_i^2}{\sum_{i=1}^{n} D_i \left(r_i - 1\right)},</math>

which can be plugged back into

<math>\sum_{i=1}^{n}\left(1+\frac{D_i}{x}-r_i\right)^2</math>

to obtain the least-squares linear error.
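As a sketch (not from the article), the closed form <math>x = \sum_i D_i^2 / \sum_i D_i (r_i - 1)</math> obtained by zeroing the derivative can be computed and plugged back in directly; the function name is ad hoc:

```python
from itertools import accumulate

def rooted_linear_fit(deltas, ratios):
    """Least-squares root x and rooted linear error of the chord
    1:r_1:...:r_n against the fully DR signature +d_1 +d_2 ... +d_n.
    Uses x = sum(D_i^2) / sum(D_i * (r_i - 1)) from zeroing the derivative."""
    D = list(accumulate(deltas))   # cumulative deltas D_1..D_n
    x = sum(d * d for d in D) / sum(d * (r - 1) for d, r in zip(D, ratios))
    err_sq = sum((1 + d / x - r) ** 2 for d, r in zip(D, ratios))
    return x, err_sq ** 0.5

# 4:5:6 matches +1 +1 against 1:5/4:3/2 exactly, so x = 4 with zero error
x, err = rooted_linear_fit([1, 1], [5 / 4, 3 / 2])
```

For a chord that is not exactly delta-rational, the returned ''x'' minimizes the rooted linear error among all positive roots, since the error is a positive-definite quadratic in 1/''x''.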

Partially DR (one related delta set, one free variable), rooted linear

Suppose we wish to approximate a target delta signature of the form <math>+\delta_1 +? +\delta_3</math> with the chord <math>1:r_1:r_2:r_3</math> (where the +? is free to vary). The least-squares problem is

<math>\underset{x,\,y}{\operatorname{minimize}} \sqrt{\left(\frac{x+\delta_1}{x}-r_1\right)^2+\left(\frac{x+\delta_1+y}{x}-r_2\right)^2+\left(\frac{x+\delta_1+y+\delta_3}{x}-r_3\right)^2},</math>

where y represents the free delta +?.

We can set the partial derivatives with respect to x and y of the inner expression equal to zero (since the derivative of sqrt() is never 0) and use SymPy to solve the system symbolically:

<syntaxhighlight lang="py">
import sympy

x = sympy.Symbol("x", real=True)
y = sympy.Symbol("y", real=True)
d1 = sympy.Symbol("\\delta_{1}", real=True)
d3 = sympy.Symbol("\\delta_{3}", real=True)
r1 = sympy.Symbol("r_1", real=True)
r2 = sympy.Symbol("r_2", real=True)
r3 = sympy.Symbol("r_3", real=True)

# Squared rooted linear error: the inner expression, without the sqrt
err_squared = (
    ((x + d1) / x - r1) ** 2
    + ((x + d1 + y) / x - r2) ** 2
    + ((x + d1 + y + d3) / x - r3) ** 2
)

# Solve the critical-point system: both partial derivatives equal to zero
err_squared_x = sympy.diff(err_squared, x)
err_squared_y = sympy.diff(err_squared, y)
sympy.nonlinsolve([err_squared_x, err_squared_y], [x, y])
</syntaxhighlight>

The unique solution with x > 0 is

<math>x = \frac{2\delta_1 + \delta_3 + 2y}{r_2 + r_3 - 2},</math>

<math>y = \frac{- 2 \delta_1^2 r_1 + \delta_1^2 r_2 + \delta_1^2 r_3 - \delta_1 \delta_3 r_1 + \delta_1 \delta_3 r_2 - \delta_1 \delta_3 r_3 + \delta_1 \delta_3 + \delta_3^2 r_2 - \delta_3^2}{2 \delta_1 r_1 - 2 \delta_1 - \delta_3 r_2 + \delta_3 r_3}.</math>
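The closed-form solution (with <math>x = (2\delta_1 + \delta_3 + 2y)/(r_2 + r_3 - 2)</math> and ''y'' as returned by SymPy) can be spot-checked numerically by confirming that both partial derivatives vanish there. A sketch, not from the article; the sample target values are arbitrary:

```python
# Arbitrary sample: signature +1 +? +1 against the chord 1:1.26:1.50:1.77
d1, d3 = 1.0, 1.0
r1, r2, r3 = 1.26, 1.50, 1.77

# Closed-form y from the SymPy solution, then x expressed in terms of y
y = (-2 * d1**2 * r1 + d1**2 * r2 + d1**2 * r3 - d1 * d3 * r1 + d1 * d3 * r2
     - d1 * d3 * r3 + d1 * d3 + d3**2 * r2 - d3**2) / \
    (2 * d1 * r1 - 2 * d1 - d3 * r2 + d3 * r3)
x = (2 * d1 + d3 + 2 * y) / (r2 + r3 - 2)

def err_sq(x, y):
    """Squared rooted linear error for the chord x : x+d1 : x+d1+y : x+d1+y+d3."""
    return (((x + d1) / x - r1) ** 2 + ((x + d1 + y) / x - r2) ** 2
            + ((x + d1 + y + d3) / x - r3) ** 2)

# Central-difference gradient; both components should be ~0 at the optimum
h = 1e-5
grad_x = (err_sq(x + h, y) - err_sq(x - h, y)) / (2 * h)
grad_y = (err_sq(x, y + h) - err_sq(x, y - h)) / (2 * h)
```

The same check works for any <math>\delta_1, \delta_3, r_1, r_2, r_3</math> for which the denominators are nonzero and the resulting ''x'' is positive.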

Grid method (FDR case)

We similarly include a free variable <math>y_i</math> to be optimized for every additional +?, after coalescing strings of consecutive +?'s into segments and after trimming leading and trailing free delta segments.
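The preprocessing step described above can be sketched in Python (not from the article; the encoding of a signature as numbers for fixed deltas and the marker "?" for free deltas is an assumption made for illustration):

```python
from itertools import groupby

def coalesce(signature):
    """Coalesce each maximal run of consecutive '?' entries into a single
    free variable and trim leading/trailing free segments. `signature`
    mixes numbers (fixed deltas) and the marker '?' (free deltas)."""
    out = []
    for is_free, run in groupby(signature, key=lambda d: d == "?"):
        if is_free:
            out.append("?")      # one free variable per run of +?'s
        else:
            out.extend(run)
    while out and out[0] == "?":   # leading free segment: trim
        out.pop(0)
    while out and out[-1] == "?":  # trailing free segment: trim
        out.pop()
    return out

# +1 +? +? +2 +? needs a single free variable between the two fixed deltas
sig = coalesce([1, "?", "?", 2, "?"])  # [1, "?", 2]
```

Each remaining "?" then gets its own optimization variable <math>y_i</math> alongside ''x''.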

Pseudocode