

  • K4/Q displays from various fields of math, finance and physics

    1. a complete residue system modulo 17 consisting of multiples of 3

    /Q
    N:17;f:{x-N*y};r:neg i:til 1+N;
    {t@p:first where 0=(t:{f . x} each x,/:r) mod 3} each i

    2. the secant method:

    /k4
    *{2#(((x0*y1)-x1*y0)%(y1:f@x1:x 0)-y0:f(x0:x 1)),x}/init
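    A minimal usage sketch (f and init are assumed names, not from the original): with f:{-2f+x*x}
    and init:2 1f the iteration converges to the positive root of x²-2 ..

    /k4
    f:{-2f+x*x};init:2 1f
    *{2#(((x0*y1)-x1*y0)%(y1:f@x1:x 0)-y0:f(x0:x 1)),x}/init /.. 1.414214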

    3. highest common factor:

    /k4
    g:{$[0=c:x-y*_ x%y;y;g[y;c]]}
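    For example:

    /k4
    g[91;35] /.. 7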

    4. Newton method for single polynomial roots:

    /k4
    {:c::c-pv[p]%pv[dp]}/()
    (.. where c = start value, p = polynomial coefficients, pv:{{y+c*x}/[*x;1_x]} and dp:{(*r)#x*r:|!#x}@p)
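    A minimal worked run (an assumed test case): the polynomial x²-2, coefficients given
    highest term first, start value 1.5 ..

    /k4
    pv:{{y+c*x}/[*x;1_x]}
    p:1 0 -2f;dp:{(*r)#x*r:|!#x}@p /dp .. 2 0f, ie the derivative 2x
    c:1.5
    {:c::c-pv[p]%pv[dp]}/() /c converges to 1.414214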

    5. Muller's method for polynomials ..

    /k4
    pv:{{y+c*x}/[*x;1_x]}
    qe:{D:(b*b:x@1)-4f*(a:x@0)*x@2;((-b+s),-b-s:sqrt D)%2f*a}

    p:1 0 -3 1f;/polynomial
    i:0 0.5 1;/3 starting points needed

    /here 4 iterations ..
    do[4;
    ys:{c::x;:pv[p]}'i;
    y0:ys@1;y1:ys@2;y2:ys@0;
    a:((y1*h2)+(y2*h1)-y0*h1+h2)%h1*h2*(h1:(x1:i 2)-x0)+h2:(x0:i 1)-x2:i 0;
    sx:*x0+t@&u=&/u:{x|-x}'t:qe@a,((y1-y0+a*h1*h1)%h1),y0;
    i:((-9)+6*+/sx
    ]

    6. Fixed point iteration

    /sample 1
    th:1f%3f;g:{(1f+x) xexp th}
    k)g/ 1f .. 1.324718
    iteration solving f(x) = x³-x-1 = 0
    ..where g fulfills the conditions of Banach's fixed point theorem on [1,2]

    /sample 2
    g:{y*(2f+x*-6f+x*11f+3f*x) xexp 0.25}
    k){g[x;-1f]}/ -2.2 .. -2.496744 resp {g[x;1f]}/ 3 .. 4.982284
    iteration solving f(x) = x⁴-3x³-11x²+6x-2 = 0

    Friendly hint: the iteration function need not be the same for all roots !!

    7. Chinese remainder (not finished yet ..)

    /solving in Q ... x ≡ aᵢ (mod mᵢ)
    f:{n@first where 0=((neg y)+x*n:til z) mod z}
    crt:{sum y*t*f ./:flip (t:floor (prd x)%/:x;(count x)#1;x)}

    0 1 2 3 + crt[i;0,1_(i:r*r)-til 4] /.. 4 consecutive non-square-free numbers
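    A minimal check with f and crt above, against Sun Zi's classical problem
    x ≡ 2 (mod 3), x ≡ 3 (mod 5), x ≡ 2 (mod 7); reducing modulo the product gives the smallest solution ..

    /Q
    (crt[m;2 3 2]) mod prd m:3 5 7 /.. 23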

    Some history:

    The Chinese remainder is mentioned in the 3rd part of the book "Sun Tzu Suan Ching"
    written by the Chinese mathematician Sun Zi (Master Sun). Unfortunately, little is
    known about Sun Zi as he did not have any high political position nor an influential
    social standing and so he did not obtain the highly deserved place in the official
    Chinese history. "Sun Tzu Suan Ching" was not written before 280 AD resp later than
    473 AD - but Sun's name went forgotten already in the early period of the 7th century.
    There is some evidence that Sun Zi was a Buddhist scholar. It was not uncommon for
    Buddhist monks to be well versed in mathematics.

    8. Newton method for multiple polynomial roots:

    /k4
    {:c::c-m*pv[p]%pv[dp]}/()
    (.. where m = multiplicity, c = start value, p = polynomial coefficients, pv:{{y+c*x}/[*x;1_x]} and dp:{(*r)#x*r:|!#x}@p)

    Friendly hint: the iteration function g converges quadratically where g'(xfix) = 0 and
    g''(xfix) <> 0 and g''(x) is continuous "near" xfix.

    Let us look at: f(x) = 27x⁶-135x⁵+225x⁴-98x³-135x²+225x-125 = 0

    With m=1 (starting point = 4) we would get the expected sluggish progress
    3.505972,3.100675,2.770442,2.50393,2.291592,2.125182,1.997326,1.90124,1.830651,1.779899
    1.74409,1.719211,1.702133,1.690512,1.682655,1.677367,1.673819,1.671443,1.669855,1.668794,..

    With m=3 we get almost lightspeed:
    2.517915,1.872123,1.684531,1.666822,1.666667 DONE

    The correct value is 5/3.

    9. Newton method for multiple dimensions ..

    /k4
    li:{-(!:+-1_u)$+-1#u:+x}
    {x+,/li@m .\:\:x}/ startvalue

    m is the function-matrix (in case of polynomials we can represent the partial
    derivatives as translations of hyperplanes in an n-dimensional space)

    let's solve:

    xˣ + y² = 3.950607
    x² + yʸ = 3.340242

    {x+,/li@m .\:\:x}/ (2f;6f) .. returns ≈ 0.77 1.77; where m:(fx,fy,f;gx,gy,g) and

    f:{(x xexp x)+(y*y)-3.950607}
    fx:{[x;y](1f+log x)*x xexp x}
    fy:{[x;y]2f*y}

    g:{(y xexp y)+(x*x)-3.340242}
    gx:{fx[y;x]}
    gy:{fy[y;x]}

    10. fixed point method for n dimensions:

    x³ - y(sin y) + yz² = -7
    xy - yz + z^(1/3) = -0.2427912
    x(sin y) + y(sin x) + xz² = -16.14828

    then {(f1,f2,f3).\:x}/ (-3 1.5 2.5f) will converge to:
    -2.105851 0.3408871 2.682297, where

    f1:{(-7f+(y*sin y)-y*z*z) xexp 1%3}
    f2:{(0.2427912+z xexp 1%3)%z-x}
    f3:{[x;y;z]sqrt ((neg 16.14828)-(x*sin y)+y*sin x)%x}

    where at least one B-norm must be < 1 (ie ‖B‖∞, ‖B‖₁ .. etc) over a suitable n-dimensional cube around the start value

    11. system of linear equations - iterative Jacobi method

    /k4
    {m$x,1f}/ startvalue /we assume m is strictly diagonally row(column) dominant
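    The one-liner assumes m already encodes the iteration. A plausible layout (an assumption, not
    spelled out above) is: row i = -A[i;j]%A[i;i] for j<>i, 0 on the diagonal, with b[i]%A[i;i]
    appended as last column. A toy system Ax = b:

    /k4+q
    A:(4 1 1f;1 5 2f;1 2 6f);b:6 3 5f
    d:A ./: 2#'til n:count A /diagonal of A
    m:{.[x;y;:;0f]}/[(neg A)%d;2#'til n] /off-diagonal part, scaled by -1%diagonal
    m:m,'b%d /append the scaled right-hand side
    {m$x,1f}/[n#0f] /.. ~ 1.3299 0.1031 0.5773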


    12. system of linear equations - iterative Gauss-Seidel method

    /k4
    {i:0;c:();do[r:#x;c,:+/(c,0f,((1+i-r)#x),1f)*m@i;i+:1];:c}/ startvalue
    /we assume m is strictly diagonally row(column) dominant
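    With the same assumed layout of m (and n) as in the Jacobi sketch above, the Gauss-Seidel
    one-liner reproduces the solution, typically in fewer iterations ..

    /k4+q
    {i:0;c:();do[r:#x;c,:+/(c,0f,((1+i-r)#x),1f)*m@i;i+:1];:c}/[n#0f] /.. ~ 1.3299 0.1031 0.5773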

    13. Lagrange interpolation polynomials L..

    /k4+q
    /f(xi) = yi

    L(f) = Σᵢ yᵢPᵢ and where Pᵢ = Πⱼ (x-xⱼ) / Πⱼ (xᵢ-xⱼ) and j ≠ i

    q)sum (ys%prd flip xs-p)*1f,/:eq each p:xs@r except/:r:til count xs

    example: f(-2 -1 1 3) = -15 -4 0 20 → L = x³-x²+x-1 and where ..

    q)nc:{(not 0 in deltas x)&x~asc x} /just eliminating the variational noise
    q)vr:{s::x;y {s cross x}/x}
    k)fx:{q@&nc'q:vr[1+!x;y]}
    k)vt:{+/*/+z@-1+fx[x;y]}
    k)eq:{c::#r::x;(c#-1 1f)*(+/r),{vt[c;x;r]}'1_!c}

    eq corresponds to Vieta: 1f,eq 3 8 5 .. (x-3)(x-8)(x-5)

    The Lagrange sample above also shows that the classical school way of representing
    solutions can be pretty misleading when searching for effective algorithms:

    The 5 steps above leading to Vieta's representation can be replaced in k4 by one single line:

    1f,(*s) {1 _ (0f,y*c)+(c:1f,x),0f}/1_s ...

    where s represents the negative roots of the polynomial ... ie (x-s₀)(x-s₁)(x-s₂)..(x-sₙ) = Πⱼ(x-sⱼ)
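    When only values of L(f) are needed (not its coefficients), a direct evaluation is a minimal
    alternative sketch; lag, xs, ys and z are assumed names ..

    /q .. value of the Lagrange polynomial at point z
    lag:{[xs;ys;z]sum ys*{[xs;z;i]prd (z-xs j)%xs[i]-xs j:(til count xs) except i}[xs;z] each til count xs}
    lag[-2 -1 1 3f;-15 -4 0 20f;0f] /.. -1f = value of x³-x²+x-1 at 0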

    14. Newton's interpolation polynomial derived from divided differences

    We define:

    f[xᵢ] := fᵢ and f[xᵢ,xᵢ₊₁] := (f[xᵢ₊₁]-f[xᵢ])/(xᵢ₊₁-xᵢ) .. and implicitly by math. induction we will find
    f[x₀,x₁,...,xₙ] = (f[x₁,...,xₙ]-f[x₀,...,xₙ₋₁])/(xₙ-x₀) = Σᵢ (fᵢ / Πⱼ (xᵢ-xⱼ)) and j ≠ i and (i,j) in {0,1,..,n}

    /k4
    @[+/a*{(-o)#(o#0f),x}'1f,/:(-xs@0),1_g@--1_xs;-1+o:#xs;+;ys@0]

    /..and where
    g:{1f,(*x) {1 _ (0f,y*c)+(c:1f,x),0f}\1_x} /thats vieta
    a:{*x}'ys {(1_-':x)%y}\ {-/'+x@(r;-1+(r:-2_1_'{1_x}\ c)--1_c:!#x)}@xs

    although the expression above does trivial things, it does too much, as only
    the upper leaf of the derivation tree is really needed ..

    example: f[-3 -1 2 6 7f] is -10 -6 6 8 48f .. will result in p(x) = 0.115x⁴-0.56x³-1.755x²+6.86x+1.94

    15. interpolation via Newton's forward form (for equidistant abscissas)

    From the general divided difference form:

    pₙ(x) = f₀ + Σᵢ Πⱼ (x - xⱼ) f[x₀,..,xᵢ], where i in (1,..,n) and j in (0,..,i-1); we substitute:
    x = x₀ + sh and xᵢ = x₀ + ih and obtain pₙ(x) = Σᵢ C(s,i)Δⁱf₀ where s = (x - x₀)/h, C(s,i) is the
    usual number of combinations s over i and where Δᵏ⁺¹fⱼ = Δᵏfⱼ₊₁ - Δᵏfⱼ

    We want to know f[0.5] if f[0,0.2,0.4,0.6,0.8,1.0,1.2] = (0.44 , 0.58 , 1.5 , 2.2 , 2.9 , 3.6 , 5.1)
    /k4
    nf:{s:(x-i)%((y@1)-i:*y);+/(*\1f,(s-c)%1f+c:1f*!-1+#y)*{*x}'-1_{1_-':x}\z}
    nf[0.5;xs;ys] .. 1.863457

    16. construction of the H2n+1(x) Hermite's interpolating polynomial

    The 2n+1 comes from the 2n+2 conditions at x₀,x₁,..,xₙ for f(x) and f '(x).
    The issue point are the Lagrange polynomials Lᵢ(x) which are of grade n and in a natural way fulfill
    Lᵢ(xⱼ) = δᵢⱼ (Kronecker symbol). Therefore H(x) = Σᵢ [aᵢ + bᵢ(x-xᵢ)]Lᵢ² will be of grade 2n+1 and the x-xᵢ will be
    cancelled immediately. This will leave us with the single statement aᵢ = fᵢ. With this we deduct that
    bᵢ = f 'ᵢ - 2fᵢL'ᵢ(xᵢ) and consequently H₂ₙ₊₁(x) = Σᵢ [1-2L'ᵢ(xᵢ)(x-xᵢ)]fᵢLᵢ² + Σᵢ Lᵢ²f 'ᵢ(x-xᵢ).
    One other way to obtain H₂ₙ₊₁(x) would make use of f 'ᵢ = f[xᵢ,xᵢ], i.e. the repeated abscissas:
    H₂ₙ₊₁(x) = f₀ + (x-x₀)f[x₀,x₀] + (x-x₀)²f[x₀,x₀,x₁] + (x-x₀)²(x-x₁)f[x₀,x₀,x₁,x₁] + .. + (x-xₙ)Πᵢ(x-xᵢ)²f[x₀,x₀, .. ,xₙ,xₙ]
    and where i in {0,1,..,n-1}

    /k4
    @[+/h*{(-j)#(j#0f),x}'1f,/:g@--1_*{-1_x}\xi;j-1;+;l] ..and where

    r:1_@[c;&^:c:(-':yi)%-':xi;:;dyi], g is Vieta again
    t:(l:*yi),h:(*r),,/1#'r {(1_-':x)%y}\-/'xi@+-1_'({1_x}\2_ix;{-1_x}\-2_ix:!j:#xi)

    example:

    f(-1) = -2, f(0) = -1, f(2) = 7 .. f '(-1) = -3, f '(0) = 4, f '(2) = 36
    xi:,/2#'-1 0 2f .. yi:,/2#'-2 -1 7f .. dyi:-3 4 36f .. will return p(x) = x⁵-4x³+4x-1

    Friendly hint: The "hard" way, just setting p(x) = x⁵+a₁x⁴+a₂x³+a₃x²+a₄x+a₅ and going through the
    conditions, would be much simpler and would result in a short trivial expression in k4. In terms of operations
    Hermite's approach is less tiresome.

    17. interpolation using rational functions

    In case the function we try to interpolate has poles or asymptotes we might run into problems with
    accuracy and realism when using the purely polynomial approach. This new class of functions R(x) = P(x)/Q(x)
    where P(x) = Σᵢ pᵢxⁱ with i in (0,1,..,ζ) and Q(x) = Σⱼ qⱼxʲ with j in (0,1,..,η) seems at first to lead us
    into real trouble .. contrary to the polynomial approach, we cannot guarantee that such an R(x) always
    exists for a given interpolation problem.

    R(x) is of dimension ζ + η + 1, and the number of conditions is given by R(xᵢ) = yᵢ where i in (0,..,n).
    So, n = ζ + η.

    The divided reciprocal differences will help us to solve a special type of rational interpolation ..

    ¹Ξ[xᵢ,x₀] := (xᵢ-x₀) / (f(xᵢ)-f(x₀))
    ²Ξ[xᵢ,x₁,x₀] := (xᵢ-x₁) / (¹Ξ[xᵢ,x₀] - ¹Ξ[x₁,x₀])
    etc.
    ψΞ[xᵢ,xψ₋₁,..,x₀] := (xᵢ-xψ₋₁) / (ψ⁻¹Ξ[xᵢ,xψ₋₂,..,x₀] - ψ⁻¹Ξ[xψ₋₁,xψ₋₂,..,x₀])

    renaming the xᵢ to x and resolving the equations for any ᵏΞ[x,xₖ₋₁,..,x₀] will leave us with
    Thiele's chain fraction. (Thorvald Nicolai Thiele was a brilliant researcher, astronomer and mathematician.
    He was also interested in issues of insurance and old-age pensions from a social point of view. Thiele emphasised
    that old age is the destiny of us all; therefore a pension system should be seen as a social contract between
    generations, with the State as mediator, rather than as an insurance.
    He was born in Copenhagen December 24th 1838 and died September 26th 1910. On his initiative
    the Danish Mathematical Society was founded in 1873)

    and so ..
    R(x) = c₀ + (x-x₀)|c₁ + (x-x₁)|c₂ + (x-x₂)|c₃ + (x-x₃)|c₄ + (x-x₄)| ... (x-xψ₋₂)|cψ₋₁ + (x-xψ₋₁)|cψ
    where cₖ := ᵏΞ[xₖ,xₖ₋₁, .. ,x₀]

    We can show by math. induction that the following conditions are an invariant :

    (i) ζ = η = n/2 in case n is even
    (ii) ζ = (n+1)/2 and η = (n-1)/2 else

    In other words: 0 ≤ grad(P) - grad(Q) ≤ 1 .. which means R(x) can also have a linear asymptote.

    This construction defines in a natural way a 2 dimensional recursion ..
    p₀ := c₀ , q₀ := 1 , p₁ := c₀c₁ + (x-x₀), q₁ := c₁ and again by math. induction we show that the recursion

    Θₖ = cₖΘₖ₋₁ + (x-xₖ₋₁)Θₖ₋₂ (where k > 1) remains invariant under the substitution
    cₖ := cₖ + (x-xₖ)/cₖ₊₁ for Θₖ = pₖ resp. qₖ

    Example:

    Let be f(0.1)=1.11035 , f(0.2)=0.5441072 , f(0.3)=0.355784 , f(0.4)=0.2619114 , f(0.7)=0.14207 .. we look for f(0.16)

    /k4 .. xi,yi,A input

    f:{(1_x;1_(x-*x)%y-*y)};c::`float$();{if[1<#*x;c::c,,x:f . x;.z.s@x]}@(xi;yi)
    ci:{*x}'((,yi),,/1_'c)
    f1:{[x;y;z;u]((y*c)+(u-z)*x@1),c:x@0}

    /c0:*ci:{*x}'((,yi),,/1_'c) ; c1:ci@1 ; qk:(1f,c1)
    /pk:(c0,(c0*c1)+A-*xi)

    ({*x}'f1\[(|pk);h1;h2;A])%{*x}'f1\[(|qk);h1:2_ci;h2:1_-1_xi;A]

    With A:0.16 we get the value 0.685568 .. a result which is really breath-taking ..
    The underlying unknown function was s(x) = 0.3 / (x(3x+7)^0.5) and s(0.16) = 0.6855679
    The difference is only 1.023808e-07

    18. Cubic splines

    The term spline comes from the flexible spline devices used by shipbuilders and draftsmen to draw
    smooth shapes. In fact what they solved was a variational problem - they forced the splines to
    pass through given n+1 nodes so that the strain energy E = 0.5 ∫[x₀,xₙ] s''(x)² dx went to a minimum,
    in other words: δE = 0. So δE = ∫[x₀,xₙ] s''(x) δs''(x) dx =
    Σᵢ [s''(x)δs'(x) - s'''(x)δs(x)] over [xᵢ₋₁,xᵢ] + Σᵢ ∫[xᵢ₋₁,xᵢ] δs(x) s⁽⁴⁾(x) dx = 0
    As f(xᵢ) = yᵢ the variation δs(x) has to vanish at the nodes. δ is a continuous operator and so we have:
    δs'(x-) = δs'(x+) = δs'(x). This will lead us directly to:
    δE = s''(xₙ-)δs'(xₙ) - s''(x₀+)δs'(x₀) + Σᵢ [s''(xᵢ+) - s''(xᵢ-)]δs'(xᵢ) + Σᵢ ∫[xᵢ₋₁,xᵢ] δs(x) s⁽⁴⁾(x) dx = 0
    In other words: s''(x₀) = 0 and also s''(xₙ) = 0 (natural spline), s⁽⁴⁾(x) = 0 for all x ∉ {x₀,..,xₙ}
    and the "plug-in's" s''(xᵢ+) = s''(xᵢ-) for i in (1,..,n-1).

    We can define now: hᵢ := xᵢ₊₁ - xᵢ and sᵢ(x) = aᵢ(x-xᵢ)³ + bᵢ(x-xᵢ)² + cᵢ(x-xᵢ) + dᵢ

    The conditions above will result in:

    aᵢ = (y''ᵢ₊₁ - y''ᵢ) / (6hᵢ)
    bᵢ = 0.5 y''ᵢ
    cᵢ = ((yᵢ₊₁ - yᵢ)/hᵢ) - (1/6)hᵢ(y''ᵢ₊₁ + 2y''ᵢ)
    dᵢ = yᵢ

    Where the moments y''ᵢ of the spline polynomials fulfill the equation system:

    hᵢ₋₁y''ᵢ₋₁ + 2(hᵢ₋₁ + hᵢ)y''ᵢ + hᵢy''ᵢ₊₁ - 6(yᵢ₊₁ - yᵢ)/hᵢ + 6(yᵢ - yᵢ₋₁)/hᵢ₋₁ = 0

    The system matrix is symmetric, tri-diagonal and diagonally dominant.

    /k4
    ri:6f*(-':(1_-1_,/2#'1_-':yi)%1_-1_,/2#'hi)@-1+1_2*!t:#hi
    g:{@'[x;+(l;l:!#x);:;0f]}
    M:-g@(+m,,ri)%|/'m:(w,w)#(-u+1)_1_,/(1_{y,(2f*x+y),x}':hi),\:(u:-2+w:-1+t)#0f
    b:-1f*0f,({M$x,1f}/(#M)#1f),0f
    s:((1_{x-y}':b)%6f*hi;-1_0.5*b;((1_{x-y}':yi)%hi)-(1_{x+2f*y}':b)*hi%6f;-1_yi)

    example: f[-1 0 1 3] is 2 1 2 0 ..

    hi:1_-':xi:-1 0 1 3f
    yi:2 1 2 0f

    will result in 3 cubic polynomials:

    s₀(x) = 0.6086957(x+1)³ - 1.608696(x+1) + 2
    s₁(x) = -1.043478x³ + 1.826087x² + 0.2173913x + 1
    s₂(x) = 0.2173913(x-1)³ - 1.304348(x-1)² + 0.7391304(x-1) + 2


    19. Euler's φ function .. :

    determine φ(x) = 84 .. means: find numbers N which have exactly 84 elements coprime
    to N among all x < N (solution below is a "student" version, don't try with high N's)

    t@&84={+/1={$[0=c:x-y*_ x%y;y;.z.s[y;c]]} ./:(1_!x),\:x}@/:t:2_!300

    returns .. 129 147 172 196 258 294

    20. numerical differentiation .. :

    the rules for the differentiation of grade 1 and 2 we can get directly from the Taylor expansion;
    for higher terms and intermediate points we have to go back to Newton's forward form ..
    and assuming the underlying approximation of f(x), which is pₙ(x) = Σᵢ C(s,i) Δⁱf₀
    and where C(s,i) are the combinations s over i.

    from f(x) ≈ pₙ(x) it follows that f⁽ᵏ⁾(x) ≈ h⁻ᵏ dᵏpₙ/dsᵏ
    as example we try to determine f⁽⁴⁾(1.44) from the following data ..

    let be xd:1.1 1.2 1.3 1.4 1.5 1.6 and yd:1.078361 1.342136 1.628413 1.931481 2.244364 2.558908

    s:((u:1.44)-*xd)*ih:%h:0.1;rh:dh*dh:ih*ih /h .. the spacing of xd
    rh*(*t)+(s-2f)*1_t:-2#{*x}'-1_{1_-':x}\yd .. -11.37996

    ..and ih*{y-x} ./:+(-1_yd;1_yd) will return the 1st derivatives starting
    from 1.2 .. 2.637754 2.86277 3.030681 3.128822 3.145447


    21 a) numerical integration .. trapezoidal method for single integrals

    We consider the integral ∫ J f(x)dx .. where J := [a,b] and the approximating
    polynomial pₙ(x) = Σᵢ fᵢlᵢ(x); the Aᵢ = ∫ J lᵢ(x)dx are weights and the lᵢ(x) the Lagrange polynomials.
    The error ∫ J E(x)dx can therefore be expressed by the divided differences .. = ∫ J f[x₀,x₁,..,xₙ,x] Πᵢ(x-xᵢ) dx

    At this point we can start to analyze the trapezoidal rule ..
    Given is now the underlying set of evenly distributed sub-intervals .. x₀,x₁,..,xₙ
    Let's look at the error term Eₖ(x) = hⁿ⁺¹C(s;n+1)f⁽ⁿ⁺¹⁾(ξₖ) of Newton's forward form where ξₖ in [xₖ₋₁,xₖ]
    and C(s;n+1) the classical combination s over n+1. For the trapezoidal rule we have in each interval n = 1
    ( .. approximation by a linear function) and dx = h ds. (be aware that h = (b-a)/n). Applying this to the
    error term, we will end up with ∫ J Eₖ(x)dx = 0.5h³ ∫[0,1] s(s-1)f ''(ξₖ) ds
    As s(s-1) does not change sign over [0,1] and f '' is continuous there must be one fixed αₖ in [xₖ₋₁,xₖ] so that
    ∫ J Eₖ(x)dx = 0.5h³ f ''(αₖ) ∫[0,1] s(s-1) ds.
    For the whole interval J we will get: Error = -(1/12)(b-a)h²f ''(α) and α in J. The integration rule
    is a simple adding of trapezoids .. 0.5h(f₀ + 2f₁ + .. + 2fₙ₋₁ + fₙ)

    tz:{[f;a;b;n]h*(0.5*neg (first r)+last r)+sum r:f a+(h:(b-a)%n)*til 1+n}

    Example with f(x) = (2π)^-0.5 · e^(-0.5x²) .. tz[f;-3;3;2000] .. 0.9973002 (where f:{r*exp -0.5*x*x} and
    r:1f%sqrt 4f*atan 0w)

    21 b) numerical integration .. trapezoidal method for n-dimensional integrals

    Let be Ω an n-dimensional rectangle defined by: {(x₁,x₂,..,xₙ) | x₁,₁ ≤ x₁ ≤ x₁,₂ , x₂,₁ ≤ x₂ ≤ x₂,₂ , .. , xₙ,₁ ≤ xₙ ≤ xₙ,₂}
    and tentatively assuming we are in a sufficiently foolish mood to evaluate the integral
    J = ∫∫∫..∫ Ω ξ(x₁,x₂,..,xₙ) dx₁dx₂..dxₙ and where (xᵢ,₂ - xᵢ,₁)/mᵢ = hᵢ. Our issue point are the
    summations at i = n, ie ∫ ξ(x₁,..,xₙ)dxₙ = 0.5hₙ[ξ(x₁,..,xₙ,₀) + 2Σⱼ ξ(x₁,..,xₙ,ⱼ) + ξ(x₁,..,xₙ,ₘ)].
    The last expression we can define as Ξ[x₁,x₂,..,xₙ₋₁], which means each mesh point will be multiplied
    by a weight factor which is an integer power of 2: weight 1 for the corner points, 2 for mesh points
    interior in exactly one coordinate, 4 for mesh points interior in exactly two coordinates, .. , and 2ⁿ
    for the fully interior points (the remaining coordinates being the τ mesh points of the lateral surfaces,
    the interior indices iₖ running over (1,..,mₖ-1)). In other words: J = (Πᵢhᵢ)·2⁻ⁿ·Σ (weight × mesh value).
    Using the definitions Δᵢ := xᵢ,₂ - xᵢ,₁ and Iₖ := (xₖ,₁,xₖ,₂) the error of the method will
    collapse into E = (-1/12)(ΠᵢΔᵢ)·Σⱼ [hⱼ²·ξxⱼxⱼ(β₁,β₂,..,βₙ)] for a (β₁,β₂,..,βₙ) in Ω

    ... so - for Ω in R²

    J = ∫∫ Ω f(x,y)dx dy where Ω := {(x,y) | a ≤ x ≤ b, c ≤ y ≤ d}. Let be h = (b-a)/m and k = (d-c)/n
    We can write J as 0.5h ∫[a,b] ζ(x)dx + ∫[a,b] E dx + F and where ζ(x) = 0.5k[f(x,y₀) + 2Σⱼf(x,yⱼ) + f(x,yₙ)]
    and j in (1,..,n-1) and E = (-1/12)(d-c)k²fyy(x,β), F = (-1/12)(b-a)h²ζxx(x)

    Using the definitions: fᵢ,ⱼ := f(xᵢ,yⱼ) we must obtain: for i in (1,..,m-1) and j in (1,..,n-1)

    0.25hk[f₀,₀ + f₀,ₙ + fₘ,₀ + fₘ,ₙ + 2Σⱼf₀,ⱼ + 2Σⱼfₘ,ⱼ + 2Σᵢfᵢ,₀ + 2Σᵢfᵢ,ₙ + 4ΣᵢΣⱼfᵢ,ⱼ]. The evaluation of ζxx involves k·fxx(x,β)
    and because fyy in CF[a,b] (set of cont. functions over [a,b]) ∫[a,b] E dx is (-1/12)(b-a)(d-c)k²fyy(α₂,β₂)
    So the total error will become (-1/12)(b-a)(d-c)[h²fxx(α₁,β₁) + k²fyy(α₂,β₂)] for some (α₁,β₁),(α₂,β₂) in our Ω

    Example: we want to integrate J = ∫∫ Ω (x² + y sin(x/3)) / (x + y² sin(y/3)) dxdy and where Ω = [2,3]x[1,4]

    diT:{[a;b;c;d;m;n;f]
    o:f .'/:+(a+(h:(b-a)%m)*ix:!1+m),'/:\:c+(k:(d-c)%n)*iy:!1+n
    :(h*k*0.25)*(4f*+/,/fr'q)+(2f*+/(,/fr'p),,/fl'q:fr@o)++/,/fl'p:fl@o
    }

    and where

    fl:{(,*x),,last x}
    fr:{-1_1_x}

    and so: diT[2;3;1;4;120;100;{((y*sin x%3f)+x*x)%x+y*y*sin y%3f}] .. 3.860015

    22. the order of a modulo m:

    I:log 1f*0Wj-1
    ord:{1_where 0=(-1j+(floor I%log r::x) {x*r}\ 1j) mod y}

    Example 1: ord[29;37] .. ,12 - means the order of 29 modulo 37 is 12
    ie 29¹²-1 = 353814783205469040j is divisible by 37

    Example 2: ord[53;163] .. ,9 - means the order of 53 modulo 163 is 9
    ie 53⁹-1 = 3299763591802133j is divisible by 163
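    A minimal cross-check by modular exponentiation (the fold keeps every intermediate product
    below the modulus) ..

    /q
    1 {(x*y) mod 37}/ 12#29 /.. 1, ie 29¹² ≡ 1 (mod 37)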

    23. composite double integral - functional boundaries

    J = ∫∫ Ω f(x,y)dx dy where Ω := {(x,y) | a ≤ x ≤ b, τ₁(x) ≤ y ≤ τ₂(x)}. Let be h = (b-a)/m again.
    For each thread we define: w(i) := [τ₂(xᵢ) - τ₁(xᵢ)] / n. We continue with the definitions:

    yᵢⱼ := yⱼ(xᵢ) := y₀(xᵢ) + j·w(i) and where j in (0,..,n) and i in (0,..,m). y₀ ≡ τ₁ and yₙ ≡ τ₂.

    We set Φ(xᵢ) := ∫ I f(xᵢ,y)dy and where I := [τ₁(xᵢ),τ₂(xᵢ)]

    So, J = 0.5h[Φ(x₀) + Φ(xₘ) + 2ΣΦ(xⱼ)] and we end up with scalar multiplications:
    J = 0.25h · Φ (integrand grid) ⊗ Y (trapezoidal weights) ⊗ T (functional grid)

    f3:{(f2 x)-f1 x};phi:{y*x};a:1f;am:(mi:!1+m)*cm:%m:40;b:2f;an:(ni:!1+n)*cn:%n:20
    phival:|+phi .'' ((1+n)#'hmi),''{(f1 x)+an*f3 x}@/:hmi:a+mi*h:cm*b-a
    l:,2f,((m-1)#4f),2f
    0.25*h*+/+/phival*(n {x,q}/ q:,cn*f3 hmi)*+(r,((m-2) {x,l}/ l),r:l%2)

    Example: J = ∫∫ Ω xy dx dy and where Ω := [1,2] x [e⁻ˣ; 1+eˣ] = 0.25 · 0.009090909.. · 22527.61 = 27.65043 ..
    the exact value is 27.65783..

    24. replacing the integrand by polynomials of grade 2 and 3 :

    Lets define: x₀ = a, x₁ = 0.5(a+b) and x₂ = b.
    issue point is Newton's form Σᵢ C(s,i) Δⁱf₀ .. where C(s,i) is the binomial coefficient
    and i in (0,1,2). J = ∫ Ω f(x)dx with Ω := [a,b], and we end up easily with (1/3)h[f₀ + 4f₁ + f₂].
    Although Ψ₂(x) = (x-x₀)(x-x₁)(x-x₂) is changing the sign within Ω we can introduce the
    correction factor (x-x₃) since ∫ Ω Ψ₂(x)dx = 0. E(J) = ∫ Ω Ψ₂(x)f[x₀,x₁,x₂,x]dx.

    For convenience reasons we will choose x₃ = x₁ .. and so E(J) = -(1/90)h⁵f⁽⁴⁾(ω) and ω in Ω.
    Dividing Ω into 2n sub-intervals with h = (b-a)/2n collapses into the sum E(J) = (-h⁵/90)Σₘ f⁽⁴⁾(ωₘ) where
    m in (1,..,n) and ωₘ in [x₂ₘ₋₂,x₂ₘ]. Because f⁽⁴⁾ in CF[a,b] (set of cont. functions over [a,b]) we deduct
    E(J) = -(1/180)h⁴(b-a)f⁽⁴⁾(ω) and J = (h/3)[f₀ + 4f₁ + 2f₂ + .. + 4f₂ₙ₋₃ + 2f₂ₙ₋₂ + 4f₂ₙ₋₁ + f₂ₙ]

    Example. J = ∫ Ω x⁴(5+eˣ) / (2x⁶+x⁵+1) dx with Ω := [0,4] ..

    b:4f;f:{(e*5f+exp x)%1f+x*(e:c*c:x*x)*1+2f*x};h:(b-a:0f)%n:50
    sum (h*(w,1_reverse w:1,(floor n*0.5)#4 2)*f a+h*key n+1)%3f .. 4.806506

    Hint:
    the trapezoidal method would require n=1460 to obtain the same precision - and is 10 times slower in the end.

    The derivation of the approximation using a cubic polynomial is trivial. We would end up with:

    E(J) = -(1/80)h⁴(b-a)f⁽⁴⁾(ω) and J = (3h/8)[f₀ + 3f₁ + 3f₂ + 2f₃ + .. + 2f₃ₙ₋₃ + 3f₃ₙ₋₂ + 3f₃ₙ₋₁ + f₃ₙ]

    b:4f;f:{(e*5f+exp x)%1f+x*(e:c*c:x*x)*1+2f*x};h:(b-a:0f)%n:42
    sum 0.375*h*(1,(-1_n#3 3 2),1)*f a+h*key n+1 .. 4.80653


    25. basis solutions .. :

    Let be h,m natural numbers and (a,m) = 1 .. means m is coprime to the whole number a.
    If h is the lowest number fulfilling aʰ ≡ 1 (mod m) then h is called the order of a mod m
    If h = φ(m) (and φ is Euler's function .. see chapter 19 above) then a is called a basis solution modulo m .. ie a = β(m)

    k)jx:{-1j+x {x*y}/ (y-1)#x}
    {{c::y;({jx . (c;x)} each 1_key x) mod x} . (7j;x)} each 2 3 4 5 ..

    returns..

    /1 3 0 1 3 0 --> 2 is not a basis
    /2 1 5 3 4 0 --> 3 is a basis
    /3 1 0 3 1 0 --> 4 is not a basis
    /4 3 5 1 2 0 --> 5 is a basis

    {3,5} are the only basis solutions modulo 7. There is no need to look further.
    The number of basis solutions is φ (p - 1) for primes > 2.

    It is {c::y;({jx . (c;x)} each 1_key x) mod x} . (23j;5)

    4 1 9 3 19 7 16 15 10 8 21 17 20 12 18 2 14 5 6 11 13 0j .. ie 5²² ≡ 1 (mod 23) and 22 = φ(23)
    and indeed .. 23 x 103660251783288j = 2384185791015624j = 5²²-1 .. and so 5 = β(23)

    26. Gauss' method of mechanical quadrature .. (part 1 from 4):

    The classical result of Gauss states that for the range [-1,+1] the best accuracy is obtained
    by choosing the roots x₀,x₁, .. ,xₙ of the Legendre polynomial Pₙ(x) as abscissae.
    With each xᵢ there is a weight ωᵢ associated and we state:

    J = ∫ Ω f(x)dx ≅ Σᵢ ωᵢf(xᵢ) and where Ω = [-1,1] and i in (0,..,n)

    But let's look at it from a general aspect:

    Consider the integral: J = ∫ Ω ω(x)f(x)dx and ω(x) the weight function.
    J = Σᵢ f(xᵢ) ∫ Ω ω(x)lᵢ(x)dx and where lᵢ(x) are the Lagrange polynomials.

    Pro memoria: lᵢ = Πⱼ (x-xⱼ) / Πⱼ (xᵢ-xⱼ) and j ≠ i

    We also introduce ω(x) into the general error term with:

    E(J) = ∫ Ω Ψₙ(x) ω(x) (x-xₙ₊₁) f[x₀,x₁,...,xₙ₊₁,x] dx .. an expression we will get easily by
    replacing f[x₀,x₁,...,xₙ,x] by f[x₀,x₁,...,xₙ₊₁,x](x-xₙ₊₁) + f[xₙ₊₁,x₀,x₁,...,xₙ] together with the fact that ∫ Ω Ψₙ(x)ω(x)dx = 0

    The concept:

    we introduce xₙ₊₂ and choose xₙ₊₁ so that ∫ Ω Ψₙ(x) ω(x) (x-xₙ₊₁)dx = 0.
    Then E(J) = ∫ Ω Ψₙ(x) ω(x) (x-xₙ₊₁)(x-xₙ₊₂) f[x₀,x₁,...,xₙ₊₂,x] dx

    we introduce xₙ₊₃ and choose xₙ₊₂ so that ∫ Ω Ψₙ(x) ω(x) (x-xₙ₊₁)(x-xₙ₊₂)dx = 0.
    Then E(J) = ∫ Ω Ψₙ(x) ω(x) (x-xₙ₊₁)(x-xₙ₊₂)(x-xₙ₊₃) f[x₀,x₁,...,xₙ₊₂,xₙ₊₃,x] dx

    etc. .. until

    we introduce x₂ₙ₊₁ and choose x₂ₙ so that ∫ Ω Ψₙ(x) ω(x) (x-xₙ₊₁)(x-xₙ₊₂)..(x-x₂ₙ)dx = 0.
    Then E(J) = ∫ Ω Ψₙ(x) ω(x) (x-xₙ₊₁)(x-xₙ₊₂)..(x-x₂ₙ₊₁) f[x₀,x₁,...,x₂ₙ₊₁,x] dx

    This means Ψₙ(x) is orthogonal to all polynomials of grade < n+1 with weight function ω(x) and we can choose Ψₙ(x) as
    a polynomial of grade n+1.

    If we choose x₀=xₙ₊₁, x₁=xₙ₊₂, .., xₙ=x₂ₙ₊₁ and apply the mean value theorem we can find such η and ξ from Ω
    that E(J) = f[x₀,x₁,...,xₙ,x₀,x₁,...,xₙ,η] ∫ Ω ω(x)Ψₙ²(x)dx = [f⁽²ⁿ⁺²⁾(ξ)/(2n+2)!] ∫ Ω ω(x)Ψₙ²(x)dx

    So Ψₙ(x) = pₙ₊₁(x)/αₙ₊₁ and αₙ₊₁ is the coefficient of the highest term in pₙ₊₁(x) and {pₙ(x)} the set of orthogonal polynomials
    with respect to ω(x).

    Let's choose ω(x) ≡ 1 and Ω = [-1,+1] and we get Gauss-Legendre quadrature:

    pₙ(x) = (n!2ⁿ)⁻¹ dⁿ/dxⁿ[(x²-1)ⁿ]

    n-th Legendre polynomial:

    k)p:*(n-1) {c:(((3f+2*n)*(r:x 0),0f)-(m:1f+n)*0f,0f,x 1)%2f+n:x 2;:(c;r;m)}/(1 0f;,1f;0f)

    roots of n-th Legendre polynomial ..

    k)n {dp::g@w::x;c::0.5;{:c::c-(last pv[w])%last pv[dp]}/();s::s,c;:-1_pv[x]}/ p

    and where

    k)pv:{(*x),{y+c*x}\[*x;1_x]}
    k)g:{(*r)#x*r:|!#x};s::()

    the weights..

    k)2f%(1f-s*s)*r*r:last'{{(*x),{y+z*x}\[*x;1_x;y]} . (g@p;x)}'s

    Example: J = ∫ Ω (6x³+13x²+101x-7) / [(x²+1)(x²+4x+20)] dx and Ω = [-1,+1]
    and using the Legendre polynomial of degree 12 ( = n)

    sum w*u each s .. and where u:{(-7f+x*101f+x*13f+x*6f)%(1f+x*x)*20f+x*4+x}
    will return: -0.70384426234338049 (exact: -0.70384426234345321)

    in this case 12 digit precision with only 12 abscissa points !

    26. Gauss' method of mechanical Chebyshev quadrature .. (part 2 from 4):

    J = ∫ Ω f(x)/(1-x²)^0.5 dx. (Ω = [-1,+1]). The orthogonal system to the weight function ω(x) = (1-x²)^-0.5 are the
    Chebyshev polynomials Tₙ := cos(n arccos x). Recursion: Tₙ₊₁(x) = 2xTₙ(x) - Tₙ₋₁(x) and roots xₖ = cos[π(2k+1)/(2n+2)]
    and the weights Aⱼ = π/(n+1)

    k)+/(pi%n)*f {cos (pi*1+2*!x)%2+2*x-1} n
    and where pi: acos -1

    26. Gauss' method of mechanical Hermite quadrature .. (part 3 from 4):

    J = ∫ Ω f(x)e^(-x²) dx. (Ω = [-∞,+∞]). The orthogonal system to the weight function ω(x) = e^(-x²) are the
    Hermite polynomials Hₙ := (-1)ⁿe^(x²) dⁿ/dxⁿ(e^(-x²)). As recursion: Hₙ₊₁(x) = 2(xHₙ(x) - nHₙ₋₁(x))
    and the weights Aᵢ = (2ⁿ⁺²(n+1)!π^0.5) / [H'ₙ₊₁(xᵢ)]². Hermite's polynomials are the solutions
    of the special differential equation y''-2xy'+2ny = 0 (n natural number incl. 0) which appears with
    the harmonic oscillator in quantum mechanics.

    n-th hermite polynomial

    k)p:*(2 0;,1) {r:2*((c:x 0),0)-0,0,y*x 1;:(r;c)}/1_!n
    k)n {dp::g@w::x;c::0.5;{:c::c-(last pv[w])%last pv[dp]}/();s::s,c;:-1_pv[x]}/ p

    and where

    k)pv:{(*x),{y+c*x}\[*x;1_x]}
    k)g:{(*r)#x*r:|!#x};s::()

    the weights..

    k)b:sqrt acos -1
    k)(2 xexp w)*({*/1_!x} w:1+#s)*b%r*r:last'{{(*x),{y+z*x}\[*x;1_x;y]}.(g@p;x)}'s

    26. Gauss' method of mechanical Laguerre quadrature .. (part 4 from 4):

    J = ∫ Ω f(x)e⁻ˣ dx. (Ω = [0,+∞]). The orthogonal system to the weight function ω(x) = e⁻ˣ are the
    pseudo-Laguerre polynomials Λₙ := eˣ dⁿ/dxⁿ(xⁿe⁻ˣ). n in (0,1,2..). As recursion: Λₖ₊₁(x) = (2k+1-x)Λₖ(x) - k²Λₖ₋₁(x)
    and the weights Aᵢ = ((n+1)!)² / (xᵢ[Λ'ₙ₊₁(xᵢ)]²).

    Laguerre's polynomials we simply get by: Lₖ = Λₖ/k!.

    They are the canonical solutions of the 2nd order diff.eq. xy'' + (1-x)y' + ny = 0.
    This equation plays a role in quantum mechanics. It describes the radial part of the energy states
    of the hydrogen atom.

    n-th pseudo-Laguerre polynomial

    k)p:*(-1 1;,1) {r:(0,(1+2*y)*c)+(-1*((c:x 0),0))-0,0,y*y*x 1;:(r;c)}/1_!n
    k)n {dp::g@w::x;c::0.5;{:c::c-(last pv[w])%last pv[dp]}/();s::s,c;:-1_pv[x]}/ p

    and where

    k)pv:{(*x),{y+c*x}\[*x;1_x]}
    k)g:{(*r)#x*r:|!#x};s::()

    the weights..

    k)(u*u:{*/1_!x} w:1+#s)%s*r*r:last'{{(*x),{y+z*x}\[*x;1_x;y]}.(g@p;x)}'s

    27(a) smallest non-cyclic multiplicative group

    g:{$[0=c:x-y*_ x%y;y;g[y;c]]} .. see chapter 3

    f:{c::x;w*/:w:1,r@where 1={g . (x;c)} each r:2_key x};
    h:{t:(f x) mod x;t ./:flip (p;p:key count t)}
    flip (l;h each l:2_key 12)

    2: ,1
    3: 1 1
    4: 1 1
    5: 1 4 4 1
    6: 1 1
    7: 1 4 2 2 4 1
    8: 1 1 1 1
    → all elements of the diagonal are the neutral element → the order of all elements is 2 → 8 is the smallest
    modulus with a non-cyclic multiplicative group
    9: 1 4 7 7 4 1
    10: 1 9 9 1
    11: 1 4 9 5 3 3 5 9 4 1

    27(b) generators in multiplicative groups

    g:{$[0=c:x-y*_ x%y;y;g[y;c]]} .. see chapter 3
    M:{c::x;(w*/:w:1,r@where 1={g . (x;c)} each r:2_key x) mod x}@n:19;
    1+&t={#?x}'*:''(-1+t) {(M . (-1+x@0;-1+r)),r:x@1}\'+(r;r:1+!t:#M)

    .. 2 3 10 13 14 15 means 6 generators for the multiplicative group modulo 19
    .. corresponds to the theory expecting φ(n-1) generators, where a^φ(n) ≡ 1 (mod n) and the group order = φ(n)
    An example which is in the end a nice affinity between classical number theory and group theory.

    28a) Runge-Kutta method of order 2

    Lets look at y' = f(x,y) and y(x0) = y0.

    Our issue point are the equations (using A as (xᵢ,yᵢ)):

    k = h f(A)
    l = h f(A+ΔA) and where ΔA = (αh,βk)
    yᵢ₊₁ = yᵢ + ak + bl

    ..and the 4 unknowns α,β,a,b

    We see immediately (or not) that f(A+ΔA) ≈ f(A) + fx(A)αh + fy(A)βk. Comparing (by order of h) with the
    classical Taylor series and using the fact that y'' = fx + y'fy we will obtain 3 equations:
    a + b = 1, bα = 0.5 and bβ = 0.5. We can choose: a = b = 0.5 and α = β = 1

    Let's try to solve:

    y' = (y/x) + sin [(y-x)/x] with y(1) = 3, spacing h = 0.2
    and we need to determine y(1.2), y(1.4), y(1.6), y(1.8), y(2.0)

    f:{(y%x)+sin (y-x)%x};o:3f;a:1f;h:0.2;i:5
    o {l:h*f . A+(h,k:h*f . A:(y,x));x+0.5*l+k}\a++\0f,(i-1)#h

    .. and we will get: 3.78969,4.592525, 5.403301, 6.219094, 7.03819
    (exact values: 3.790758 4.594191 5.405322 6.221337 7.040578)

    28b) Runge-Kutta method of order 4

    We apply a similar procedure as for the method of order 2, and we will end up with:

    k = h f(xᵢ,yᵢ)
    l = h f(xᵢ + 0.5h, yᵢ + 0.5k)
    m = h f(xᵢ + 0.5h, yᵢ + 0.5l)
    p = h f(xᵢ + h, yᵢ + m)

    yᵢ₊₁ = yᵢ + (k + 2l + 2m + p)/6

    Let's try to solve:

    (x² + 2xy - y²)dx + (y² + 2xy - x²)dy = 0 with y(4) = 12, spacing h = 0.1
    and we need to determine y(2.0)

    f:{(d-c)%(d:(y-x)*y+x)+c:2f*x*y};o:12f;a:4f;h:-0.1;i:20
    last o {p:h*f . A + (h,m:h*f . A + 0.5*(h,l:h*f . A + 0.5*(h,k:h*f . A:(y;x))));x+(k+p+2f*l+m)%6f}\a++\0f,(i-1)#h

    .. and we will get: 11.403124236864 (exact: 11.403124237433 and -1.4031242374328!)

    29. quadratic residues mod prime p > 2:

    g:{0=(sum i<(floor (y*1_key 1+floor -0.5+i:x%2)) mod x) mod 2}
    {c:x;i@where g'[c]x:i:1_key x} each 17 29 31 37 53

    1 2 4 8 9 13 15 16
    1 4 5 6 7 9 13 16 20 22 23 24 25 28
    1 2 4 5 7 8 9 10 14 16 18 19 20 25 28
    1 3 4 7 9 10 11 12 16 21 25 26 27 28 30 33 34 36
    1 4 6 7 9 10 11 13 15 16 17 24 25 28 29 36 37 38 40 42 43 44 46 47 49 52

    means for example: x² ≡ 24 (mod 53) will have 2 solutions
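    A brute-force cross-check of the last statement ..

    /q
    {x where 24=(x*x) mod 53} til 53 /.. 17 36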

    30. the predictor-corrector method for differential equations

    The predictor-corrector method is a multistep method. Multistep methods are usually very fast and the accuracy of the results is impressive.
    In a way, the linear multistep method represents a difference equation which approximates the differential equation. They are instruments for the (let's say) more sophisticated user. In our case we concentrate on multistep methods with a constant step size.

    In our approach to solve y' = f(x,y) = f(x,y(x)) = F(x) we will approximate F(x) by the interpolation polynomial pₘ(x) = Σᵢ₌₀ᵐ (-1)ⁱ C(-s,i) Δⁱf₋ᵢ.
    The C(-s,i) is the usual number of combinations of -s over i and the Δⁱf₋ᵢ refer to the definition of Newton's interpolation polynomial
    in the backward form.

    Our general mapping of the abscissa points: x₋ᵣ → xₙ₋ᵣ and where r in {0,1,2,..,m}. We are looking for ∫ J f(x,y)dx and where J = [xₙ₋ₚ,xₙ₊₁].
    This integral we can represent by I = h ∫[-p,1] Σᵢ₌₀ᵐ (-1)ⁱC(-s,i)Δⁱf₋ᵢ ds as x = xₙ + sh.
    From the error definition of Newton's backward form we deduct that E(I) = hᵐ⁺² ∫[-p,1] (-1)ᵐ⁺¹C(-s,m+1)f⁽ᵐ⁺¹⁾(ξ) ds

    If p = 0 we talk about Adams-Bashforth methods, in case of p = 1 or 2 or 3 we talk about Milne's methods.

    Edward Arthur Milne: 14.2.1896 in Hull, Yorkshire, England - 21.9.1950 Dublin, Ireland:
    He was also interested in Relativity, Gravitation, World Structure and Kinematic Relativity (1948) - a theory which was an alternative to Albert Einstein's
    general theory of relativity. Very similar to Albert Einstein, Milne made people rethink old ideas, and his work led to new approaches to the
    fundamental concepts of space and time. Milne's cosmological principle (where the universe is already infinite at the creation time)
    can be described entirely within Euclidean geometry. One of Milne's first books was "Thermodynamics of the Stars" - 1930. From 1943 - 1945
    Milne was the president of the Royal Astronomical Society in London. Milne died from a heart attack in Dublin while attending a conference of the
    Royal Astronomical Society

    John Couch Adams 5.6.1819 Lidkott,Cornwall, England - 21.1.1892 Cambridge: Adams was an astronomer and mathematician who, at the age of 24,
    was the first person to predict the position of a planet beyond Uranus. He never boasted of his achievements and in fact he refused a knighthood which
    was offered to him in 1847

    Francis Bashforth (1819-1912), inventor of the Bashforth chronograph. A fellow of St John's, he was professor of applied mathematics
    at Woolwich 1864-1872.

    Developing I above: I ≅ h[fₙ + 0.5Δfₙ₋₁ + (5/12)Δ²fₙ₋₂ + (3/8)Δ³fₙ₋₃ + (251/720)Δ⁴fₙ₋₄ + ..]

    So in case of the order 3 we will get yₙ₊₁ = yₙ + (h/12)[23fₙ - 16fₙ₋₁ + 5fₙ₋₂] as predictor

    The formulas from Adams-Moulton we can derive by the classical interpolation polynomial from Newton and the fact that
    f[x₀,x₁,..,xₖ] = Δᵏf₀/(k!hᵏ). And so by permuting we get f[xₙ₊₁,xₙ,..,xₙ₋ᵢ₊₁] = Δⁱfₙ₋ᵢ₊₁/(i!hⁱ)

    As x - xₙ₋ᵢ = h(s+i) we deduct that ∫ J dy = h ∫[-p,1] Σᵢ₌₀ᵐ⁺¹ (-1)ⁱ C(1-s,i)Δⁱfₙ₊₁₋ᵢ ds and J = [yₙ₋ₚ,yₙ₊₁]
    This means E(I) = hᵐ⁺³ ∫[-p,1] (-1)ᵐ⁺² C(1-s,m+2) f⁽ᵐ⁺²⁾(ξ) ds and xₙ₋ₘ < ξ < xₙ₊₁

    For p = 0 and m = 1 we obtain easily the corrector: yₙ₊₁ = yₙ + (h/12)[5f(xₙ₊₁,yₙ₊₁) + 8fₙ - fₙ₋₁]

    Example: y' = cos (y-x) and y at -3,-2.8,-2.6 should be 2.565987,2.468988,2.35589
    We are looking for y at -2.4,-2.2,...,0,0.2,0.4,...,4.0,4.2

    f:{cos y-x};vx:|-3f+(h:0.2)*!n:3;vy:2.565987 2.468988 2.35589
    x2:5f*x1:h%12f;x3:23 -16 5f;x4:5 8 -1f;pr:{(*y)+x1*+/x3*f .'+(x;y)};c2:();j:0;

    do[#v:(*vx)+h*1+!34;

    zv:(*vy)+x1*+/(1_x4)*f .'+(-1_vx;-1_vy);vj:v@j;
    vy:n#(c1:{zv+x2*f . (vj;x)}/pr[vx;vy]),vy;vx:n#vj,vx;c2,:c1;j+:1;

    ]

    c2 .. 2.642632 2.693582 2.712187 .. 4.210839 4.394837 4.580019

    The maximal error will be in this case 0.001444267 in absolute value (not necessarily reached at the end!!)

    Conclusions:

    The 4th order Runge-Kutta method is great for smoothly resp non-smoothly varying problems,
    but lacks sufficient error control. The 3rd order predictor/corrector algorithm above already
    represents a good method. The 4th order predictor/corrector method is probably the best type
    of these methods. It has excellent stability and accuracy limits.

    31. quadratic reciprocity:

    /k4
    f0:{1-2*1=mod[x;2]}
    f1:{_ 0.25*(x-1)*y-1}
    g:{$[x>y;:((mod . (x;y));y;z);:(y;x;z*f0@f1 . (x;y))]}

    qr:{$[(x>2)&&/(x,y) in PRIMES;.z.s . g . (x;y;z);(x;y;z)]}

    /PRIMES 2 .. p

    Example:

    Does x² ≡ 3821 (mod 252359) have solutions?

    qr . (3821;252359;1) .. returns a reduced form .. (15 173 1)

    Legendre: (15,173) = (3,173) • (5,173) = 1 as 173 ≡ 3 (mod 4); multiplied by the last element from the reduced form
    we get 1. This means the equivalence above does have 2 solutions. And indeed x₁ = 65897 , x₂ = 186462
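    A brute-force cross-check of the two claimed roots (64-bit longs are sufficient, as 252358² fits easily) ..

    /q
    {x where 3821=(x*x) mod 252359} til 252359 /.. 65897 186462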

    32a) higher orders of ordinary differential equations (ode's) and systems of ode's using Runge Kutta of order 2:

    /k4
    rk22:{(h+*x),(1_x)+0.5*(1_k)+h*F@\:x+k:h*1f,F@\:x}
    /where F represents the list of functions
    /h is the step size and x is the initial values

    Example:

    x³y''' + 15x²y'' + 60xy' + 60y = ln x

    h = 0.1 , y(1) = 3, y'(1) = 1.25, y''(1) = 0.77

    We are looking for y(a),y'(a),y''(a) where a in (1.1,1.2,1.3, ... ,1.9,2.0)

    With:

    F:({x@2};{x@3};{((log x0)-15*(4*x@1)+x0*(x0*x@3)+4*x@2)%x0*x0*x0:x@0})
    xv:1 3 1.25 0.77; by (10 rk22\ xv)@\:1 we will obtain the y(a):

    y(1.1) = 3.12885 (exact: 3.09924)
    y(1.2) = 3.10208 (exact: 3.10068)
    y(1.3) = 2.98739 (exact: 3.00910)
    y(1.4) = 2.82461 (exact: 2.85650)
    y(1.5) = 2.63874 (exact: 2.67156)
    y(1.6) = 2.44559 (exact: 2.47441)
    y(1.7) = 2.25491 (exact: 2.27774)
    y(1.8) = 2.07245 (exact: 2.08896)
    y(1.9) = 1.90130 (exact: 1.91199)
    y(2.0) = 1.74286 (exact: 1.74857)

    Friendly hint: decreasing the step size will improve the accuracy
    Example: in case h = 0.05 .. we will obtain y(1.1) = 3.10232, y(1.3) = 3.00212

    In case of h = 0.01 the outlier y(1.5) will become 2.67130

    In case of h = 0.001 we will reach a match with the 5 digit exact values.

    1000 rk22\ xv takes max 15ms on one 3GHz CPU with a Barry White CD playing via Windows Media Player on the same machine

    32b) higher orders of ordinary differential equations (ode's) and systems of ode's using Runge Kutta of order 4:

    /k4
    e:1%6;a:0.5
    rk42:{p:h*F@\:x+h,m:h*F@\:x+a*h,l:h*F@\:x+a*h,k:h*F@\:x;(h+*x),(1_x)+e*k+p+2f*l+m}
    /where F represents the list of functions
    /h is the step size and x is the initial values

    Example:

    y''' + 2y'' + 5y' = x

    h = 0.1 , y(1) = 1.3, y'(1) = 0.5, y''(1) = 0.77

    We are looking for y(a),y'(a),y''(a) where a in (1.1,1.2,1.3, ... ,1.9,2.0)

    With:


    h:0.1;F:({x@2};{x@3};{(x@0)-(2f*x@3)+5f*x@2})
    xv:1 1.3 0.5 0.77; by (10 rk42\ xv)@\:1 we will obtain the y(a):

    y(1.1) = 1.353357 (exact: match )
    y(1.2) = 1.411582 (exact: match )
    y(1.3) = 1.472204 (exact: match )
    y(1.4) = 1.533183 (exact: match )
    y(1.5) = 1.592922 (exact: 1.592921)
    y(1.6) = 1.650260 (exact: 1.650258)
    y(1.7) = 1.704445 (exact: 1.704442)
    y(1.8) = 1.755094 (exact: 1.755091)
    y(1.9) = 1.802144 (exact: 1.802140)
    y(2.0) = 1.845794 (exact: 1.845790)



    32c) higher orders of ordinary differential equations (ode's) and systems of ode's using predictor/corrector method of the order 4:

    /k4
    c1:55 -59 37 -9f;c2:9 19 -5 1f
    ha:(h:0.1)*a:1%24;f:{(1_*x)+ha*+/'c1*/:F@'\:4#x};g:{(1_x@1)+ha*+/'c2*/:F@'\:4#@[x;0;:;y]}
    pcr4:init {(,y,(x@0,!#x) g/,y,f@x),x}/nxt
    /where F represents the list of functions
    /h is the step size,init - initial values, nxt - next abscissas

    Example:

    y''' + 2y'' + 2y' + y = x

    h = 0.1 , "init" below

    (x)5.3 (y)3.255877 (y')1.075979 (y'') -0.026863510
    (x)5.2 (y)3.148154 (y')1.078380 (y'') -0.021017100
    (x)5.1 (y)3.040222 (y')1.080152 (y'') -0.014276980
    (x)5.0 (y)2.932147 (y')1.081204 (y'') -0.006613907

    We are looking for y(a),y'(a),y''(a) where a in (5.4 ,..., 6.0)


    h:0.1;F:({x@2};{x@3};{(x@0)-(x@1)+2f*+/x@2 3})
    by init pcr4/ 5.4 5.5 5.6 5.7 5.8 5.9 6.0 we will obtain the matrix

    6.0 4.000189 1.048414 -0.046123930
    5.9 3.895118 1.052993 -0.045371760
    5.8 3.789594 1.057468 -0.044034900
    5.7 3.683630 1.061779 -0.042062910
    5.6 3.577246 1.065858 -0.039406340
    5.5 3.470469 1.069636 -0.036017420
    5.4 3.363331 1.073036 -0.031850660
    5.3 3.255877 1.075979 -0.026863510
    5.2 3.148154 1.078380 -0.021017100
    5.1 3.040222 1.080152 -0.014276980
    5.0 2.932147 1.081204 -0.006613907

    the match is 100% with the exact solution (to 6 decimal digits)

    33. A first look at parabolic partial differential equations ..

    As model case we look at the heat equation ∂u/∂t = η ∂²u/∂x² (general case uₜ = η∇²u)

    η is the coefficient of the thermal conductivity ..
    η = k/(cρ) .. k = thermal conductivity [J/m s Kelvin], c = specific heat [J/kg Kelvin], ρ = density

    We are looking at a thin circular rod, length L - the rod has an insulated lateral surface along the x-axis.
    Endpoint A (in the origin) is kept at temperature TA and endpoint B (at x = L) is kept at temperature TB,
    in steady state condition. The temperature in A we change to TA1 instantly.

    The temperature profile change over time we can express with an explicit finite difference method ..
    {ta1,(g@/:x@l),tb}/ v .. where g:{+/0.5*x}

    or with the implicit finite difference method (here the Crank-Nicolson form)
    n {v:0.25*ta21,(,/x),t2b;r:q . (v;h);M::m,'r;,-1_{(+/'M*\:x),1f}/st}/,w

    where n = number of time-intervals, ta21 = 2*ta1, t2b = 2*tb, st = 1 1 ... 1f .. a start vector
    w = initial temperature distribution, and using the convergence condition 1 = ηΔt/(Δx)².
    q is {(x@y)+x@y+2}, r the residuals in m defined by: uᵢ,ⱼ₊₁ = 0.25(uᵢ₊₁,ⱼ + uᵢ₋₁,ⱼ + uᵢ₊₁,ⱼ₊₁ + uᵢ₋₁,ⱼ₊₁)
    i index in the x-axis (x = iΔx) and j the index of the time-intervals
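    A toy run of the explicit step, with all names assumed: a 5-point rod in the steady profile 0..100,
    l holding the neighbour index pairs of the interior points; after the left boundary jumps to ta1:50
    the iteration settles on the new linear profile ..

    /k4
    ta1:50f;tb:100f
    l:(0 2;1 3;2 4);g:{+/0.5*x}
    20 {ta1,(g@/:x@l),tb}/ 0 25 50 75 100f /.. converges towards 50 62.5 75 87.5 100f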

    34. A side glance on Black-Scholes ..

    Let be S = asset, t = time, μ = drift rate , σ = volatility , X = Wiener process
    asset return = dS/S = μdt + σdX. As dX ≅ ξ dt^0.5 and ξ standardized Gauss distribution it
    follows that E(dS) = Sμdt. VAR(dS) = E(σ²S²dX²) = σ²S²dt as E(ξ²) = 1.
    From f = f(S) follows df = (∂f/∂S)dS + 0.5(∂²f/∂S²)dS². Considering the magnitudes we state:
    dS² → σ²S²dt

    So, df = (∂f/∂S)σSdX + ((∂f/∂S)μS + 0.5σ²S²∂²f/∂S²)dt

    resp if f = f(S,t) and by using dS = SσdX + Sμdt

    we get df = σSdX ∂f/∂S + (μS∂f/∂S + 0.5σ²S²∂²f/∂S² + ∂f/∂t)dt

    now we turn to Black-Scholes for options .. let be V the value of one option (we don't care if it is a put or call at this stage)

    Let be portfolio Λ = V - ΔS with Δ (a multiple of the underlying asset) unchanged during time-jump dt
    And so:

    dΛ = σSdX∂V/∂S + (μS∂V/∂S + 0.5σ²S²∂²V/∂S² + ∂V/∂t)dt - Δ(Sμdt + σSdX) =
    (σS∂V/∂S - ΔσS)dX + (μS∂V/∂S + 0.5σ²S²∂²V/∂S² + ∂V/∂t - ΔSμ)dt

    Eliminating the randomness leaves us with dΛ = (0.5σ²S²∂²V/∂S² + ∂V/∂t)dt by choosing Δ = ∂V/∂S.
    Then the equation Λrdt = (0.5σ²S²∂²V/∂S² + ∂V/∂t)dt must hold (non-arbitrage principle)

    This implies the partial differential equation of Black-Scholes for options:

    ∂V/∂t + 0.5σ²S²∂²V/∂S² + rS∂V/∂S - rV = 0 (equation is parabolic for S > 0)

    We set: x = ln S

    So, -∂V/∂t = 0.5σ²∂²V/∂x² + ν∂V/∂x - rV and where ν = r - δ - 0.5σ²
    in case δ a dividend yield and r the risk-less rate. (μ = r - δ).

    When applying Crank-Nicolson (CN) we have to consider that Δx ≥ σ(3Δt)^0.5.
    The accuracy is O(Δx² + 0.25Δt²) and with >800 iterations we will get very good results. The CN method
    is a fully centered method, because it replaces the space-time derivatives with finite differences centered
    at an imaginary time step j + 0.5.
    Although we might get similar formulas like we would see in the purely stochastical trinomial method - the values
    pu, pm and pd turn into just "factors" and are no longer interpretable as probabilities.
    But it can be shown that this implicit difference method is equivalent to a generalized discrete stochastic process
    in which the asset price may jump to every node on the grid at the next time-step. The variance of the discrete
    process is an upward-biased approximation to the continuous time GBM (geometric Brownian motion) process.

    The involved tri-diagonal matrix system we can blow away with a K one-liner:

    s:{y-pd*x%z}\[p:(r*pd)+ml@c;-1_2_|ml;u:-1_b:N2 {pm-ud%x}\ pm+pd]

    N2 = 2N .. and where 2N + 1 represents the number of jumps on the last time step
    ml .. payoffs on the last time step
    c .. 2N-1
    r .. residual (here for put-options: C₋ₙ₊₁,J - C₋ₙ,J ... J last time step, -N lowest jump)
    If Θ = σ²/Δx² then pu,pd = -0.25Δt(Θ ± ν/Δx) and pm = 1 + 0.5Δt(r + Θ)

    The crucial part of the CN method is another K one-liner:

    {(z-pu*x)%y}\[((last s)%pu+last b);|u;|-1_p,s]

    35. highest exponent of a prime ..

    k)+/_n%(_(log n)%log p) {p*x}\p

    p:23j;n:3000; ... 23¹³⁵ divides 3000! and 135 is maximal.
    p:97j;n:27500; .. 97²⁸⁵ divides 27500! and 285 is maximal.

    k)-/+/_(c,n)%/:(_(log c:n*2)%log p) {p*x}\p

    p:83j;n:17121; ... 83²⁰⁸ divides (2n-1)!! and 208 is maximal

    36. a first look at elliptic partial differential equations..

    Let's look at the equation ∂²u/∂x² + ∂²u/∂y² + ρ(x,y)/k = 0. We assume a metal plate with (A,B,C,D) as vertices.

    A := intersection of x = 0 and y = 0, B := intersection of x = a and y = 0
    C := intersection of x = a and y = b, D := intersection of x = 0 and y = b

    The left side (AD) is kept at the constant temperature ϑ1, the right side (BC) at ϑ2.
    We assume side (AB) is insulated and over the top side (DC) air of temperature ϑ is blowing.
    The heat at (DC) is escaping through heat convection.

    Let be:

    β the convection heat transfer coefficient , ϑ0 temperature at a given point on DC
    k thermal conductivity of the material , ρ(x,y) is the resistance heating function

    (1) No flux condition for the insulated part: -k uy | y=0 = 0
    (2) Convection: -k uy | y=b = β(ϑ₀ - ϑ)

    Just for practical reasons we choose Δ x = Δ y = h

    Let be i ∈ (0,1,2,...,M) the vertical grid lines, and j ∈ (0,1,2,..,N) the horizontal grid lines

    From (1) it follows: uᵢ,₁ = uᵢ,₋₁
    From (2) it follows: uᵢ,N+1 = -ξ uᵢ,N + uᵢ,N-1 + ξϑ and where ξ := 2hβ/k

    The uᵢ,₋₁ and uᵢ,N+1 are representing imaginary space elements.

    If ξ := h²/k and gᵢ,ⱼ := g(xᵢ,yⱼ) then the partial differential equation can be approximated by
    uᵢ,ⱼ = 0.25(uᵢ,ⱼ₊₁ + uᵢ₋₁,ⱼ + uᵢ₊₁,ⱼ + uᵢ,ⱼ₋₁ + ξgᵢ,ⱼ) when using central differences.

    (A)The central part (j ∈ (1,2,..,N-1) and i ∈ (1,2,...,M-1)) of the plate we can represent by the "mpart" ..

    m0:(N-3) {x,,m}/ ,m:(M-1) fu/ 0
    q)mx:enlist (M-1) fs/ enlist 1 -4 1f
    q)m1:enlist (M-1) fu/ 1
    q)mpart:raze each flip raze each flip ((N-1),N+1)#raze (neg key N) rotate\:m3:m1,mx,m1,m0


    (B)The insulated part (j = 0 and i ∈ (1,2,...,M-1)) of the plate we can represent by the "upart" ..

    m0:(N-2) {x,,m}/ ,m:(M-1) fu/ 0
    q)m1:enlist (M-1) fu/ 2
    q)upart:flip raze mx,m1,m0


    (C)The convection part (j = N and i ∈ (1,2,...,M-1)) of the plate we can represent by the "bpart" ..

    q)m1:enlist (M-1) fu/ (-8)
    q)mx:enlist (M-1) fs/ enlist -4f,((8%k)*(2*k)+b*h),-4f
    q)bpart:flip raze m0,m1,mx


    with:

    fu:{y*(x,x)#@[(x*x)#0f;(x+1)*!x;:;1f]}
    q)fs:{flip -1 _ 1 _ flip (neg key x) rotate\:y,(x-1)#0f}


    As final part the residuals of the equation system ..

    r0:,/N#,@[(M-1)#0f;0,M-2;:;1f*T1,T2]
    q)r1:alpha * g .'0.1*(1_key M) cross key N
    resid1:-1f*r0+r1
    r0:(M-1)#Te*8f*h*b%k
    q)r1:4f*alpha*g .'0.1*(1_key M) cross N
    r2:@[(M-1)#0f;0,M-2;:;4f*T1,T2]

    resid:resid1,r1+r2+r0
    q)TempDistribution:((N+1),M-1)#(inv matr) mmu resid


    Friendly Hint: a direct indexing would probably be a more practical way, making (A), (B) and (C) unnecessary.
    The whole code would become *much* shorter in the end...

    37. a first look at hyperbolic partial differential equations.. (explicit method)

    From a purely technical aspect the vibrating strings are not extremely meaningful, but they are good enough
    as a model case.

    Let be T the stress force. We look at a string of length L fixed at the origin of our coordinate system and also fixed at x = L.

    The forces in the y-direction:

    Fy(x) = - T sin Θ (x)
    Fy(x + Δx) = T sin Θ (x + Δx)

    The forces in the x-direction:

    Fx(x) = - T cos Θ (x)
    Fx(x + Δx) = T cos Θ (x + Δx)

    The deflection in x-direction is 0. The mass m = ρΔs , where ρ = line density.
    So, m ∂²y/∂t² = -T sin Θ(x) + T sin Θ(x + Δx). But tan Θ(x) = ∂y/∂x.
    As ∂y/∂x is much smaller than 1, we can neglect (∂y/∂x)² → ρ ∂²y/∂t² = T ∂²y/∂x²

    ∂²y/∂t² = c² ∂²y/∂x² is the wave equation, and c = (T/ρ)^0.5 has the dimension of a speed.

    The wave equation is a hyperbolic partial differential equation which can be solved by variable separation .. y(x,t) = X(x)T(t).
    As there are infinitely many eigen-frequencies we have exactly one n per solution:

    yₙ(x,t) = (sin xπn/L)(aₙ sin tωₙ + bₙ cos tωₙ). If the initial speed in the y-direction is 0, then aₙ = 0
    So, yₙ(x,t) = bₙ sin(xπn/L) cos(tωₙ) and where ωₙ = cπn/L is the eigen-frequency.

    The discrete parameter n is called quantum number in quantum physics.

    The explicit finite difference method for our 1-dim wave equation we can express simply by:

    k {(r;(0f,(+/(r:x@1)@h2),0f)-x@0)}/ (t1;t0)

    describing the state of the string at kΔt, fixed at the 2 ends.

    38. Re-visiting the year 1680

    /k4
    f:{((*x)*last x)-c*c:x@1}
    last'1_10 {(2#r),f@r:(+/u),u:2#x}\ (0 1f) ... 1 -1 1 -1 1 -1 1 -1 1 -1f ...

    Given a Fibonacci sequence {xₙ}

    Then xₙ₊₁xₙ₋₁ - xₙ² = xₙ₊₁(xₙ₊₁ - xₙ) - xₙ² = xₙ₊₁² - xₙxₙ₊₂
    So, if xₙ₊₁xₙ₋₁ - xₙ² = (-1)ⁿ then xₙxₙ₊₂ - xₙ₊₁² = (-1)ⁿ⁺¹
    As for x₀=0, x₁=1, x₂=1 this is true the first time, it will be true for all n.

    This proof was first carried out in the year 1680 by the Italian-French mathematician and astronomer
    Giovanni Domenico Cassini (1625 - 1712)

    39. The extended Euclidean Algorithm

    q)f:{d:x@0;c:x@1;(c;d mod c;d div c)}
    g:{(x,'((x0@0)-y*(x0:x@0)@1;(x1@0)-y*(x1:x@1)@1))[;1 2]}

    /a,b given where a > b
    last@+last (1,-1*q2;(-1*q1),1+(q2:last@u@1)*q1:last@u@0) g\ 2_(u:1_-3_f\ (a,b))[;2]

    a:23311;b:17331
    ..will return 6460 -8689 .. ie: 6460a - 8689b = 1 (as a,b coprime)
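    A one-line check of the identity above ..

    /q
    (6460*23311)-8689*17331 /.. 1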

    The function above does not return a value if the job is too easy, that is if the iteration
    is finished already during the preparation step, when the classical algorithm of Euclid on the
    right side returns a matrix "u" which has only one row .. for example a=11 , b=5 .. 11 = 2*5 + 1.
    In this case (1,-2) is the obvious result.

    A whole series of such trivial solutions we would obtain when solving:

    18x + 27y + 11z + 5w = 7 .. one solution: (x,y,z,w) = (1,1,-38,76)

    The extended Euclidean Algorithm can be traced to the "Aryabhatiya" (approx. year 499) by Aryabhata [476-550]
    of northern India. His key contribution was in the field of number theory - in particular the Diophantine equations.
    Aryabhata also found a good estimate for π with 3.1416.


    40. a first look at hyperbolic partial differential equations.. (implicit method)

    /k4 .. solving u(x,t)
    n {(im$(1.6*i)+((0f,(x@1),0f) f2/:o);i:*x)}\ (u2;u1)
    and where f2 can be like {(-1f*x@y)+0.1*sum x@y+1,-1}

    For the time derivatives we take the classical central differences; for the space derivatives we choose
    the average of the central differences of the known j-1 time points and the unknown j+1 time points.
    The "o" represents just a series of supporting points.
    The n represents the number of future time-jumps. "im" represents the inverted characteristic system matrix.

    The start values u2 and u1 will be (in our case) completely determined by

    (a) the initial deflection at time j = 0; u(x;0) = h(x)

    and

    (b) the initial speed profile at time j = 0; uₜ(x;0) = s(x)

    41a) Eigenvalues and Eigenvectors, Powermethod

    Here we assume:
    real quadratic matrix M, |λ₁| > |λ₂| ≥ |λ₃| ≥ ... ≥ |λₙ| and where λ₁ real. xᵢ eigenvectors, αᵢ real numbers.
    We define: y₀ = Σⱼ∈(1,..,n) αⱼxⱼ. We further define the projection Πᵢ(v) = v@i.
    And so: Πᵢ(yₖ₊₁) / Πᵢ(yₖ) = Πᵢ(Σⱼ∈(1,..,n) αⱼλⱼᵏ⁺¹xⱼ) / Πᵢ(Σⱼ∈(1,..,n) αⱼλⱼᵏxⱼ) will converge to λ₁.

    As |yₖ| → 0 resp ∞ for |λ₁| < 1 resp > 1, we introduce: ξⱼ = ζⱼ/κⱼ and where κⱼ = max over u of |(ζⱼ)ᵤ| and ζⱼ = Mξⱼ₋₁.
    Therefore Πᵢ(ζₖ₊₁)/Πᵢ(ξₖ) will converge against λ₁, unless we are Casino heroes and accidentally choose for
    ξ₀ the (unknown) eigenvector xⱼ where j > 1. In this case the hijacked process will converge to |λⱼ|.

    {(r%u;u:|/r:M$*x)}/(i0;0)

    i0 is our ξ0

    a more robust version:

    {u:(mv,-1f*mav)@((mav:|/-1f*r)>mv:|/r:m$*x);:((flat@r%u);u)}/(s;0)
    and flat:{@[x;&1e-14>abs x;:;0f]}

    41b) school definition of a determinant A

    |A| = Σ over σ ∈ S(n) of (sgn σ) Πₖ aₖ,σ(ₖ) and A = (aᵢ,ⱼ) ; I(n) = {1,..,n} ; (i,j,k) ∈ I(n) ; S(n) all permutations over I(n)

    q)f:{x cross y}
    p:u@&n={#?x}'u:s f/(n-1)#,s:!n:#A;j:(1_s)_\:s
    q)i:raze raze (-1_s) cross/:'j
    signs:-1+2*(+/'>/'+:'p@\:i) in even
    /even is just the set of even numbers beginning with 0

    sum signs**/',/A ./:/:/:,s,'/:p

    A defined as:

    19 18 21 18 11 23
    18 33 18 10 12 15
    17 21 20 18 13 12
    18 15 18 29 18 10
    19 20 11 27 18 15
    11 13 15 17 19 12
    ⇒ |A| = 12023

    41c) Jacobi's method ..

    Let be A,B similar matrices. ⇒ ∃P with B = P⁻¹AP ⇒ PAP⁻¹ - λPP⁻¹ = P(AP⁻¹ - λP⁻¹) = ..
    .. = P(A - λI)P⁻¹. So, |B - λI| = |A - λI||PP⁻¹| = |A - λI| ⇒ A,B have the same eigenvalues.
    But: assuming y = Px implies immediately that x ≠ y in general.
    ⇒ Ax = AP⁻¹y ⇒ λPP⁻¹y = PAP⁻¹y ⇒ λy = By ⇒ A,B have in general different eigenvectors.

    If A is an n x n symmetric and real matrix ⇒ ∃P an orthogonal transform with AP = PD and D diagonal.
    Intuitively this equation suggests already that there will be a sequence of orthogonal matrices {Oⱼ}
    (product of orthogonal matrices is also orthogonal) with OⱼᵀA⁽ʲ⁻¹⁾Oⱼ = A⁽ʲ⁾ , j > 0 and A⁽⁰⁾ = A.
    Within this process let's always choose |a⁽ʲ⁾pq| ≥ |a⁽ʲ⁾ks| and p < q , k < s with p,q,k,s ∈ In := {1,..,n}.
    This implies: (a⁽ʲ⁾pq)² ≥ Σr Σr≠u (a⁽ʲ⁾ru)² / (n²-n). r,u ∈ In

    In contrast to Σr Σu (a⁽ʲ⁾ru)² invariant ∀j we observe that during the transformation process
    holds: Σr Σr≠u (a⁽ʲ⁺¹⁾ru)² = Σr Σr≠u (a⁽ʲ⁾ru)² - 2(a⁽ʲ⁾pq)² and Σii (a⁽ʲ⁺¹⁾)² = Σii (a⁽ʲ⁾)² + 2(a⁽ʲ⁾pq)²

    But this means Σr Σr≠u (a⁽ʲ⁺¹⁾ru)² ≤ Σr Σr≠u (a⁽ʲ⁾ru)² - 2 Σr Σr≠u (a⁽ʲ⁾ru)² / (n²-n) ≤ ..
    .. ≤ Σr Σr≠u (a⁽ʲ⁾ru)² e^(-2/(n²-n)).

    And so: Σr Σr≠u (a⁽ʲ⁺¹⁾ru)² ≤ Σr Σr≠u (a⁽⁰⁾ru)² e^(-2(j+1)/(n²-n))

    consequently, if j goes to infinity then all squares of the non-diagonal elements will converge to 0.

    maximal element in the upper triangle .. m is the n x n matrix, m1 is the upper triangle 1-matrix
    fpq:{(y div x;y mod x)}
    pqv:m . pq:n fpq/h?|/h:abs@,/m*m1

    indices p,q and values of the orthogonal rotation matrix
    ppv:m . p,p:*pq
    qqv:m . q,q:pq 1

    elements of the rotation matrix
    lamb:2f*pqv;mu:qqv-ppv;omeg:sqrt@+/i*i:lamb,mu

    case when lamb>0 and mu>0; sphi = sin φ, φ = rotation angle
    sphi:sqrt (omeg-mu)%2f*omeg

    elements of the transformed similar matrix (1st move towards diagonalization)
    pqx:&~xn in pq
    h02:(cphi;-1f*sphi)
    bm[p;pqx]:+/'h02 */:l1:{m . x}''pq,'/:pqx
    bm[pqx;p]:+/'h02 */:l2:{m . x}''pqx,'\:pq
    h03:(sphi;cphi)
    bm[q;pqx]:+/'h03 */:l1
    bm[pqx;q]:+/'h03 */:l2

    the rotation matrix
    /Im:1f*nn#(n {x,1,n#0}/ ()),1 is static
    O:Im;O[q;q]:O[p;p]:cphi;O[q;p]:-1f*O[p;q]:sphi

    example 5 x 5 matrix:
    M:(2 5 1 4 -6;5 10 9 -7 10;1 9 -8 3 -6;4 -7 3 2 11;-6 10 -6 11 -3)

    eigenvalues:5.764826 , 17.29619 , -5.73115 , 11.01596 , -25.34583 (prd = 159554)
    eigenvectors(just rounded):

    e1: 4.775 , -0.498 , 1.236 , 2.512 , -1.531
    e2: 3.141 , 15.690 , 4.43 , -2.525 , 4.126
    e3: 2.555 , 0.1239 , -4.614 , -1.62 , 1.546
    e4: -1.522 , -0.026 , -1.164 , 7.955 , 7.375
    e5: -6.363 , 10.41 , -12.13 , 10.97 ,-15.03


    41d) QR method (here for real Eigenvalues)

    Each quadratic matrix A can be factorized as Q*R, where Q orthogonal and R a right triangular matrix.
    The key process consists in multiplication from the left by suitable rotation matrices. The multiplication
    will/should consecutively eliminate all matrix elements below the main diagonal. This elimination is carried
    out column by column. The QR-method is in the general case less accurate for positive definite matrices than
    the Jacobi method. The QR algorithms are very popular, widely used, and numerically very stable (but not infallible)

    In order to eliminate the matrix element aᵢⱼ⁽ᵐ⁾ in the m-th sequence A⁽ᵐ⁾ = Oₘᵀ A⁽ᵐ⁻¹⁾
    we need the rotation matrix Oₘᵀ = Oᵀ[j;i;φₘ]. A⁽¹⁾ = A. m ∈ {1,2,..,0.5n(n-1)}
    The condition aᵢⱼ⁽ᵐ⁾ = aⱼⱼ⁽ᵐ⁻¹⁾ sin φₘ + aᵢⱼ⁽ᵐ⁻¹⁾ cos φₘ = 0 determines cos φₘ and sin φₘ.
    Below a student version (working in acceptable time up to 70 x 70 real matrices with real eigenvalues)
    of the QR-method

    QRc2:{aqp:x[y;z];aqq:x[y;y];app:x[z;z];
    sif:aqp%h22:sqrt (app*app)+aqp*aqp;cif:app%h22;
    bqj:(-1f*sif*c3:x[z;])+cif*c2:x[y;];bpj:(cif*c3)+sif*c2;
    x[y;]:bqj;x[z;]:bpj;aip:(cif*c3:x[;z])+sif*c2:x[;y];
    aiq:(-1f*sif*c3)+cif*c2;x[;z]:aip;x[;y]:aiq;
    x}
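
    The driver below leaves its globals undefined; a possible (hedged) initialization for the 5 x 5 example that follows ..
    the tolerance is my own choice:

    /assumed setup .. M = input matrix, ERROR = tolerance, i/j = loop counters
    M:"f"$(8 4 6 1 3;2 2 5 1 1;6 5 1 1 3;1 2 2 2 1;3 8 6 1 6)
    n:count M;l:1_til n;j:0;i:0;ERROR:1e-9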

    do[n-1;o0:(j_l),\:j;i0:o0[;0];i1:o0[;1];
    while[1b in ERROR < abs@M ./: o0;M:QRc2/[M;i0;i1];i+:1];M[i0;j]:0f;j+:1]

    Example:

    8 4 6 1 3
    2 2 5 1 1
    6 5 1 1 3
    1 2 2 2 1
    3 8 6 1 6

    the iteration returns M as ..

    17.24937 -3.656361 -1.247661 -2.182439 +5.053954
    +0.00000 +3.223795 -0.939394 -1.619626 +3.482991
    +0.00000 +0.000000 -4.107825 +1.251698 -1.246878
    +0.00000 +0.000000 +0.000000 +1.917964 +0.102689
    +0.00000 +0.000000 +0.000000 +0.000000 +0.716699

    The eigenvalues are found on the diagonal.

    41e) Givens' rotations ..

    In contrast to the classical Jacobi approach we look for rotations which successively set all elements below the
    subdiagonal to 0. For the elimination of the a_{i,r+1} with i > r+2 we need (r+2,i)-rotations.

    We use the classical transformation a*_{i,j} = a_{j+1,i} sin φ + a_{i,j} cos φ, where a*_{i,j} has to become 0.
    When we work out the conditions resulting from the equation above, we have to be aware that φ ∈ [-π/2 , π/2],
    so we are not free to set the signs arbitrarily.

    The k4 code (student version) resulting from all this consists basically of 3 key steps:


    /the rotation pairs
    i:1+l:j+1;pq:l,'b:l _ t;
    /the rotation angle .. jj presumably holds the subdiagonal element, sg the signum
    if[ERROR < abs@ij:A[i;j];h:sqrt@(u@1)+*u:d*d:jj,ij;w:h*sg@jj;cf:jj%w;sf:-1f*ij%w]
    /the transforms
    bpk:+/(cf,-1f*sf)*a:B[pqtmp;b]
    akp:+/(cf,-1f*sf)*a:+A[;pqtmp]

    Example - the real 5x5 matrix below has to be transformed to Hessenberg form using Givens' rotations .. :
    (This sample code works in acceptable time up to 150 x 150 real matrices)

    -12 40 12 28 11
    -14 17 11 14 10
    -25 15 11 10 12
    -13 51 11 15 17
    -90 98 97 90 96

    Result:

    -12.00 +23.22 +5.79 +12.99 -043.68
    -95.34 145.72 51.49 +62.42 -104.27
    +00.00 -23.23 -8.41 -06.39 +023.13
    +00.00 +00.00 +0.55 -14.58 -022.85
    +00.00 +00.00 +0.00 +04.51 +016.26
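
    For cross-checking, a compact self-contained sketch (q) of the same reduction, with rotations in the (j+1,i) plane
    applied as similarity transforms. The name hess and the tolerance are mine; individual rows/columns may come out with
    different signs than above (see the friendly hint in the next subsection).

    /self-contained sketch .. hess is a hypothetical name
    hess:{[A]
     n:count A;j:0;
     while[j<n-2;i:j+2;
      while[i<n;
       if[1e-12<abs A[i;j];
        w:sqrt sum d*d:(A[j+1;j];A[i;j]);
        c:A[j+1;j]%w;s:A[i;j]%w;                  /rotation in the (j+1;i) plane
        g:"f"$t=\:t:til n;
        g[j+1;j+1]:c;g[i;i]:c;g[j+1;i]:s;g[i;j+1]:neg s;
        A:(g mmu A) mmu flip g];                  /similarity transform .. zeroes A[i;j]
       i+:1];
      j+:1];
     A}

    /usage: hess "f"$(-12 40 12 28 11;-14 17 11 14 10;-25 15 11 10 12;-13 51 11 15 17;-90 98 97 90 96)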

    41f) fast Givens' rotations (FGR) ..

    The concept is (in a first step) to factorize the matrices A and A' using convenient diagonal matrices R and R':

    A = R A* and A' = R' A*'

    Replacing all elements of the classical (p,q)-Givens rotation accordingly leads to:

    a'_pj = c(d_p/d'_p)a*_pj + s(d_q/d'_p)a*_qj
    and
    a'_qj = -s(d_p/d'_q)a*_pj + c(d_q/d'_q)a*_qj

    This allows 4 possibilities to achieve one standalone (coefficient-free) term in each line, thus cutting the multiplication count in half.
    In principle we could be happy with just one possibility, but in order to keep boundary rotation angles (those leading
    to vanishing values, ie 0 and ± π/2) away from our process we should consider 2 alternatives (|c| ≥ |s| and the opposite).

    The first alternative implies:

    a*'_pj = a*_pj + ζ_1^(-1) a*_qj
    a*'_qj = -ζ_2^(-1) a*_pj + a*_qj, where ζ_1·ζ_2 is the square of the cotangent of the rotation angle.

    Considering the "wish" that a*'_qk = 0 for a specific k leads to the astonishing insight that we in fact
    only need the squared values of D during the entire iteration process (beginning with D = I).

    In other words (for the 1st alternative): c² = (d'_p/d_p)² and c² = (d'_q/d_q)²

    We meet a corresponding situation when we use the rotations as similarity transforms:

    A = R A* R and A'' = R' A*'' R'. When the recursion is done, we obtain the final Hessenberg matrix as R_ω^(1/2) A* R_ω^(1/2)
    (R_ω = the final D).

    /snippets .. student version (z1 and z2 are the zetas above; c and s hold cos²φ resp sin²φ; D stores the squared scales)
    z1:z2*dp%dq;t:z1*z2;c:1f%1f+1f%t;s:1f%1f+t
    /t>=1f;
    btsp:A[p;]+A[q;]%z1;btsq:A[q;]-A[p;]%z2
    A[p;]:btsp;A[q;]:btsq;]
    ...
    D[p;p]:c*D[p;p];D[q;q]:c*D[q;q]]
    /end..
    Ds:sqrt D;H:(Ds mmu A) mmu Ds;

    Example - the real 5x5 matrix below has to be transformed to Hessenberg form using fast Givens' rotations .. :

    5 4 3 2 1
    4 6 0 4 3
    3 0 7 6 5
    2 4 6 8 7
    1 3 5 7 9

    Result (rounded here to 3 digits):

    5.000 +07.500 +00.000 +0.000 +0.000
    7.500 +26.125 -17.980 +0.000 +0.000
    0.000 -17.980 +18.350 -5.138 +0.000
    0.000 +00.000 -05.138 +7.843 -4.805
    0.000 +00.000 +00.000 -4.805 +7.086

    Because the original matrix was symmetric we obtained a tridiagonal form.

    Friendly hint: using the classical Givens' method may lead to different signs,
    but a closer look tells us that mathematically the results represent the same matrix.

    PS: the student version will not realize the full benefit we expect. The code is only 10-15% faster;
    a more sophisticated implementation will approach the expected 50% speedup.

    42. Hyman's method for Hessenberg-type matrices

    Let H := {h_ij}, where h_ij = 0 for i > j + 1 and i,j ∈ I_n := {1,2,..,n}.
    We further assume that all subdiagonal elements are nonzero (otherwise the problem can be simplified).

    The method is based on the concept that for the Hessenberg type we can construct a series of multipliers τ_1,τ_2,..,τ_{n-1}
    so that the linear combination Σ_{k ∈ I_n} τ_k·ω_k forces all components of the resulting vector to 0, except the first one.
    The ω_k are the column vectors of H - λI (I = unit matrix, λ = eigenvalue) and τ_n is a fictive value which we can set to 1.
    This means we have defined the following recursive process:

    τ_i = [-(h_{i+1,i+1} - λ)·τ_{i+1} - Σ_{j ∈ {i+2,..,n}} h_{i+1,j}·τ_j] / h_{i+1,i}

    The construction process tells us something else: the τ_k are in reality polynomials in λ; more concretely,
    τ_k is a polynomial of degree n-k in λ. For the derivatives ξ_i := τ'_i this implies:

    ξ_i = [τ_{i+1} - (h_{i+1,i+1} - λ)·ξ_{i+1} - Σ_{j ∈ {i+2,..,n-1}} h_{i+1,j}·ξ_j] / h_{i+1,i}

    This means the determinant of H - λI can be represented as:

    P(λ) = (-1)^(n+1)·φ(λ)·Π_{j ∈ I_n \ {1}} h_{j,j-1}, where φ(λ) = τ_1·(h_11 - λ) + Σ_{i ∈ I_n \ {1}} h_{1,i}·τ_i and P(λ) is the characteristic polynomial.
    Its derivative we can obtain easily, so that we are now in the position to apply Newton's method.

    /the τ_i
    (-1f*%%/last B) {(((-1f*last y)-+/x*-1_1_y)%*y),x}/C

    /the ξ_i
    {((z-+/x*-1_1_y)%*y),x}/[%*last B;C;-1_|xv];

    /where H is the Hessenberg matrix, l = eigenvalue (start value), Im = unit matrix
    A:H-l*Im
    B:(-1* -1_n-!n)#'1_A
    C:|(n-2)#B
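
    To make the recursion concrete: a small self-contained sketch (q, 0-based indexing) building the τ_i bottom-up for a
    given λ and evaluating φ(λ); for brevity the Newton step uses a numerical derivative instead of the ξ recursion.
    All names (taus, phi, newt) and the step width are my own assumptions.

    /hedged sketch .. numerical derivative replaces the ξ recursion
    taus:{[H;l]
     {[H;l;tau;i]r:H i+1;
      s:sum r[(i+2)_til count H]*1_tau;           /Σ h(i+1;j)·τ(j) for j > i+1
      (((neg(r[i+1]-l)*tau 0)-s)%r i),tau
     }[H;l]/[enlist 1f;reverse til -1+count H]}
    phi:{[H;l]t:taus[H;l];((H[0;0]-l)*t 0)+sum H[0;1_til count H]*1_t}
    newt:{[H;l]p:phi[H;l];l-p%1e6*phi[H;l+1e-6]-p}

    /usage eg: 20 newt[H]/ 7f  .. with H a (float) Hessenberg matrix and 7f a start value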

    Since (H - λ*I)·Ξ = (φ(λ*),0,..,0)^T holds for Ξ := (τ_1,τ_2,..,τ_n)^T, and φ(λ) = 0 for an eigenvalue λ,
    Ξ represents the eigenvector for λ. So Hyman's method at least theoretically gives us the eigenvectors as well, but the
    recursion producing the τ_i is not necessarily numerically stable. This means under certain circumstances (like a large n)
    we have to consider an alternative way to obtain the eigenvectors.

    43. Reduced positive definite binary quadratic forms

    Δ(p,q,r,s) := ps - qr (determinant)
    Let Z be the set of integers and g(x,y) := ax² + bxy + cy² =: [a,b,c] a positive definite binary quadratic form (b² < 4ac).

    If ρ is the smallest natural number represented by g(x,y) ⇒ ∃ (α,γ) ∈ Z² with ρ = g(α,γ) ⇒ ∃ (β,δ) ∈ Z² with Δ(α,β,γ,δ) = 1,
    as ρ is minimal (α and γ must be coprime). Then x = αξ + βφ and y = γξ + δφ define a new equivalent form g' := [ρ,κ,λ].
    Any further mapping ξ := x' - ϑy' and φ := y' defines a further equivalent g'' := [ρ,β',γ'], for any choice of ϑ ∈ Z.
    Choosing ϑ as the integer closest to 0.5κ/ρ automatically leads to 0 ≤ |β'| ≤ ρ. But as g and g'' are equivalent,
    γ' can also be represented by g, in other words ρ ≤ γ' ⇒ 0 ≤ |β'| ≤ ρ ≤ γ' ⇒ g'' is reduced.

    The proof implicitly defines a recursive algorithm which we can easily express in a few simple k4 lines:

    h:{_last((b-a),(b:x@1)+a)%2f*a:x@0}
    qf:{if[(a:x@0)>c:x@2;x:|x];if[(a:x@0)< abs b:x@1;s:(x@2)+((-b)+k*a)*k:h@a,b;x:a,(abs b-2*a*k),s];:x}

    Sample:

    qf\ 1137 2247 1111f .. returns (1137 2247 1111f ; 1111 25 1f ; 1 1 955f)

    This means 1137x² + 2247xy + 1111y² ~ x² + xy + 955y² (.. the reduced form). In other words, we can replace the
    first polynomial by the second one, as it generates the same set of numbers over Z².
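
    A quick (hedged) plausibility check in q: over a small integer grid both forms produce exactly the same small values.
    The value bound must stay small relative to the grid (a larger bound needs a larger grid); g, pts and vs are my names.

    /hedged check .. small values represented by both forms coincide
    g:{[c;v](c[0]*v[0]*v 0)+(c[1]*v[0]*v 1)+c[2]*v[1]*v 1}
    pts:p cross p:-15+til 31                      /integer pairs in [-15;15]²
    vs:{[c]asc distinct w where 100>=w:g[c]each pts}
    vs[1137 2247 1111f]~vs[1 1 955f]              /1b .. both yield the squares 0,1,4,..,100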

    44. Pell's Equation

    The Diophantine equation ax² + 1 = y² is named "Pell's equation" after John Pell (1611 - 1685), but in reality it originates with Pierre de Fermat (1607 - 1665).

    We start with the basic theorem:

    ξ_0 = [a_0,a_1,...,a_n,ξ_{n+1}] = (ξ_{n+1}·h_n + h_{n-1})/(ξ_{n+1}·k_n + k_{n-1}), where the h_i and k_i represent the CF (continued fraction) iteration. If d > 0 is an integer and not a perfect square, then √d represents a quadratic surd and therefore its CF must have a period (after an initial block). Let r be the length of the period.

    It also holds ξ_{n+1} = (m_{n+1} + √d)/q_{n+1}. Therefore we end up with an equation of the form A·√d = B, where A,B are integers, hence A = B = 0. And so fate will tell us:

    h_n² - d·k_n² = (-1)^(n-1)·q_{n+1} .. using k_{n-1}·h_n - k_n·h_{n-1} = (-1)^(n-1). If n = tr-1 (t integer) then q_{n+1} becomes 1. This immediately suggests that the CF iteration can give us a solution (for a certain index) of the classic equation x² - dy² = ±1. We also see that in case r is even there cannot be a solution of x² - dy² = -1.

    Example: x² - 1389y² = 1

    d:1389; k)m1:a0:_w:sqrt d;q1:d-m1*m1
    k)a1:_(w+m1)%q1
    k)d:a0,last'29 {o:((e:x@2)*j:*s:x@1)-t:*x;u:(s@1)+e*t-o;l:_(o+w)%u;:(o;(u,j);l)}\ (m1;(q1,1);a1)
    k)ki:*:' (0 1){((y*c)+x@1),c:x@0}\d
    k)hi:*:' (1 0){((y*c)+x@1),c:x@0}\d
    flip (hi;ki)  /shows the convergents hi # ki

    37 # 1
    112 # 3
    149 # 4
    410 # 11
    969 # 26
    23666 # 635
    48301 # 1296
    120268 # 3227
    168569 # 4523
    625975 # 16796
    46490719 # 1247427
    140098132 # 3759077
    186588851 # 5006504
    ..

    and we find: 625975² - 1389·16796² = 1

    We can go even a step further. For any integer N with |N| < √d and x² - dy² = N there must be an index n such that x = h_n and y = k_n, because for any equation A² - ϑB² = φ with 0 < φ < √ϑ (φ and ϑ can be real) it holds |A/B - √ϑ| < 1/(2B²) .. A/B therefore describes an approximation to √ϑ. But our {h_n/k_n} are exactly such approximations (Hurwitz) ..

    Example: x² - 4243y² = 29

    d:4243;... etc

    65 # 1
    456 # 7
    1889 # 29
    4234 # 65
    6123 # 94
    40972 # 629
    251955 # 3868
    ..

    and we find: 456² - 4243·7² = 29

    Let x_1 and y_1 be the minimal (positive) solution of x² - dy² = 1; this means for any other solution pair (x,y) it holds x > x_1 and y > y_1.

    We further define: (x_1 + y_1·√d)^n = x_n + y_n·√d
    As the conjugate of a product is equal to the product of the conjugates, it must hold x_n² - y_n²·d = (x_1² - y_1²·d)^n = 1.
    The assumption that there are other (positive) solutions (u,v) not belonging to {(x_i,y_i)} would contradict the fact that (x_1,y_1) is minimal.

    Let's go back to our first example: x² - 1389y² = 1

    with (625975,16796) as smallest positive solution. Then for example x_3 = x_1³ + 3x_1y_1²d and y_3 = 3x_1²y_1 + y_1³d
    represent the 3rd solution .. in other words (981139945893059575;26325694366773204) is a solution, too.
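
    A hedged sketch (q) generating further solutions from the fundamental one via x_{n+1} = x_1·x_n + d·y_1·y_n and
    y_{n+1} = x_1·y_n + y_1·x_n (ie multiplying by x_1 + y_1·√d); nxt is a hypothetical name, and 64-bit longs overflow
    soon after the 3rd solution here.

    /hedged sketch .. nxt is a hypothetical name
    nxt:{[d;s;xy]((s[0]*xy 0)+d*s[1]*xy 1;(s[0]*xy 1)+s[1]*xy 0)}
    2 nxt[1389;s]/ s:625975 16796                 /.. 981139945893059575 26325694366773204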

    45. Simple boundary value problems and finite differences .. a first look

    N:3;n:10;x0:0f;xn:1.570796;y0:4;yn:3.5
    k)xi:x0+(t:!n+1)*h:(xn-x0)%n
    f1:{(a0@x)-0.5*h*a1@x};f2:{(-2f*a0@x)+h*h*a2@x};f3:{(a0@x)+0.5*h*a1@x}
    v:{h*h*q@x} xii:1_-1_xi;b:N#'xii
    k)resid:@[v;(0,n-2);:;((*v),last v)-(y0,yn)*(f1@*xii),f3@last xii]

    F:((f1@b[;0]),'(f2@b[;1]),'f3@b[;2]),'flip (n-1)#'(n-2)#0f
    iter:(inv ((-1*key n-1) rotate' F)[;1_-1_t]) mmu resid

    show (j1:y0,iter,yn),'j2:fx xi
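
    The snippet above leaves the problem data undefined. For the worked example below (y'' + y'·tan x = sin 2x) a possible
    (hedged) setup, written so that the coefficient functions also accept vectors; the names follow the snippet:

    /hedged problem data for the example below
    pi:acos -1f
    a0:{1f+0f*x};a1:{tan x};a2:{0f*x};q:{sin 2f*x}
    fx:{(4f-x)+(0.5*(pi-1f)*sin x)-0.5*sin 2f*x}  /exact solution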

    Let's look at a differential equation of the form:
    a_0(x)y^(n) + a_1(x)y^(n-1) + .. + a_n(x)y = c(x) .. with c(x) and the a_j(x) continuous on the observed interval.

    Using central differences of error order ~ h² we replace the derivatives:

    If k is even: f_j^(k) = [∇^k f_{j+k/2} + Δ^k f_{j-k/2}]·0.5·h^(-k), else f_j^(k) = [∇^k f_{j+(k-1)/2} + Δ^k f_{j-(k-1)/2}]·0.5·h^(-k)

    So for n = 2 we arrive at: y_{i-1}[a_0(x_i) - 0.5h·a_1(x_i)] + y_i[h²a_2(x_i) - 2a_0(x_i)] + y_{i+1}[a_0(x_i) + 0.5h·a_1(x_i)] = h²c(x_i), i ∈ {1,..,n-1}

    Example: y'' + y'·tan x = sin 2x .. y(0) = 4 and y(π/2) = 3.5 ⇒ y = 4 - x - 0.5·sin 2x + 0.5(π - 1)·sin x

    Hint: tan x is undefined at π/2 .. so this equation would trigger some discussion .. the product y'·tan x removes the singularity.
    In this case already for n = 10 we get the following approximations (the second column holds the exact values):

    4.000000 4.000000
    3.854364 3.855921
    3.719837 3.722842
    3.606148 3.610384
    3.520401 3.525551
    3.466112 3.471769
    3.442606 3.448286
    3.444864 3.450020
    3.463815 3.467858
    3.487067 3.489387
    3.500000 3.500000

    Using central differences for given boundary values y'(x_0) = α and y'(x_n) = β we have to take
    the additional fictive values y_{-1} and y_{n+1} into consideration, as y_1 - y_{-1} = 2hα and y_{n+1} - y_{n-1} = 2hβ.
    Our code undergoes only minor adjustments:

    g:{h*h*q@x};v:g xii:xi;b:N#'xii;
    F:((f1@b[;0]),'(f2@b[;1]),'f3@b[;2]),'flip (n+1)#'n#0f
    G:((-1*key n+1) rotate' F)[;t,n+1 2]
    G:(enlist l,o),G,enlist (o:n#0f),l:-1 0 1f

    iter:(inv G) mmu (yd0*2*h),v,ydn*2*h
    show (1_-1_j1:iter),'j2:fx xi

    Example: x²y'' - 2xy' + 2y = 2x³ - x ; y'(1) = 4 and y'(3) = 28, returning (for n=10) ..

    1.293359 1.274653
    2.221625 2.210453
    3.448289 3.445770
    5.016969 5.023784
    6.972381 6.988891
    9.359968 9.386294
    12.22569 12.26176
    15.61586 15.66146
    19.57711 19.63187
    24.15623 24.21971
    29.40023 29.47188
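
    The adjusted snippet again assumes the problem data; for this example a possible (hedged) setup is shown below. The exact
    solution used for the comparison column can be verified by differentiating y = x³ + x·ln x + 0.5·ln(3)·x - 0.25·ln(3)·x².

    /hedged problem data for the example above
    n:10;N:3;x0:1f;xn:3f;t:til n+1
    xi:x0+t*h:(xn-x0)%n
    a0:{x*x};a1:{-2f*x};a2:{2f+0f*x};q:{(2f*x*x*x)-x};yd0:4f;ydn:28f
    fx:{(x*x*x)+(x*log x)+(0.5*x*l3)-0.25*(x*x)*l3:log 3f}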



    46. Shooting methods .. a simple intro:

    Let's look at the boundary value problem:

    a_0(x)y'' + a_1(x)y' + a_2(x)y = p(x) with y(a) = α , y(b) = β

    If y_1(x) and y_2(x) are 2 different solutions of the differential equation satisfying y(a) = α, then we can replace
    the boundary value problem by 2 initial value problems:

    a) y_1(a) = α , y_1'(a) = ε_1
    b) y_2(a) = α , y_2'(a) = ε_2

    where for convenience reasons we take ε_1 = 1 and ε_2 = -1.

    Then we obtain y(x) as a linear combination of y_1(x) and y_2(x) .. y(x) = μ_1·y_1(x) + μ_2·y_2(x), with:

    μ_1 = (β - ϑ_2)/(ϑ_1 - ϑ_2) and μ_2 = (ϑ_1 - β)/(ϑ_1 - ϑ_2), where the ϑ_i are the numerical values at x = b we received from a) and b)

    Example: x²y'' - 2xy' - 4y = x·sin x + (x² + 16)·cos x .. with y(1) = 1 and y(π) = 2, n = 10

    f2:{((2*x*z)+(4*y)+(x*sin x)+(16+x*x)*cos x)%x*x}
    /using runge kutta of order 2 (Heun's method) here
    g:{c1:h*p:z;c2:h*q:f2[x;y;z];:(x+h;y+s*p+z+c2;z+s*q+f2[x+h;y+c1;z+c2])}
    Y1:n {g . x}\ (x0;y0;1)
    Y2:n {g . x}\ (x0;y0;-1)
    /combine both solutions .. done
    ((Y1[;1]*beta-R2)+(R1-beta)*Y2[;1])%(R1:Y1[n;1])-R2:Y2[n;1]
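
    Again hedged, the setup the snippet assumes (s is the Runge-Kutta half-step, beta the right boundary value):

    /hedged setup for the example above
    n:10;x0:1f;y0:1f;beta:2f
    s:0.5*h:(acos[-1f]-x0)%n                      /integrate from 1 to π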

    solutions..

    /col 1 = xi, col 2 = exact, col 3 = runge 2 approximation

    /1.000000 1.000000 1.000000
    /1.214159 0.679182 0.690431
    /1.428319 0.614242 0.628830
    /1.642478 0.704365 0.719082
    /1.856637 0.886659 0.900103
    /2.070796 1.116741 1.128293
    /2.284956 1.360600 1.370001
    /2.499115 1.590962 1.598108
    /2.713274 1.785609 1.790450
    /2.927433 1.926617 1.929091
    /3.141593 2.000000 2.000000

    A very similar approach can be taken for the boundary values y'(a) = α , y'(b) = β.

    .. .. to be continued in displays (PART 2) ..

  • Back to MAIN


  • ©++ MILAN ONDRUS