
[X-PR] Update OrdinaryDiffEq.jl and fix Navier-Stokes solver #917

Merged
koehlerson merged 17 commits into mk/nsgmsh from do/nsgmsh-fix on May 27, 2024

Conversation

termi-official
Member

@termi-official termi-official commented May 16, 2024

Almost a fix for #508. The main idea is to eliminate the Dirichlet dofs from the mass matrix (provided in the solver ctor) and to provide the Jacobian (also with eliminated Dirichlet dofs) manually.

TODOs

  • Update docstrings
  • Fix gmsh mesh. Hole does not work again. :) @koehlerson
  • Decouple the Dirichlet rate function
  • Decide what to do with the other (new) examples
  • Fix temporal adaptivity

This type of integration is still not very clean, but at least we obtain quadratic convergence now (manually checked on a locally patched version of OrdinaryDiffEq.jl).
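
For illustration, a minimal sketch of the elimination idea using Ferrite's standard `apply!` (a hypothetical sketch; the actual wiring is in the examples dumped below):

M = create_sparsity_pattern(dh)
assemble_mass_matrix(cellvalues_v, cellvalues_p, M, dh)
apply!(M, zeros(ndofs(dh)), ch)  # zero out the Dirichlet rows/columns of the mass matrix

function eliminated_jac!(J, u, p, t)  # hypothetical; passed via ODEFunction(...; jac = ...)
    J .= p.K
    apply!(J, p.ch)  # eliminate the Dirichlet dofs from the Jacobian, too
end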


codecov bot commented May 17, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 93.30%. Comparing base (39776b3) to head (d383def).

Additional details and impacted files
@@            Coverage Diff             @@
##           mk/nsgmsh     #917   +/-   ##
==========================================
  Coverage      93.30%   93.30%           
==========================================
  Files             36       36           
  Lines           5257     5257           
==========================================
  Hits            4905     4905           
  Misses           352      352           


@termi-official
Member Author

Dump.

Dirichlet rates

"""
    compute_dirichlet_rates!(dudt::AbstractVector, ch::ConstraintHandler, time::Real)
"""
function compute_dirichlet_rates!(dudt::AbstractVector, ch::ConstraintHandler, time::Real)
    @assert ch.closed[]
    for (i, dbc) in pairs(ch.dbcs)
        # If the BC function accepts two arguments, i.e. f(x, t), we differentiate it
        # with respect to time via ForwardDiff. Otherwise the prescribed value is
        # constant in time and the corresponding rates are zero.
        if hasmethod(dbc.f, Tuple{Any,Any})
            # Function barrier
            _compute_dirichlet_rates!(dudt, dbc.f, dbc.faces, dbc.local_face_dofs, dbc.local_face_dofs_offset,
            dbc.components, ch.dh, ch.bcvalues[i], ch.dofmapping, ch.dofcoefficients, time)
        else # Time independent function
            dudt[dbc.components] .= 0.0
        end
    end
end

# for vertices, faces and edges
function _compute_dirichlet_rates!(dudt::Vector{T}, f::Function, boundary_entities::Set{<:BoundaryIndex}, local_face_dofs::Vector{Int}, local_face_dofs_offset::Vector{Int},
    components::Vector{Int}, dh::AbstractDofHandler, boundaryvalues::BCValues,
    dofmapping::Dict{Int,Int}, dofcoefficients::Vector{Union{Nothing,DofCoefficients{T}}}, time::Real) where {T}

    cc = CellCache(dh, UpdateFlags(; nodes=false, coords=true, dofs=true))
    for (cellidx, entityidx) in boundary_entities
        reinit!(cc, cellidx)

        # no need to reinit!, enough to update current_entity since we only need geometric shape functions M
        boundaryvalues.current_entity[] = entityidx

        # local dof-range for this face
        r = local_face_dofs_offset[entityidx]:(local_face_dofs_offset[entityidx+1]-1)
        counter = 1
        for location in 1:getnquadpoints(boundaryvalues)
            x = spatial_coordinate(boundaryvalues, location, cc.coords)
            bc_rate = if length(components) > 1
                Vec(ntuple(i->ForwardDiff.derivative(t->f(x, t)[i], time), length(components))) # Whelp
            else
                ForwardDiff.derivative(t->f(x, t), time)
            end
            @assert length(bc_rate) == length(components)

            for i in 1:length(components)
                # find the global dof
                globaldof = cc.dofs[local_face_dofs[r[counter]]]
                counter += 1

                dbc_index = dofmapping[globaldof]
                # Only DBC dofs are currently update!-able so don't modify the rate
                # for affine constraints
                if dofcoefficients[dbc_index] === nothing
                    dudt[globaldof] = bc_rate[i]
                    @debug println("prescribing rate $(bc_rate[i]) on global dof $(globaldof)")
                end
            end
        end
    end
end

# for nodes
function _compute_dirichlet_rates!(dudt::Vector{T}, f::Function, ::Set{Int}, nodeidxs::Vector{Int}, globaldofs::Vector{Int},
                  components::Vector{Int}, dh::AbstractDofHandler, facevalues::BCValues,
                  dofmapping::Dict{Int,Int}, dofcoefficients::Vector{Union{Nothing,DofCoefficients{T}}}, time::Real) where T
    counter = 1
    for nodenumber in nodeidxs
        x = get_node_coordinate(get_grid(dh), nodenumber)
        bc_rate = if length(components) > 1
            Vec(ntuple(i->ForwardDiff.derivative(t->f(x, t)[i], time), length(components))) # Whelp
        else
            ForwardDiff.derivative(t->f(x, t), time)
        end
        @assert length(bc_rate) == length(components)
        for v in bc_rate
            globaldof = globaldofs[counter]
            counter += 1
            dbc_index = dofmapping[globaldof]
            # Only DBC dofs are currently update!-able so don't modify inhomogeneities
            # for affine constraints
            if dofcoefficients[dbc_index] === nothing
                dudt[globaldof] = v
                @debug println("prescribing rate $(v) on global dof $(globaldof)")
            end
        end
    end
end
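
For context, a minimal call sequence for the helper above (a sketch, assuming a closed ConstraintHandler `ch` over a DofHandler `dh`):

dudt = zeros(ndofs(dh))
compute_dirichlet_rates!(dudt, ch, 0.5)  # fills the rates at the constrained dofs; free dofs stay zero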
# First we load Ferrite and some other packages we need
using Ferrite, SparseArrays, BlockArrays, LinearAlgebra, UnPack, LinearSolve
# Since we do not need the complete DifferentialEquations suite, we just load the required ODE infrastructure, which can also handle
# DAEs in mass matrix form.
using OrdinaryDiffEq

# We start off by defining our only material parameter.
ν = 1.0/1000.0; # dynamic viscosity

# x_cells = round(Int, 55/3)                  #hide
# y_cells = round(Int, 41/3)                  #hide
x_cells = round(Int, 1)                  #hide
y_cells = round(Int, 1)                  #hide
grid = generate_grid(Quadrilateral, (x_cells, y_cells), Vec{2}((0.0, 0.0)), Vec{2}((0.55, 0.41)));   #hide

# ### Function Space
# To ensure stability we utilize the Taylor-Hood element pair Q2-Q1.
# We have to utilize the same quadrature rule for the pressure as for the velocity, because in the weak form the
# linear pressure term is tested against a quadratic function.
ip_v = Lagrange{RefQuadrilateral, 2}()^2
qr = QuadratureRule{RefQuadrilateral}(4)
cellvalues_v = CellValues(qr, ip_v);

ip_p = Lagrange{RefQuadrilateral, 1}()
cellvalues_p = CellValues(qr, ip_p);

dh = DofHandler(grid)
add!(dh, :v, ip_v)
add!(dh, :p, ip_p)
close!(dh);

# ### Boundary Conditions
# As in the DFG benchmark we apply no-slip conditions to the top, bottom and
# cylinder boundary. The no-slip condition states that the velocity of the
# fluid on this portion of the boundary is fixed to be zero.
ch = ConstraintHandler(dh);
                                   #src
noslip_face_names = ["top", "bottom"]                                  #hide
∂Ω_noslip = union(getfaceset.((grid, ), noslip_face_names)...);
noslip_bc = Dirichlet(:v, ∂Ω_noslip, (x, t) -> Vec((0.0,0.0)), [1,2])
add!(ch, noslip_bc);

# The left boundary has a parabolic inflow with peak velocity of 1.5. This
# ensures that for the given geometry the Reynolds number is 100, which
# is already enough to obtain some simple vortex streets. By increasing the
# velocity further we can obtain stronger vortices - which may need additional
# refinement of the grid. Note that we have to smoothly ramp up the velocity,
# because the Dirichlet constraints cannot be properly enforced yet, causing
# convergence issues.
∂Ω_inflow = getfaceset(grid, "left");

vᵢₙ(t) = 1.5/(1+exp(-2.0*(t-2.0)))  # inflow velocity
parabolic_inflow_profile(x,t) = [4*vᵢₙ(t)*x[2]*(0.41-x[2])/0.41^2, 0.0] # TODO vec
inflow_bc = Dirichlet(:v, ∂Ω_inflow, parabolic_inflow_profile, [1,2])
add!(ch, inflow_bc);

# The outflow boundary condition was applied to the right side when the weak
# form was derived, by setting the corresponding boundary integral to zero.
# This is also called the do-nothing condition. Other outflow conditions are
# possible as well.
∂Ω_free = getfaceset(grid, "right");

∂Ωref = Set([VertexIndex(1,1)])
pressure_bc = Dirichlet(:p, ∂Ωref, (x,t) -> 0.0)
add!(ch, pressure_bc);

close!(ch)
update!(ch, 0.0);

# ### Linear System Assembly
# Next we describe how the block mass matrix and the Stokes matrix are assembled.
#
# For the block mass matrix $M$ we remember that only the first equation had a time derivative
# and that the block mass matrix corresponds to the term arising from discretizing the time
# derivatives. Hence, only the upper left block has non-zero components.
function assemble_mass_matrix(cellvalues_v::CellValues, cellvalues_p::CellValues, M::SparseMatrixCSC, dh::DofHandler)
    ## Allocate a buffer for the local matrix and some helpers, together with the assembler.
    n_basefuncs_v = getnbasefunctions(cellvalues_v)
    n_basefuncs_p = getnbasefunctions(cellvalues_p)
    n_basefuncs = n_basefuncs_v + n_basefuncs_p
    v▄, p▄ = 1, 2
    Mₑ = PseudoBlockArray(zeros(n_basefuncs, n_basefuncs), [n_basefuncs_v, n_basefuncs_p], [n_basefuncs_v, n_basefuncs_p])

    ## It follows the assembly loop as explained in the basic tutorials.
    mass_assembler = start_assemble(M)
    for cell in CellIterator(dh)
        fill!(Mₑ, 0)
        Ferrite.reinit!(cellvalues_v, cell)

        for q_point in 1:getnquadpoints(cellvalues_v)
            dΩ = getdetJdV(cellvalues_v, q_point)
            ## Remember that we assemble a vector mass term, hence the dot product.
            ## There is only one time derivative on the left hand side, so only one mass block is non-zero.
            for i in 1:n_basefuncs_v
                φᵢ = shape_value(cellvalues_v, q_point, i)
                for j in 1:n_basefuncs_v
                    φⱼ = shape_value(cellvalues_v, q_point, j)
                    Mₑ[BlockIndex((v▄, v▄), (i, j))] += φᵢ ⋅ φⱼ * dΩ
                end
            end
        end
        assemble!(mass_assembler, celldofs(cell), Mₑ)
    end

    return M
end;

# Next we discuss the assembly of the Stokes matrix appearing on the right hand side.
# Remember that we use the same function spaces for trial and test, hence the
# matrix has the following block form
# ```math
#   K = \begin{bmatrix}
#       A & B^{\textrm{T}} \\
#       B & 0
#   \end{bmatrix}
# ```
# which is also called a saddle point matrix. These problems are known to have
# a non-trivial kernel, which is a reflection of the strong form, as discussed
# in the theory portion of this example.
function assemble_stokes_matrix(cellvalues_v::CellValues, cellvalues_p::CellValues, ν, K::SparseMatrixCSC, dh::DofHandler)
    ## Again, some buffers and helpers
    n_basefuncs_v = getnbasefunctions(cellvalues_v)
    n_basefuncs_p = getnbasefunctions(cellvalues_p)
    n_basefuncs = n_basefuncs_v + n_basefuncs_p
    v▄, p▄ = 1, 2
    Kₑ = PseudoBlockArray(zeros(n_basefuncs, n_basefuncs), [n_basefuncs_v, n_basefuncs_p], [n_basefuncs_v, n_basefuncs_p])

    ## Assembly loop
    stiffness_assembler = start_assemble(K)
    for cell in CellIterator(dh)
        ## Don't forget to initialize everything
        fill!(Kₑ, 0)

        Ferrite.reinit!(cellvalues_v, cell)
        Ferrite.reinit!(cellvalues_p, cell)

        for q_point in 1:getnquadpoints(cellvalues_v)
            dΩ = getdetJdV(cellvalues_v, q_point)
            # Assemble local viscosity block of $A$
            #+
            for i in 1:n_basefuncs_v
                ∇φᵢ = shape_gradient(cellvalues_v, q_point, i)
                for j in 1:n_basefuncs_v
                    ∇φⱼ = shape_gradient(cellvalues_v, q_point, j)
                    Kₑ[BlockIndex((v▄, v▄), (i, j))] -= ν * ∇φᵢ ⊡ ∇φⱼ * dΩ
                end
            end
            # Assemble local pressure and incompressibility blocks of $B^{\textrm{T}}$ and $B$.
            #+
            for j in 1:n_basefuncs_p
                ψ = shape_value(cellvalues_p, q_point, j)
                for i in 1:n_basefuncs_v
                    divφ = shape_divergence(cellvalues_v, q_point, i)
                    Kₑ[BlockIndex((v▄, p▄), (i, j))] += (divφ * ψ) * dΩ
                    Kₑ[BlockIndex((p▄, v▄), (j, i))] += (ψ * divφ) * dΩ
                end
            end
        end

        ## Assemble `Kₑ` into the Stokes matrix `K`.
        assemble!(stiffness_assembler, celldofs(cell), Kₑ)
    end
    return K
end;

# ### Solution of the semi-discretized system via DifferentialEquations.jl
# First we assemble the linear portions for efficiency. These matrices are
# assumed to be constant over time.
T = 1.0
Δt₀ = 0.1
Δt_save = 0.025

M = create_sparsity_pattern(dh);
M = assemble_mass_matrix(cellvalues_v, cellvalues_p, M, dh);

K = create_sparsity_pattern(dh);
K = assemble_stokes_matrix(cellvalues_v, cellvalues_p, ν, K, dh);
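
# Putting the pieces together, the semi-discrete problem is a linear DAE in
# mass-matrix form (a sketch in the notation used above):
# ```math
#   M \frac{\mathrm{d}u}{\mathrm{d}t} = K u
# ```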

u₀ = zeros(ndofs(dh))
apply!(u₀, ch);
jac_sparsity = create_sparsity_pattern(dh);

struct RHSparams
    K::SparseMatrixCSC
    ch::ConstraintHandler
    dh::DofHandler
    cellvalues_v::CellValues
    u::Vector
end
p = RHSparams(K, ch, dh, cellvalues_v, copy(u₀))

function ferrite_limiter!(u, integrator, p, t)
    update!(p.ch, t)
    apply!(u, p.ch)
end

function stokes!(du,u_uc,p::RHSparams,t)
    @show "rhs",t 
    @unpack K,ch,dh,cellvalues_v,u = p

    u .= u_uc
    update!(ch, t)
    apply!(u, ch)

    mul!(du, K, u) # du .= K * u

    # Ferrite.compute_dirichlet_rates!(du, ch, t)
    # apply_zero!(du, ch)
end;

function stokes_jac!(J,u,p,t)
    @show "jac", t

    @unpack K = p

    J .= K

    # apply!(J, ch)
end;

@show M .- Δt₀*K
@show det(M .- Δt₀*K)

# u = copy(u₀)
# un = copy(u₀)
# du = copy(u₀)
# pvd = paraview_collection("stokes-debug.pvd");
# for t in 0.0:Δt₀:T
#     # Setup linear system
#     A = M .- Δt₀*K # K = J for linear problem
#     # apply!(A, ch) # This should happen here
#     # Setup residual
#     r = M*un
#     # Inner solve
#     u .= A \ r
#     @show u
#     # Rate update
#     du .= K*u #stokes!(du,u,p,t)
#     @show res = norm(M*(u - un)/Δt₀ - du)
#     # Update solution
#     un .= u
#     # Write back
#     vtk_grid("stokes-debug-$t.vtu", dh) do vtk
#         vtk_point_data(vtk,dh,u)
#         vtk_save(vtk)
#         pvd[t] = vtk
#     end
# end
# vtk_save(pvd);

rhs = ODEFunction(stokes!, mass_matrix=M; jac=stokes_jac!, jac_prototype=jac_sparsity)
# rhs = ODEFunction(stokes!, mass_matrix=OrdinaryDiffEq.I; jac=stokes_jac!, jac_prototype=jac_sparsity)
# rhs = ODEFunction(stokes!, mass_matrix=M; jac_prototype=jac_sparsity)
problem = ODEProblem(rhs, u₀, (0.0,T), p);

# Error norm over the free dofs only, so that the prescribed Dirichlet dofs do
# not pollute the error control.
struct FreeDofErrorNorm
    ch::ConstraintHandler
end
(fe_norm::FreeDofErrorNorm)(u::Union{AbstractFloat, Complex}, t) = DiffEqBase.ODE_DEFAULT_NORM(u, t)
(fe_norm::FreeDofErrorNorm)(u::AbstractArray, t) = DiffEqBase.ODE_DEFAULT_NORM(u[fe_norm.ch.free_dofs], t)


# Linear "solver" that eliminates the Dirichlet dofs from the assembled system
# and then solves with the backslash operator.
mutable struct FerriteBackslash
end
OrdinaryDiffEq.NonlinearSolve.LinearSolve.needs_concrete_A(::FerriteBackslash) = true
function SciMLBase.solve!(cache::OrdinaryDiffEq.NonlinearSolve.LinearSolve.LinearCache, alg::FerriteBackslash; kwargs...)
    @unpack A, b, u = cache
    if cache.verbose
        println("solving Ax=b")
    end
    apply_zero!(A, b, p.ch)  # note: uses the constraint handler from the global `p`
    u .= A \ b
    @show u
    return u
end
# timestepper = ImplicitEuler(linsolve = FerriteBackslash(), step_limiter! = ferrite_limiter!)
# timestepper = ImplicitEuler(linsolve = FerriteBackslash(), nlsolve=NonlinearSolveAlg(OrdinaryDiffEq.NonlinearSolve.NewtonRaphson(linsolve=FerriteBackslash())), step_limiter! = ferrite_limiter!)
timestepper = ImplicitEuler(linsolve = UMFPACKFactorization(reuse_symbolic=false),step_limiter! = ferrite_limiter!)

integrator = init(
    problem, timestepper, initializealg=NoInit(), dt=Δt₀,
    adaptive=false, abstol=1e-5, reltol=1e-4,
    progress=true, progress_steps=1,
    saveat=Δt_save, verbose=true,
    internalnorm=FreeDofErrorNorm(ch)
);

pvd = paraview_collection("stokes.pvd");
integrator = TimeChoiceIterator(integrator, 0.0:Δt_save:T)
for (u,t) in integrator
    # We ignored the Dirichlet constraints in the solution vector up to now,
    # so we have to bring them back now.
    #+
    vtk_grid("stokes-$t.vtu", dh) do vtk
        vtk_point_data(vtk,dh,u)
        vtk_save(vtk)
        pvd[t] = vtk
    end
end
vtk_save(pvd);
# First we load Ferrite and some other packages we need
using Ferrite, SparseArrays, BlockArrays, LinearAlgebra, UnPack, LinearSolve
# Since we do not need the complete DifferentialEquations suite, we just load the required ODE infrastructure, which can also handle
# DAEs in mass matrix form.
using OrdinaryDiffEq

####################################################################################################################################################################################
######################################################################### Ferrite Stuff To Get Matrices ############################################################################
####################################################################################################################################################################################

# x_cells = round(Int, 55/3)                  #hide
# y_cells = round(Int, 41/3)                  #hide
x_cells = round(Int, 5)                  #hide
y_cells = round(Int, 5)                  #hide
grid = generate_grid(Quadrilateral, (x_cells, y_cells), Vec{2}((0.0, 0.0)), Vec{2}((0.55, 0.41)));   #hide

# ### Function Space
# For this reduced heat-equation debug problem a scalar linear Lagrange
# approximation (Q1) with a matching quadrature rule is sufficient.
ip_v = Lagrange{RefQuadrilateral, 1}()
qr = QuadratureRule{RefQuadrilateral}(2)
cellvalues_v = CellValues(qr, ip_v);

dh = DofHandler(grid)
add!(dh, :v, ip_v)
close!(dh);

# ### Boundary Conditions
ch = ConstraintHandler(dh);

∂Ω_inflow = getfaceset(grid, "left");
uᵢₙ(x,t) = 1.5/(1+exp(-2.0*(t-2.0)))  
inflow_bc = Dirichlet(:v, ∂Ω_inflow, uᵢₙ)
add!(ch, inflow_bc);
close!(ch)
update!(ch, 0.0);

# ### Linear System Assembly
# Next we assemble the mass matrix and the heat (stiffness) matrix; both are
# constant in time.
function assemble_mass_matrix(cellvalues_v::CellValues, M::SparseMatrixCSC, dh::DofHandler)
    ## Allocate a buffer for the local matrix and some helpers, together with the assembler.
    n_basefuncs_v = getnbasefunctions(cellvalues_v)
    Mₑ = zeros(n_basefuncs_v, n_basefuncs_v)

    ## It follows the assembly loop as explained in the basic tutorials.
    mass_assembler = start_assemble(M)
    for cell in CellIterator(dh)
        fill!(Mₑ, 0)
        Ferrite.reinit!(cellvalues_v, cell)

        for q_point in 1:getnquadpoints(cellvalues_v)
            dΩ = getdetJdV(cellvalues_v, q_point)
            ## This is a scalar problem, so the mass term is a plain product of the shape functions.
            for i in 1:n_basefuncs_v
                φᵢ = shape_value(cellvalues_v, q_point, i)
                for j in 1:n_basefuncs_v
                    φⱼ = shape_value(cellvalues_v, q_point, j)
                    Mₑ[i,j] += φᵢ * φⱼ * dΩ
                end
            end
        end
        assemble!(mass_assembler, celldofs(cell), Mₑ)
    end

    return M
end;

function assemble_heat_matrix(cellvalues_v::CellValues, K::SparseMatrixCSC, dh::DofHandler)
    ## Again, some buffers and helpers
    n_basefuncs_v = getnbasefunctions(cellvalues_v)
    Kₑ = zeros(n_basefuncs_v, n_basefuncs_v)

    ## Assembly loop
    stiffness_assembler = start_assemble(K)
    for cell in CellIterator(dh)
        ## Don't forget to initialize everything
        fill!(Kₑ, 0)

        Ferrite.reinit!(cellvalues_v, cell)

        for q_point in 1:getnquadpoints(cellvalues_v)
            dΩ = getdetJdV(cellvalues_v, q_point)
            # Assemble the local (negative) Laplacian
            #+
            for i in 1:n_basefuncs_v
                ∇φᵢ = shape_gradient(cellvalues_v, q_point, i)
                for j in 1:n_basefuncs_v
                    ∇φⱼ = shape_gradient(cellvalues_v, q_point, j)
                    Kₑ[i,j] -= ∇φᵢ ⋅ ∇φⱼ * dΩ
                end
            end
        end

        ## Assemble `Kₑ` into the heat matrix `K`.
        assemble!(stiffness_assembler, celldofs(cell), Kₑ)
    end
    return K
end;

# ### Solution of the semi-discretized system via DifferentialEquations.jl
# First we assemble the linear portions for efficiency. These matrices are
# assumed to be constant over time.
T = 1.0
Δt₀ = 0.1
Δt_save = 0.025

M = create_sparsity_pattern(dh);
M = assemble_mass_matrix(cellvalues_v, M, dh);

K = create_sparsity_pattern(dh);
K = assemble_heat_matrix(cellvalues_v, K, dh);

u₀ = zeros(ndofs(dh))
apply!(u₀, ch);
jac_sparsity = create_sparsity_pattern(dh);


####################################################################################################################################################################################
################################################################################# Unrolled Solver ##################################################################################
####################################################################################################################################################################################

# @show Matrix(M .- Δt₀*K)
@show cond(Matrix(M .- Δt₀*K))

u = copy(u₀)
un = copy(u₀)
du = copy(u₀)
pvd = paraview_collection("heat-debug.pvd");
for t in 0.0:Δt₀:T
    # Setup linear system
    A = M .- Δt₀*K
    # apply!(A, ch)
    # Setup residual
    r = M*un
    # Inner solve
    u .= A \ r
    # @show u
    # Rate update
    du .= K*u #heat!(du,u,p,t)
    # @show res = norm(M*(u - un)/Δt₀ - du)
    # Update solution
    un .= u
    # Write back
    vtk_grid("heat-debug-$t.vtu", dh) do vtk
        vtk_point_data(vtk,dh,u)
        vtk_save(vtk)
        pvd[t] = vtk
    end
end
vtk_save(pvd);

####################################################################################################################################################################################
######################################################################## OrdinaryDiffEq Utils ##################################################################################
####################################################################################################################################################################################

struct RHSparams
    K::SparseMatrixCSC
    ch::ConstraintHandler
    dh::DofHandler
    cellvalues_v::CellValues
    u::Vector
end
p = RHSparams(K, ch, dh, cellvalues_v, copy(u₀))

struct FreeDofErrorNorm
    ch::ConstraintHandler
end
(fe_norm::FreeDofErrorNorm)(u::Union{AbstractFloat, Complex}, t) = DiffEqBase.ODE_DEFAULT_NORM(u, t)
(fe_norm::FreeDofErrorNorm)(u::AbstractArray, t) = DiffEqBase.ODE_DEFAULT_NORM(u[fe_norm.ch.free_dofs], t)

mutable struct FerriteBackslash
end
OrdinaryDiffEq.NonlinearSolve.LinearSolve.needs_concrete_A(::FerriteBackslash) = true
function SciMLBase.solve!(cache::OrdinaryDiffEq.NonlinearSolve.LinearSolve.LinearCache, alg::FerriteBackslash; kwargs...)
    @unpack A,b,u = cache
    println("solving Ax=b")
    apply_zero!(A,b,p.ch)  # note: uses the constraint handler from the global `p`
    u .= A \ b
    @show u
    return u
end

function ferrite_limiter!(u, integrator, p, t)
    update!(p.ch, t)
    apply!(u, p.ch)
end

####################################################################################################################################################################################
######################################################################## OrdinaryDiffEq with Mass ##################################################################################
####################################################################################################################################################################################

function heat!(du,u_uc,p::RHSparams,t)
    @show "rhs",t 
    @unpack K,ch,dh,cellvalues_v,u = p

    u .= u_uc
    update!(ch, t)
    apply!(u, ch)

    mul!(du, K, u) # du .= K * u

    Ferrite.compute_dirichlet_rates!(du, ch, t)
    # apply_zero!(du, ch)
end;

function heat_jac!(J,u,p,t)
    @show "jac", t

    @unpack K = p

    J .= K

    # apply!(J, ch)
end;

rhs = ODEFunction(heat!, mass_matrix=M; jac=heat_jac!, jac_prototype=jac_sparsity)
# rhs = ODEFunction(heat!, mass_matrix=OrdinaryDiffEq.I; jac=heat_jac!, jac_prototype=jac_sparsity)
# rhs = ODEFunction(heat!, mass_matrix=M; jac_prototype=jac_sparsity)
problem = ODEProblem(rhs, u₀, (0.0,T), p);

# timestepper = ImplicitEuler(linsolve = FerriteBackslash(), step_limiter! = ferrite_limiter!)
timestepper = ImplicitEuler(linsolve = FerriteBackslash(), nlsolve=NonlinearSolveAlg(OrdinaryDiffEq.NonlinearSolve.NewtonRaphson(autodiff=OrdinaryDiffEq.AutoFiniteDiff(), linsolve=FerriteBackslash())), step_limiter! = ferrite_limiter!) # Errors during AD, but jac is given
# timestepper = ImplicitEuler(step_limiter! = ferrite_limiter!)

# integrator = init(
#     problem, timestepper, initializealg=NoInit(), dt=Δt₀,
#     adaptive=false, abstol=1e-5, reltol=1e-4,
#     progress=true, progress_steps=1,
#     saveat=Δt_save, verbose=true,
#     internalnorm=FreeDofErrorNorm(ch)
# );
# @show integrator.u
# pvd = paraview_collection("heat.pvd");
# integrator = TimeChoiceIterator(integrator, 0.0:Δt_save:T)
# for (u,t) in integrator
#     # We ignored the Dirichlet constraints in the solution vector up to now,
#     # so we have to bring them back now.
#     #+
#     vtk_grid("heat-$t.vtu", dh) do vtk
#         vtk_point_data(vtk,dh,u)
#         vtk_save(vtk)
#         pvd[t] = vtk
#     end
# end
# vtk_save(pvd);

solve(
    problem, timestepper, initializealg=NoInit(), dt=Δt₀,
    adaptive=true, abstol=1e-5, reltol=1e-4,
    progress=true, progress_steps=1,
    verbose=true,
    internalnorm=FreeDofErrorNorm(ch)
);

@termi-official
Member Author

Thanks for the fix Maxi!

@termi-official termi-official marked this pull request as ready for review May 17, 2024 16:31
Member

@koehlerson koehlerson left a comment


# At the time of writing this [no Hessenberg index 2 initialization is implemented](https://github.com/SciML/OrdinaryDiffEq.jl/issues/1019).
here is something with the indentation off, but other than that LGTM

docs/src/literate-tutorials/ns_vs_diffeq.jl: 4 outdated review threads (resolved)
termi-official and others added 4 commits May 17, 2024 19:02
Co-authored-by: Maximilian Köhler <maximilian.koehler@ruhr-uni-bochum.de>
Co-authored-by: Maximilian Köhler <maximilian.koehler@ruhr-uni-bochum.de>
@termi-official
Member Author

termi-official commented May 17, 2024

Should be good to go if CI does not scream at me again.

Wait.

@koehlerson koehlerson mentioned this pull request May 21, 2024
@termi-official
Member Author

I am waiting for another round of fixes in OrdinaryDiffEq.jl, because adaptivity and interpolations are still broken (I just commented both out for now). Once these land in one of the next releases, we can update the Manifest again and merge.

@termi-official
Member Author

Okay this one is ready to merge!

@koehlerson koehlerson merged commit 2051ff5 into mk/nsgmsh May 27, 2024
8 of 9 checks passed
@koehlerson koehlerson deleted the do/nsgmsh-fix branch May 27, 2024 06:06
@fredrikekre
Member

For my own interest, could you comment on what compute_dirichlet_rates is (was? it doesn't seem included in the end) and what changes upstream were required for this? What is the limiter for example?

@termi-official
Member Author

> For my own interest, could you comment on what compute_dirichlet_rates is (was? it doesn't seem included in the end) and what changes upstream were required for this?

Most solvers in DifferentialEquations.jl want a function of the form $f(\mathrm{d}_t u(t), u(t), p, t)$ as input. This function computes the rate $\mathrm{d}_t u$ given the parameters $p$ and the current solution $u(t)$ at time $t$. The idea for providing broad support was to compute the rates for the constrained dofs and pass them into the function directly. However, this turned out to be trickier than anticipated because of the influence of the mass matrix. Since I could not find a way to make it work, I gave up without spending a significant amount of time on figuring it out.
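
To make the mass-matrix issue concrete, here is a sketch of the system in partitioned form, with $f$ and $c$ indexing the free and constrained dofs and $r$ denoting the assembled right-hand side:

```math
\begin{bmatrix} M_{ff} & M_{fc} \\ M_{cf} & M_{cc} \end{bmatrix}
\begin{bmatrix} \mathrm{d}_t u_f \\ \mathrm{d}_t u_c \end{bmatrix}
=
\begin{bmatrix} r_f(u, t) \\ r_c(u, t) \end{bmatrix}
```

Even when the constrained rates $\mathrm{d}_t u_c$ are known from the boundary data, the coupling block $M_{fc}$ carries them into the equations for the free dofs, so simply overwriting entries of the rate vector is not enough.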

> What is the limiter for example?

The limiter is actually a concept from CFD, used e.g. when dealing with shock waves. Basically you apply a problem-specific function limiter!(u, integrator, p, t) which modifies u in a way that limits its slope spatially. There are usually two types of limiters, stage and step limiters: step limiters are applied at the end of each time step, while stage limiters are applied at the end of each (Runge-Kutta) stage. We abuse this API to enforce our Dirichlet conditions in all stages and steps. In addition, to make the inner solvers work, we also eliminate the Dirichlet dofs from the inner linear system by passing the eliminated Jacobian manually. I think we are not able to eliminate the RHS correctly, but the limiters take care of this after the solve. Does that explain it?
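
For illustration, a sketch of how the two hooks are wired up (keyword names as in recent OrdinaryDiffEq releases; `ImplicitEuler` only exposes `step_limiter!`, while some explicit Runge-Kutta methods such as `SSPRK33` also take a `stage_limiter!`):

enforce_dirichlet!(u, integrator, p, t) = (update!(p.ch, t); apply!(u, p.ch))

timestepper = ImplicitEuler(step_limiter! = enforce_dirichlet!)
# timestepper = SSPRK33(stage_limiter! = enforce_dirichlet!, step_limiter! = enforce_dirichlet!)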
