How to mimic a Discrete Bayes Net in Turing.jl?

Hi,

I’ve been using Turing.jl for various purposes, and recently I thought it might be easy to mimic inference in a basic discrete Bayesian network, like I can do with BayesNets.jl. For example:

using BayesNets, DataFrames

bn = DiscreteBayesNet()
push!(
    bn, 
    DiscreteCPD(
        :a, # our root node
        [0.6, 0.4], # prior
    )
)
push!(
    bn,
    DiscreteCPD(
        :b,
        [:a], # :a is :b's parent node
        [2], # :a has 2 possible states
        [
            Categorical([0.9, 0.1]),
            Categorical([0.1, 0.9]),
        ],
    )
)

infer_df = DataFrame(infer(
    bn,
    :a,
    evidence = Assignment(
        :b => 1, # observing a specific state of :b, infer :a's new distribution
    ),
))

This gives the updated posterior probabilities for :a:

2×2 DataFrame
 Row │ a      potential 
     │ Int64  Float64   
─────┼──────────────────
   1 │     1  0.931034
   2 │     2  0.0689655
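
(For reference, this matches Bayes’ rule done by hand: P(a = 1 | b = 1) = 0.6 × 0.9 / (0.6 × 0.9 + 0.4 × 0.1) = 0.54 / 0.58 ≈ 0.931.)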

I’ve been toying around with a Turing model for a while now, trying to combine Dirichlet and Categorical distributions and so on, but I can’t figure out how to infer the correct posterior probabilities from some evidence (here, a single observation of :b). I’ve been able to get the probabilities to move in the right direction, but not to approximate the exact inference result through sampling with Turing.
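
To show the direction I’ve been going, here is a minimal sketch of the kind of model I mean (reconstructed from memory, not my exact code; the model name and sampler settings are just placeholders, and I’m not sure PG is even the right sampler):

using Turing, Statistics

@model function discrete_bn(b_obs)
    # latent root node :a, with the same prior as the BayesNets example
    a ~ Categorical([0.6, 0.4])
    # CPD of :b given :a, matching the two Categorical rows above
    p_b = a == 1 ? [0.9, 0.1] : [0.1, 0.9]
    # condition on the observed state of :b
    b_obs ~ Categorical(p_b)
end

# :a is a discrete latent, so gradient-based samplers are out; particle Gibbs can handle it
chain = sample(discrete_bn(1), PG(20), 10_000)

# fraction of samples with a == 1; I’d expect this to approach the 0.931 above
mean(chain[:a] .== 1)

My (possibly wrong) understanding is that HMC/NUTS can’t sample the discrete latent :a, which is why I reached for PG; marginalizing :a out by hand would work, but that seems to defeat the point of writing the network in Turing.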

I’m probably going about this the wrong way; I’m not an expert in probabilistic programming. On the other hand, Turing seems flexible enough that it should be possible. Would you have an example of how to do something like this in Turing?

Welcome to the forum! This is an interesting question; I’ve been thinking about something similar. Could you share your attempts?