Although the project is primarily for Integro-Difference equation models, the file filter_and_smoother_functions provides functions applicable to any discrete dynamical system. This page provides a very simple example to test the Kalman filter, the information filter, and fitting with the corresponding likelihoods.
The error terms are mutually independent, with variances \(\sigma^{2}_\epsilon = 0.02\) and \(\sigma^{2}_{\eta} = 0.03\), and the \(\vec z_t\) are transformed linear ‘observations’ of \(\vec\alpha_t\).
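In other words, the system is assumed here to have the standard linear-Gaussian state-space form (written out from the quantities used in the code below, where \(M\) is the propagation matrix and \(\Phi\) the observation matrix):

\[
\vec\alpha_{t+1} = M\vec\alpha_t + \vec\eta_t, \qquad \vec\eta_t \sim \mathcal{N}(\vec 0,\ \sigma^2_\eta I),
\]
\[
\vec z_t = \Phi\vec\alpha_t + \vec\epsilon_t, \qquad \vec\epsilon_t \sim \mathcal{N}(\vec 0,\ \sigma^2_\epsilon I).
\]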
The process, \(\vec\alpha_t\), simply spins in a circle with some noise. Let's simulate from this system:
Code
import jax
import jax.numpy as jnp
jax.config.update("jax_enable_x64", False)
import jaxidem.filters as filt
import matplotlib.pyplot as plt
import plotly.express as px
import plotly.io as pio
import plotly.graph_objs as go
import pandas as pd
import pickle
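The simulation code itself is not reproduced here, so below is a minimal sketch of how such a system could be generated. The names (M, PHI, sigma2_eps, sigma2_eta, T, zs, zs_tree, alphas_df) match those used in the filtering code that follows, but the particular rotation, observation matrix, and number of time steps are illustrative assumptions.

Code

key = jax.random.PRNGKey(0)
T = 100  # number of time steps (an assumption)

theta = 0.1  # rotation per step
M = 0.99 * jnp.array([[jnp.cos(theta), -jnp.sin(theta)],
                      [jnp.sin(theta),  jnp.cos(theta)]])  # spin with slight decay

PHI = jnp.array([[1.0, 0.0],
                 [0.0, 1.0],
                 [0.5, 0.5]])  # hypothetical 2D-to-3D observation matrix

sigma2_eta = jnp.array(0.03)  # process-noise variance
sigma2_eps = jnp.array(0.02)  # observation-noise variance

def step(alpha, k):
    # One step of the process and one noisy 3D observation of it
    k1, k2 = jax.random.split(k)
    alpha = M @ alpha + jnp.sqrt(sigma2_eta) * jax.random.normal(k1, (2,))
    z = PHI @ alpha + jnp.sqrt(sigma2_eps) * jax.random.normal(k2, (3,))
    return alpha, (alpha, z)

_, (alphas, zs) = jax.lax.scan(step, jnp.array([1.0, 0.0]),
                               jax.random.split(key, T))

alphas_df = pd.DataFrame(list(alphas), columns=["x", "y"])  # true process
zs_tree = tuple(zs)  # one (3,) observation vector per time step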
The process traces an odd random spiral, and the observations are skewed, noisy views of it in 3D space.
With filtering, we aim to recover the true process from these noisy observations. We do this with two ‘forms’ of the filter, which should be equivalent.
2 Kalman filter
Code
m_0 = jnp.zeros(2)
P_0 = 100 * jnp.eye(2)

# We need to give the dimensions of both sigma2_eps and sigma2_eta.
# Since our errors are all iid, this is 0.
sigma2_eps_dim = sigma2_eps.ndim
sigma2_eta_dim = sigma2_eta.ndim

filt_results = filt.kalman_filter(m_0, P_0, M, PHI,
                                  sigma2_eta, sigma2_eps, zs_tree,
                                  sigma2_eps_dim=sigma2_eps_dim,
                                  sigma2_eta_dim=sigma2_eta_dim,
                                  forecast=0,
                                  likelihood='full')

ms_df = pd.DataFrame(list(filt_results['ms']), columns=["x", "y"])
combined_df = pd.concat([alphas_df.assign(line='True Process'),
                         ms_df.assign(line='Filtered Process Means')])

# Creating the line plot with custom colors
fig3 = px.line(combined_df, x='x', y='y', color='line',
               color_discrete_sequence=['blue', 'red'], height=200)
fig3.update_layout(xaxis=dict(scaleanchor="y", scaleratio=1, title='X Axis'),
                   yaxis=dict(scaleanchor="x", scaleratio=1, title='Y Axis'))

# Show the plot
fig3.show()
Figure 3: True process (blue) and Kalman-filtered means (red).
2.1 Pseudo-information filter?
As a test, we can also run the ordinary Kalman filter on ‘pseudo-observations’ formed by premultiplying each observation by \(\Phi^\top/\sigma^2_\epsilon\). The filtered means should coincide with those from the plain Kalman filter, though the log-likelihood is now that of the transformed data, so it need not match.
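To see why the same matrix appears twice in the call below, a quick sketch of the algebra using the observation model above:

\[
\vec i_t = \frac{\Phi^\top \vec z_t}{\sigma^2_\epsilon}
= \frac{\Phi^\top \Phi}{\sigma^2_\epsilon}\,\vec\alpha_t + \frac{\Phi^\top \vec\epsilon_t}{\sigma^2_\epsilon},
\qquad
\operatorname{Var}\!\left(\frac{\Phi^\top \vec\epsilon_t}{\sigma^2_\epsilon}\right)
= \frac{\Phi^\top (\sigma^2_\epsilon I)\,\Phi}{\sigma^4_\epsilon}
= \frac{\Phi^\top \Phi}{\sigma^2_\epsilon}.
\]

So the pseudo-observation matrix and the pseudo-noise covariance are both \(I = \Phi^\top\Phi/\sigma^2_\epsilon\), and passing sigma2_eps_dim=2 tells the filter that the ‘variance’ it is given is a full matrix.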
Code
i = PHI.T @ zs / sigma2_eps
I = PHI.T @ PHI / sigma2_eps

filt_results_alt = filt.kalman_filter(m_0, P_0, M, I,
                                      sigma2_eta, I, i,
                                      sigma2_eps_dim=2,
                                      sigma2_eta_dim=sigma2_eta_dim,
                                      forecast=0,
                                      likelihood='full')

ms_df_alt = pd.DataFrame(list(filt_results_alt['ms']), columns=["x", "y"])
combined_df = pd.concat([alphas_df.assign(line='True Process'),
                         ms_df_alt.assign(line='Filtered Process Means')])

# Creating the line plot with custom colors
fig_alt = px.line(combined_df, x='x', y='y', color='line',
                  color_discrete_sequence=['blue', 'red'], height=200)
fig_alt.update_layout(xaxis=dict(scaleanchor="y", scaleratio=1, title='X Axis'),
                      yaxis=dict(scaleanchor="x", scaleratio=1, title='Y Axis'))

# Show the plot
fig_alt.show()

print(filt_results_alt["ll"])
Similarly, we can apply the information filter, which propagates the information vector \(\vec\nu_t = Q_t \vec m_t\) and precision matrix \(Q_t = P_t^{-1}\) rather than the mean and covariance. Since the information filter function supports time-varying observation shapes, there is a little more work to do.
Code
Q_0 = 0.01 * jnp.eye(2)
nu_0 = jnp.zeros(2)
PHI_tuple = tuple([PHI for _ in range(T)])
zs_tuple = zs_tree

filt_results2 = filt.information_filter(nu_0, Q_0, M, PHI_tuple,
                                        sigma2_eta, [sigma2_eps for _ in range(T)], zs_tuple,
                                        sigma2_eps_dim=sigma2_eps_dim,
                                        sigma2_eta_dim=sigma2_eta_dim,
                                        forecast=0,
                                        likelihood='full')

# Recover the filtered means by solving Q_t m_t = nu_t at each time step
ms2 = jnp.linalg.solve(filt_results2['Qs'], filt_results2['nus'][..., None]).squeeze(-1)
ms2_df = pd.DataFrame(list(ms2), columns=["x", "y"])
combined_df_2 = pd.concat([alphas_df.assign(line='True Process'),
                           ms2_df.assign(line='Filtered Process Means')])

# Creating the line plot with custom colors
fig4 = px.line(combined_df_2, x='x', y='y', color='line',
               color_discrete_sequence=['blue', 'green'], height=200)
fig4.update_layout(xaxis=dict(scaleanchor="y", scaleratio=1, title='X Axis'),
                   yaxis=dict(scaleanchor="x", scaleratio=1, title='Y Axis'))

# Show the plot
fig4.show()
Figure 5: True process (blue) and information-filtered means (green).
We can clearly see that the green and red lines are the same: if plotted over each other, they overlap, differing only slightly due to numerical error.
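To make ‘slightly’ concrete, a quick check (assuming filt_results['ms'] and ms2 are both \(T \times 2\) arrays, as the plotting code suggests):

Code

# Maximum absolute discrepancy between the two sets of filtered means;
# with x64 disabled, expect agreement to roughly single precision.
print(jnp.max(jnp.abs(filt_results['ms'] - ms2)))
print(jnp.allclose(filt_results['ms'], ms2, atol=1e-4))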
We still need to compare the log-likelihoods that were returned:
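Assuming the information filter also exposes its log-likelihood under the 'll' key used above, this is just:

Code

print(filt_results['ll'])   # Kalman filter log-likelihood
print(filt_results2['ll'])  # information filter log-likelihood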
Let’s ‘forget’ some of the details of the original model: the propagation matrix \(M\) and the variances \(\sigma^2_\epsilon\) and \(\sigma^2_\eta\) (though we keep the assumption that everything is independent).
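A minimal sketch of what such a fit could look like, assuming we simply minimise the negative log-likelihood returned by filt.kalman_filter with plain gradient descent via jax.grad. The starting values, step size, and the parameterisation through log-variances are all illustrative assumptions, not the package's own fitting routine:

Code

def neg_ll(params):
    # Negative log-likelihood as a function of the 'forgotten' parameters
    M_hat, log_s2_eps, log_s2_eta = params
    res = filt.kalman_filter(m_0, P_0, M_hat, PHI,
                             jnp.exp(log_s2_eta), jnp.exp(log_s2_eps), zs_tree,
                             sigma2_eps_dim=0, sigma2_eta_dim=0,
                             forecast=0, likelihood='full')
    return -res['ll']

# Hypothetical starting values for M and the log-variances
params = (jnp.eye(2), jnp.log(0.1), jnp.log(0.1))

grad_fn = jax.jit(jax.grad(neg_ll))
lr = 1e-2  # illustrative step size
for _ in range(500):
    g = grad_fn(params)
    params = jax.tree_util.tree_map(lambda p, gi: p - lr * gi, params)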