Hands-on Tutorials
This article introduces a pragmatic coding pattern to model and analyze systems, illustrated by a realistic example. The example is developed in full, including the complete code listing. It is intended as a quick-start guide for system architects and engineers.
Code availability. An executable Jupyter notebook for the analysis is available on GitHub. It can be accessed via nbviewer.
Introduction
Functional modeling is an established technique for modeling engineered systems. The modeled system is viewed as a set of interconnected functions. Each function transforms inputs into outputs, with the behavior possibly dependent on parameters. Graphically, this is typically depicted by a functional block diagram.
To analyze such a functional block diagram quantitatively, an executable mathematical representation is needed. Python is well suited to this task: not just for coding and executing the mathematical model, but also for performing the analysis, plotting the results, and sharing the work via a reproducible computing environment. This article introduces a coding pattern in Python to concisely model a functional block diagram as part of a quantitative system analysis. The coding pattern is applied in the context of a simple engineering analysis: the analysis of the error sources in a signal transformation chain.
Example of a functional chain
We restrict ourselves to the case of a unidirectional functional chain without feedback loops. Feedback loops introduce dynamic effects requiring an extension of the pattern; this will be covered in a follow-up article. Consider the following illustrative functional block diagram.

It shows a transformation chain from polar coordinates (θi, A) to Cartesian coordinates (u, v) and back. The caveat is that there are some error sources in the chain, highlighted in red. For the sake of the example, assume that the analysis aims at quantifying the final angle error resulting from the various error sources. This example is inspired by the author's professional experience as an architect for magnetic sensors [1]. The underlying principles are generic.
Assume that the polar-to-Cartesian transformation suffers from offsets. These are fixed parameters for one analysis run. Assume further that the Cartesian coordinates are polluted by noise, with random samples drawn from a normal distribution with a given standard deviation (another fixed parameter). The final functional block simply re-calculates the output angle with the 2-argument arctangent (np.arctan2 in NumPy). Intuitively, if the offsets and noise are small relative to the amplitude A, the angle error θe should be small.
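To build intuition, here is a minimal single-sample sketch of the whole chain. The offset and noise values match the parameter set used later in the article; the fixed seed and the sample angle are assumptions added for repeatability.

```python
import numpy as np

A, theta_i = 1.0, 0.3            # amplitude and true angle [rad] (illustrative values)
offset_u, offset_v = 0.01, 0.0   # fixed offsets on the Cartesian outputs
rng = np.random.default_rng(0)   # seeded generator (assumption, for repeatability)
noise_u, noise_v = rng.normal(0.0, 0.01, size=2)

u = A * np.cos(theta_i) + offset_u + noise_u
v = A * np.sin(theta_i) + offset_v + noise_v
theta_o = np.arctan2(v, u)       # re-calculated angle
theta_err = theta_o - theta_i    # small because offsets and noise are << A
```

With these values the residual angle error is on the order of the offset and noise magnitudes, i.e. a few milliradians.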
Function definition in Python
For each functional block, we define a mathematical function relating the outputs to the inputs: Y= f(X, P). In general, the inputs or outputs could be multi-dimensional variables, hence the use of the uppercase symbols. Among the function inputs, we distinguish the variable signals X flowing between blocks, and the parameters P with fixed values set by the design or the environmental conditions. These parameters are constant for a given analysis run. A straightforward corresponding function definition in Python for the polar-to-Cartesian transformation with offset errors could be:
def f(A, theta_i, offset_u, offset_v):
    return (A * np.array([np.cos(theta_i), np.sin(theta_i)])
            + np.array([offset_u, offset_v]))
Note that the above definition does not distinguish between variable inputs and fixed parameters. It also returns anonymous outputs: the two outputs are simply grouped in an array without labeling. Such a coding scheme is a good starting point but lacks modularity.
For modularity, we would like to be able to chain the functional blocks regardless of their function signature. To obtain a uniform function signature and allow chaining, we introduce the following coding pattern.
- First, we group all the fixed parameters in a single common data dictionary [2] dd. This makes it easier to identify the exact set of parameters used for an analysis run. For convenience, we package this data dictionary as a Pandas series to allow quick access to the underlying elements with the shorter dot notation: dd.parameter1 (as opposed to the more verbose dd['parameter1'] for a regular dictionary in Python).
- Second, we also group the variable inputs into another series X.
- Finally, the outputs are also grouped with named labels.
With this convention, the function signature is always of the form Y = f(X, dd). This holds regardless of the number of inputs and outputs. Here is the adapted definition for polar2cart:
dd = pd.Series({
    'offset_u' : 0.01,
    'offset_v' : 0.0,
    'noise_std': 0.01,
})
def polar2cart(X, dd):
    return {
        'u': X.A * np.cos(X.theta_i) + dd.offset_u,
        'v': X.A * np.sin(X.theta_i) + dd.offset_v,
    }
The function can be invoked as follows:
polar2cart(pd.Series({'A': 1, 'theta_i': 0}), dd=dd)
At this stage, one might wonder what the advantages of the above convention are: it seems quite verbose.
The key advantage of the proposed pattern is the ease with which it can be extended to tabular data sets, known as data frames in Pandas. A data frame is a concise representation of a complete simulation run with discrete-time varying signals: one row represents one snapshot of all signal values at a given time instant.

Let’s start with the primary inputs (the stimuli of the simulation): the input angle θi and amplitude A. Let’s generate a data frame corresponding to a complete angular sweep (from 0 to 2π) for two amplitudes A=1 and A=2.
import itertools

df = pd.DataFrame(itertools.product(
        np.arange(0, 2*np.pi, np.pi/100),
        [1, 2]),
    columns=['theta_i', 'A'])
display(df)
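As a side note, `itertools.product` yields the Cartesian product of its inputs, pairing every angle with every amplitude, which is exactly the sweep structure we need. A small self-contained illustration:

```python
import itertools

import numpy as np
import pandas as pd

# itertools.product pairs every element of the first iterable with every
# element of the second:
pairs = list(itertools.product([0, 1], ['a', 'b']))
# pairs == [(0, 'a'), (0, 'b'), (1, 'a'), (1, 'b')]

# The same construction builds the full angular sweep for both amplitudes,
# roughly 200 angles x 2 amplitudes = 400 rows:
df = pd.DataFrame(itertools.product(np.arange(0, 2*np.pi, np.pi/100), [1, 2]),
                  columns=['theta_i', 'A'])
```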
We can now wrap our previous function (in Python this is called "decorating" the function) such that it operates with data frame inputs and outputs. This is done by applying the function to each row, and joining the output and input data frames. With this coding pattern, executing any function f on a tabular data set of input variables is invoked simply as: df = f(df, dd).
Here is the definition of the polar2cart function, this time wrapped to be data-frame friendly, together with its invocation.
def apply_func_df(func):
    def wrapper(df, dd):
        Y = df.apply(func, dd=dd, axis=1, result_type='expand')
        return df.drop(columns=Y.columns, errors='ignore').join(Y)
    wrapper.__wrapped__ = func
    return wrapper
@apply_func_df
def polar2cart(X, dd):
    return {
        'u': X.A * np.cos(X.theta_i) + dd.offset_u,
        'v': X.A * np.sin(X.theta_i) + dd.offset_v,
    }
df = polar2cart(df, dd)
display(df)
Operations on data frames
With all variables in a single data frame, we can exploit the extensive capabilities of the Pandas library for quick inline operations. For example, a moving average can easily be applied:
df[['u_avg', 'v_avg']]=df[['u', 'v']].rolling(2, center=True).mean()
Simple operations like that can be performed directly on the full data frame without invoking our decorator. This is because of the built-in broadcasting mechanism in Pandas, which extends arithmetic operations to whole columns by default. For example, to add noise we can define the following function bypassing our decorator:
def add_noise(df, dd):
    df[['u', 'v']] += np.random.randn(len(df), 2) * dd.noise_std
    return df

df = add_noise(df, dd)
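A note on reproducibility: because add_noise draws fresh random samples on every call, two analysis runs give slightly different results. A seeded variant (the function name and default seed below are assumptions, not part of the original chain) makes runs repeatable:

```python
import numpy as np
import pandas as pd

def add_noise_seeded(df, dd, seed=42):
    # Same noise injection as add_noise, but drawn from a generator with a
    # fixed seed, so that repeated analysis runs are identical.
    rng = np.random.default_rng(seed)
    df[['u', 'v']] += rng.normal(0.0, dd.noise_std, size=(len(df), 2))
    return df
```

Calling it twice on identical inputs returns identical noisy outputs, which simplifies debugging and regression checks.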
For complex operations (e.g. with control logic, or diverse outputs), our proposed coding pattern remains valid: define an elementary function and wrap it with our decorator.
Modeling the rest of the chain
Going back to the functional chain, we still need to model the angle calculation block. We also calculate the angle error, accounting for the phase wrapping (−π = +π in terms of angle). Using our coding pattern, we define the two extra functions calc_angle and calc_angle_err as follows:
@apply_func_df
def calc_angle(X, dd):
    return {
        'theta_o': np.arctan2(X.v, X.u)
    }

@apply_func_df
def calc_angle_err(X, dd):
    e = X.theta_o - X.theta_i
    # account for phase wrapping
    if e > np.pi:
        e -= 2*np.pi
    elif e < -np.pi:
        e += 2*np.pi
    return {
        'theta_err': e
    }
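As an aside, the row-wise branching in calc_angle_err can also be expressed as a single vectorized formula using modular arithmetic (an alternative formulation, equivalent up to the ±π boundary):

```python
import numpy as np

def wrap_angle(e):
    # Map any angle difference into [-pi, pi) in one vectorized expression.
    return (e + np.pi) % (2 * np.pi) - np.pi
```

This version works directly on whole data frame columns, so it would not even need the decorator.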
A final cosmetic operation is the conversion from radians to degrees for easier interpretation of the plots. As this is a single NumPy operation, we can directly operate on the data frame columns. We operate in place on all columns whose name contains theta.
def convert_todeg(df, dd):
    df[df.filter(like='theta').columns] = np.degrees(
        df[df.filter(like='theta').columns])
    return df
Complete pipeline
The functions for all functional blocks have been defined, and wrapped when needed to make them all data frame compatible. We can now chain all operations using the pipe [3] operator as follows:
df = (df
      .pipe(f1, dd)
      .pipe(f2, dd))
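For this example, the complete chain assembled from the functions of the previous sections reads as follows. The definitions are repeated so the block runs standalone; the degree conversion and plotting are omitted for brevity.

```python
import itertools

import numpy as np
import pandas as pd

def apply_func_df(func):
    # Wrap an elementary function Y = f(X, dd) so it runs row-by-row on a
    # data frame and joins the outputs back as new columns.
    def wrapper(df, dd):
        Y = df.apply(func, dd=dd, axis=1, result_type='expand')
        return df.drop(columns=Y.columns, errors='ignore').join(Y)
    return wrapper

@apply_func_df
def polar2cart(X, dd):
    return {'u': X.A * np.cos(X.theta_i) + dd.offset_u,
            'v': X.A * np.sin(X.theta_i) + dd.offset_v}

def add_noise(df, dd):
    df[['u', 'v']] += np.random.randn(len(df), 2) * dd.noise_std
    return df

@apply_func_df
def calc_angle(X, dd):
    return {'theta_o': np.arctan2(X.v, X.u)}

@apply_func_df
def calc_angle_err(X, dd):
    e = X.theta_o - X.theta_i
    if e > np.pi:          # account for phase wrapping
        e -= 2 * np.pi
    elif e < -np.pi:
        e += 2 * np.pi
    return {'theta_err': e}

# Parameters (data dictionary) and stimuli (angular sweep for two amplitudes):
dd = pd.Series({'offset_u': 0.01, 'offset_v': 0.0, 'noise_std': 0.01})
df = pd.DataFrame(itertools.product(np.arange(0, 2 * np.pi, np.pi / 100), [1, 2]),
                  columns=['theta_i', 'A'])

# The end-to-end simulation is a single pipeline following the signal flow:
df = (df
      .pipe(polar2cart, dd)
      .pipe(add_noise, dd)
      .pipe(calc_angle, dd)
      .pipe(calc_angle_err, dd))
```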
The complete workflow is illustrated in the figure below.
- (a) We start from a functional model with unidirectional signal flow.
- (b) We model each functional block with a Python function Y = f(X, dd). We wrap this function, if needed, such that the wrapped function operates directly on a data frame.
- (c) We populate a data frame with the primary inputs (the stimuli).
- (d) We call each function in turn in a pipeline, following the order of the signal flow. This progressively expands the data frame.

Summary statistics are also readily calculated, since we have all signal values in a single data frame. Below we show how to summarize the results in a pivot table by extracting the root mean square (RMS) error for the two different amplitudes. We finally plot the key curves by calling df.plot(...), or df.hvplot for an interactive version with Holoviews [4].
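One possible way to compute such a summary (a sketch with an illustrative toy data frame standing in for the pipeline output):

```python
import numpy as np
import pandas as pd

def rms(x):
    # Root mean square of a series of samples.
    return np.sqrt(np.mean(np.square(x)))

# Toy results data frame standing in for the pipeline output:
df = pd.DataFrame({'A': [1, 1, 2, 2],
                   'theta_err': [0.01, -0.01, 0.02, -0.02]})

# RMS angle error per amplitude, as a pivot table:
summary = df.pivot_table(values='theta_err', index='A', aggfunc=rms)
```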

Conclusions
We presented a generic coding pattern suitable for implementing an executable Python model of a unidirectional block diagram. The pattern is based on elementary functions wrapped such that they work directly on tabular data sets such as Pandas data frames. An end-to-end simulation is then invoked simply by chaining these operations starting from the stimuli. We demonstrated this coding pattern on a quantitative system analysis to illustrate its advantages. The key advantages are:
1. We leverage the extensive data frame operations built into Pandas (slicing, filtering, queries, pivot tables, plots...).
2. We obtain a modular executable model directly traceable to the block diagram.
---
References
[1] N. Dupré, Y. Bidaux, O. Dubrulle, and G. Close, "A Stray-Field-Immune Magnetic Displacement Sensor with 1% Accuracy," IEEE Sens. J., May 2020. Available: [https://doi.org/10.1109/JSEN.2020.2998289](https://doi.org/10.1109/JSEN.2020.2998289)
[2] Mathworks, "What is a Data Dictionary?". Mathworks Help Center. Available: [https://ch.mathworks.com/help/simulink/ug/what-is-a-data-dictionary.html](https://ch.mathworks.com/help/simulink/ug/what-is-a-data-dictionary.html). Accessed: 24 Jan 2021.
[3] B. Chen, "Using Pandas Method Chaining to improve code readability," Towards Data Science, Aug 2020. Available: [https://towardsdatascience.com/using-pandas-method-chaining-to-improve-code-readability-d8517c5626ac](https://towardsdatascience.com/using-pandas-method-chaining-to-improve-code-readability-d8517c5626ac)
[4] A. Rilley, "Advanced Data Visualization in Python with HoloViews," Towards Data Science, Jun 2020. Available: [https://towardsdatascience.com/advanced-data-visualization-with-holoviews-e7263ad202e](https://towardsdatascience.com/advanced-data-visualization-with-holoviews-e7263ad202e)