Team:Heidelberg/pages/Enzyme Modeling detailed


The long way to the model

For a long time we thought it would be easy to extract the relevant data from our lysozyme assays. This thought, however, perished one week before wiki freeze. To show that science is always trying and failing, we explain in the following how we arrived at the model above.

First try, easy fitting

Our first thought when looking at the curves was that the reaction was clearly exponential, with some basal substrate decay and a small offset due to the different types of protein mix added. The only relevant parameter for us would be the exponent, which would equal some constant k times the enzyme concentration present. We assumed that this constant does not change after heat shock; what changes is only the fraction of enzyme that survives it. This assumption is based on a paper by Di Paolo et al., who claim that in the pH denaturation of $\lambda$-lysozyme there are only two states, folded and unfolded [1]. This way, temperature-decay curves can be measured for each kind of lysozyme and finally compared to each other.

A basic problem of this method was that it could never be excluded that the observed temperature behaviour was due to initial concentration effects. That is why we chose to try Michaelis-Menten fitting: in the ideal case it allows an estimate of the amount of enzyme in the sample. As the exponential is just a special case of Michaelis-Menten kinetics, the model can always be extended by this contribution.

Fitting Michaelis-Menten kinetics to the concentration data

As we are screening different lysozymes in high throughput, we tried to use all the data obtained from substrate degradation over time by applying the integrated Michaelis-Menten equation [2]. As there is always an OD shift caused by the plate and the cell lysate, we need to take this into account as an additional parameter while fitting.

The basic differential equation for Michaelis-Menten kinetics [3] is:

\[\frac{d\left[S\right]}{dt} = \frac{- V _{max} \left[S\right]}{K_m + \left[S\right]} \]

where $\left[S\right]$ is the substrate concentration (denoted $\left[S\right]_0$ and $\left[S\right]_t$ at time $0$ and $t$, respectively), $V_{max}$ is the maximum enzyme reaction velocity, $K_m$ is the Michaelis-Menten constant and $t$ is time. This leads to:

\[ \frac{K_m + \left[S\right]}{\left[S\right]} \frac{d\left[S\right]}{dt} = -V_{max}\]

which we can now solve by separation of variables and integration:

\[ \int_{\left[S\right]_0}^{\left[S\right]_t} \left(\frac{K_m }{\left[S\right]} + 1\right) d\left[S\right] = \int_{0}^{t}-V_{max} dt'\]

This leads to,

\[K_m \ln\left( \frac{\left[S\right]_t}{\left[S\right]_0} \right) + \left[S\right]_t - \left[S\right]_0 = -V_{max} t \]

which we rearrange into a closed-form expression of time as a function of substrate concentration:


\[t = - \frac{K_m \ln\left( \frac{\left[S\right]_t}{\left[S\right]_0} \right) + \left[S\right]_t - \left[S\right]_0 } {V_{max}} \]

As this function is monotonic, we can simply fit it to our data, which directly provides $V_{max}$, the parameter of interest for us.

As the OD in the range where we are measuring is still in the linear regime, with some offset due to measurement circumstances and the absorption of the cell lysate, we can use $\left[S\right] = m \left(\mathit{OD} - a\right)$, where $a$ is the offset optical density remaining when all the substrate has been degraded, $\mathit{OD}$ is the optical density at 600 nm and $m$ is a calibration parameter.

So we can refine the functional behaviour as:


\[t = - \frac{K_m \ln\left( \frac{ m ({\mathit{OD}}_t - a)}{ m (\mathit{OD}_0 - a)} \right) + m \mathit{OD}_t - m \mathit{OD}_0 } {V_{max}} \]


$\mathit{OD}_0$ is simply the first measured OD; this parameter is not fitted.
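For illustration, a minimal sketch of such a fit in Python with scipy, assuming a single degradation curve; all names and start values are ours, not from the original analysis script:

```python
import numpy as np
from scipy.optimize import curve_fit

od0 = 0.8  # first measured OD; held fixed, not fitted (placeholder value)
m = 1.0    # OD-to-concentration calibration factor (placeholder value)

def t_of_od(od, km, vmax, a):
    """Time as a function of OD from the integrated Michaelis-Menten equation."""
    s, s0 = m * (od - a), m * (od0 - a)
    return -(km * np.log(s / s0) + s - s0) / vmax

# od_data, t_data: one degradation curve; OD decays monotonically, so time is
# a single-valued function of OD and the roles of x and y can be swapped:
# popt, pcov = curve_fit(t_of_od, od_data, t_data, p0=(1.0, 0.1, 0.05))
```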

But these fits did not converge at all, so we needed to find another solution. Liao et al. proposed and compared different techniques for fitting integrated Michaelis-Menten kinetics [5], which did not work out for us because our starting substrate concentration was too low, so the fits converged to negative $K_m$ and $V_{max}$. Finally we fitted Michaelis-Menten kinetics using the method proposed by Goudar et al. [4], directly fitting the closed-form solution, which uses Lambert's $\omega$ function, the solution to the equation $\omega (x) \exp (\omega(x)) = x$. So we fitted

\[ \left[S\right]_t = K_m \omega \left[ \left( \left[S\right]_0 / K_m \right) \exp \left( \left[S\right]_0 / K_m -V_{max} t / K_m \right) \right] \]
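A minimal sketch of this direct fit, using scipy's implementation of the (principal branch of the) Lambert function, scipy.special.lambertw; the names and start values are illustrative, not the original code:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import lambertw

def s_of_t(t, s0, km, vmax):
    """Substrate concentration over time from the integrated Michaelis-Menten
    equation, solved in closed form with the Lambert omega (W) function."""
    arg = (s0 / km) * np.exp(s0 / km - vmax * t / km)
    return (km * lambertw(arg)).real  # principal branch; result is real here

# t_data, s_data: one calibrated degradation curve
# popt, pcov = curve_fit(s_of_t, t_data, s_data, p0=(s_data[0], 1.0, 0.1))
```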

This worked well and the fits converged reliably, but sometimes produced huge errors for $K_m$ and $V_{max}$, up to five orders of magnitude larger than the best-fit values themselves. This simply meant that these parameters could not be identified from most of the data. On the other hand, a simple exponential fit reproduced the data nearly perfectly, which led us to conclude that we were just working in the exponential regime: $K_m$ is much too high for the substrate concentrations we are working with, so that $K_m + \left[S\right] \approx K_m$ and the differential equation from the beginning reduces to:

\[\frac{d\left[S\right]}{dt} = - \frac{V_{max}}{K_m} \left[S\right] = -k \left[S\right] \]

which is solved by a simple exponential in the effective rate constant $k = V_{max}/K_m$:

\[ \left[S\right]_{t} = \left[S\right]_0 e^{- k t} \]

As we are measuring OD, the function fitted to the data becomes:

\[ \mathit{OD}_{t} = \left(\mathit{OD}_0 - a\right) e^{- k t} + a \]

where $a$ is a parameter for the offset in OD due to the plate and the protein mix.
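A minimal sketch of this exponential fit, under the same assumptions as above (illustrative names and start values):

```python
import numpy as np
from scipy.optimize import curve_fit

def od_decay(t, od0, k, a):
    """Exponential substrate decay seen through OD, with offset a from the
    plate and the protein mix; k is the effective rate constant."""
    return (od0 - a) * np.exp(-k * t) + a

# t_data, od_data: kinetics of a single well; the final OD seeds the offset:
# popt, pcov = curve_fit(od_decay, t_data, od_data,
#                        p0=(od_data[0], 0.01, od_data[-1]))
# k_fit = popt[1]  # proportional to the amount of active enzyme
```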

This seemed to be the method of choice, as it also produced nice results. We wrote a Python script that handled all the data, the plotting and the fitting, and in the end produced plots with activities normalized to the 37 °C activity (see the sketch after this paragraph). These results, however, had too large error bars, so we set up a framework to fit multiple datasets at once, with different parameters applied to different datasets. We chose to work with the widely used d2d arFramework developed by Andreas Raue [6], running on MATLAB. As all the data handling had already happened in Python, we extended the script to also generate the input for the d2d framework, so that our huge datasets could be fitted in one go. The fitting worked out quite well, but some strange results still could not be explained.
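The normalization step could look like the following sketch; it assumes one effective rate constant $k$ with a standard error has been fitted per heat-shock temperature, and all numbers and names are made up for illustration:

```python
import numpy as np

# Hypothetical fitted rate constants per heat-shock temperature: (k [1/min], sigma_k)
k_fit = {37: (0.021, 0.002), 50: (0.018, 0.002), 60: (0.011, 0.003)}

k37, s37 = k_fit[37]  # 37 degC reference
for temp, (k, sk) in sorted(k_fit.items()):
    rel = k / k37  # activity normalized to the 37 degC activity
    # Gaussian error propagation for the ratio (ignores the correlation
    # at the 37 degC point itself, where the ratio is 1 by construction):
    rel_err = rel * np.sqrt((sk / k) ** 2 + (s37 / k37) ** 2)
    print(f"{temp} degC: relative activity {rel:.2f} +/- {rel_err:.2f}")
```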

Modeling product inhibition

We observed many different phenomena we could not explain properly. For example, when the activity at 37 °C started low, the protein did not seem to lose its activity after heat shock. This meant that there was some kind of basal activity, independent of the enzyme concentration. On the other hand, activity was not completely linear in enzyme concentration. But the most inexplicable part was that some samples, even after one hour of degradation, stayed constant at an $OD_{600}$ level nearly as high as the starting $OD_{600}$. This could only be due to the substrate not being degraded, so we checked this by adding fresh lysozyme to the substrate. We observed another decay in $OD_{600}$, which clearly meant that the substrate had not run out, but that the enzyme had somehow lost activity during the measurement. Our basic assumption from above was therefore completely wrong and the results worthless, as we were only detecting a region of the kinetics where some enzyme had already been lost to inhibition, so nothing could be said about the initial enzyme concentration in the sample. We even found this effect described in a paper from 1961, written in French [7].

Grand model

We then tried to fit one grand model to all the data we had obtained from all the different assays, with curves for different temperatures, biological replicates and technical replicates. In total these were about 100,000 data points we fed in and up to 500 parameters we fitted. This did not work out, because the variation in starting amounts of enzyme was too large, even between technical replicates from different days. This might be due to the freeze-thaw cycles that the enzyme stock was subjected to. Moreover, this model was simply too complex to handle easily: the initial fits alone took hours, and calculating the profile likelihoods took about a day. Therefore we chose another approach, always modeling the data of one single plate, on which the variations were certainly much smaller. Of course, some parameters then become non-identifiable; for example, the enzyme concentration is not comparable between different samples. On the other hand, the only parameters interesting for our purpose, those describing the behavior after heat shock, would still be identifiable.

References

[1] Di Paolo, A., Balbeur, D., De Pauw, E., Redfield, C. & Matagne, A. Rapid collapse into a molten globule is followed by simple two-state kinetics in the folding of lysozyme from bacteriophage λ. Biochemistry 49, 8646-8657 (2010).

[2] Hommes, F. A. The integrated Michaelis-Menten equation. Archives of Biochemistry and Biophysics 96, 28-31 (1962).

[3] Purich, D. L. Contemporary Enzyme Kinetics and Mechanism: Reliable Lab Solutions. Academic Press (2009).

[4] Goudar, C. T., Sonnad, J. R. & Duggleby, R. G. Parameter estimation using a direct solution of the integrated Michaelis-Menten equation. Biochimica et Biophysica Acta (BBA) - Protein Structure and Molecular Enzymology 1429, 377-383 (1999).

[5] Liao, F. et al. The comparison of the estimation of enzyme kinetic parameters by fitting reaction curve to the integrated Michaelis-Menten rate equations of different predictor variables. Journal of Biochemical and Biophysical Methods 62, 13-24 (2005).

[6] Raue, A. et al. Lessons learned from quantitative dynamical modeling in systems biology. PLoS ONE 8 (2013).

[7] Colobert, L. & Dirheimer, G. Action du lysozyme sur un substrat glycopeptidique isolé du Micrococcus lysodeikticus. Biochimica et Biophysica Acta 54, 455-468 (1961).