# Best Practices

Here, we provide an overview of some of what we believe are the best practices for free energy calculations, with references, wherever possible. This document assumes you already have a basic idea of what these calculations are and what they do (if not, you can learn more from the Free Energy Fundamentals section), and that you already have a working knowledge of molecular simulations and basic terms like convergence and equilibration (if not, start with a textbook like Leach's Molecular Modelling: Principles and Applications). Simulation-package specific issues will not be addressed here.

Feel free to edit this document or the attached discussion pages. We hope this will become a place that stores the community consensus on these issues. Do back up any edits with appropriate references.

## Introduction

Free energy calculations are appealing, in that, in principle, they allow calculation of rigorously correct free energies, given a particular set of parameters and physical assumptions. We firmly believe that the goal of these calculations, then, should be to obtain these correct free energies given the particular assumptions -- and not necessarily to match experiment. Only then can the underlying parameters and physical assumptions really be tested. In other words, there exists a single right answer for a free energy calculation, given this particular set of parameters and physical assumptions, and our goal is to obtain it. Here, we will call these free energies "correct" free energies. "Correct", then, implies at the very least that the computed free energies have converged, and that there are no underlying methodological problems.

It is also important to remember in what follows that free energy is a function of state, so there are many possible choices of pathway for a thermodynamic cycle connecting the same two endpoints. Some pathways may be more efficient than others (sometimes by many orders of magnitude) so methods which are in principle correct may not always be practical.

Here, we discuss best practices in the context of several practical examples: Solvation free energies, and binding free energies. Solvation free energies provide a basic starting point for considering a number of the key issues, and, for binding free energies, there are many more choices of pathway, which raises additional complications. In both cases, there are also different methods for computing the relevant free energy differences, which may differ in efficiency.

Alchemical free energy calculations almost always involve some insertion or deletion of atoms -- both in hydration free energy calculations, and in absolute and relative binding free energy calculations. By insertion and deletion, we mean decoupling or annihilation (see Decoupling and annihilation for definitions) of the interactions of the atoms in question. We believe that a basic list of rules and guidelines should be followed in any calculation that involves insertion or deletion:

• Rule 1: Always use soft-core potentials while decoupling or annihilating Lennard-Jones interactions
• Rule 2: Never leave a partial atomic charge on an atom while its Lennard-Jones interactions are being removed
• Guideline 3: It is usually more efficient to perform electrostatic and Lennard-Jones transformations separately
• Guideline 4: Inserting or deleting atoms is usually less efficient than mutating them, so transformations should involve as few insertions and deletions as possible.
• Guideline 5: Keep configuration space in mind and think about convergence.

It is worth looking at each of these in more detail.

### Rule 1: Soft core potentials

Rule 1 -- that soft core potentials should always be used when turning Lennard-Jones interactions on or off -- is, we believe, a rule that all free energy calculations should observe. The Lennard-Jones potential rises extremely steeply ($1/r^{12}$) at short range, which prevents particles from overlapping. However, to delete particles (atoms or molecules), these interactions need to be turned off somehow, and this is not as straightforward as it might seem.

One relatively common choice (Fowler et al., 2005, Chipot, Rozanska and Dixit, 2005, AMBER, and others) for turning these off is simple linear scaling of that term in the potential energy or Hamiltonian: that is, $V(\lambda) = (1 - \lambda) V_0 + \lambda V_1$, where $V_0$ is the potential energy with full Lennard-Jones interactions, and $V_1$ is the potential energy where the Lennard-Jones interactions have been turned off for the atoms which are being deleted. This means that, for the atoms being deleted, Lennard-Jones interactions scale at small $r$ as $(1-\lambda)/r^{12}$. This has two unfortunate and interconnected consequences. First, there is a discontinuous change in the form of the interaction potential when going from $\lambda=1-\epsilon$ (where $\epsilon$ is a very small number) to $\lambda=1$, as the $1/r^{12}$ term remains fairly important even at $\lambda$ very near 1, but is entirely turned off at $\lambda=1$. Second, it leads to large forces, numerical instabilities, and other problems in simulations near $\lambda=1$. Formally, it has been shown that this leads to an integrable singularity in $dV/d\lambda$, which means that computing correct free energies with this scheme using thermodynamic integration is impossible using numerical techniques (Mruzik et al., 1976, Mezei and Beveridge, 1986, Resat and Mezei, 1993, and especially Beutler et al., 1994, Pitera and van Gunsteren, 2002, and Steinbrecher et al., 2007 and references therein); similar problems plague free energy perturbation schemes.
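To make the problem concrete, here is a minimal Python sketch (with purely illustrative parameters, not taken from any particular package) of linearly scaled Lennard-Jones interactions. Note that the $1/r^{12}$ wall is merely rescaled, never softened, until it vanishes discontinuously at $\lambda=1$:

```python
def lj(r, sigma=0.3, epsilon=1.0):
    """Standard Lennard-Jones potential (sigma in nm, epsilon in kJ/mol)."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 ** 2 - sr6)

def lj_linear(r, lam, sigma=0.3, epsilon=1.0):
    """Linearly scaled LJ: V(lambda) = (1 - lambda) * V_LJ.
    dV/dlambda = -V_LJ(r), which still diverges as r -> 0 for any lambda < 1."""
    return (1.0 - lam) * lj(r, sigma, epsilon)

# Even at lambda = 0.99 the repulsive wall is only rescaled, not softened:
# an overlap at r = 0.05 nm still costs an enormous energy, while at
# lambda = 1 the interaction vanishes entirely -- a drastic change.
print(lj_linear(0.05, 0.99))  # very large and positive
print(lj_linear(0.05, 1.0))   # exactly zero
```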

In an attempt to get around this, some have suggested scaling the potential energies with $(1-\lambda)^k$, where $k$ is an integer greater than 1. It can be shown that, for $k \geq 4$, this leads to an integrable singularity in $dV/d\lambda$, so thermodynamic integration can in principle be done (Mezei and Beveridge, 1986, Beutler et al., 1994). But integrable singularities still pose very substantial problems for molecular simulation, and this approach can still lead to large forces, numerical instabilities, and energy conservation problems (Beutler et al., 1994 and Steinbrecher et al., 2007) and make free energy differences extremely difficult to converge ([D. Mobley, unpub. data]).

Since free energies are path-independent, an elegant solution to this problem was developed (Beutler et al., 1994): modify the Lennard-Jones functional form to gradually smooth out the $1/r^{12}$ term as a function of $\lambda$, rather than simply multiplying it by a prefactor. This removes problems with numerical instabilities and singularities, and improves convergence properties (Beutler et al., 1994, Zacharias et al., 1994, Pitera and van Gunsteren, 2002). The basic idea is that it allows particles to gradually begin to overlap as $\lambda$ is changed, rather than saving a drastic change in interactions for the point going from $\lambda=1-\epsilon$ to $\lambda=1$. This approach is known as soft core potentials (or, alternately, "separation-shifted scaling"), and has subsequently been shown to be a nearly optimal path for modifying Lennard-Jones interactions (Blondel, 2004). Several groups have further tested this approach and found a slightly modified functional form and set of parameters from that originally proposed (Beutler et al., 1994) which leads to improved efficiency for free energy calculations (Shirts and Pande, 2005, [D. Mobley, unpublished data]); we recommend that the soft core potentials and parameters from that work be employed in all free energy calculations involving insertion or deletion of particles.
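As an illustration, here is a minimal Python sketch of a Beutler-style soft-core Lennard-Jones potential. The parameter choices ($\alpha=0.5$, a linear $\lambda$ dependence of the shift) are common defaults, not the specific optimized parameters recommended above, and should be checked against that work:

```python
def lj_softcore(r, lam, sigma=0.3, epsilon=1.0, alpha=0.5):
    """Soft-core Lennard-Jones in the style of Beutler et al. (1994):
    the r^6 in the denominator is shifted by alpha * sigma^6 * lambda,
    so the potential stays finite even at r = 0 whenever lambda > 0.
    Convention here: lambda = 0 is fully interacting, lambda = 1 is off."""
    denom = alpha * sigma ** 6 * lam + r ** 6
    sr6 = sigma ** 6 / denom
    return 4.0 * epsilon * (1.0 - lam) * (sr6 ** 2 - sr6)

# At lambda = 0 this reduces exactly to the standard LJ potential
# (minimum of -epsilon at r = 2^(1/6) * sigma); at intermediate lambda,
# even complete overlap (r = 0) costs only a finite, modest energy.
print(lj_softcore(0.3 * 2 ** (1 / 6), 0.0))  # approximately -1.0
print(lj_softcore(0.0, 0.5))                 # finite
```

Because the smoothing enters through the denominator rather than a prefactor, the potential changes continuously all the way to the fully decoupled state.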

Some testing has suggested that the $(1-\lambda)^k$ scaling approach may be essentially adequate for hydration free energy calculations ([D. Mobley, unpublished data], Steinbrecher et al., 2007), but it is still less efficient there than soft-core potentials, so this does not affect our recommendation.

In summary: Linearly scaling Lennard-Jones interactions back as a function of $\lambda$ for insertion/deletion of particles is formally incorrect for numerical integration and leads to wrong estimates of free energy differences. While more complicated schemes involving $\lambda^k$ scaling can be formally correct, there are serious concerns regarding their accuracy. Soft-core potentials provide a rigorously correct, efficient alternative to these and should be used whenever particles are inserted or deleted, preferably with the functional form and parameters of (Shirts and Pande, 2005), unless future work finds a still more efficient set of parameters.

### Rule 2: Turn off partial charges

Rule 2 states that a partial atomic charge should never be allowed to remain on an atom while its Lennard-Jones interactions are being removed. To understand the reason for this, consider two atoms of opposite charge, A and B, where the Lennard-Jones interactions of atom A are being scaled back. Regardless of the scaling scheme used, at some lambda value atoms A and B will begin to overlap occasionally, since the final state allows A and B to overlap completely. If A has a remaining partial atomic charge when these overlaps become possible, the two point charges assigned to A and B can actually overlap as well. Since the potential energy of Coulomb interactions between point charges scales as $q_{A} q_{B}/r_{AB}$, where $r_{AB}$ is the distance between A and B, this presents significant problems when $q_A$ and $q_B$ have opposite signs. In particular, there is an infinitely deep energy minimum at $r_{AB}=0$, so the two particles would in principle become trapped on top of one another.

In practice, what usually happens in molecular dynamics simulations in these circumstances is that the forces get extremely large as A and B begin to overlap, and the simulation will crash. Constraint algorithms are often the first to fail, so this may lead to a warning about constraints (i.e. LINCS or SHAKE) and then a crash. This issue is discussed briefly by Pitera and van Gunsteren and in more detail by Anwar and Heyes.
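The failure mode can be illustrated numerically with a sketch (hypothetical charges and parameters): once the Lennard-Jones wall is softened at intermediate $\lambda$, an unscaled Coulomb term between opposite charges drives the pair energy down without bound as the atoms overlap:

```python
def softcore_lj(r, lam, sigma=0.3, epsilon=1.0, alpha=0.5):
    """Beutler-style soft-core LJ (finite at r = 0 for lambda > 0)."""
    denom = alpha * sigma ** 6 * lam + r ** 6
    sr6 = sigma ** 6 / denom
    return 4.0 * epsilon * (1.0 - lam) * (sr6 ** 2 - sr6)

def coulomb(r, qA, qB, ke=138.935):
    """Point-charge Coulomb energy; ke in kJ mol^-1 nm e^-2."""
    return ke * qA * qB / r

def pair_energy(r, lam, qA=0.5, qB=-0.5):
    """Soft-core LJ plus an UNscaled Coulomb term between opposite charges."""
    return softcore_lj(r, lam) + coulomb(r, qA, qB)

# Halfway through the LJ decoupling, the softened wall no longer protects
# the point charges: the pair energy plunges without bound as r -> 0,
# which is exactly the regime where forces explode and simulations crash.
for r in (0.2, 0.05, 0.01):
    print(r, pair_energy(r, 0.5))
```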

In view of this problem, we recommend always turning off partial charges for any atoms for which Lennard-Jones interactions are being removed before doing the Lennard-Jones transformation. Additionally, when Lennard-Jones parameters for an atom are being substantially modified during a free energy calculation (i.e. for relative free energy calculations involving mutation of an atom) and soft-core potentials are employed, similar problems may arise, so it may be useful to remove partial charges on atoms which are being mutated, as well.

Several groups have developed modified electrostatics scaling methods in an attempt to bypass this problem and allow electrostatics interactions and Lennard-Jones interactions to be turned off in only one set of calculations (for example, Anwar and Heyes), but since electrostatics transformations are usually so smooth a function of $\lambda$ and need only few $\lambda$ values for good overlap (Shirts et al., 2005; Mobley et al., 2007, and others) it is unclear that this results in any significant efficiency gain over performing the transformations separately.

In view of this, our recommendation is that either (a) partial charges on any particles being inserted or deleted be turned off prior to the use of soft core potentials for those particles, or (b) a soft core scheme for electrostatics be implemented to allow simultaneous changes.

### Guideline 3: Perform electrostatics transformations separately from Lennard-Jones

As noted in Rule 2, above, electrostatics transformations are typically smooth functions of lambda with good phase-space overlap between even coarsely-spaced lambda values (Shirts et al., 2005; Mobley et al., 2007, and others). As a consequence, these are quite efficient compared to Lennard-Jones calculations. As established above, when particles are being inserted or deleted, the electrostatic interactions of these particles should be set to zero before turning off their Lennard-Jones interactions. But what about electrostatic interactions on atoms which are merely being mutated (i.e. a change of partial charge and Lennard-Jones radius), as in relative free energy calculations?

We are not aware of any study which has looked at this in detail, but given the efficiency of free energy calculations modifying electrostatics interactions relative to those significantly modifying Lennard-Jones interactions, we believe it makes sense to perform the two sets of calculations separately. Given that the two transformations have different lambda-dependences, it might actually be less efficient to perform them together than separately. Performing them separately has an additional advantage, as well: Uncertainties in the two components can be assessed separately, and computational effort focused on reducing the largest uncertainty (i.e. by extending some simulations to get additional sampling).
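For example, with hypothetical component results, the total and its uncertainty combine as follows (the legs are run independently, so their uncertainties add in quadrature):

```python
import math

# Hypothetical component results (kJ/mol) from separate legs of a
# relative free energy calculation; the numbers are illustrative only.
dG_elec, err_elec = -12.4, 0.1   # smooth, cheap to converge
dG_vdw,  err_vdw  =   3.7, 0.6   # harder: dominates the error budget

dG_total = dG_elec + dG_vdw
# Independent simulations -> uncertainties combine in quadrature:
err_total = math.sqrt(err_elec ** 2 + err_vdw ** 2)

print(f"dG = {dG_total:.1f} +/- {err_total:.1f} kJ/mol")
# Here the LJ leg contributes ~97% of the variance, so additional
# sampling effort is best spent there.
```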

Further testing should be focused in this area, to determine whether alternative scaling approaches which can modify Lennard-Jones and electrostatic interactions simultaneously (Anwar and Heyes) are actually more efficient than the approach of separate modification that we propose.

### Guideline 4: Use few insertions/deletions

Electrostatics transformations are usually smooth functions of lambda (without soft core potentials), and require few lambda values, while Lennard-Jones transformations – especially those involving insertion and deletions – are difficult transformations which require substantially more lambda values (even when using soft core potentials) to obtain good phase-space overlap and accurate free energy differences (Shirts et al., 2005; Mobley et al., 2007, Mobley et al., 2007b, and others). Thus, insertions and deletions of particles can be thought of as “difficult” transformations (i.e. Jarzynski, 2006). Consequently, it is far more computationally efficient to modify existing particles (atoms) than to insert or delete new atoms; this should be kept in mind when constructing mutation pathways for relative free energy calculations, since multiple choices of mutation pathways between a set of molecules are typically possible.

This guideline is not at all helpful for absolute free energy calculations, since these by design involve inserting or deleting entire molecules.

### Guideline 5: Think about configuration space and convergence

Given that many choices of pathway are possible, it can often be helpful to think about whether a particular choice of pathway makes convergence easier or more difficult.

For example, in absolute binding free energies, one can incorporate the standard state using either simple distance restraints between the ligand and the protein, or by restraining the ligand orientation as well. At the fully noninteracting state, the amount of configuration space the ligand will need to sample is dictated by this choice. Hence, a ligand with only a single reference distance restrained relative to the protein will need to sample a spherical shell in configuration space, while a ligand with all six relative degrees of freedom restrained would need to sample only a very small region of configuration space. These two can take drastically different amounts of time, so in fact it can be much more efficient, at least in some cases, to use the additional restraints (Mobley et al., 2006).
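As a sketch of how a restraint connects to the standard state in the simplest case, the following computes the analytic free energy of releasing a fully noninteracting ligand from an isotropic 3D harmonic restraint into the standard-state volume. The force constant is a hypothetical value, the isotropic harmonic form is an illustrative assumption, and restraining orientation as well (as discussed above) introduces additional analogous terms:

```python
import math

kT = 2.479   # kJ/mol at roughly 298 K
K = 4184.0   # hypothetical harmonic force constant, kJ mol^-1 nm^-2
V0 = 1.660   # standard-state volume per molecule, nm^3 (1 M)

# Effective volume explored by the ligand under an isotropic 3D harmonic
# restraint (the Gaussian configuration integral of exp(-K r^2 / (2 kT))):
V_eff = (2.0 * math.pi * kT / K) ** 1.5

# Free energy of releasing the noninteracting ligand from the restraint
# into the standard-state volume (favorable, since the available
# configuration space increases):
dG_release = -kT * math.log(V0 / V_eff)
print(f"V_eff = {V_eff:.2e} nm^3, dG_release = {dG_release:.2f} kJ/mol")
```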

## Performing free energy calculations

Here, we attempt to separate methods for analyzing the results of free energy calculations, from methods for performing free energy calculations. For example, thermodynamic integration and exponential averaging (among others) are methods for analyzing free energy calculations, while equilibrium, slow-growth, and fast-growth methods can be used for performing free energy calculations. In this section, we focus on methods for performing free energy calculations, and address analysis methods below.

There are several basic groups of methods for performing free energy calculations: Slow-growth, fast-growth, and equilibrium (or instantaneous growth) free energy methods. Slow-growth and equilibrium methods are more traditional, while fast-growth (non-equilibrium) methods have gained considerable recent interest. In slow-growth methods, the coupling parameter, $\lambda$, is slowly varied over the course of one or several simulations to take a system gradually from $\lambda=0$ to $\lambda=1$; from this, the free energy difference between endpoints is estimated. In equilibrium methods, on the other hand, separate simulations are run at multiple different lambda values and information from the individual simulations is used to estimate free energy differences between adjoining lambda values. Fast-growth methods are based on the demonstration by Jarzynski in 1997 that the equilibrium free energy difference associated with a particular process can be computed by taking an appropriate average of the irreversible work done in performing many (potentially rapid) non-equilibrium trials of the same process. When applied to alchemical free energy calculations, this simply amounts to estimating free energy differences by performing many different rapid versions of a slow-growth trial -- that is, rapidly changing lambda between two values (i.e. 0 and 1) in many separate trials, and monitoring the work done each time. Equilibrium free energy calculations can be thought of as "instantaneous growth" as they rely on estimating the free energy difference between neighboring $\lambda$ values based only on instantaneous evaluations of potential energy differences between $\lambda$ values (or by evaluating and extrapolating $dV/d\lambda$ at a particular $\lambda$ value).
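The fast-growth estimator can be sketched in a few lines. The Gaussian work distribution below is a toy example (for which the exact answer is $\langle W \rangle - \sigma_W^2/2k_BT$), not data from any real transformation:

```python
import math
import random

def jarzynski_free_energy(work_values, kT=2.479):
    """Fast-growth estimator: dF = -kT * ln< exp(-W/kT) >, with a shift by
    the minimum work for numerical stability of the exponential average.
    kT = 2.479 kJ/mol corresponds to roughly 298 K."""
    w_min = min(work_values)
    avg = sum(math.exp(-(w - w_min) / kT) for w in work_values) / len(work_values)
    return w_min - kT * math.log(avg)

# Toy check with a Gaussian work distribution, for which the exact result
# is dF = <W> - var(W)/(2 kT) -- here 10.0 - 1.0/(2 * 2.479) ~ 9.80.
random.seed(0)
works = [random.gauss(10.0, 1.0) for _ in range(200000)]
print(jarzynski_free_energy(works))  # close to 9.80
```

Note that the exponential average is dominated by rare low-work trials, which is one reason fast-growth methods are hard to converge when dissipation is large.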

Which method should be used for alchemical free energy calculations? At this point, we believe the evidence is in favor of equilibrium methods. Slow-growth methods have a number of well-documented problems, such as Hamiltonian lag and hysteresis (add references). Additionally, if a slow-growth calculation does not lead to a sufficiently precise free energy estimate, the only way to improve the free energy estimate is to repeat the calculation with a slower rate of change in lambda – the simulation cannot be extended to gain additional precision, meaning that significant data can be wasted. Fast-growth methods, while conceptually appealing, do not appear to offer substantial advantages over equilibrium methods (Jarzynski, 2006, Oostenbrink and van Gunsteren, 2006).

In view of these facts, our recommendation is to use equilibrium simulations at a set of separate $\lambda$ values to estimate free energy differences. But how should these simulations be performed? Should a $\lambda=0$ simulation be performed first, for example, and then the ending conformation used to seed a $\lambda=0.1$ simulation? Or should they be independent? There is no reason in principle that simulations cannot be performed serially -- but this leads to several potential pitfalls and disadvantages. First, if simulations are performed serially, and it is later concluded that some of the intermediate simulations were not long enough, the entire set of calculations performed after those intermediate simulations must be repeated. Second, results at one $\lambda$ value become coupled to those at previous lambda values, potentially leading to hysteresis (add references) and difficulties in troubleshooting. Independent simulations, on the other hand, provide the great practical advantage that if, at the analysis stage, it appears that data is insufficient at some range of $\lambda$ values, these particular simulations can simply be extended without requiring repetition of other component simulations. Additionally, with large computer clusters now becoming relatively common, independent simulations can often be performed in parallel, leading to significant real-time savings over running the simulations serially.
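Once the independent simulations are done, a thermodynamic integration estimate is a simple quadrature over the per-$\lambda$ averages, and any noisy window can be extended on its own. The numbers below are purely illustrative:

```python
# Hypothetical <dV/dlambda> averages from independent simulations run at
# each lambda value (illustrative values, not from any real system).
lambdas = [0.0, 0.1, 0.25, 0.5, 0.75, 0.9, 1.0]
dvdl    = [42.0, 35.5, 24.1, 8.3, -3.2, -7.9, -10.4]

def trapezoid(xs, ys):
    """Thermodynamic integration: dG = integral of <dV/dlambda> dlambda,
    here by the trapezoid rule over unevenly spaced lambda values."""
    return sum((xs[i + 1] - xs[i]) * (ys[i + 1] + ys[i]) / 2.0
               for i in range(len(xs) - 1))

dG = trapezoid(lambdas, dvdl)
print("dG (TI, trapezoid):", round(dG, 2))
```

If the uncertainty in one window dominates, only that simulation needs to be extended before re-running this analysis.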

One requirement for running equilibrium simulations at independent lambda values is that simulations must separately be equilibrated at each lambda value. Significant literature exists on assessing equilibration in molecular dynamics simulations (add references), so we refer the reader there. Suffice it to say that meaningful results depend crucially on having adequate equilibration at each lambda value.

Some workers have attempted to address the need for equilibration by performing a long (many nanosecond) pre-equilibration at $\lambda=0$ before beginning free energy calculations at a variety of lambda values (Fujitani et al.). It is possible that this approach is a mistake. For example, if this pre-equilibration is substantially longer than simulation times, it may allow conformational changes in the system at $\lambda=0$ which are then not sampled on simulation timescales, trapping the system in a configuration that (while appropriate at $\lambda=0$) may not be appropriate at other lambda values. For an example of what could happen, see (Mobley et al., 2007b), where the protein remains kinetically trapped in its starting conformational state throughout all component free energy calculations. Our recommendation is that, if long equilibration is deemed necessary, equilibration periods should be equally long at each lambda value.

### Phase space overlap

Phase space overlap is an additional consideration in performing free energy calculations. Essentially, all of the equilibrium approaches for estimating free energy differences rely on having simulations run at separate lambda values that are sufficiently closely-spaced that adjoining lambda values sample similar configurations. There are three basic ways to attempt to assess whether this requirement is met, listed in order of ascending sophistication:

1. Trial and error: For a particular system, begin with very closely spaced lambda values, and then gradually increase separation until results begin to deteriorate.
2. Error analysis:
    1. Using autocorrelation analysis (i.e. Chodera et al., 2006)
    2. If necessary, using block bootstrap approaches (i.e. Mobley et al., 2006)
3. Phase space overlap measures (i.e. Wu and Kofke, 2005).
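As a sketch of the autocorrelation analysis mentioned above, the following estimates the statistical inefficiency $g$ of a time series by summing normalized autocorrelations, truncated at the first non-positive estimate. This is a simplified heuristic version of the treatment in Chodera et al.; the effective number of uncorrelated samples is then $N/g$:

```python
import random

def statistical_inefficiency(x, max_lag=500):
    """Estimate g = 1 + 2 * sum of normalized autocorrelations, truncating
    the sum at the first non-positive estimate (a common heuristic; see
    e.g. Chodera et al., 2006 for a fuller treatment). The effective
    number of uncorrelated samples is N / g."""
    n = len(x)
    mean = sum(x) / n
    d = [v - mean for v in x]
    c0 = sum(v * v for v in d) / n
    g = 1.0
    for t in range(1, min(n // 2, max_lag)):
        ct = sum(d[i] * d[i + t] for i in range(n - t)) / ((n - t) * c0)
        if ct <= 0.0:
            break
        g += 2.0 * ct
    return g

# Toy check: an AR(1) series with correlation rho = 0.9 has a true
# statistical inefficiency of (1 + rho)/(1 - rho) = 19.
random.seed(1)
series, rho = [0.0], 0.9
for _ in range(20000):
    series.append(rho * series[-1] + random.gauss(0.0, 1.0))
print(statistical_inefficiency(series))  # roughly 19
```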

Different types of transformations may have different overlap properties. For example, above, we noted that electrostatics transformations are usually smooth, which, more rigorously, means that relatively good phase space overlap is maintained even with fairly coarsely-spaced lambda values. Insertion and deletion of particles, on the other hand, requires more closely-spaced lambda values to ensure good overlap.