Using enhanced number and brightness to measure protein oligomerization dynamics in live cells

Protein dimerization and oligomerization are essential to most cellular functions, yet measuring the size of these oligomers in live cells, especially when their size changes over time and space, remains a challenge. A commonly used approach for studying protein aggregates in cells is number and brightness (N&B), a fluorescence microscopy method that is capable of measuring the apparent average number of molecules and their oligomerization (brightness) in each pixel from a series of fluorescence microscopy images. We have recently expanded this approach to allow resampling of the raw data to resolve the statistical weighting of coexisting species within each pixel. This feature makes enhanced N&B (eN&B) optimal for capturing the temporal aspects of protein oligomerization when a distribution of oligomers shifts toward a larger central size over time. In this protocol, we demonstrate the application of eN&B by quantifying receptor clustering dynamics using electron-multiplying charge-coupled device (EMCCD)-based total internal reflection fluorescence (TIRF) microscopy imaging. TIRF provides a superior signal-to-noise ratio, but we also provide guidelines for implementing eN&B on confocal microscopes. For each time point, eN&B requires the acquisition of 200 frames, and a single time point takes from a few seconds up to 2 min to complete. We provide an eN&B (and standard N&B) MATLAB software package amenable to any standard confocal or TIRF microscope. The software requires a high-RAM computer (64 GB) to run and includes a photobleaching detrending algorithm, which allows extension of live imaging to more than an hour. This protocol describes enhanced number and brightness (eN&B), an approach that uses fluorescence fluctuation spectroscopy data to directly measure the oligomerization state and dynamics of fluorescently tagged proteins in living cells.


Introduction
The physiological function of proteins often involves the controlled assembly into multimeric complexes [1][2][3] . Protein multimerization or clustering mediates signal transduction in several classes of receptors, including tyrosine kinase receptors 4,5 , bacterial chemotactic receptors 6 and neurotransmitter receptors 7 , among many others. The clustering of membrane proteins regulates the strength of cell adhesion in both integrins and cadherins, as well as the formation of higher-order structures such as focal adhesions 8,9 . Viral capsids are typically large multimeric structures assembled by the self-association of many copies of a few different proteins 10 . In addition, large structural cellular components are assembled by homopolymerization of monomers into fibrils or more complex conformations 11,12 . For instance, endocytosis and vesicle transport occur after the formation of pits coated by clathrin homopolymers 13 .
In many cellular functions, the stoichiometry of the protein aggregates can tune their activity. For instance, oligomers of different sizes can modulate transcription factor affinity for DNA-binding sites or the oligomers' association with different proteins [14][15][16][17][18] . In addition, the uncontrolled self-assembly of proteins can lead to the formation of non-physiological toxic aggregates, such as fibrins or plaques of tau or α-synuclein in Alzheimer's and Parkinson's diseases, respectively [19][20][21][22][23][24][25][26][27] . Thus, understanding of both the normal function and the pathologic disorders derived from protein self-assembly requires better tools for analyzing the diversity of molecular species assembled during protein aggregation. A wide variety of experimental questions require assays to interrogate the nanoscale organization of protein assemblies. These assays should be not only capable of measuring the stoichiometry of active protein complexes, but also powerful enough to resolve the dynamics of their aggregation in live cells over time 28 . Several imaging techniques can provide quantitative information about the oligomeric state of a protein complex; however, most of them are limited in one of three experimental goals: (i) obtaining the complete temporal sequence of the oligomerization process; (ii) providing the dynamic range required to measure a broad spectrum of oligomeric sizes; (iii) recovering spatial information. The number and brightness (N&B) method uses fluorescence fluctuation spectroscopy data to directly measure the average oligomeric state of proteins in living cells, thereby satisfying all three experimental goals 29 . Here, we describe a detailed protocol for our recently developed approach to perform a statistically enhanced N&B version (eN&B) 28 . 
This analysis advances standard N&B by providing not only the average oligomeric value but also the distribution of oligomers for every pixel in an image during long acquisition periods.

N&B: basic principles and theory
A challenging question in fluorescence microscopy is how to measure the average number of molecules in an image and their oligomerization state, or brightness. Let us consider an example with two sequences of time-lapse frames containing either four scattered fluorescent monomers or one tetramer. If the intensity changes within a pixel are analyzed using a simple average of the fluorescence intensities, the two examples will produce indistinguishable results (Fig. 1). N&B instead utilizes the first and second moments of the intensity distribution 29 , allowing discrimination between different oligomerization states (brightness) of molecules. Larger oligomers will show an increased variance, resulting from fluctuations of wider amplitude than those of monomers, caused by diffusion of aggregates moving in and out of the focal volume. In general terms, the larger the variance, the fewer the molecules that contribute to the average. Moreover, the brightness analysis can be done simultaneously in all the pixels of an image, producing oligomerization maps of entire cells on a pixel-by-pixel basis. All things considered, N&B is an ideal method for studying the oligomerization of proteins whose aggregation is spatially heterogeneous.
The original N&B theory was developed by Qian and Elson for measurements of molecules in solution 30,31 and was adapted for live-cell studies by Gratton's laboratory 29 . N&B is a moment analysis capable of measuring the apparent average number of molecules and their oligomerization state (brightness) in each pixel from a series of fluorescence microscopy images. The ratio of the square of the average intensity (first moment, ⟨k⟩) to the variance (second moment, σ²) is proportional to the apparent average number of particles, N = ⟨k⟩²/σ². The apparent brightness, B, which represents the molecular oligomerization level, is calculated as the ratio of the variance to the average intensity, B = σ²/⟨k⟩.
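As an illustration of these two moments, the N and B maps can be computed pixel by pixel from an image stack. The following is a minimal Python sketch (the full analysis is implemented in our MATLAB package; this illustration ignores detector gain, offset and readout-noise corrections). For a purely Poisson, shot-noise-limited signal, B converges to 1, the reference level for non-fluctuating signal:

```python
import numpy as np

def number_and_brightness(stack):
    """Pixel-wise N&B from an (F, height, width) image stack.

    N = <k>**2 / sigma**2   (apparent number of molecules)
    B = sigma**2 / <k>      (apparent brightness / oligomerization)
    """
    mean = stack.mean(axis=0)   # first moment, <k>
    var = stack.var(axis=0)     # second central moment, sigma**2
    with np.errstate(divide="ignore", invalid="ignore"):
        n_map = np.where(var > 0, mean**2 / var, 0.0)
        b_map = np.where(mean > 0, var / mean, 0.0)
    return n_map, b_map

# Demo: a pure Poisson (shot-noise-limited) signal gives B ~ 1,
# i.e., no fluctuations beyond photon statistics.
rng = np.random.default_rng(0)
stack = rng.poisson(100.0, size=(200, 8, 8)).astype(float)
n_map, b_map = number_and_brightness(stack)
```

With F = 200 frames, the per-pixel estimates of B scatter by roughly √(2/F) ≈ 10%, which is why the protocol recommends long sequences rather than the minimum of 25 frames.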
The pixel volume covered by images obtained with optical microscopes working in TIRF mode (assuming an illumination height of 100 or 200 nm) at maximum resolution is in the range of 0.0011 or 0.0022 µm³, respectively. Depending on the protein size and considering physiological concentrations, this volume can harbor tens to hundreds of proteins assembled into different oligomeric states. In standard N&B, all the molecular diversity is summarized in a single average oligomerization value per pixel ranging from the monomer to roughly 100-mer species. The ability to determine oligomerization heterogeneity is limited mainly by the diffusion rate of the proteins and by the capability of the acquisition device to rapidly sample in time and across a wide dynamic range of fluorescence intensity.

eN&B: statistical enhancement
In standard N&B, for each time point, F consecutive frames are acquired for the analysis of the fluorescence fluctuations and the calculation of a single oligomerization value in the sequence. A minimum of F = 25 is advised to achieve enough statistical robustness, although F = 200 should be used for deeper analysis 29 . If the oligomer population is relatively homogeneous, the average oligomer size obtained with standard N&B may be an optimal representation of the general oligomerization state of the protein. However, in some cases, a single average value may not represent the diversity of protein complexes assembled in a single pixel. For this reason, we have developed enhanced N&B (eN&B). eN&B subsamples the entire dataset of F frames using an analysis window of length w = 100, shifting the window one frame at a time in a circular manner until the entire dataset is covered. This statistical resampling results in a distribution of oligomeric values for each pixel.
The number and brightness values are recorded for each shift, ensuring that the same statistical weight is given to each frame. Hence, using eN&B, for each pixel (i, j), we obtain an array of F values of brightness B. Each brightness B_e arises from a sliding window spanning frames e to e + w − 1, with frame indices taken modulo F (circular shifting), where e goes from 1 to F. Similarly, we obtain the corresponding F values of apparent number N.
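The circular sliding-window resampling can be sketched as follows. This is a hypothetical Python illustration of the scheme above, using 0-based indices; the published eN&B package is written in MATLAB and also computes the corresponding number maps:

```python
import numpy as np

def enb_brightness(stack, w=100):
    """eN&B resampling of an (F, height, width) stack.

    For each shift e = 0 .. F-1, brightness is computed over the window
    of frames e, e+1, ..., e+w-1 with indices wrapped modulo F, so every
    frame carries the same statistical weight. Returns an (F, height,
    width) array: a distribution of F brightness values per pixel.
    """
    n_frames = stack.shape[0]
    b_dist = np.empty_like(stack, dtype=float)
    for e in range(n_frames):
        idx = np.arange(e, e + w) % n_frames   # circular window
        window = stack[idx]
        mean = window.mean(axis=0)
        var = window.var(axis=0)
        b_dist[e] = np.where(mean > 0, var / mean, 0.0)
    return b_dist

rng = np.random.default_rng(1)
stack = rng.poisson(50.0, size=(200, 4, 4)).astype(float)
b_dist = enb_brightness(stack)   # shape (200, 4, 4)
```

Because consecutive windows overlap in w − 1 frames, the F brightness values per pixel are correlated; it is their spread, not their number, that reports on coexisting oligomeric species.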

NATURE PROTOCOLS
The trajectory of the sliding window follows the time sequence of the dataset; therefore, the statistical resampling of eN&B works as a consecutive N&B measurement with time delay equal to the frame rate. When this process is repeated for different T points (see the next section), we obtain a multidimensional matrix of data containing information from the x, y pixel position, distribution of apparent number and apparent brightness in each pixel, and time.

Simulations
The power of eN&B analysis depends on multiple factors, most notably the dynamic range of oligomer sizes, their change in aggregation and their relative abundance, as well as their absolute concentration within a given measurement point. A simulation including two contrasting, complex oligomer populations highlights the superior resolving power of eN&B over standard N&B. We simulated two scenarios: one with monomers gradually forming oligomers over time (Fig. 2a), and one with different oligomers coexisting in solution (Fig. 2b). In the first scenario, eN&B shows the clear advantage of capturing individual oligomerization states (Fig. 2c); in the second scenario, the spread of eN&B delivers an approximation of the actual distribution of the oligomer population (Fig. 2d).

Photobleaching compensation and time expansion
In the short-term time dimension, camera-based N&B generally works in the millisecond to second range 32,33 , limited by the hardware capabilities of modern microscope cameras. However, to time-resolve the formation of high-order aggregates or processes with slower, larger-scale dynamics, image acquisition may require longer exposure than that offered by conventional N&B. When attempting to time-resolve long oligomerization processes through N&B analyses, the effect of photobleaching interferes with the measurements. To overcome this, we have implemented boxcar filtering algorithms 34 to detrend the decay of fluorescence intensity during the multiple light exposures of the sequential acquisition, while keeping the fluctuations intact 23,34-37 . These algorithms are implemented in our eN&B software and allow extension of the data acquisition up to 10-15 sequential time points or even more, depending on the brightness of the original sample and the frame rate (Supplementary Fig. 1). The original work by Hellriegel et al. 34 shows that even with 50% bleaching (i.e., the final frame average intensity is 50% of the original frame intensity), boxcar filtering helps to recover the correct brightness estimation. Photobleaching can be modeled by an exponential decay 35 with rate α; uncorrected measurements tend to underestimate the brightness. The optimal boxcar size depends on the bleaching speed: a smaller boxcar size should be used with faster bleaching (larger α) to optimally recover brightness. In a recent work 38 , exponential filtering detrending permitted time-resolution of the transition of monomers to dimers of an FKBP1-tagged fluorescent protein and corrected images with up to 25% of bleaching 38,39 . For both boxcar and exponential filtering, selection of the right window size was critical to correcting the bleaching without discarding the actual fluctuations.
A boxcar window of ten frames was chosen in our software because, as described in the original work 34 , in a biological context, this range will not affect the higher-frequency fluorescence fluctuation of fast-diffusing species.
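The detrending idea can be sketched as follows. This assumes the simple boxcar form described by Hellriegel et al. 34 (subtract a local moving average, then restore the global mean level); the exact implementation in our eN&B package handles boundaries and calibration details and may differ:

```python
import numpy as np

def boxcar_detrend(trace, box=10):
    """Boxcar-detrend a per-pixel intensity trace: remove the slow
    photobleaching trend while preserving the fast fluctuations that
    carry the N&B information. A 10-frame window matches the choice
    described in this protocol."""
    kernel = np.ones(box) / box
    trend = np.convolve(trace, kernel, mode="same")  # slow local average
    return trace - trend + trace.mean()              # restore mean level

# Demo: a Poisson signal bleaching to ~50% over 200 frames.
rng = np.random.default_rng(2)
t = np.arange(200)
bleached = rng.poisson(100.0 * np.exp(-t / 300.0)).astype(float)
corrected = boxcar_detrend(bleached)
```

Note that `mode="same"` zero-pads at the edges, so the first and last few frames of the corrected trace are slightly biased; a production implementation would treat the boundaries explicitly.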

Applications of the method
The oligomerization of a large number of proteins has been elucidated through N&B analysis. Examples demonstrate the applicability of N&B to a broad variety of protein families, with localization at all major cellular compartments. In the cytosol, N&B has been used to resolve the oligomerization dynamics of focal adhesion components such as paxillin and actin [8][9][10]29,40,41 , and the assembly of viral matrix proteins [8][9][10]29,40,41 . A number of membrane proteins have been subjected to N&B analysis, including annexins and uPAR 34,42 . N&B has also been applied to the study of signaling pathways, including p75, LRRK2 (refs. [43][44][45] ), ErbB1 and ErbB2 receptor tyrosine kinases 46 , and proteins involved in membrane lipid dynamics such as dynamin 2 (refs. [47][48][49][50][51] ). In the nucleus, N&B has elucidated the ligand-induced aggregation of transcription factors and has been used to discriminate between different oligomer subpopulations 20,52,53 . In addition, N&B has been used to study how DNA repair proteins bind to the DNA following the recruitment of double-strand-break factors 54 . N&B has also been applied to the study of pathogenic aggregation of peptides causing neurodegenerative diseases, such as huntingtin or α-synuclein 24,36,55 . Fluorescently tagged molecules other than proteins can also be studied by N&B, examples of which include the aggregation of DNA after lipofection 56 .
In our work, we used eN&B to study the oligomerization of the EphB2 receptor during 1-h time-lapse measurements following receptor activation. The Eph receptor is a membrane-tethered protein that forms large aggregates upon interaction with its cognate ligand, ephrin 57 . Despite playing a critical role in neural development, tissue patterning and regeneration, the dynamics of Eph receptor clustering were poorly understood 4,58,59 . We performed eN&B analysis of fluorescently tagged Eph to obtain data on the receptor's oligomerization state over time. The quality of eN&B data allowed mathematical modeling of receptor clustering and the proposition of a new mechanism for Eph signaling, termed polymerization-condensation 28 . In our experimental setup, Eph-expressing cells were stimulated with the ephrin ligand, which was presented in four different spatial configurations, namely, ligands in solution, micro-printed ligand dimers, micro-printed ligand clusters and nanopatterned clusters 60 . eN&B analysis was able to capture measurable differences between the modes of ligand presentation and retrieved characteristic oligomerization dynamics for each mode.

Comparison with related methods
Several methods have been developed that can be used to study the oligomerization states and dynamics of proteins in vivo. In this section, we briefly highlight the key alternative approaches and their advantages and disadvantages as compared to eN&B.

Spectroscopy methods
Spectroscopy methods include N&B and a broad collection of techniques that measure the fluorescence intensity of molecules as they diffuse in and out of the focal volume (for a comprehensive review, see ref. 61 ). Arguably, the most popular spectroscopy application is fluorescence correlation spectroscopy (FCS), which is widely used to efficiently measure the diffusion coefficients of fluorescent molecules and the variation in those coefficients due to the presence of different molecular species (e.g., bound or unbound pairs, oligomers). FCS can also be adapted to measure the oligomerization of proteins, provided that proper calibrations are performed 62,63 . FCS typically works on single pixels (with few exceptions 64 ), and it may therefore be challenging to capture the full diversity of oligomeric states using this approach.

Photon-counting histograms
The photon-counting histogram (PCH) method was originally developed by Chen et al. 65 and was the first method to extract molecular number and brightness information from fluorescence fluctuation data. PCH is capable of resolving heterogeneous molecular populations 66 , and it has been applied to resolve mixed oligomer populations of membrane receptors 67 . The information attainable by PCH is robust and complete; however, it is limited to single-point detection, and it requires longer data acquisition and analysis times than N&B.
Fluorescence resonance energy transfer imaging
Fluorescence resonance energy transfer (FRET) imaging is based on the detection of variations in the fluorescence intensity of a protein due to energy transfer to an acceptor protein located in close (nanometer-range) proximity. This is a very sensitive approach for detecting the interaction of protein pairs or, in qualitative terms, the formation of oligomers. FRET imaging includes a diverse collection of approaches, such as sensitized emission, acceptor photobleaching and anisotropy-based homoFRET 12,58 . These approaches show different capabilities in regard to quantification of the stoichiometry of a narrow oligomeric range 68 . The most sensitive FRET versions, which include single-molecule detection 69,70 and fluorescence lifetime imaging microscopy (FLIM) 71 , can be used to quantify a larger range of oligomeric states, but data acquisition is relatively slow (on the order of minutes) and is better suited to capturing the dynamics of slow assembly processes such as amyloid aggregation.

Other methods
Super-resolution microscopy and single-molecule detection can also be used to estimate the number of proteins contained in a complex by counting fluorophore photobleaching steps 72,73 . Intensity-based methods can quantify local concentrations of proteins but cannot extract the oligomer size distribution.

Limitations of eN&B
The camera-based eN&B technique depends on the system's ability to acquire short-exposure images while maintaining high collection efficiency (sensor quantum yield) and a high collection rate with low noise. These characteristics will determine the highest protein diffusion rate that can be imaged using this technique 74 . The protein diffusion rate will also determine the ideal time resolution of the consecutive eN&B measurements. Fast-diffusing proteins will require short exposure times, resulting in a very fast 100- to 200-frame acquisition. For slow-diffusing proteins, the camera exposure time will be longer, and therefore capturing 200 frames will take a substantial amount of time. Even if a second time point were captured right after the first one, there would be a minimum lapse, on the order of minutes, between the start of the two consecutive time points. In extreme cases, in which the protein-binding kinetics is fast, the amount of clustering occurring during a single F = 200 acquisition may be substantial. In most cases, however, the characteristic acquisition time will be faster than in standard FCS or FLIM applications.
The characteristic diffusion rate for proteins inside cells, considering different sizes and cell compartments, ranges between 30 and 0.03 μm² s⁻¹ (ref. 75 ). This range can be captured with an approximate exposure time range of 1 s to 0.05 ms. Most cameras will deal without issue with the slow side, which typically corresponds to protein diffusion rates within membranes. However, acquiring 200 frames at 1 s per frame will expose the cell to a considerable amount of light, and photobleaching will have to be assessed carefully. The fastest diffusion rates that can be captured by eN&B will be limited by both the shortest exposure time and the fastest acquisition speed of the camera, which, at the time of writing, for most brands are ~0.5 ms and ~100 frames s⁻¹, respectively. If using confocal scanning microscopes, the single-pixel dwell time will be considerably shorter, on the order of microseconds. However, point-scanning systems trade off this speed against the time taken to scan through an entire image, which will be considerably longer, as well as the lower sensitivity of the detectors, which peaks at a quantum efficiency (QE) of 45% (GaAsP), as compared to the currently available 95% QE for high-sensitivity cameras (EMCCDs and backside-illuminated complementary metal-oxide-semiconductor (BSI-CMOS) sensors). When working with fast-diffusing species (e.g., small peptides in the cytoplasm), two things must be considered. First, if the photon budget is low, a short exposure time will not be sufficient to collect enough photons to reach an optimal signal-to-noise ratio. Second, the relative diffusion rate of GFP should be taken into account when fusing it to small peptides. The diffusion rate of GFP in the eukaryotic cytoplasm is ~27 μm² s⁻¹ (refs. 75,76 ), and molecular species diffusing faster than GFP may be slowed down when fused to it.
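These exposure figures can be cross-checked with a standard dwell-time estimate, τ ≈ ω²/(4D), for the time a molecule with diffusion coefficient D spends crossing an observation spot of radius ω. The helper below is hypothetical, and the 0.25-µm radius is our illustrative assumption, not a value from this protocol:

```python
def dwell_time_s(d_um2_per_s, radius_um=0.25):
    """Characteristic 2D dwell time tau = w**2 / (4 * D) of a molecule
    crossing an observation spot of radius w (hypothetical helper;
    the radius value is an assumption for illustration)."""
    return radius_um**2 / (4.0 * d_um2_per_s)

fast = dwell_time_s(30.0)   # fast cytosolic species: sub-ms dwell time
slow = dwell_time_s(0.03)   # slow membrane proteins: ~0.5 s dwell time
```

The camera exposure should be comfortably shorter than these dwell times, which is consistent with the ~0.05-ms to 1-s exposure range quoted above.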
We have not performed a formal analysis of the optimal range of oligomers that can be measured with eN&B. However, theoretical and experimental measurements with the Eph receptor suggest that eN&B can discriminate oligomers within the 1-to 40-mer range without saturating the intensity signal. Mathematical estimations show that expanding that range up to 100-mer improves the fitting between imaging data and mathematical models 28 , which suggests that eN&B might be applied to an even broader range of species. However, this evidence is theoretical, and a formal study of the oligomer range is missing in the N&B field.
The camera exposure time should be set to match the log-linear region of the autocorrelation curve obtained during FCS measurements of the monomer, so that all proteins moving in and out of the focal volume are captured by the camera (see Experimental design, 'Protein diffusion and camera exposure calibration'). However, the diffusion coefficient of the protein may decrease with the size of the oligomer 77,78 . This implies that, above a certain threshold size, the camera's photon-collection time will oversample the actual aggregate dynamics and artifactually reduce the number of observed fluctuations (i.e., the larger aggregates may need more time than the set camera exposure time to move in and out of the focal volume). We use, as a rule of thumb, a cutoff of 40-mer as the upper limit for oligomer detection. However, a mathematical fitting of the empirical data from EphB2 oligomerization suggested that establishing 100-mer as the upper detection limit would result in minimum information loss and a better fit of the equations compared to the analysis using a 40-mer upper limit 28 . This broad range of detection may be even wider for membrane proteins, for which diffusion is not affected by the size of the oligomer 78 .
Other important parameters to consider in calibrating and designing the experiments in this protocol are the linearity of the signal output and the dynamic range of the detector. Given a specific setting configuration for the acquisition device (in this case a camera), the measured intensities must scale linearly with the input photons. The detector's dynamic range will set the limit in capturing the larger-intensity fluctuations. If labels of lower molecular brightness are used, the number of frames can be increased to reduce statistical noise and spreading of the standard deviation of the brightness values 29 .
The statistical resampling leading to eN&B tackles the limitation of N&B of providing no information beyond a weighted mean aggregate size per pixel. eN&B cannot perfectly discriminate the relative concentrations of each oligomer size, but it adds more statistical representativeness to the estimations than standard N&B (Fig. 2). The resampling in eN&B is analogous to the standard analysis done on time series to produce frequency spectra, in which an ideal spectrum with discrete and separated tonal frequencies is used to create a synthetic signal 79,80 . The result is a spectrum with side bands and aliasing that does not really reproduce the original discrete tones. To faithfully reproduce the original spectrum, very long samples with high sampling rates and a completely ergodic series would be necessary. On the other hand, when applied to time series with a broadband (continuous) spectrum, the resampling recovers the spectrum reasonably well, even with suboptimal sampling parameters. Therefore, eN&B is an optimal algorithm for resolving the oligomerization of proteins over time. During most polymerization processes, a broadband distribution of oligomers sequentially moves to a higher central size and dispersity. eN&B will not unmix oligomers that overlap perfectly in time and space, nor extract a distribution that exactly mirrors the real one, partly because sampling rates and series lengths are limited by technology. However, the distributions will be centered on the dominant oligomers, and oligomers far from them will be gradually underrepresented. This was found to be better than a single average of the entire population (as in N&B), and it delivered results more consistent with theoretical-mathematical models 28 .

Microscope setup
N&B has been implemented successfully with multiple types of fluorescence microscopes, both single-point scanning systems and full-field camera-based systems. In particular, single-point scanning systems with analog 20,32,54,81 and photon-counting modes have been used, both in confocal (single-photon) 9,24,36,42-44,48,53 and two-photon mode 8,34,49 . EMCCD cameras have been used only in TIRF microscopes 10,32,33,40,50 . Other systems have been used for FCS analysis and hence, in principle, can be used for N&B and eN&B; these include selective-plane illumination microscopes 82 and spinning-disk confocal microscopes 64,83 . Enhanced versions of such systems, such as lattice light-sheet or two-photon spinning-disk microscopes, are likely to enable eN&B as well.
Here, we describe a detailed protocol to perform eN&B on an EMCCD camera-based TIRF microscope because of the superior signal-to-noise ratio offered by such a system. TIRF illumination is restricted to a 100- to 200-nm region immediately adjacent to the glass-water interface. Although the plasma membrane is the ideal compartment for TIRF microscopy, the technique can also be used to study cytosolic proteins; it is possible to reach 1-2 μm deep into the sample when imaging slightly below the critical TIRF angle (oblique-incidence geometry). This range allows imaging of actin, tubulin or even nuclear proteins, although it is important to keep in mind that there will be a contribution from out-of-focus fluorophores 84,85 .
Following the calibration strategies described in the original paper on the N&B method 29 , the eN&B method and software can also be used to analyze data obtained from confocal 9,24,36,[42][43][44]48,53 and two-photon microscope setups in both photon-counting 8,34,49 and analog modes 20,32,54,81 , as well as those obtained from light sheet systems 82 .

Experimental design
Cell culture preparation
For all the experiments, glass-bottom dishes compatible with confocal and TIRF microscopes are first coated with a cell-adhesive polypeptide. When using a TIRF microscope, we recommend keeping the plate brand and model constant for all the experiments. Different brands and models may have different glass thicknesses, which affect the TIRF angle and the objective working distance.
The dishes are incubated under the cell culture hood with 300 μl of poly-L-lysine (PLL) diluted to 0.05% (wt/vol) in PBS for 90 min at room temperature (15-25 °C) and then rinsed three times with PBS and Milli-Q water. Controls should be carried out to make sure that the cell-adhesive coating does not affect the protein under study; for example, we found that laminin can activate the Eph receptor efficiently.
At this point, coated dishes can be air-dried and kept at 4°C for a maximum of 24 h before the next step. In the example described here, ephrinB1-Fc (R&D Systems) was selected as a ligand for the EphB2 receptor and was presented to the cells (at Step 14 of the Procedure), either in soluble form or immobilized on the substrate through a printing procedure. Control (mock) stimulation was performed on PLL-coated slides and on slides with Fc fragment (Jackson ImmunoResearch) printed onto a PLL coating.

Replicates and controls
Several controls can help to place the brightness measurements in the context of protein dynamics. Positive controls should use antibodies or other molecules that induce the oligomerization of the protein of interest with high efficiency. In a negative-control experiment, the oligomer should remain unassembled during the entire time-lapse recording; this can be achieved by imaging the cells in the absence of any induction or by using inhibitory drugs. Mutant proteins, such as dominant negatives, can be used to calibrate the sensitivity of the method to different oligomerization kinetics. Photobleaching can be quantified by integrating the fluorescence from a single cell at the beginning and the end of either a single time point or the entire time-lapse recording. A correct experimental design should also include sufficient replicates to obtain statistically significant data to compare the different controls or samples.

Instrument calibration
Ideally, image acquisition settings are determined once at the beginning of an experimental project (Steps 1-10) and maintained constant throughout the experimental procedure for consistency 32,33 . The sections below describe steps for determining key camera and illumination settings affecting the fluctuations extracted in eN&B.
Camera noise calibration. It is important to characterize the camera's dark count at multiple pixel readout rates and its signal-to-noise ratio at several EM gain settings (Step 15). This serves to optimize the image acquisition conditions as a trade-off between speed and instrumental background noise. Camera dark current can be measured with the shutter closed and a 500-ms exposure time (for a description of the choice of exposure, see Experimental design, 'Protein diffusion and camera exposure calibration'). Recordings of 200 frames should be obtained at several pixel-transfer rates. The dark count histograms obtained are analyzed with regard to their mean and standard deviation values, as well as their uniformity across the EMCCD chip. This calibration step aids in identifying excessive differences in pixel noise, hot pixels and unusual noise patterns, which might affect the analysis and hence should be excluded.
Gain. We recommend optimizing the EM gain to maximize the signal-to-noise ratio of the images (Step 7). This can be done one time and the result can be used as the standard gain value for a particular microscope (even for other applications). This calibration can be done by imaging fluorescent proteins or fluorescently labeled antibodies adsorbed to a clean glass surface and minimizing the signal's coefficient of variation (see Box 1 for a step-by-step procedure for gain optimization).
Pixel size. In our lab, imaging is performed with a Nikon NSTORM system equipped with a ×100 Apo TIRF numerical aperture (NA) 1.49 oil-immersion objective and a ×1.5 tube lens engaged (Step 6). However, other TIRF and confocal microscopes can be used, provided that high-NA objectives are used. The pixel size is determined using the following equation: P_si = P_sc / (M × r_l), where P_si is the pixel size on the image, P_sc is the pixel size on the camera sensor, M is the objective magnification and r_l is the magnification of the relay lens. For our setup, we obtain a final pixel size of 106 nm. A small pixel size is essential for measuring signal fluctuations correctly.
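Plugging in numbers for a setup like ours gives the quoted pixel size. The 16-µm camera pixel below is our assumption (a typical EMCCD sensor pixel size); the ×100 objective and ×1.5 relay lens match the setup described above:

```python
# Image pixel size: P_si = P_sc / (M * r_l)
p_sc_nm = 16_000          # camera sensor pixel size, nm (assumed EMCCD value)
magnification = 100       # objective magnification, M
relay_lens = 1.5          # relay (tube) lens magnification, r_l
p_si_nm = p_sc_nm / (magnification * relay_lens)
# p_si_nm comes out to ~106.7 nm, matching the ~106 nm quoted in the text
```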
Laser. The illuminating laser power should be determined using two empirical criteria (Step 5). The first is to ensure that the fluorescence intensity attributable to the fluorescent construct (in our case, Eph-mRuby) is solidly in the middle of the camera's dynamic range (i.e., a peak value of ~35,000 digital levels on 16-bit images), while ensuring there are no saturated pixels. The second criterion is to minimize photobleaching during a single 200-frame acquisition, such that the final average intensity within an imaged cell shows no more than a ~5% reduction in fluorescence compared with the average intensity in the first image.
TIRF angle. We routinely use the commercial Nikon setup for stochastic optical reconstruction microscopy (the NSTORM microscope) for our experiments. This setup has a robust optical design for focusing the illumination laser light onto the back focal plane of the objective to produce TIR. As a result, the position of the TIR focusing lens can be adjusted once and repeatedly used in the same setting to obtain a similar evanescent field over several imaging sessions. In our example, we imaged cells expressing Eph-mRuby that were strongly adhered to the glass surface and well spread, allowing us to select a field of view showing isolated, non-overlapping cells. The TIR lens position can be adjusted to optimize visualization of (i) disappearance of intracellular vesicles that transport the labeled membrane protein and (ii) increase of detected fluorescence arising from a local field enhancement near the critical angle for the water-glass interface. It is important to ensure that the intensity counts are consistent between experiments with different TIR lens positions. This optimization process is rapid and can easily be performed for each experiment in microscope systems with less robust TIR lens positioning (Step 10).

Camera readout mode. The commercial NSTORM microscope we use is equipped with an Andor iXon 897 EMCCD camera capable of either a 10-MHz readout rate at 14-bit or a 1-MHz readout rate at 16-bit. We use the slower 1-MHz rate to access the larger 16-bit range and obtain a larger dynamic range during acquisition while minimizing readout noise (Step 7). The same rationale should be followed for systems equipped with a different camera. The 10-fold slower camera readout rate is not problematic because of the relatively long (500-ms) exposure time our measurements required.

Box 1 | Determination of the EM gain setting that maximizes the signal magnitude/signal fluctuation ratio
This box describes how to calibrate the gain of an EMCCD camera to maximize signal amplification without amplifying the noise. Gain calibration needs to be done once, and it produces a characteristic value for the camera (useful for eN&B as well as any other technique).
Procedure
1 Treat a LabTek chambered glass slide with 1 M NaOH for 10-15 min and let it air-dry.
2 Dilute a fluorescently labeled protein/antibody in PBS to ~0.1-1 ng ml−1.
c CRITICAL STEP Prepare a sample of fluorescently labeled proteins/antibodies using a fluorophore in the same spectral range as will be used in the experimental project (i.e., ATTO 488 for GFP).
3 Incubate the diluted solution on the glass surface for 2-5 min.
4 Wash the slide thoroughly with PBS.
5 Image the prepared sample to ensure that individual fluorescent proteins/antibodies can be visualized as isolated bright spots on a dark background.
c CRITICAL STEP If too many/too few individual spots are visualized, repeat sample preparation and adjust the protein/antibody dilution or the incubation time.
6 Record several hundred frames over a wide range of EM gain settings, imaging the single molecules as they gradually photobleach.
c CRITICAL STEP We suggest the following as a starting range: 10-1,000 gain in log-scale intervals (i.e., 10, 30, 100, 300 and 1,000). Recommended exposure time: 30-50 ms.
7 Perform single-particle tracking of the spots by fitting each of them with a 2D Gaussian function with constant offset.
c CRITICAL STEP Several types of open-source software are available to do this, including u-track 95 and ThunderSTORM 96 . Note that the latter will require that localizations in each frame be linked together to obtain a fluorescence trajectory for each single protein/antibody imaged.
8 Remove the contribution from background fluorescence before fitting.
c CRITICAL STEP If the software used does not provide this quantity (background count) as an output, obtain it by subtracting the number of background photons/counts from the integrated number of photons/counts.
9 For each EM gain setting, first calculate the mean and standard deviation of the background-subtracted photon/count values for each single-molecule fluorescence trajectory. Second, calculate the ratio of the mean to the standard deviation for the fluorescence trajectory. The EM gain setting yielding the maximal value of this ratio is the optimized setting.
10 If camera settings allow different pixel transfer rates, repeat steps 6-9 at different readout speeds (e.g., 1 MHz, 5 MHz and 10 MHz).
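The gain-selection criterion of Box 1 (step 9) amounts to maximizing the mean/standard-deviation ratio of the background-subtracted single-molecule traces. A minimal sketch with fabricated trajectories (the gain values, intensities and noise levels below are arbitrary illustrations, not measured data):

```python
import numpy as np

def best_em_gain(trajectories_by_gain):
    """Pick the EM gain that maximizes the mean/std ratio (Box 1, step 9).

    `trajectories_by_gain` maps an EM gain setting to a list of
    background-subtracted single-molecule intensity traces (1D arrays).
    The per-trajectory mean/std ratios are averaged per gain, and the
    gain with the highest average ratio is returned.
    """
    scores = {}
    for gain, traces in trajectories_by_gain.items():
        ratios = [t.mean() / t.std() for t in traces if t.std() > 0]
        scores[gain] = float(np.mean(ratios))
    return max(scores, key=scores.get), scores

# Toy data in which the middle gain gives the least relative noise
rng = np.random.default_rng(1)
data = {
    10:   [rng.normal(100.0, 30.0, 500) for _ in range(5)],
    300:  [rng.normal(1000.0, 100.0, 500) for _ in range(5)],
    1000: [rng.normal(3000.0, 900.0, 500) for _ in range(5)],
}
gain, scores = best_em_gain(data)
```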
Analog number and brightness calibration. Fluorescence microscopes are affected by instrumental noise. As such, the analysis requires a calibration step, which is instrument dependent, particularly for analog mode. Previous work 81 addressed this problem and serves as the basis for the calibration approach described here. A set of dark images contains the information required for eN&B calibration. Two components can be discerned in the intensity distribution of these dark sets (Fig. 3a): a Gaussian part and an exponential part (linear on a log scale). The center of the Gaussian component represents the offset of the system, whereas its standard deviation is the readout noise (Fig. 3b). The exponential component is used to obtain the conversion factor between measured intensities and photons by extracting the slope S of the curve (Fig. 3c). The calibration should be performed separately for each experiment, as subtle variations are observed with day-to-day usage of the instrument. We provide a software package with an automated fitting tool for this purpose (http://bioimaging.usc.edu/software.html) (Fig. 3d) (Step 22).
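The dark-set decomposition into a Gaussian core (offset and readout noise) and an exponential tail (slope S) can be sketched as follows. This is an illustrative reimplementation on synthetic data, not the packaged automated fitting tool; thresholds such as the 6-sigma tail cutoff and the sparse-bin filter are heuristic choices:

```python
import numpy as np

def calibrate_dark_set(dark_counts):
    """Estimate offset, readout noise and exponential slope S from a set
    of dark frames (cf. Fig. 3a-c)."""
    offset = np.median(dark_counts)                     # center of the Gaussian part
    core = dark_counts[np.abs(dark_counts - offset) < 3 * dark_counts.std()]
    noise = core.std()                                  # readout noise
    # The exponential part is linear on a log scale: fit log(counts) vs.
    # intensity well above the Gaussian core to extract the slope.
    tail = dark_counts[dark_counts > offset + 6 * noise]
    hist, edges = np.histogram(tail, bins=30)
    centers = 0.5 * (edges[:-1] + edges[1:])
    keep = hist > 5                                     # ignore sparse far-tail bins
    slope = np.polyfit(centers[keep], np.log(hist[keep]), 1)[0]
    return offset, noise, -1.0 / slope                  # decay constant of the tail

# Synthetic dark set: Gaussian readout noise plus an exponential tail
rng = np.random.default_rng(2)
dark = np.concatenate([rng.normal(500.0, 10.0, 200_000),
                       560.0 + rng.exponential(50.0, 5_000)])
offset, noise, S = calibrate_dark_set(dark)
```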
Monomer brightness calibration. The brightness of single monomers must be estimated from samples in which the protein exists in its free, monomeric form. In our case study with the Eph receptor, we imaged cells that were seeded for 24 h on PLL-coated plates and had no exposure whatsoever to any cognate ligand. For other membrane receptors, a similar procedure must be performed, avoiding serum components or coatings that may bind the protein or interfere with its oligomerization. Excessive overexpression may also trigger self-aggregation of proteins and must likewise be avoided. For other proteins, it is important to identify conditions in which the protein is found in a monomeric state 29 . If obtaining a monomeric population of the protein of interest is not possible, a variant with truncated or mutated oligomerization interfaces can be generated, as long as its diffusion rate is similar to that of the native protein.
Based on the brightness value of the monomer, the brightness of the different oligomers (i-mers) can be calculated as follows:

B_imer = i × B_monomer

where B_imer is the brightness of the i-mer, i is the size of the oligomer and B_monomer is the measured brightness of the monomer. It is important to note that some fluorescent proteins are known to self-aggregate or act as dimers 86 . To overcome fluorescent protein-induced dimerization artifacts, several monomeric fluorescent proteins have been described [86][87][88] .
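In code, this relation and its inverse (assigning a measured pixel brightness to the nearest i-mer) are one-liners; the brightness values in the demo are arbitrary placeholders:

```python
def imer_brightness(i, b_monomer):
    """B_imer = i * B_monomer: an i-mer is i times as bright as one monomer."""
    return i * b_monomer

def apparent_oligomer_size(b_pixel, b_monomer):
    """Invert the relation, rounding a measured pixel brightness to the
    nearest integer oligomer size (clipped at the monomer)."""
    return max(1, round(b_pixel / b_monomer))

# With a hypothetical monomer brightness of 0.15 (arbitrary units),
# a pixel brightness of 0.62 maps to a tetramer.
tetramer = imer_brightness(4, 0.15)
size = apparent_oligomer_size(0.62, 0.15)
```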
Protein diffusion and camera exposure calibration. It is essential to determine the ideal camera exposure time so that fluctuations are accurately captured between frames (Steps 8 and 9). This parameter is related to the diffusion rate of the protein of interest and can be defined using the autocorrelation function (ACF) from FCS analysis (Fig. 4). The details of the protein mobility coefficient may be biologically relevant in addition to the oligomerization dynamics resolved by eN&B. In the interest of space, readers are directed to a number of excellent review and method articles about FCS [89][90][91] .
An important factor to consider is the fluorescence density of the sample, as FCS works optimally when a low concentration of protein is present. Molecular crowding saturates the focal volume and reduces the amplitude of the ACF. In FCS, this can be avoided by selecting low-expressing cells or by controlling the level of fluorescence with a photoactivatable GFP (paGFP), in which a subset of tagged proteins can be activated before FCS measurements. Photoactivatable proteins yield robust FCS results 92 but are not a strict requirement. They cannot, however, be used for eN&B, because the dark (non-activated) species would artificially reduce the brightness B value. The diffusion measurements and eN&B should thus be carried out using the same standard (monomeric) fluorescent protein.
Most confocal microscopes have the capability of performing both FCS and N&B measurements. A number of commercial platforms, including Zeiss and Olympus, now provide FCS modules that will automatically compute the ACF curve and provide mobility coefficients. Alternatively, raw data can be analyzed through a number of ImageJ plugins (https://imagej.nih.gov/ij/download.html) 93 or SimFCS (Gratton Lab, University of California Irvine: https://www.lfd.uci.edu/globals). Using a paGFP-tagged EphB2, we previously used established protocols 28,92 to determine that the most appropriate camera exposure time is 500 ms when using the Zen FCS module of a Zeiss 780 platform (see Box 2 for a detailed procedure).
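For readers who want to see what these FCS modules are fitting, the normalized ACF, G(τ) = ⟨δF(t)δF(t+τ)⟩/⟨F⟩², can be computed directly from an intensity time trace. A sketch on a synthetic trace with slow, correlated fluctuations (not real FCS data):

```python
import numpy as np

def autocorrelation(trace):
    """Normalized fluorescence ACF computed from a 1D intensity trace.
    G(0) equals the variance over the squared mean; G decays toward zero
    beyond the correlation time of the fluctuations."""
    f = np.asarray(trace, dtype=float)
    df = f - f.mean()
    n = len(f)
    raw = np.correlate(df, df, mode="full")[n - 1:]   # lags 0 .. n-1
    acf = raw / np.arange(n, 0, -1)                   # per-lag normalization
    return acf / f.mean() ** 2

# Fluctuations correlated over 20 samples, plus fast uncorrelated noise
rng = np.random.default_rng(3)
slow = np.repeat(rng.normal(100.0, 10.0, 100), 20)
trace = slow + rng.normal(0.0, 1.0, 2000)
g = autocorrelation(trace)
```

In a real measurement, fitting `g` with a diffusion model (e.g., the two-species model cited in Box 2) yields the mobility coefficient δ.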
Once the protein mobility coefficient (δ) and the focal volume waist (ω0) are known for the protein of interest (Box 2), the following guidelines can help the reader choose the optimal acquisition parameters. The average time a protein remains in a focal volume (pixel), also known as the residence time, can be computed as ω0²/(4δ) 38 . For camera-based microscopes, the time to take a single whole frame, t_frame, depends mainly on the camera's technical specifications, such as readout rate, number of pixels per frame and the exposure (or dwell) time, t_dwell, needed to collect the protein fluorescence signal. When analyzing proteins with small mobility coefficients (0.03-0.04 µm² s−1) with a fast camera (10 MHz, 512 × 512 pixels), the readout time is ~26 ms, and t_dwell (500 ms) is almost equivalent to t_frame (526 ms). Therefore, in these systems, to capture fluctuations (particles moving in and out of the focal volume), the exposure time is selected such that t_frame > ω0²/(4δ), to allow the proteins to diffuse across several pixels. If the sample is bright enough, the t_frame value can be increased simply by pausing after each acquisition 1 . This avoids averaging out fluctuations and may increase the statistical significance of the fluctuations. For laser-scanning microscopes, t_dwell is the time to collect the fluorescence signal at a single pixel, and t_frame then depends on the number of pixels p as t_frame ≥ p·t_dwell. In these microscopes, t_dwell should be shorter than ω0²/(4δ) to avoid averaging out the fluctuations, i.e., to reduce the probability of a particle entering or exiting the focal volume during a dwell; and t_frame should be long enough to observe particle fluctuations (t_frame > ω0²/(4δ) > p·t_dwell). Therefore, t_dwell and t_frame can be readily selected for a particular microscope configuration if the approximate δ value is known.
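These guidelines can be turned into a quick numerical check. Using δ from the text (midpoint of the 0.03-0.04 µm² s−1 range) and an assumed, typical focal waist of ω0 = 0.25 µm (substitute your own calibrated value):

```python
# Choosing t_dwell / t_frame from the FCS-derived mobility coefficient.
delta = 0.035       # um^2 s^-1, midpoint of the range quoted in the text
omega0 = 0.25       # um, assumed focal volume waist (calibrate per Box 2)
residence = omega0**2 / (4 * delta)    # average residence time in s

t_readout = 0.026   # s, 10-MHz readout of a 512 x 512 frame (from text)
t_dwell = 0.500     # s, chosen exposure time
t_frame = t_dwell + t_readout

# Camera-based criterion from the text: t_frame must exceed the residence time
ok = t_frame > residence
```

With these numbers the residence time is ~0.45 s and t_frame is 0.526 s, so the 500-ms exposure satisfies the criterion.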

Acquisition framework
Once the cells are ready to be imaged, the acquisition starts by capturing 200 sequential frames with an exposure time per frame that is proportional to the diffusion rate of the protein, as determined by FCS, raster image correlation spectroscopy (RICS) 94 or an equivalent method (Steps 16-21). The camera exposure time will determine the interval between time points: because the 200 frames are treated as a single time point, the longer the exposure time, the longer it will take to capture the 200 frames. For short exposure times, many positions can be recorded at approximately the same time; for longer exposure times, the time needed to return to the same position may make the intervals between time points too large to properly resolve the dynamics of the protein of interest.

Box 2 | Application of FCS to quantify the protein mobility coefficient and define camera exposure settings
This box describes a protocol for carrying out FCS using photoactivatable proteins. This approach allows efficient extraction of protein diffusion coefficients (δ). δ values are required to calibrate the camera exposure time during eN&B imaging. The detailed original protocol can be found in ref. 92 .
Procedure
c CRITICAL The focal volume waist (ω0) and structural parameter (S) must first be calibrated using a fluorophore with a known diffusion coefficient, such as eGFP, FITC or certain Alexa/ATTO probes of the relevant emission channel (steps 1-3).
1 Place a drop of ~1 nM solution of Atto488 in water onto a glass coverslip directly over the ×63/1.4-NA water-immersion objective of a Zeiss 780 confocal microscope. At 25 °C, this small molecule has a known diffusion coefficient of 400 μm² s−1. To activate the paGFP, draw an ROI along the membrane, using the membrane-mCherry as a guide. Activate the paGFP using the 405-nm laser line, with the laser power and dwell time optimized for each cell and level of transfection: the GFP signal should be visible, but the fluorescence should still be sparse.
c CRITICAL STEP Activating a small, defined region will reduce phototoxicity and experimental time.
7 Within the Zen FCS module, select the point of acquisition with the crosshairs option, using the mCherry reference to identify the membrane. In our setup, the FCS measurement of EphB2-paGFP was carried out with the 488-nm laser line by acquiring 4 × 25-s cycles of data collection. Multiple cycles were collected because the membrane may move in and out of focus slightly, so the best of the four cycles was analyzed.
c CRITICAL STEP Measuring small structures such as membranes can be difficult; therefore, the application of scanning FCS may be desirable.
8 Fit the FCS data to a previously published two-species model 97 . This provides the ACF and, in our case, resulted in the identification of two rates of motion: 0.03 and 0.04 μm² s−1 at regions with and without cell-to-cell contact, respectively.
Switching between positions must be done manually, unless a custom macro is set up for the specific microscope system. If automation is not possible, time annotation must be done manually at the beginning of each time-point acquisition. This requires the presence of the researcher for the entire duration of the acquisition. For continuous imaging, we advise dividing the acquisition into contiguous batches of 200 frames and treating them as individual time points.
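The batching advice for continuous imaging can be expressed as a simple array reshape; frames that do not fill a final complete 200-frame batch are discarded:

```python
import numpy as np

def split_into_timepoints(stack, frames_per_tp=200):
    """Divide a continuous acquisition (n_frames x H x W) into contiguous
    batches of `frames_per_tp` frames, each treated as one eN&B time point.
    Trailing frames that do not fill a complete batch are dropped."""
    n_tp = stack.shape[0] // frames_per_tp
    return stack[: n_tp * frames_per_tp].reshape(
        n_tp, frames_per_tp, *stack.shape[1:]
    )

movie = np.zeros((1050, 64, 64))         # e.g. ~9 min of imaging at 500 ms/frame
timepoints = split_into_timepoints(movie)  # 5 time points of 200 frames each
```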

eN&B analysis
We developed a user-friendly software package with an intuitive interface to perform eN&B (Steps 22-30). The software and an example dataset analyzed during this study are available at http://bioimaging.usc.edu/software.html (see also Supplementary Video 1). Our software can be used to extract brightness values from fluorescence fluctuations in time-lapse image sequences. The code currently requires data to be organized as multiple multilayer stacks of images acquired at different time points, in which a single file contains a sequence of images (see Experimental design 'Acquisition framework' and Fig. 5a). The software can perform two types of analysis: (i) full statistical resampling, which performs window-frame analysis on each of the time-point image sequences, providing a distribution of oligomerization states for each pixel; or (ii) single-value analysis, in which only the mean value of oligomerization is reported. The full statistical resampling (i) enhances the statistical resolution of the method at the expense of longer computational time. The single-value analysis (ii) can be used for a rapid overview of the experiment.
The software uses LOCI Bio-Formats (https://loci.wisc.edu/software/bio-formats) to load microscopy data (Nikon proprietary file format in our case). In an effort to simplify adoption of the technique, we have created a TIFF file importer. The user can convert proprietary file formats (e.g., Olympus, Zeiss or Leica) to TIFF sequences before performing analysis. If TIFF file sequences are used, the number of frames per time point and the total number of time points must be specified at the software interface.
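Organizing an exported TIFF sequence into per-time-point batches, mirroring the 'frames per time point' and 'total time points' fields of the interface, might look as follows. The file-naming pattern is hypothetical; adapt the glob to your exporter's convention:

```python
import tempfile
from pathlib import Path

def group_tiff_sequence(folder, frames_per_tp=200):
    """Group a sorted TIFF sequence into per-time-point file lists.
    Only complete batches of `frames_per_tp` files are returned."""
    files = sorted(Path(folder).glob("*.tif"))
    n_tp = len(files) // frames_per_tp
    return [files[i * frames_per_tp:(i + 1) * frames_per_tp]
            for i in range(n_tp)]

# Demo with an empty dummy sequence: 410 files yield 2 complete time points
tmp = Path(tempfile.mkdtemp())
for i in range(410):
    (tmp / f"frame_{i:05d}.tif").touch()
groups = group_tiff_sequence(tmp)
```

Zero-padded frame indices keep lexicographic and numeric ordering in agreement, which the sorting above relies on.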
An image of the time series is then presented to the user for the purpose of selecting an ROI in the field of view (Fig. 5b). This allows selective analysis of specific cells and inclusion of part of the background for reference during analysis. In the resulting scatter plot, each pixel of the image is represented in terms of intensity and brightness (Fig. 5c,d). The portion in the ROI related to background will generally provide a cluster at lower intensity values, whereas the sample will be shifted toward higher intensities. Manually selecting the boundary between these clusters is necessary for ensuring correct calculation of oligomerization levels.

Output data
The eN&B software produces a series of images, plots and datasheets containing the measurements from the brightness analysis (Fig. 6):
• Raw 16-bit TIFF grayscale images of the selected cell for each time point after photobleaching detrending. Only the first of the 200-frame series is shown (Fig. 6a).
• Oligomerization maps: color-coded images of the cells, with each pixel colored (using the MATLAB jet colormap) on a scale according to the average oligomer size present in that pixel. Different oligomer binning options are presented to enhance oligomer populations contained in a narrow range of sizes (Fig. 6b). Each binning option corresponds to a differently equalized colormap focusing on smaller, medium-sized or larger oligomers, or representing them evenly, and is saved as 16-bit TIFF and PNG files.
• i-mer plots displaying the time evolution of up to 40-mer oligomers (Fig. 6c). These values are provided for multiple tolerances (sigma) around the value of the monomer.
• The abundance distribution of oligomers accumulated for all pixels in the image at each time point, for the single-value analysis (Fig. 6d) or the statistically enhanced analysis (Fig. 6e). Of note, these distributions are not normalized by the total number of pixels; therefore, the integral of the distribution grows with the cell size. Raw data are also provided to allow the user to handle the data independently.
• Excel files containing image-histogram sum and percentage data from the eN&B analysis. The values include the total number of pixels inside the selected ROI that are at a specific oligomerization level per time point. These values are provided for multiple tolerances (sigma) around the value of the monomer. Different files are provided for the quantification of the monomer to 40-mer range or the monomer to 100-mer range.
• The full eN&B file can be saved as a MATLAB (.mat) file and includes the oligomerization distribution of each pixel for every time point.
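Because the exported abundance distributions are not normalized, comparing cells of different sizes requires dividing by the pixel count, which the user can do on the exported histograms; a minimal helper:

```python
import numpy as np

def normalize_abundance(hist, n_pixels):
    """Convert a raw per-time-point oligomer abundance histogram (pixel
    counts per oligomer size, as exported to Excel) into fractions, so
    distributions from cells of different sizes can be compared."""
    return np.asarray(hist, dtype=float) / n_pixels

# Hypothetical histogram: 120 monomer, 60 dimer, 20 trimer pixels in a 200-pixel ROI
frac = normalize_abundance([120, 60, 20], n_pixels=200)
```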

Materials
Biological materials
• Cells expressing fluorescent proteins: in the example described in this protocol, we used the HEK293T:EphB2_mRuby cell line, which was generated by lentivirus transfection (ViraPower Lentiviral Packaging Mix, Thermo Fisher Scientific) of the plasmid pLenti.CMV:EphB2_mRuby. The plasmid pCDNA3_EphB2_mRuby was used as a source plasmid to excise the fusion construct. The cloning protocol is detailed in the original publication 28 . The plasmids pLenti.CMV:EphB2_mRuby and paGFP-EphB2 are available from the authors upon request. Alternative generic constructs for expression of fluorescent proteins, including mCherry or paGFP, can be obtained via Addgene.
• HEK293T cells were purchased directly from the distributor to avoid misidentification or cross-contamination (Sigma-Aldrich, cat. no. 85120602; ATCC, cat. no. CRL-3216) ! CAUTION The cell lines used in your research should be regularly checked to confirm that they are authentic and that they are not contaminated with mycoplasma.

Procedure
Setup of the microscope • Timing 1 h to reach desired temperatures
1 Warm up the microscope 1 h before starting the experiment to allow the temperature to stabilize, matching the sample's optimal temperature (e.g., 37°C).
? TROUBLESHOOTING
2 Turn on the CO2 and set the controller to 5%.
? TROUBLESHOOTING
3 Turn on the camera ~30 min before starting the experiment so that it reaches the optimal working temperature (i.e., −70°C for an Andor EMCCD).
c CRITICAL STEP The camera readout is very sensitive to temperature oscillations.
4 (Optional) Create a logbook text document on your PC. Clearly describe positions and time points in the document (Table 1). This step is not necessary if your microscope allows automatic configuration of the imaging conditions.
5 Activate the relevant laser lines, allowing power sources to stabilize before image acquisition (in the example using our HEK293T:EphB2_mRuby cell line, we use the 561-nm laser). Set the laser power to a previously determined power density that minimizes photobleaching (see Experimental design 'Instrument calibration').
6 Ensure the light path is correctly specified:
• Choose dichroic mirrors and emission filters appropriate for the lasers and fluorophores being used.
• Adjust additional magnification optics required to obtain the desired pixel size (106 nm here; see Experimental design 'Instrument calibration').
7 Set the camera gain and readout to previously determined values (see Experimental design 'Gain' and 'Camera readout mode').
8 Specify the number of frames F per position and time point (e.g., F = 200).
9 Set the desired frame exposure time according to FCS measurements (see Experimental design 'Protein diffusion and camera exposure calibration').
10 Determine the optimal TIRF setting and proper illumination power. The objective here is to obtain high-signal, low-background images with low photobleaching rates.
? TROUBLESHOOTING
c CRITICAL STEP Image plotting is RAM expensive. If all images are shown upon calculation and <64 GB of RAM is available in the system, it is possible that the workstation will run out of memory before completion of the analysis.
• For fast N&B analysis, use the standard code.
• For a statistically enhanced analysis, use the eN&B code.
24 Select the files from an entire time series from a single position.
? TROUBLESHOOTING
25 Select a cell of interest by creating an ROI (Fig. 5b). We suggest that the physical size of the cell area to be analyzed be larger than 64 × 64 pixels.
c CRITICAL STEP An optimal ROI should include a portion of the background outside the cell to provide a reference during analysis.
26 Double-click on the cell ROI to complete the selection.
27 In the brightness scatter plot, establish the signal/noise threshold by marking the edge with the right end of the rectangle. If the selection of the ROI is performed correctly, the plot should show easily distinguishable clusters (Fig. 5d). The threshold should be placed at the lowest values of the right-most cluster, which represents the cell.
? TROUBLESHOOTING
28 Double-click on the corner of the ROI to trigger the analysis (Fig. 5d). If all images are set to be shown (Step 23), the process can be observed until completion.
29 A file-save window will prompt the user to provide a root name for saving the bulk files (images and Excel files with raw data). The software saves the images in TIFF and PNG formats by default.
If further editing is required, save the individual images by clicking 'Save as' on the relevant window (i.e., .fig or .ems extensions).
30 Use the logbook data to specify the specific time-point values in the Excel data (Table 1).

Troubleshooting
Troubleshooting advice can be found in Table 2.
• The microscope setup is not optimal: the parameters that allow you to obtain a better signal are the illumination power, camera gain and exposure time. Increasing the illumination power will result in higher levels of photobleaching but will increase the signal efficiently. Increasing the exposure time will also result in higher levels of photobleaching, although the signal increase will be smaller and it will affect the brightness analysis. Increasing the camera gain will not affect photobleaching, but it will increase the noise.
• Step 13: Finding suitable cells takes too long. If cells were seeded at low confluence, increasing the cell confluence will increase the chance of finding suitable cells for imaging, and also the probability of having more than one cell in the same field of view. The software can analyze the cells individually, so having more than one cell in the same field of view will accelerate data acquisition. If the cells change shape and move, try a different coating on the glass-bottom plate (e.g., PLL, laminin (BioLamina, cat. no. LN521) or gelatin (Merck, cat. no. G9136)).
• Step 16: The cell images are dark. The camera shutter is closed: make sure to re-open the camera shutter after acquiring the last dark frame.
• Step 20: The cells move. Cell motion (due to migration, movements from filopodia or similar projections, cell spreading and so on) may be unavoidable, and it interferes with brightness measurements. If the microscope allows recording of several positions simultaneously, we recommend capturing as many cells as possible; review the videos at the end of the process and discard the motile cells. The selection of a proper adhesive coating will improve the results. If the recorded positions seem to change between time points, the plate is drifting: make sure you allow at least 1 h for the microscope to reach and stabilize at the desired temperature. Excessive amounts of immersion liquid will also increase the probability of drifting. Make sure the stage holds the plate tightly.
• Step 24: The software crashes before finishing the analysis.

Anticipated results
The protocol detailed here can be used to image and quantify the oligomerization dynamics of proteins. Fluorescently tagged proteins are directly observed using a TIRF microscope during serial image acquisition. Imaging is carried out at the maximal resolution allowed by the microscope setup. The number and brightness values (oligomerization) of the proteins in each pixel are a function of the variance and intensity of the fluorescence fluctuations. The brightness values for all pixels in a cell can be visualized as a color-coded oligomerization (brightness) value overlaid with a cell image (Fig. 6b). Because our eN&B version includes algorithms minimizing the impact of photobleaching, the method allows resolution of brightness maps during long time-lapse imaging.
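The underlying per-pixel quantities follow the standard N&B definitions, B = σ²/⟨I⟩ and N = ⟨I⟩²/σ², computed over the frame axis. The sketch below applies them to a synthetic photon-counting stack (analog detection would additionally require the offset and S-factor correction from the calibration described earlier); for ideal Poisson-distributed pixels, B is close to 1:

```python
import numpy as np

def number_and_brightness(stack):
    """Per-pixel apparent number N and brightness B from a frame stack
    (shape: n_frames x H x W)."""
    mean = stack.mean(axis=0)
    var = stack.var(axis=0)
    brightness = var / mean        # B = sigma^2 / <I>
    number = mean**2 / var         # N = <I>^2 / sigma^2
    return number, brightness

# Synthetic photon-counting data: 200 frames, 50 photons/pixel on average
rng = np.random.default_rng(4)
poisson_stack = rng.poisson(50.0, size=(200, 16, 16)).astype(float)
N, B = number_and_brightness(poisson_stack)
```

Brightness above the monomeric reference value indicates oligomerization; eN&B additionally resamples the frame series to recover a distribution of B values per pixel rather than this single estimate.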
In addition to the oligomerization maps, the software can be used to retrieve more quantitative plots, which we term i-mer plots (Fig. 6c). These plots display the evolution of the relative concentration of the different oligomers over time, with each oligomer species (up to 40-mer) shown as an independent curve. The value for each oligomer at a given time point results from the addition of the relative abundance of that oligomerization value (monomer, dimer and so on) from every pixel. In our experiments, the i-mer plots revealed a strikingly organized sequence of events, in which progressively larger oligomers take over from smaller ones, following a strict growth trajectory. Different ligand stimulations, such as soluble, surface-immobilized or multivalent ligands, substantially changed the trajectory of the different curves 28,60 . For example, the slope of monomer depletion may reflect the speed of the oligomerization process. This plot is therefore the best tool to quantitatively assess the clustering dynamics of a protein of interest.
The data contained in the i-mer plot can also be presented from a population point of view. The relative abundance distribution of all oligomers at each time point is presented in plots such as the one depicted in Fig. 6d. This plot complements the i-mer plot, because the shape of each curve provides an overview of the diversity of the oligomer population present in the cell at every time point. eN&B uses a resampling method (what we call the enhancement) that produces, for each time point, an oligomerization distribution for every pixel, instead of the single value retrieved by standard N&B. The data obtained using eN&B are too complex to be represented in simple, understandable plots without averaging the information. The plot in Fig. 6e gives a rough idea of the amount of information generated by the method. The eN&B data are better suited to additional mathematical or statistical analysis than to graphic representation. The software generates matrices containing all numerical values for that purpose. For each sample, the relative abundance of every oligomer of every pixel in the image is included in an Excel file or a MATLAB matrix. These data are amenable to further mathematical analysis. The software also allows running the analysis in standard N&B mode, without the statistical enhancement. In this case, the software will generate a single value per pixel, corresponding to the modal value of the oligomerization distribution. Generation of standard N&B data is roughly 200 times faster than generation of eN&B data, making it useful for exploratory analysis or qualitative observations. We demonstrate our eN&B method using transgenic cells expressing a fluorescently tagged EphB2 receptor. The cells were presented with ephrin ligands to induce receptor clustering during a 1-h time course. The analysis was performed every 5 min to provide a detailed time course of Eph clustering.
The oligomerization maps show Eph aggregation across the entire cell surface in a progressive manner (Fig. 6c,d). Oligomerization runs uninterrupted during the entire time of observation. The i-mer plot shows a characteristic pattern, which is repeated across many experiments. Monomers and low-order oligomers can be seen to decay in the first 15 min after ligand addition. Thereafter, oligomers of progressively larger size increase in abundance in a strikingly coordinated pattern. An interesting feature of the EphB2 receptor is that clustering continues to occur beyond the point of monomer depletion, which suggests that oligomers condense (coalesce) into larger ones. This type of behavior might not apply to other proteins and would need to be confirmed on a case-by-case basis. The shape of the oligomer distribution evolves over time as well (Fig. 6d). At early time points, narrow distributions center around small oligomer values. Over time, the center of the distribution shifts toward larger oligomers. The width of the distribution expands quickly, sometimes leading to long-tailed distributions, reflecting the growing diversity of the oligomer population over time. The fast expansion of the distribution shape correlates with oligomer condensation. When using the enhanced version of the analysis, it is advisable to analyze the same sample using standard N&B in parallel: qualitatively, the results should be similar, and the analysis will run faster. Plotted eN&B data, such as in Fig. 6e, can be challenging to read. For numerical analysis, the enhanced data contained in the MATLAB matrices provide a more complete and faithful description of the oligomer population.

Reporting Summary
Further information on research design is available in the Nature Research Reporting Summary linked to this article.

Data/code availability
Data collection and analysis for this study were performed using our custom-made algorithms, available at http://bioimaging.usc.edu/software.html. An example dataset is available at the same link.

Statistical parameters
When statistical analyses are reported, confirm that the following items are present in the relevant location (e.g. figure legend, table legend, main text, or Methods section).

n/a Confirmed
- The exact sample size (n) for each experimental group/condition, given as a discrete number and unit of measurement
- An indication of whether measurements were taken from distinct samples or whether the same sample was measured repeatedly
- The statistical test(s) used AND whether they are one- or two-sided (only common tests should be described solely by name; describe more complex techniques in the Methods section)
- A description of all covariates tested
- A description of any assumptions or corrections, such as tests of normality and adjustment for multiple comparisons
- A full description of the statistics including central tendency (e.g. means) or other basic estimates (e.g. regression coefficient) AND variation (e.g. standard deviation) or associated estimates of uncertainty (e.g. confidence intervals)
- For null hypothesis testing, the test statistic (e.g. F, t, r) with confidence intervals, effect sizes, degrees of freedom and P value noted

Software and code
Policy information about availability of computer code

Data collection
Data collection for this study was performed using our custom-made algorithms, available at http://bioimaging.usc.edu

Data analysis
The data analysis for this study was performed using our custom-made algorithms, available at http://bioimaging.usc.edu
For manuscripts utilizing custom algorithms or software that are central to the research but not yet described in published literature, software must be made available to editors/reviewers upon request. We strongly encourage code deposition in a community repository (e.g. GitHub). See the Nature Research guidelines for submitting code & software for further information.

Data
Policy information about availability of data
All manuscripts must include a data availability statement. This statement should provide the following information, where applicable:
- Accession codes, unique identifiers, or web links for publicly available datasets
- A list of figures that have associated raw data
- A description of any restrictions on data availability
The software (and an example dataset analysed during this study), available at http://bioimaging.usc.edu, extracts brightness (...)

nature research | reporting summary, April 2018

Field-specific reporting
Please select the best fit for your research. If you are not sure, read the appropriate sections before making your selection.

Life sciences Behavioural & social sciences Ecological, evolutionary & environmental sciences
For a reference copy of the document with all sections, see nature.com/authors/policies/ReportingSummary-flat.pdf

Life sciences study design
All studies must disclose on these points even when the disclosure is negative.

Sample size
This paper reports a protocol for performing the method described in Ojosnegros et al. (PNAS, 2017). The sample size was determined by technical constraints, such as cell availability (non-migrating cells, cells in focus, etc.) and image acquisition times. We analyzed more than 350 cells in total, which provides enough statistical power to detect differences between ligand-stimulated and non-stimulated cells.
Data exclusions
Cells that were too bright (saturated signal), too dim, migrating out of focus, moving within the focal plane, or pre-stimulated before induction were excluded from the analysis.

Replication
The conclusions presented in this paper derive from the individual analysis of more than 350 cells.

Authentication
The cells were purchased directly from the distributor, and no further validation was performed.